dm-devel.redhat.com archive mirror
* [dm-devel] [PATCH v2 0/4] add support for inline encryption to device mapper
@ 2020-10-15 21:46 Satya Tangirala
  2020-10-15 21:46 ` [dm-devel] [PATCH v2 1/4] block: keyslot-manager: Introduce passthrough keyslot manager Satya Tangirala
                   ` (3 more replies)
  0 siblings, 4 replies; 18+ messages in thread
From: Satya Tangirala @ 2020-10-15 21:46 UTC (permalink / raw)
  To: linux-block, linux-kernel, dm-devel
  Cc: Jens Axboe, Satya Tangirala, Mike Snitzer, Alasdair Kergon, Eric Biggers

This patch series adds support for inline encryption to the device mapper.

Patch 1 introduces the "passthrough" keyslot manager.

The regular keyslot manager is designed for inline encryption hardware
that has only a small, fixed number of keyslots. A DM device doesn't fit
that model - it has no keyslots at all, and programming an encryption
context into a DM device doesn't make much semantic sense. A DM device
could set up a keyslot manager with some "sufficiently large" number of
keyslots in its request queue, so that upper layers can use the inline
encryption capabilities of the DM device's underlying devices, but the
memory allocated for those keyslots would be wasted since the DM device
never actually uses them.

The passthrough keyslot manager solves this issue - when the block layer
sees that a request queue has a passthrough keyslot manager, it doesn't
attempt to program any encryption context into it. A passthrough keyslot
manager simply lets the device expose its inline encryption capabilities
and gives upper layers a way to evict keys when necessary.
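
As a rough sketch of how a driver would use the API from Patch 1 (purely
illustrative - the driver structure, the ops variable, and the capability
values below are made up and not taken from these patches):

	#include <linux/keyslot-manager.h>

	static void my_dev_setup_crypto(struct my_dev *dev)
	{
		/*
		 * No real keyslots: just advertise crypto capabilities and
		 * provide a key eviction hook via ->ksm_ll_ops.
		 */
		blk_ksm_init_passthrough(&dev->ksm);
		dev->ksm.ksm_ll_ops = my_ksm_ll_ops;	/* only ->keyslot_evict */
		dev->ksm.max_dun_bytes_supported = 16;
		/* bitmask of supported data unit sizes, per crypto mode */
		dev->ksm.crypto_modes_supported[BLK_ENCRYPTION_MODE_AES_256_XTS] =
			512 | 4096;
		blk_ksm_register(&dev->ksm, dev->queue);
	}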

There also exists inline encryption hardware that can handle encryption
contexts directly: users can pass such hardware a data request along with
the encryption context (as opposed to hardware that requires users to
first program a keyslot with an encryption context and then pass the
keyslot index along with the data request). Such devices can also make
use of the passthrough keyslot manager.

Patch 2 introduces a private field to struct blk_keyslot_manager that
owners of the struct can use for any purpose. The struct
blk_keyslot_manager has been embedded within other structures directly
(like in struct ufs_hba in drivers/scsi/ufs/ufshcd.h), but we don't
want to do that with struct mapped_device. So, the device mapper patches
later in this series use the private field to hold a pointer to the
associated struct mapped_device, since we can't use container_of() anymore.
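
Concretely, what the DM patches later in the series do with the field
boils down to these two lines (from the dm.c changes in Patch 3):

	ksm->priv = md;		/* when building the KSM for a table */

	struct mapped_device *md = ksm->priv;	/* in the keyslot_evict callback */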

Patch 3 introduces the device mapper changes for inline encryption
support. A DM device exposes only the intersection of the crypto
capabilities of its underlying devices. That way, if a bio with an
encryption context is eventually mapped to an underlying device that
doesn't support that encryption context, the blk-crypto-fallback's cipher
tfms will already have been allocated ahead of time by the call to
blk_crypto_start_using_key().
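
For reference, an upper layer like fscrypt uses the DM device roughly as
follows (a simplified sketch with error handling omitted; the variable
names are illustrative):

	struct blk_crypto_key key;
	u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE] = { 0 };	/* per-block IV */

	blk_crypto_init_key(&key, raw_key, BLK_ENCRYPTION_MODE_AES_256_XTS,
			    16 /* dun_bytes */, 4096 /* data_unit_size */);

	/*
	 * Checks the queue's advertised capabilities (the intersection, for
	 * a DM device). If they're insufficient, the blk-crypto-fallback
	 * tfms are allocated here, before any I/O is issued.
	 */
	blk_crypto_start_using_key(&key, bdev_get_queue(bdev));

	bio_crypt_set_ctx(bio, &key, dun, GFP_NOIO);
	submit_bio(bio);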

Each DM target can now also set "may_passthrough_inline_crypto" to opt in
to passing through the inline encryption capabilities of its underlying
devices. This flag is needed because it doesn't make much semantic sense
for certain targets, like dm-crypt, to expose the underlying inline
encryption capabilities to the upper layers. DM exposes the inline
encryption capabilities of the underlying devices only if all of its
targets opt in.

A DM device's keyslot manager is set up whenever a new table is swapped
in. This patch only allows the keyslot manager's capabilities to *expand*
across table changes: any attempt to load a new table that would cause
crypto capabilities to be dropped is rejected. The new table's crypto
capabilities are verified when the table is loaded, but the keyslot
manager for the DM device is only modified when the table is actually
swapped in.

Because this patch exposes only the intersection of the underlying
devices' capabilities, en/decryption of a bio falls back to the kernel
crypto API (if the fallback is enabled) whenever *any* of the underlying
devices doesn't support the bio's encryption context. It might be
possible to fall back only when the bio's actual target device lacks
support for the bio's encryption context, but that use case is probably
uncommon enough not to warrant worrying about right now.

Patch 4 makes some DM targets opt in to passing through inline encryption
support. It does not (yet) enable this for dm-raid, since users can "hot
add" disks to a raid device, which makes this less straightforward (we
would need to ensure that any hot-added disk supports a superset of the
inline encryption capabilities of the rest of the disks in the raid
device, because of the way Patch 3 of this series works).

Changes v1 => v2:
 - Introduce private field to struct blk_keyslot_manager
 - Allow the DM keyslot manager to expand its crypto capabilities if the
   table is changed.
 - Make DM reject table changes that would otherwise cause crypto
   capabilities to be dropped.
 - Allocate the DM device's keyslot manager only when at least one crypto
   capability is supported (since a NULL value for q->ksm represents "no
   crypto support" anyway).
 - Remove the struct blk_keyslot_manager field from struct mapped_device.
   This patch now relies on just directly setting up the keyslot manager
   in the request queue, since each DM device is tied to only 1 queue.

Satya Tangirala (4):
  block: keyslot-manager: Introduce passthrough keyslot manager
  block: add private field to struct keyslot_manager
  dm: add support for passing through inline crypto support
  dm: enable may_passthrough_inline_crypto on some targets

 block/blk-crypto.c              |   1 +
 block/keyslot-manager.c         | 130 +++++++++++++++++++
 drivers/md/dm-flakey.c          |   1 +
 drivers/md/dm-ioctl.c           |   8 ++
 drivers/md/dm-linear.c          |   1 +
 drivers/md/dm.c                 | 217 +++++++++++++++++++++++++++++++-
 drivers/md/dm.h                 |  19 +++
 include/linux/device-mapper.h   |   6 +
 include/linux/keyslot-manager.h |  22 ++++
 9 files changed, 404 insertions(+), 1 deletion(-)

-- 
2.29.0.rc1.297.gfa9743e501-goog


* [dm-devel] [PATCH v2 1/4] block: keyslot-manager: Introduce passthrough keyslot manager
  2020-10-15 21:46 [dm-devel] [PATCH v2 0/4] add support for inline encryption to device mapper Satya Tangirala
@ 2020-10-15 21:46 ` Satya Tangirala
  2020-10-16  7:20   ` Christoph Hellwig
  2020-10-27 20:04   ` Eric Biggers
  2020-10-15 21:46 ` [dm-devel] [PATCH v2 2/4] block: add private field to struct keyslot_manager Satya Tangirala
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 18+ messages in thread
From: Satya Tangirala @ 2020-10-15 21:46 UTC (permalink / raw)
  To: linux-block, linux-kernel, dm-devel
  Cc: Jens Axboe, Satya Tangirala, Mike Snitzer, Alasdair Kergon, Eric Biggers

The device mapper may map over devices that have inline encryption
capabilities, and to make use of those capabilities, the DM device must
itself advertise those inline encryption capabilities. One way to do this
would be to have the DM device set up a keyslot manager with a
"sufficiently large" number of keyslots, but that would use a lot of
memory. Also, the DM device itself has no "keyslots", and it doesn't make
much sense to talk about "programming a key into a DM device's keyslot
manager", so all that extra memory used to represent those keyslots is just
wasted. All a DM device really needs to be able to do is advertise the
crypto capabilities of the underlying devices in a coherent manner and
expose a way to evict keys from the underlying devices.

There are also devices with inline encryption hardware that do not have
a limited number of keyslots. One can send a raw encryption key along
with a bio to these devices (as opposed to typical inline encryption
hardware, which requires users to first program a raw encryption key into
a keyslot and then send the index of that keyslot along with the bio).
These devices need only the same things from the keyslot manager that DM
devices need - a way to advertise crypto capabilities, and potentially a
way to evict keys from the hardware.

So we introduce a "passthrough" keyslot manager, which provides a way to
represent a keyslot manager that doesn't have a limited number of
keyslots and doesn't require keys to be programmed into keyslots.
DM devices can set up a passthrough keyslot manager in their request
queues, and advertise appropriate crypto capabilities based on those of the
underlying devices. Blk-crypto does not attempt to program keys into any
keyslots in the passthrough keyslot manager. Instead, if/when the bio is
resubmitted to the underlying device, blk-crypto will try to program the
key into the underlying device's keyslot manager.

Signed-off-by: Satya Tangirala <satyat@google.com>
---
 block/keyslot-manager.c         | 41 +++++++++++++++++++++++++++++++++
 include/linux/keyslot-manager.h |  2 ++
 2 files changed, 43 insertions(+)

diff --git a/block/keyslot-manager.c b/block/keyslot-manager.c
index 35abcb1ec051..5ad476dafeab 100644
--- a/block/keyslot-manager.c
+++ b/block/keyslot-manager.c
@@ -62,6 +62,11 @@ static inline void blk_ksm_hw_exit(struct blk_keyslot_manager *ksm)
 		pm_runtime_put_sync(ksm->dev);
 }
 
+static inline bool blk_ksm_is_passthrough(struct blk_keyslot_manager *ksm)
+{
+	return ksm->num_slots == 0;
+}
+
 /**
  * blk_ksm_init() - Initialize a keyslot manager
  * @ksm: The keyslot_manager to initialize.
@@ -198,6 +203,10 @@ blk_status_t blk_ksm_get_slot_for_key(struct blk_keyslot_manager *ksm,
 	int err;
 
 	*slot_ptr = NULL;
+
+	if (blk_ksm_is_passthrough(ksm))
+		return BLK_STS_OK;
+
 	down_read(&ksm->lock);
 	slot = blk_ksm_find_and_grab_keyslot(ksm, key);
 	up_read(&ksm->lock);
@@ -318,6 +327,16 @@ int blk_ksm_evict_key(struct blk_keyslot_manager *ksm,
 	struct blk_ksm_keyslot *slot;
 	int err = 0;
 
+	if (blk_ksm_is_passthrough(ksm)) {
+		if (ksm->ksm_ll_ops.keyslot_evict) {
+			blk_ksm_hw_enter(ksm);
+			err = ksm->ksm_ll_ops.keyslot_evict(ksm, key, -1);
+			blk_ksm_hw_exit(ksm);
+			return err;
+		}
+		return 0;
+	}
+
 	blk_ksm_hw_enter(ksm);
 	slot = blk_ksm_find_keyslot(ksm, key);
 	if (!slot)
@@ -353,6 +372,9 @@ void blk_ksm_reprogram_all_keys(struct blk_keyslot_manager *ksm)
 {
 	unsigned int slot;
 
+	if (blk_ksm_is_passthrough(ksm))
+		return;
+
 	/* This is for device initialization, so don't resume the device */
 	down_write(&ksm->lock);
 	for (slot = 0; slot < ksm->num_slots; slot++) {
@@ -394,3 +416,22 @@ void blk_ksm_unregister(struct request_queue *q)
 {
 	q->ksm = NULL;
 }
+
+/**
+ * blk_ksm_init_passthrough() - Init a passthrough keyslot manager
+ * @ksm: The keyslot manager to init
+ *
+ * Initialize a passthrough keyslot manager.
+ * Called by e.g. storage drivers to set up a keyslot manager in their
+ * request_queue, when the storage driver wants to manage its keys by itself.
+ * This is useful for inline encryption hardware that doesn't have the concept
+ * of keyslots, and for layered devices.
+ *
+ * See blk_ksm_init() for more details about the parameters.
+ */
+void blk_ksm_init_passthrough(struct blk_keyslot_manager *ksm)
+{
+	memset(ksm, 0, sizeof(*ksm));
+	init_rwsem(&ksm->lock);
+}
+EXPORT_SYMBOL_GPL(blk_ksm_init_passthrough);
diff --git a/include/linux/keyslot-manager.h b/include/linux/keyslot-manager.h
index 18f3f5346843..323e15dd6fa7 100644
--- a/include/linux/keyslot-manager.h
+++ b/include/linux/keyslot-manager.h
@@ -103,4 +103,6 @@ void blk_ksm_reprogram_all_keys(struct blk_keyslot_manager *ksm);
 
 void blk_ksm_destroy(struct blk_keyslot_manager *ksm);
 
+void blk_ksm_init_passthrough(struct blk_keyslot_manager *ksm);
+
 #endif /* __LINUX_KEYSLOT_MANAGER_H */
-- 
2.29.0.rc1.297.gfa9743e501-goog


* [dm-devel] [PATCH v2 2/4] block: add private field to struct keyslot_manager
  2020-10-15 21:46 [dm-devel] [PATCH v2 0/4] add support for inline encryption to device mapper Satya Tangirala
  2020-10-15 21:46 ` [dm-devel] [PATCH v2 1/4] block: keyslot-manager: Introduce passthrough keyslot manager Satya Tangirala
@ 2020-10-15 21:46 ` Satya Tangirala
  2020-10-16  7:19   ` Christoph Hellwig
  2020-10-15 21:46 ` [dm-devel] [PATCH v2 3/4] dm: add support for passing through inline crypto support Satya Tangirala
  2020-10-15 21:46 ` [dm-devel] [PATCH v2 4/4] dm: enable may_passthrough_inline_crypto on some targets Satya Tangirala
  3 siblings, 1 reply; 18+ messages in thread
From: Satya Tangirala @ 2020-10-15 21:46 UTC (permalink / raw)
  To: linux-block, linux-kernel, dm-devel
  Cc: Jens Axboe, Satya Tangirala, Mike Snitzer, Alasdair Kergon, Eric Biggers

Add a (void *) pointer to struct keyslot_manager that the owner of the
struct can use for any purpose it wants.

Right now, the struct keyslot_manager is expected to be embedded directly
into other structs (and the owner of the keyslot_manager would use
container_of() to access any other data the owner needs). However, this
might take up more space than is acceptable, and it would be better to be
able to add only a pointer to a struct keyslot_manager into other structs
rather than embed the entire struct directly. But container_of() can't be
used when only a pointer to the keyslot_manager is embedded. The primary
motivation of this patch is to get around that issue.

Signed-off-by: Satya Tangirala <satyat@google.com>
---
 include/linux/keyslot-manager.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/linux/keyslot-manager.h b/include/linux/keyslot-manager.h
index 323e15dd6fa7..37f1022b256f 100644
--- a/include/linux/keyslot-manager.h
+++ b/include/linux/keyslot-manager.h
@@ -59,6 +59,9 @@ struct blk_keyslot_manager {
 	/* Device for runtime power management (NULL if none) */
 	struct device *dev;
 
+	/* Private data for owner */
+	void *priv;
+
 	/* Here onwards are *private* fields for internal keyslot manager use */
 
 	unsigned int num_slots;
-- 
2.29.0.rc1.297.gfa9743e501-goog


* [dm-devel] [PATCH v2 3/4] dm: add support for passing through inline crypto support
  2020-10-15 21:46 [dm-devel] [PATCH v2 0/4] add support for inline encryption to device mapper Satya Tangirala
  2020-10-15 21:46 ` [dm-devel] [PATCH v2 1/4] block: keyslot-manager: Introduce passthrough keyslot manager Satya Tangirala
  2020-10-15 21:46 ` [dm-devel] [PATCH v2 2/4] block: add private field to struct keyslot_manager Satya Tangirala
@ 2020-10-15 21:46 ` Satya Tangirala
  2020-10-25 21:02   ` kernel test robot
                     ` (2 more replies)
  2020-10-15 21:46 ` [dm-devel] [PATCH v2 4/4] dm: enable may_passthrough_inline_crypto on some targets Satya Tangirala
  3 siblings, 3 replies; 18+ messages in thread
From: Satya Tangirala @ 2020-10-15 21:46 UTC (permalink / raw)
  To: linux-block, linux-kernel, dm-devel
  Cc: Jens Axboe, Satya Tangirala, Mike Snitzer, Alasdair Kergon, Eric Biggers

Update the device-mapper core to support exposing the inline crypto
support of the underlying device(s) through the device-mapper device.

This works by creating a "passthrough keyslot manager" for the dm
device, which declares support for encryption settings which all
underlying devices support.  When a supported setting is used, the bio
cloning code handles cloning the crypto context to the bios for all the
underlying devices.  When an unsupported setting is used, the blk-crypto
fallback is used as usual.

Crypto support on each underlying device is ignored unless the
corresponding dm target opts into exposing it.  This is needed because
for inline crypto to semantically operate on the original bio, the data
must not be transformed by the dm target.  Thus, targets like dm-linear
can expose crypto support of the underlying device, but targets like
dm-crypt can't.  (dm-crypt could use inline crypto itself, though.)

When a key is evicted from the dm device, it is evicted from all
underlying devices.

A DM device's table can only be changed if the "new" inline encryption
capabilities are a superset of the "old" inline encryption capabilities.
Attempts to make changes to the table that result in some inline encryption
capability becoming no longer supported will be rejected.

Co-developed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Satya Tangirala <satyat@google.com>
---
 block/blk-crypto.c              |   1 +
 block/keyslot-manager.c         |  89 +++++++++++++
 drivers/md/dm-ioctl.c           |   8 ++
 drivers/md/dm.c                 | 217 +++++++++++++++++++++++++++++++-
 drivers/md/dm.h                 |  19 +++
 include/linux/device-mapper.h   |   6 +
 include/linux/keyslot-manager.h |  17 +++
 7 files changed, 356 insertions(+), 1 deletion(-)

diff --git a/block/blk-crypto.c b/block/blk-crypto.c
index 5da43f0973b4..c2be8f15006c 100644
--- a/block/blk-crypto.c
+++ b/block/blk-crypto.c
@@ -409,3 +409,4 @@ int blk_crypto_evict_key(struct request_queue *q,
 	 */
 	return blk_crypto_fallback_evict_key(key);
 }
+EXPORT_SYMBOL_GPL(blk_crypto_evict_key);
diff --git a/block/keyslot-manager.c b/block/keyslot-manager.c
index 5ad476dafeab..e16e4a074765 100644
--- a/block/keyslot-manager.c
+++ b/block/keyslot-manager.c
@@ -416,6 +416,95 @@ void blk_ksm_unregister(struct request_queue *q)
 {
 	q->ksm = NULL;
 }
+EXPORT_SYMBOL_GPL(blk_ksm_unregister);
+
+/**
+ * blk_ksm_intersect_modes() - restrict supported modes by child device
+ * @parent: The keyslot manager for parent device
+ * @child: The keyslot manager for child device, or NULL
+ *
+ * Clear any crypto mode support bits in @parent that aren't set in @child.
+ * If @child is NULL, then all parent bits are cleared.
+ *
+ * Only use this when setting up the keyslot manager for a layered device,
+ * before it's been exposed yet.
+ */
+void blk_ksm_intersect_modes(struct blk_keyslot_manager *parent,
+			     const struct blk_keyslot_manager *child)
+{
+	if (child) {
+		unsigned int i;
+
+		parent->max_dun_bytes_supported =
+			min(parent->max_dun_bytes_supported,
+			    child->max_dun_bytes_supported);
+		for (i = 0; i < ARRAY_SIZE(child->crypto_modes_supported);
+		     i++) {
+			parent->crypto_modes_supported[i] &=
+				child->crypto_modes_supported[i];
+		}
+	} else {
+		parent->max_dun_bytes_supported = 0;
+		memset(parent->crypto_modes_supported, 0,
+		       sizeof(parent->crypto_modes_supported));
+	}
+}
+EXPORT_SYMBOL_GPL(blk_ksm_intersect_modes);
+
+/**
+ * blk_ksm_is_superset() - Check if a KSM supports a superset of crypto modes
+ *			   and DUN bytes that another KSM supports.
+ * @ksm_superset: The KSM that we want to verify is a superset
+ * @ksm_subset: The KSM that we want to verify is a subset
+ *
+ * Return: True if @ksm_superset supports a superset of the crypto modes and DUN
+ *	   bytes that @ksm_subset supports.
+ */
+bool blk_ksm_is_superset(struct blk_keyslot_manager *ksm_superset,
+			 struct blk_keyslot_manager *ksm_subset)
+{
+	int i;
+
+	if (!ksm_subset)
+		return true;
+
+	if (!ksm_superset)
+		return false;
+
+	for (i = 0; i < ARRAY_SIZE(ksm_superset->crypto_modes_supported); i++) {
+		if (ksm_subset->crypto_modes_supported[i] &
+		    (~ksm_superset->crypto_modes_supported[i])) {
+			return false;
+		}
+	}
+
+	if (ksm_subset->max_dun_bytes_supported >
+	    ksm_superset->max_dun_bytes_supported) {
+		return false;
+	}
+
+	return true;
+}
+EXPORT_SYMBOL_GPL(blk_ksm_is_superset);
+
+/**
+ * blk_ksm_update_capabilities() - Update the restrictions of a KSM to those of
+ *				   another KSM
+ * @target_ksm: The KSM whose restrictions to update.
+ * @reference_ksm: The KSM to whose restrictions this function will update
+ *		   @target_ksm's restrictions to,
+ */
+void blk_ksm_update_capabilities(struct blk_keyslot_manager *target_ksm,
+				 struct blk_keyslot_manager *reference_ksm)
+{
+	memcpy(target_ksm->crypto_modes_supported,
+	       reference_ksm->crypto_modes_supported,
+	       sizeof(target_ksm->crypto_modes_supported));
+
+	target_ksm->max_dun_bytes_supported =
+				reference_ksm->max_dun_bytes_supported;
+}
+EXPORT_SYMBOL_GPL(blk_ksm_update_capabilities);
 
 /**
  * blk_ksm_init_passthrough() - Init a passthrough keyslot manager
diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index cd0478d44058..2b3efa9f9fae 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -1358,6 +1358,10 @@ static int table_load(struct file *filp, struct dm_ioctl *param, size_t param_si
 		goto err_unlock_md_type;
 	}
 
+	r = dm_verify_inline_encryption(md, t);
+	if (r)
+		goto err_unlock_md_type;
+
 	if (dm_get_md_type(md) == DM_TYPE_NONE) {
 		/* Initial table load: acquire type of table. */
 		dm_set_md_type(md, dm_table_get_type(t));
@@ -2114,6 +2118,10 @@ int __init dm_early_create(struct dm_ioctl *dmi,
 	if (r)
 		goto err_destroy_table;
 
+	r = dm_verify_inline_encryption(md, t);
+	if (r)
+		goto err_destroy_table;
+
 	md->type = dm_table_get_type(t);
 	/* setup md->queue to reflect md's type (may block) */
 	r = dm_setup_md_queue(md, t);
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index c18fc2548518..22bb2c90583d 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -28,6 +28,7 @@
 #include <linux/refcount.h>
 #include <linux/part_stat.h>
 #include <linux/blk-crypto.h>
+#include <linux/keyslot-manager.h>
 
 #define DM_MSG_PREFIX "core"
 
@@ -1721,6 +1722,8 @@ static const struct dax_operations dm_dax_ops;
 
 static void dm_wq_work(struct work_struct *work);
 
+static void dm_destroy_inline_encryption(struct mapped_device *md);
+
 static void cleanup_mapped_device(struct mapped_device *md)
 {
 	if (md->wq)
@@ -1742,8 +1745,10 @@ static void cleanup_mapped_device(struct mapped_device *md)
 		put_disk(md->disk);
 	}
 
-	if (md->queue)
+	if (md->queue) {
+		dm_destroy_inline_encryption(md);
 		blk_cleanup_queue(md->queue);
+	}
 
 	cleanup_srcu_struct(&md->io_barrier);
 
@@ -1949,6 +1954,206 @@ static void event_callback(void *context)
 	dm_issue_global_event();
 }
 
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+
+struct dm_keyslot_evict_args {
+	const struct blk_crypto_key *key;
+	int err;
+};
+
+static int dm_keyslot_evict_callback(struct dm_target *ti, struct dm_dev *dev,
+				     sector_t start, sector_t len, void *data)
+{
+	struct dm_keyslot_evict_args *args = data;
+	int err;
+
+	err = blk_crypto_evict_key(bdev_get_queue(dev->bdev), args->key);
+	if (!args->err)
+		args->err = err;
+	/* Always try to evict the key from all devices. */
+	return 0;
+}
+
+/*
+ * When an inline encryption key is evicted from a device-mapper device, evict
+ * it from all the underlying devices.
+ */
+static int dm_keyslot_evict(struct blk_keyslot_manager *ksm,
+			    const struct blk_crypto_key *key, unsigned int slot)
+{
+	struct mapped_device *md = ksm->priv;
+	struct dm_keyslot_evict_args args = { key };
+	struct dm_table *t;
+	int srcu_idx;
+	int i;
+	struct dm_target *ti;
+
+	t = dm_get_live_table(md, &srcu_idx);
+	if (!t)
+		return 0;
+	for (i = 0; i < dm_table_get_num_targets(t); i++) {
+		ti = dm_table_get_target(t, i);
+		if (!ti->type->iterate_devices)
+			continue;
+		ti->type->iterate_devices(ti, dm_keyslot_evict_callback, &args);
+	}
+	dm_put_live_table(md, srcu_idx);
+	return args.err;
+}
+
+static struct blk_ksm_ll_ops dm_ksm_ll_ops = {
+	.keyslot_evict = dm_keyslot_evict,
+};
+
+static int device_intersect_crypto_modes(struct dm_target *ti,
+					 struct dm_dev *dev, sector_t start,
+					 sector_t len, void *data)
+{
+	struct blk_keyslot_manager *parent = data;
+	struct blk_keyslot_manager *child = bdev_get_queue(dev->bdev)->ksm;
+
+	blk_ksm_intersect_modes(parent, child);
+	return 0;
+}
+
+/*
+ * Constructs and returns a keyslot manager that represents the crypto
+ * capabilities of the devices described by the dm_table. However, if the
+ * constructed keyslot manager does not support a superset of the crypto
+ * capabilities supported by the current keyslot manager of the mapped_device,
+ * it returns an error instead, since we don't support restricting crypto
+ * capabilities on table changes.
+ */
+static struct blk_keyslot_manager *
+dm_init_inline_encryption(struct mapped_device *md, struct dm_table *t)
+{
+	struct blk_keyslot_manager *ksm;
+	struct dm_target *ti;
+	unsigned int i;
+
+	ksm = kmalloc(sizeof(*ksm), GFP_KERNEL);
+	if (!ksm)
+		return ERR_PTR(-EINVAL);
+	blk_ksm_init_passthrough(ksm);
+	ksm->ksm_ll_ops = dm_ksm_ll_ops;
+	ksm->max_dun_bytes_supported = UINT_MAX;
+	memset(ksm->crypto_modes_supported, 0xFF,
+	       sizeof(ksm->crypto_modes_supported));
+	ksm->priv = md;
+
+	for (i = 0; i < dm_table_get_num_targets(t); i++) {
+		ti = dm_table_get_target(t, i);
+
+		if (!ti->may_passthrough_inline_crypto) {
+			blk_ksm_intersect_modes(ksm, NULL);
+			break;
+		}
+		if (!ti->type->iterate_devices)
+			continue;
+		ti->type->iterate_devices(ti, device_intersect_crypto_modes,
+					  ksm);
+	}
+
+	if (!blk_ksm_is_superset(ksm, md->queue->ksm)) {
+		DMWARN("Inline encryption capabilities of new DM table were more restrictive than the old table's. This is not supported!");
+		blk_ksm_destroy(ksm);
+		return ERR_PTR(-EOPNOTSUPP);
+	}
+
+	return ksm;
+}
+
+/**
+ * dm_verify_inline_encryption() - Verifies that the current keyslot manager of
+ *				   the mapped_device can be replaced by the
+ *				   keyslot manager of a given dm_table.
+ * @md: The mapped_device
+ * @t: The dm_table
+ *
+ * In particular, this function checks that the keyslot manager that will be
+ * constructed for the dm_table will support a superset of the capabilities that
+ * the current keyslot manager of the mapped_device supports.
+ *
+ * Return: 0 if the table's keyslot_manager can replace the current keyslot
+ *	   manager of the mapped_device. Negative value otherwise.
+ */
+int dm_verify_inline_encryption(struct mapped_device *md, struct dm_table *t)
+{
+	struct blk_keyslot_manager *ksm = dm_init_inline_encryption(md, t);
+
+	if (IS_ERR(ksm))
+		return PTR_ERR(ksm);
+	blk_ksm_destroy(ksm);
+
+	return 0;
+}
+
+static void dm_update_keyslot_manager(struct mapped_device *md,
+				      struct blk_keyslot_manager *ksm)
+{
+	bool ksm_is_empty = true;
+	int i;
+
+	/*
+	 * If the new KSM doesn't actually support any crypto modes, we may as
+	 * well set a NULL ksm.
+	 */
+	ksm_is_empty = true;
+	for (i = 0; i < ARRAY_SIZE(ksm->crypto_modes_supported); i++) {
+		if (ksm->crypto_modes_supported[i]) {
+			ksm_is_empty = false;
+			break;
+		}
+	}
+
+	if (ksm_is_empty) {
+		blk_ksm_destroy(ksm);
+
+		/* At this point, md->queue->ksm must also be NULL, since we're
+		 * guaranteed that ksm is a superset of md->queue->ksm, and we
+		 * never set md->queue->ksm to a non-null empty ksm.
+		 */
+		if (WARN_ON(md->queue->ksm))
+			blk_ksm_register(NULL, md->queue);
+		return;
+	}
+
+	/* Make the ksm less restrictive */
+	if (!md->queue->ksm) {
+		blk_ksm_register(ksm, md->queue);
+	} else {
+		blk_ksm_update_capabilities(md->queue->ksm, ksm);
+		blk_ksm_destroy(ksm);
+	}
+}
+
+static void dm_destroy_inline_encryption(struct mapped_device *md)
+{
+	if (!md->queue->ksm)
+		return;
+	blk_ksm_destroy(md->queue->ksm);
+	blk_ksm_unregister(md->queue);
+}
+
+#else /* CONFIG_BLK_INLINE_ENCRYPTION */
+
+static inline struct blk_keyslot_manager *
+dm_init_inline_encryption(struct mapped_device *md, struct dm_table *t)
+{
+	return NULL;
+}
+
+static void dm_update_keyslot_manager(struct mapped_device *md,
+				      struct blk_keyslot_manager *ksm)
+{
+}
+
+static inline void dm_destroy_inline_encryption(struct mapped_device *md)
+{
+}
+
+#endif /* !CONFIG_BLK_INLINE_ENCRYPTION */
+
 /*
  * Returns old map, which caller must destroy.
  */
@@ -1959,6 +2164,7 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
 	struct request_queue *q = md->queue;
 	bool request_based = dm_table_request_based(t);
 	sector_t size;
+	struct blk_keyslot_manager *ksm;
 	int ret;
 
 	lockdep_assert_held(&md->suspend_lock);
@@ -1994,12 +2200,21 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
 		md->immutable_target = dm_table_get_immutable_target(t);
 	}
 
+	ksm = dm_init_inline_encryption(md, t);
+	if (IS_ERR(ksm)) {
+		old_map = ERR_PTR(PTR_ERR(ksm));
+		goto out;
+	}
+
 	ret = __bind_mempools(md, t);
 	if (ret) {
+		blk_ksm_destroy(ksm);
 		old_map = ERR_PTR(ret);
 		goto out;
 	}
 
+	dm_update_keyslot_manager(md, ksm);
+
 	old_map = rcu_dereference_protected(md->map, lockdep_is_held(&md->suspend_lock));
 	rcu_assign_pointer(md->map, (void *)t);
 	md->immutable_target_type = dm_table_get_immutable_target_type(t);
diff --git a/drivers/md/dm.h b/drivers/md/dm.h
index fffe1e289c53..eaf92e4cbe70 100644
--- a/drivers/md/dm.h
+++ b/drivers/md/dm.h
@@ -208,4 +208,23 @@ void dm_free_md_mempools(struct dm_md_mempools *pools);
  */
 unsigned dm_get_reserved_bio_based_ios(void);
 
+/*
+ *  Inline Encryption
+ */
+struct blk_keyslot_manager;
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+
+int dm_verify_inline_encryption(struct mapped_device *md, struct dm_table *t);
+
+#else /* !CONFIG_BLK_INLINE_ENCRYPTION */
+
+static inline int dm_verify_inline_encryption(struct mapped_device *md,
+					      struct dm_table *t)
+{
+	return 0;
+}
+
+#endif /* !CONFIG_BLK_INLINE_ENCRYPTION */
+
 #endif
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index 61a66fb8ebb3..6d05a6a8a129 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -325,6 +325,12 @@ struct dm_target {
 	 * whether or not its underlying devices have support.
 	 */
 	bool discards_supported:1;
+
+	/*
+	 * Set if inline crypto capabilities from this target's underlying
+	 * device(s) can be exposed via the device-mapper device.
+	 */
+	bool may_passthrough_inline_crypto:1;
 };
 
 void *dm_per_bio_data(struct bio *bio, size_t data_size);
diff --git a/include/linux/keyslot-manager.h b/include/linux/keyslot-manager.h
index 37f1022b256f..4047f8cec01a 100644
--- a/include/linux/keyslot-manager.h
+++ b/include/linux/keyslot-manager.h
@@ -11,6 +11,8 @@
 
 struct blk_keyslot_manager;
 
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+
 /**
  * struct blk_ksm_ll_ops - functions to manage keyslots in hardware
  * @keyslot_program:	Program the specified key into the specified slot in the
@@ -106,6 +108,21 @@ void blk_ksm_reprogram_all_keys(struct blk_keyslot_manager *ksm);
 
 void blk_ksm_destroy(struct blk_keyslot_manager *ksm);
 
+void blk_ksm_intersect_modes(struct blk_keyslot_manager *parent,
+			     const struct blk_keyslot_manager *child);
+
 void blk_ksm_init_passthrough(struct blk_keyslot_manager *ksm);
 
+bool blk_ksm_is_superset(struct blk_keyslot_manager *ksm_superset,
+			 struct blk_keyslot_manager *ksm_subset);
+
+void blk_ksm_update_capabilities(struct blk_keyslot_manager *target_ksm,
+				 struct blk_keyslot_manager *reference_ksm);
+
+#else /* CONFIG_BLK_INLINE_ENCRYPTION */
+
+static inline void blk_ksm_destroy(struct blk_keyslot_manager *ksm) { }
+
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
+
 #endif /* __LINUX_KEYSLOT_MANAGER_H */
-- 
2.29.0.rc1.297.gfa9743e501-goog


* [dm-devel] [PATCH v2 4/4] dm: enable may_passthrough_inline_crypto on some targets
  2020-10-15 21:46 [dm-devel] [PATCH v2 0/4] add support for inline encryption to device mapper Satya Tangirala
                   ` (2 preceding siblings ...)
  2020-10-15 21:46 ` [dm-devel] [PATCH v2 3/4] dm: add support for passing through inline crypto support Satya Tangirala
@ 2020-10-15 21:46 ` Satya Tangirala
  2020-10-27 21:10   ` Eric Biggers
  3 siblings, 1 reply; 18+ messages in thread
From: Satya Tangirala @ 2020-10-15 21:46 UTC (permalink / raw)
  To: linux-block, linux-kernel, dm-devel
  Cc: Jens Axboe, Satya Tangirala, Mike Snitzer, Alasdair Kergon, Eric Biggers

dm-linear and dm-flakey obviously can pass through inline crypto support.

Co-developed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Satya Tangirala <satyat@google.com>
---
 drivers/md/dm-flakey.c | 1 +
 drivers/md/dm-linear.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c
index a2cc9e45cbba..655286dacc35 100644
--- a/drivers/md/dm-flakey.c
+++ b/drivers/md/dm-flakey.c
@@ -253,6 +253,7 @@ static int flakey_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	ti->num_discard_bios = 1;
 	ti->per_io_data_size = sizeof(struct per_bio_data);
 	ti->private = fc;
+	ti->may_passthrough_inline_crypto = true;
 	return 0;
 
 bad:
diff --git a/drivers/md/dm-linear.c b/drivers/md/dm-linear.c
index 00774b5d7668..345e22b9be5d 100644
--- a/drivers/md/dm-linear.c
+++ b/drivers/md/dm-linear.c
@@ -62,6 +62,7 @@ static int linear_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	ti->num_secure_erase_bios = 1;
 	ti->num_write_same_bios = 1;
 	ti->num_write_zeroes_bios = 1;
+	ti->may_passthrough_inline_crypto = true;
 	ti->private = lc;
 	return 0;
 
-- 
2.29.0.rc1.297.gfa9743e501-goog


* Re: [dm-devel] [PATCH v2 2/4] block: add private field to struct keyslot_manager
  2020-10-15 21:46 ` [dm-devel] [PATCH v2 2/4] block: add private field to struct keyslot_manager Satya Tangirala
@ 2020-10-16  7:19   ` Christoph Hellwig
  2020-10-16  8:39     ` Satya Tangirala
  0 siblings, 1 reply; 18+ messages in thread
From: Christoph Hellwig @ 2020-10-16  7:19 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: Jens Axboe, Mike Snitzer, Eric Biggers, linux-kernel,
	linux-block, dm-devel, Alasdair Kergon

On Thu, Oct 15, 2020 at 09:46:30PM +0000, Satya Tangirala wrote:
> Add a (void *) pointer to struct keyslot_manager that the owner of the
> struct can use for any purpose it wants.
> 
> Right now, the struct keyslot_manager is expected to be embedded directly
> into other structs (and the owner of the keyslot_manager would use
> container_of() to access any other data the owner needs). However, this
> might take up more space than is acceptable, and it would be better to be
> able to add only a pointer to a struct keyslot_manager into other structs
> rather than embed the entire struct directly. But container_of() can't be
> used when only a pointer to the keyslot_manager is embedded. The primary
> motivation of this patch is to get around that issue.

No, please don't bloat the structure.  If some weird caller doesn't
like the embedding it can create a container structure with the
blk_keyslot_manager structure and a backpointer.
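
i.e. something along these lines (names made up):

	struct dm_keyslot_manager {
		struct blk_keyslot_manager ksm;
		struct mapped_device *md;
	};

	/* then, e.g. in the keyslot_evict callback: */
	struct mapped_device *md =
		container_of(ksm, struct dm_keyslot_manager, ksm)->md;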


* Re: [dm-devel] [PATCH v2 1/4] block: keyslot-manager: Introduce passthrough keyslot manager
  2020-10-15 21:46 ` [dm-devel] [PATCH v2 1/4] block: keyslot-manager: Introduce passthrough keyslot manager Satya Tangirala
@ 2020-10-16  7:20   ` Christoph Hellwig
  2020-10-21  4:44     ` Eric Biggers
  2020-10-27 20:04   ` Eric Biggers
  1 sibling, 1 reply; 18+ messages in thread
From: Christoph Hellwig @ 2020-10-16  7:20 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: Jens Axboe, Mike Snitzer, Eric Biggers, linux-kernel,
	linux-block, dm-devel, Alasdair Kergon

And this just validates my argument that calling the inline crypto work
directly from the block layer instead of just down below in blk-mq was
wrong.  We should not require any support from stacking drivers at the
keyslot manager level.


* Re: [dm-devel] [PATCH v2 2/4] block: add private field to struct keyslot_manager
  2020-10-16  7:19   ` Christoph Hellwig
@ 2020-10-16  8:39     ` Satya Tangirala
  0 siblings, 0 replies; 18+ messages in thread
From: Satya Tangirala @ 2020-10-16  8:39 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Jens Axboe, Mike Snitzer, Eric Biggers, linux-kernel,
	linux-block, dm-devel, Alasdair Kergon

On Fri, Oct 16, 2020 at 08:19:41AM +0100, Christoph Hellwig wrote:
> On Thu, Oct 15, 2020 at 09:46:30PM +0000, Satya Tangirala wrote:
> > Add a (void *) pointer to struct keyslot_manager that the owner of the
> > struct can use for any purpose it wants.
> > 
> > Right now, the struct keyslot_manager is expected to be embedded directly
> > into other structs (and the owner of the keyslot_manager would use
> > container_of() to access any other data the owner needs). However, this
> > might take up more space than is acceptable, and it would be better to be
> > able to add only a pointer to a struct keyslot_manager into other structs
> > rather than embed the entire struct directly. But container_of() can't be
> > used when only a pointer to the keyslot_manager is embedded. The primary
> > motivation of this patch is to get around that issue.
> 
> No, please don't bloat the structure.  If some weird caller doesn't
> like the embedding it can create a container structure with the
> blk_keyslot_manager structure and a backpointer.
Ah, ok. Thanks!


* Re: [dm-devel] [PATCH v2 1/4] block: keyslot-manager: Introduce passthrough keyslot manager
  2020-10-16  7:20   ` Christoph Hellwig
@ 2020-10-21  4:44     ` Eric Biggers
  2020-10-21  5:27       ` Satya Tangirala
  0 siblings, 1 reply; 18+ messages in thread
From: Eric Biggers @ 2020-10-21  4:44 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Jens Axboe, Mike Snitzer, linux-kernel, Satya Tangirala,
	linux-block, dm-devel, Alasdair Kergon

On Fri, Oct 16, 2020 at 08:20:44AM +0100, Christoph Hellwig wrote:
> And this just validates my argument that calling the inline crypto work
> directly from the block layer instead of just down below in blk-mq was
> wrong.  We should not require any support from stacking drivers at the
> keyslot manager level.

I'm not sure what you're referring to here; could you clarify?

It's true that device-mapper devices don't need the actual keyslot management.
But they do need the ability to expose crypto capabilities as well as a key
eviction function.  And those are currently handled by
"struct blk_keyslot_manager".  Hence the need for a "passthrough keyslot
manager" that does those other things but not the actual keyslot management.

FWIW, I suggested splitting these up, but you disagreed and said you wanted the
crypto capabilities to remain part of the blk_keyslot_manager
(https://lkml.kernel.org/linux-block/20200327170047.GA24682@infradead.org/).
If you've now changed your mind, please be clear about it.

- Eric


* Re: [dm-devel] [PATCH v2 1/4] block: keyslot-manager: Introduce passthrough keyslot manager
  2020-10-21  4:44     ` Eric Biggers
@ 2020-10-21  5:27       ` Satya Tangirala
  0 siblings, 0 replies; 18+ messages in thread
From: Satya Tangirala @ 2020-10-21  5:27 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Jens Axboe, linux-block, Mike Snitzer, linux-kernel,
	Christoph Hellwig, dm-devel, Alasdair Kergon

On Tue, Oct 20, 2020 at 09:44:23PM -0700, Eric Biggers wrote:
> On Fri, Oct 16, 2020 at 08:20:44AM +0100, Christoph Hellwig wrote:
> > And this just validates my argument that calling the inline crypto work
> > directly from the block layer instead of just down below in blk-mq was
> > wrong.  We should not require any support from stacking drivers at the
> > keyslot manager level.
> 
> I'm not sure what you're referring to here; could you clarify?
> 
> It's true that device-mapper devices don't need the actual keyslot management.
> But they do need the ability to expose crypto capabilities as well as a key
> eviction function.  And those are currently handled by
> "struct blk_keyslot_manager".  Hence the need for a "passthrough keyslot
> manager" that does those other things but not the actual keyslot management.
> 
> FWIW, I suggested splitting these up, but you disagreed and said you wanted the
> crypto capabilities to remain part of the blk_keyslot_manager
> (https://lkml.kernel.org/linux-block/20200327170047.GA24682@infradead.org/).
> If you've now changed your mind, please be clear about it.
> 
I thought what Christoph meant (and of course, please let us know
if I'm misunderstanding you, Christoph) was that if blk-mq
handled all the blk-crypto stuff including deciding whether to
use the blk-crypto-fallback, and blk-mq was responsible for
calling out to blk-crypto-fallback if required, then the device
mapper wouldn't need to expose any capabilities at all... or at
least not for bio-based device mapper devices, since bios would
go through the device mapper and eventually hit blk-mq which
would then handle crypto appropriately.

We couldn't do that because the crypto ciphers for the
blk-crypto-fallback couldn't be allocated on the data path (so we
needed fscrypt to ask blk-crypto to check whether the underlying
device supported the crypto capabilities it required, and
allocate ciphers appropriately, before the data path required the
ciphers). I'm checking to see if anything has changed w.r.t
allocating crypto ciphers on the data path (and checking if
memalloc_noio_save/restore() helps with that).
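
(For reference, the pattern in question is roughly the following sketch -
memalloc_noio_save/restore() makes every allocation in the scoped region
behave as if it were GFP_NOIO:)

	unsigned int noio_flags;
	struct crypto_skcipher *tfm;

	noio_flags = memalloc_noio_save();
	/* allocations here implicitly behave as GFP_NOIO */
	tfm = crypto_alloc_skcipher("xts(aes)", 0, 0);
	memalloc_noio_restore(noio_flags);
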
> - Eric


* Re: [dm-devel] [PATCH v2 3/4] dm: add support for passing through inline crypto support
  2020-10-15 21:46 ` [dm-devel] [PATCH v2 3/4] dm: add support for passing through inline crypto support Satya Tangirala
@ 2020-10-25 21:02   ` kernel test robot
  2020-10-25 21:02   ` [dm-devel] [PATCH] dm: fix err_cast.cocci warnings kernel test robot
  2020-10-27 21:31   ` [dm-devel] [PATCH v2 3/4] dm: add support for passing through inline crypto support Eric Biggers
  2 siblings, 0 replies; 18+ messages in thread
From: kernel test robot @ 2020-10-25 21:02 UTC (permalink / raw)
  To: Satya Tangirala, linux-block, linux-kernel, dm-devel
  Cc: Jens Axboe, kbuild-all, Mike Snitzer, Eric Biggers,
	Satya Tangirala, Alasdair Kergon

[-- Attachment #1: Type: text/plain, Size: 1106 bytes --]

Hi Satya,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on dm/for-next]
[also build test WARNING on block/for-next linus/master linux/master v5.9 next-20201023]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Satya-Tangirala/add-support-for-inline-encryption-to-device-mapper/20201016-054900
base:   https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git for-next
config: i386-randconfig-c001-20201025 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>


"coccinelle warnings: (new ones prefixed by >>)"
>> drivers/md/dm.c:2204:12-19: WARNING: ERR_CAST can be used with ksm

Please review and possibly fold the followup patch.

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 37114 bytes --]

[-- Attachment #3: Type: text/plain, Size: 93 bytes --]


* [dm-devel] [PATCH] dm: fix err_cast.cocci warnings
  2020-10-15 21:46 ` [dm-devel] [PATCH v2 3/4] dm: add support for passing through inline crypto support Satya Tangirala
  2020-10-25 21:02   ` kernel test robot
@ 2020-10-25 21:02   ` kernel test robot
  2020-10-27 21:31   ` [dm-devel] [PATCH v2 3/4] dm: add support for passing through inline crypto support Eric Biggers
  2 siblings, 0 replies; 18+ messages in thread
From: kernel test robot @ 2020-10-25 21:02 UTC (permalink / raw)
  To: Satya Tangirala, linux-block, linux-kernel, dm-devel
  Cc: Jens Axboe, kbuild-all, Mike Snitzer, Eric Biggers,
	Satya Tangirala, Alasdair Kergon

From: kernel test robot <lkp@intel.com>

drivers/md/dm.c:2204:12-19: WARNING: ERR_CAST can be used with ksm


 Use ERR_CAST inlined function instead of ERR_PTR(PTR_ERR(...))

Generated by: scripts/coccinelle/api/err_cast.cocci

CC: Satya Tangirala <satyat@google.com>
Signed-off-by: kernel test robot <lkp@intel.com>
---

url:    https://github.com/0day-ci/linux/commits/Satya-Tangirala/add-support-for-inline-encryption-to-device-mapper/20201016-054900
base:   https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git for-next

 dm.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2201,7 +2201,7 @@ static struct dm_table *__bind(struct ma
 
 	ksm = dm_init_inline_encryption(md, t);
 	if (IS_ERR(ksm)) {
-		old_map = ERR_PTR(PTR_ERR(ksm));
+		old_map = ERR_CAST(ksm);
 		goto out;
 	}
 


* Re: [dm-devel] [PATCH v2 1/4] block: keyslot-manager: Introduce passthrough keyslot manager
  2020-10-15 21:46 ` [dm-devel] [PATCH v2 1/4] block: keyslot-manager: Introduce passthrough keyslot manager Satya Tangirala
  2020-10-16  7:20   ` Christoph Hellwig
@ 2020-10-27 20:04   ` Eric Biggers
  1 sibling, 0 replies; 18+ messages in thread
From: Eric Biggers @ 2020-10-27 20:04 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: Jens Axboe, Mike Snitzer, linux-kernel, linux-block, dm-devel,
	Alasdair Kergon

On Thu, Oct 15, 2020 at 09:46:29PM +0000, Satya Tangirala wrote:
> +/**
> + * blk_ksm_init_passthrough() - Init a passthrough keyslot manager
> + * @ksm: The keyslot manager to init
> + *
> + * Initialize a passthrough keyslot manager.
> + * Called by e.g. storage drivers to set up a keyslot manager in their
> + * request_queue, when the storage driver wants to manage its keys by itself.
> + * This is useful for inline encryption hardware that doesn't have the concept
> + * of keyslots, and for layered devices.
> + *
> + * See blk_ksm_init() for more details about the parameters.
> + */
> +void blk_ksm_init_passthrough(struct blk_keyslot_manager *ksm)
> +{
> +	memset(ksm, 0, sizeof(*ksm));
> +	init_rwsem(&ksm->lock);
> +}
> +EXPORT_SYMBOL_GPL(blk_ksm_init_passthrough);

The last paragraph of the comment ("See blk_ksm_init() for more details about
the parameters.") isn't useful and should be removed.

Otherwise this patch looks fine.  You can add:

Reviewed-by: Eric Biggers <ebiggers@google.com>

- Eric


* Re: [dm-devel] [PATCH v2 4/4] dm: enable may_passthrough_inline_crypto on some targets
  2020-10-15 21:46 ` [dm-devel] [PATCH v2 4/4] dm: enable may_passthrough_inline_crypto on some targets Satya Tangirala
@ 2020-10-27 21:10   ` Eric Biggers
  0 siblings, 0 replies; 18+ messages in thread
From: Eric Biggers @ 2020-10-27 21:10 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: Jens Axboe, Mike Snitzer, linux-kernel, linux-block, dm-devel,
	Alasdair Kergon

On Thu, Oct 15, 2020 at 09:46:32PM +0000, Satya Tangirala wrote:
> dm-linear and dm-flakey obviously can pass through inline crypto support.
> 
> Co-developed-by: Eric Biggers <ebiggers@google.com>
> Signed-off-by: Eric Biggers <ebiggers@google.com>
> Signed-off-by: Satya Tangirala <satyat@google.com>
> ---
>  drivers/md/dm-flakey.c | 1 +
>  drivers/md/dm-linear.c | 1 +
>  2 files changed, 2 insertions(+)
> 
> diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c
> index a2cc9e45cbba..655286dacc35 100644
> --- a/drivers/md/dm-flakey.c
> +++ b/drivers/md/dm-flakey.c
> @@ -253,6 +253,7 @@ static int flakey_ctr(struct dm_target *ti, unsigned int argc, char **argv)
>  	ti->num_discard_bios = 1;
>  	ti->per_io_data_size = sizeof(struct per_bio_data);
>  	ti->private = fc;
> +	ti->may_passthrough_inline_crypto = true;
>  	return 0;
>  
>  bad:
> diff --git a/drivers/md/dm-linear.c b/drivers/md/dm-linear.c
> index 00774b5d7668..345e22b9be5d 100644
> --- a/drivers/md/dm-linear.c
> +++ b/drivers/md/dm-linear.c
> @@ -62,6 +62,7 @@ static int linear_ctr(struct dm_target *ti, unsigned int argc, char **argv)
>  	ti->num_secure_erase_bios = 1;
>  	ti->num_write_same_bios = 1;
>  	ti->num_write_zeroes_bios = 1;
> +	ti->may_passthrough_inline_crypto = true;
>  	ti->private = lc;
>  	return 0;

How about instead using a flag DM_TARGET_PASSES_CRYPTO in target_type::features,
analogous to DM_TARGET_PASSES_INTEGRITY?
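
Something along these lines (the flag value below is just picked for
illustration):

	/* include/linux/device-mapper.h */
	#define DM_TARGET_PASSES_CRYPTO		0x00000100
	#define dm_target_passes_crypto(type) \
		((type)->features & DM_TARGET_PASSES_CRYPTO)

	/* drivers/md/dm-linear.c */
	static struct target_type linear_target = {
		.name	  = "linear",
		.features = DM_TARGET_PASSES_INTEGRITY | DM_TARGET_PASSES_CRYPTO,
		/* ... */
	};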

- Eric


* Re: [dm-devel] [PATCH v2 3/4] dm: add support for passing through inline crypto support
  2020-10-15 21:46 ` [dm-devel] [PATCH v2 3/4] dm: add support for passing through inline crypto support Satya Tangirala
  2020-10-25 21:02   ` kernel test robot
  2020-10-25 21:02   ` [dm-devel] [PATCH] dm: fix err_cast.cocci warnings kernel test robot
@ 2020-10-27 21:31   ` Eric Biggers
  2020-10-27 23:58     ` Satya Tangirala
  2 siblings, 1 reply; 18+ messages in thread
From: Eric Biggers @ 2020-10-27 21:31 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: Jens Axboe, Mike Snitzer, linux-kernel, linux-block, dm-devel,
	Alasdair Kergon

On Thu, Oct 15, 2020 at 09:46:31PM +0000, Satya Tangirala wrote:
> Update the device-mapper core to support exposing the inline crypto
> support of the underlying device(s) through the device-mapper device.
> 
> This works by creating a "passthrough keyslot manager" for the dm
> device, which declares support for encryption settings which all
> underlying devices support.  When a supported setting is used, the bio
> cloning code handles cloning the crypto context to the bios for all the
> underlying devices.  When an unsupported setting is used, the blk-crypto
> fallback is used as usual.
> 
> Crypto support on each underlying device is ignored unless the
> corresponding dm target opts into exposing it.  This is needed because
> for inline crypto to semantically operate on the original bio, the data
> must not be transformed by the dm target.  Thus, targets like dm-linear
> can expose crypto support of the underlying device, but targets like
> dm-crypt can't.  (dm-crypt could use inline crypto itself, though.)
> 
> When a key is evicted from the dm device, it is evicted from all
> underlying devices.
> 
> A DM device's table can only be changed if the "new" inline encryption
> capabilities are a superset of the "old" inline encryption capabilities.
> Attempts to make changes to the table that result in some inline encryption
> capability becoming no longer supported will be rejected.
> 
> Co-developed-by: Eric Biggers <ebiggers@google.com>
> Signed-off-by: Eric Biggers <ebiggers@google.com>
> Signed-off-by: Satya Tangirala <satyat@google.com>
> ---
>  block/blk-crypto.c              |   1 +
>  block/keyslot-manager.c         |  89 +++++++++++++
>  drivers/md/dm-ioctl.c           |   8 ++
>  drivers/md/dm.c                 | 217 +++++++++++++++++++++++++++++++-
>  drivers/md/dm.h                 |  19 +++
>  include/linux/device-mapper.h   |   6 +
>  include/linux/keyslot-manager.h |  17 +++
>  7 files changed, 356 insertions(+), 1 deletion(-)

I'm having a hard time understanding what's going on in this patch now.  Besides
the simplifications I'm suggesting in other comments below, you should consider
splitting this into more than one patch.  The block layer changes could be a
separate patch, as could the key eviction support.

> 
> diff --git a/block/blk-crypto.c b/block/blk-crypto.c
> index 5da43f0973b4..c2be8f15006c 100644
> --- a/block/blk-crypto.c
> +++ b/block/blk-crypto.c
> @@ -409,3 +409,4 @@ int blk_crypto_evict_key(struct request_queue *q,
>  	 */
>  	return blk_crypto_fallback_evict_key(key);
>  }
> +EXPORT_SYMBOL_GPL(blk_crypto_evict_key);
> diff --git a/block/keyslot-manager.c b/block/keyslot-manager.c
> index 5ad476dafeab..e16e4a074765 100644
> --- a/block/keyslot-manager.c
> +++ b/block/keyslot-manager.c
> @@ -416,6 +416,95 @@ void blk_ksm_unregister(struct request_queue *q)
>  {
>  	q->ksm = NULL;
>  }
> +EXPORT_SYMBOL_GPL(blk_ksm_unregister);

blk_ksm_unregister() doesn't seem to be necessary, since it just sets a pointer
to NULL, which the callers could easily do themselves.

> +/**
> + * blk_ksm_intersect_modes() - restrict supported modes by child device
> + * @parent: The keyslot manager for parent device
> + * @child: The keyslot manager for child device, or NULL
> + *
> + * Clear any crypto mode support bits in @parent that aren't set in @child.
> + * If @child is NULL, then all parent bits are cleared.
> + *
> + * Only use this when setting up the keyslot manager for a layered device,
> + * before it's been exposed yet.
> + */
> +void blk_ksm_intersect_modes(struct blk_keyslot_manager *parent,
> +			     const struct blk_keyslot_manager *child)
> +{
> +	if (child) {
> +		unsigned int i;
> +
> +		parent->max_dun_bytes_supported =
> +			min(parent->max_dun_bytes_supported,
> +			    child->max_dun_bytes_supported);
> +		for (i = 0; i < ARRAY_SIZE(child->crypto_modes_supported);
> +		     i++) {
> +			parent->crypto_modes_supported[i] &=
> +				child->crypto_modes_supported[i];
> +		}
> +	} else {
> +		parent->max_dun_bytes_supported = 0;
> +		memset(parent->crypto_modes_supported, 0,
> +		       sizeof(parent->crypto_modes_supported));
> +	}
> +}
> +EXPORT_SYMBOL_GPL(blk_ksm_intersect_modes);
> +
> +/**
> + * blk_ksm_is_superset() - Check if a KSM supports a superset of crypto modes
> + *			   and DUN bytes that another KSM supports.
> + * @ksm_superset: The KSM that we want to verify is a superset
> + * @ksm_subset: The KSM that we want to verify is a subset
> + *
> + * Return: True if @ksm_superset supports a superset of the crypto modes and DUN
> + *	   bytes that @ksm_subset supports.
> + */
> +bool blk_ksm_is_superset(struct blk_keyslot_manager *ksm_superset,
> +			 struct blk_keyslot_manager *ksm_subset)

blk_ksm_is_superset() is confusing because it actually does "superset or the
same", not just "superset".  That *is* the mathematical definition of superset,
but it may not be what people expect when they read this...  Is there a better
name, or can the comment properly explain it?

> +/**
> + * blk_ksm_update_capabilities() - Update the restrictions of a KSM to those of
> + *				   another KSM
> + * @target_ksm: The KSM whose restrictions to update.
> + * @reference_ksm: The KSM to whose restrictions this function will update
> + *		   @target_ksm's restrictions to,
> + */
> +void blk_ksm_update_capabilities(struct blk_keyslot_manager *target_ksm,
> +				 struct blk_keyslot_manager *reference_ksm)
> +{
> +	memcpy(target_ksm->crypto_modes_supported,
> +	       reference_ksm->crypto_modes_supported,
> +	       sizeof(target_ksm->crypto_modes_supported));
> +
> +	target_ksm->max_dun_bytes_supported =
> +				reference_ksm->max_dun_bytes_supported;
> +}
> +EXPORT_SYMBOL_GPL(blk_ksm_update_capabilities);

Wouldn't it be easier to replace the original blk_keyslot_manager, rather than
modify it?  Then blk_ksm_update_capabilities() wouldn't be needed.

> diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
> index cd0478d44058..2b3efa9f9fae 100644
> --- a/drivers/md/dm-ioctl.c
> +++ b/drivers/md/dm-ioctl.c
> @@ -1358,6 +1358,10 @@ static int table_load(struct file *filp, struct dm_ioctl *param, size_t param_si
>  		goto err_unlock_md_type;
>  	}
>  
> +	r = dm_verify_inline_encryption(md, t);
> +	if (r)
> +		goto err_unlock_md_type;
> +
>  	if (dm_get_md_type(md) == DM_TYPE_NONE) {
>  		/* Initial table load: acquire type of table. */
>  		dm_set_md_type(md, dm_table_get_type(t));
> @@ -2114,6 +2118,10 @@ int __init dm_early_create(struct dm_ioctl *dmi,
>  	if (r)
>  		goto err_destroy_table;
>  
> +	r = dm_verify_inline_encryption(md, t);
> +	if (r)
> +		goto err_destroy_table;
> +
>  	md->type = dm_table_get_type(t);
>  	/* setup md->queue to reflect md's type (may block) */
>  	r = dm_setup_md_queue(md, t);

Both table_load() and dm_early_create() call dm_setup_md_queue().  Wouldn't it
be simpler to handle inline encryption in dm_setup_md_queue(), instead of doing
it in both table_load() and dm_early_create()?

> +/*
> + * Constructs and returns a keyslot manager that represents the crypto
> + * capabilities of the devices described by the dm_table. However, if the
> + * constructed keyslot manager does not support a superset of the crypto
> + * capabilities supported by the current keyslot manager of the mapped_device,
> + * it returns an error instead, since we don't support restricting crypto
> + * capabilities on table changes.
> + */
> +static struct blk_keyslot_manager *
> +dm_init_inline_encryption(struct mapped_device *md, struct dm_table *t)
> +{
> +	struct blk_keyslot_manager *ksm;
> +	struct dm_target *ti;
> +	unsigned int i;
> +
> +	ksm = kmalloc(sizeof(*ksm), GFP_KERNEL);
> +	if (!ksm)
> +		return ERR_PTR(-EINVAL);

ENOMEM, not EINVAL.

> +	blk_ksm_init_passthrough(ksm);
> +	ksm->ksm_ll_ops = dm_ksm_ll_ops;
> +	ksm->max_dun_bytes_supported = UINT_MAX;
> +	memset(ksm->crypto_modes_supported, 0xFF,
> +	       sizeof(ksm->crypto_modes_supported));
> +	ksm->priv = md;
> +
> +	for (i = 0; i < dm_table_get_num_targets(t); i++) {
> +		ti = dm_table_get_target(t, i);
> +
> +		if (!ti->may_passthrough_inline_crypto) {
> +			blk_ksm_intersect_modes(ksm, NULL);
> +			break;
> +		}
> +		if (!ti->type->iterate_devices)
> +			continue;
> +		ti->type->iterate_devices(ti, device_intersect_crypto_modes,
> +					  ksm);
> +	}
> +
> +	if (!blk_ksm_is_superset(ksm, md->queue->ksm)) {
> +		DMWARN("Inline encryption capabilities of new DM table were more restrictive than the old table's. This is not supported!");
> +		blk_ksm_destroy(ksm);
> +		return ERR_PTR(-EOPNOTSUPP);

Missing kfree(ksm).

Also it looks like other code is using EINVAL for a bad dm table.
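
I.e., something like (untested):

	if (!blk_ksm_is_superset(ksm, md->queue->ksm)) {
		DMWARN("Inline encryption capabilities of new DM table were more restrictive than the old table's. This is not supported!");
		blk_ksm_destroy(ksm);
		kfree(ksm);
		return ERR_PTR(-EINVAL);
	}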

> +	}
> +
> +	return ksm;

How about returning NULL if no crypto modes are actually supported?
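
The "is the KSM empty" loop from dm_update_keyslot_manager() could move here
for that, e.g. (untested, assumes a local 'bool ksm_is_empty'):

	ksm_is_empty = true;
	for (i = 0; i < ARRAY_SIZE(ksm->crypto_modes_supported); i++) {
		if (ksm->crypto_modes_supported[i]) {
			ksm_is_empty = false;
			break;
		}
	}
	if (ksm_is_empty) {
		blk_ksm_destroy(ksm);
		kfree(ksm);
		return NULL;
	}

Callers would then just need to treat a NULL return as "no KSM".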

> +/**
> + * dm_verify_inline_encryption() - Verifies that the current keyslot manager of
> + *				   the mapped_device can be replaced by the
> + *				   keyslot manager of a given dm_table.
> + * @md: The mapped_device
> + * @t: The dm_table
> + *
> + * In particular, this function checks that the keyslot manager that will be
> + * constructed for the dm_table will support a superset of the capabilities that
> + * the current keyslot manager of the mapped_device supports.
> + *
> + * Return: 0 if the table's keyslot_manager can replace the current keyslot
> + *	   manager of the mapped_device. Negative value otherwise.
> + */
> +int dm_verify_inline_encryption(struct mapped_device *md, struct dm_table *t)
> +{
> +	struct blk_keyslot_manager *ksm = dm_init_inline_encryption(md, t);
> +
> +	if (IS_ERR(ksm))
> +		return PTR_ERR(ksm);
> +	blk_ksm_destroy(ksm);
> +
> +	return 0;
> +}

This function seems redundant with dm_init_inline_encryption().  Wouldn't it be
simpler to do:

- dm_setup_md_queue() and dm_swap_table() call dm_init_inline_encryption() after
  dm_calculate_queue_limits().

- ksm gets passed to dm_table_set_restrictions(), which calls
  dm_update_keyslot_manager() (maybe rename to dm_update_inline_encryption()?)
  to actually set q->ksm.

That way, the crypto capabilities would be handled similarly to how the
queue_limits are already handled.

> +static void dm_update_keyslot_manager(struct mapped_device *md,
> +				      struct blk_keyslot_manager *ksm)
> +{
> +	bool ksm_is_empty = true;
> +	int i;
> +
> +	/*
> +	 * If the new KSM doesn't actually support any crypto modes, we may as
> +	 * well set a NULL ksm.
> +	 */
> +	ksm_is_empty = true;
> +	for (i = 0; i < ARRAY_SIZE(ksm->crypto_modes_supported); i++) {
> +		if (ksm->crypto_modes_supported[i]) {
> +			ksm_is_empty = false;
> +			break;
> +		}
> +	}

dm_init_inline_encryption() seems like a better place for this "are no modes
supported" logic.

> +	if (ksm_is_empty) {
> +		blk_ksm_destroy(ksm);
> +
> +		/* At this point, md->queue->ksm must also be NULL, since we're
> +		 * guaranteed that ksm is a superset of md->queue->ksm, and we
> +		 * never set md->queue->ksm to a non-null empty ksm.
> +		 */
> +		if (WARN_ON(md->queue->ksm))
> +			blk_ksm_register(NULL, md->queue);
> +		return;
> +	}
> +
> +	/* Make the ksm less restrictive */
> +	if (!md->queue->ksm) {
> +		blk_ksm_register(ksm, md->queue);
> +	} else {
> +		blk_ksm_update_capabilities(md->queue->ksm, ksm);
> +		blk_ksm_destroy(ksm);
> +	}
> +}

Wouldn't it be simpler to just destroy (and free) the existing
blk_keyslot_manager (if any), then set the new one (if it's not NULL)?
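
I.e., something like (untested):

	if (md->queue->ksm) {
		blk_ksm_destroy(md->queue->ksm);
		kfree(md->queue->ksm);
		md->queue->ksm = NULL;
	}
	if (ksm)
		blk_ksm_register(ksm, md->queue);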

> +static void dm_destroy_inline_encryption(struct mapped_device *md)
> +{
> +	if (!md->queue->ksm)
> +		return;
> +	blk_ksm_destroy(md->queue->ksm);

Missing kfree().

> +	blk_ksm_unregister(md->queue);
> +}
> +
> +#else /* CONFIG_BLK_INLINE_ENCRYPTION */
> +
> +static inline struct blk_keyslot_manager *
> +dm_init_inline_encryption(struct mapped_device *md, struct dm_table *t)
> +{
> +	return NULL;
> +}

Seems it would be simpler for these functions to take a request_queue instead of
a mapped_device.
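
E.g., the teardown path would then just be (untested):

static void dm_destroy_inline_encryption(struct request_queue *q)
{
	if (!q->ksm)
		return;
	blk_ksm_destroy(q->ksm);
	kfree(q->ksm);
	q->ksm = NULL;
}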

>  /*
>   * Returns old map, which caller must destroy.
>   */
> @@ -1959,6 +2164,7 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
>  	struct request_queue *q = md->queue;
>  	bool request_based = dm_table_request_based(t);
>  	sector_t size;
> +	struct blk_keyslot_manager *ksm;
>  	int ret;
>  
>  	lockdep_assert_held(&md->suspend_lock);
> @@ -1994,12 +2200,21 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
>  		md->immutable_target = dm_table_get_immutable_target(t);
>  	}
>  
> +	ksm = dm_init_inline_encryption(md, t);
> +	if (IS_ERR(ksm)) {
> +		old_map = ERR_PTR(PTR_ERR(ksm));
> +		goto out;
> +	}

It seems too late to fail here, since the mapped_device already started being
updated.  What I suggested above would address this.

> +#ifdef CONFIG_BLK_INLINE_ENCRYPTION
> +
>  /**
>   * struct blk_ksm_ll_ops - functions to manage keyslots in hardware
>   * @keyslot_program:	Program the specified key into the specified slot in the
> @@ -106,6 +108,21 @@ void blk_ksm_reprogram_all_keys(struct blk_keyslot_manager *ksm);
>  
>  void blk_ksm_destroy(struct blk_keyslot_manager *ksm);
>  
> +void blk_ksm_intersect_modes(struct blk_keyslot_manager *parent,
> +			     const struct blk_keyslot_manager *child);
> +
>  void blk_ksm_init_passthrough(struct blk_keyslot_manager *ksm);
>  
> +bool blk_ksm_is_superset(struct blk_keyslot_manager *ksm_superset,
> +			 struct blk_keyslot_manager *ksm_subset);
> +
> +void blk_ksm_update_capabilities(struct blk_keyslot_manager *target_ksm,
> +				 struct blk_keyslot_manager *reference_ksm);
> +
> +#else /* CONFIG_BLK_INLINE_ENCRYPTION */
> +
> +static inline void blk_ksm_destroy(struct blk_keyslot_manager *ksm) { }
> +
> +#endif /* CONFIG_BLK_INLINE_ENCRYPTION */

Is the blk_ksm_destroy() stub really needed?

- Eric


* Re: [dm-devel] [PATCH v2 3/4] dm: add support for passing through inline crypto support
  2020-10-27 21:31   ` [dm-devel] [PATCH v2 3/4] dm: add support for passing through inline crypto support Eric Biggers
@ 2020-10-27 23:58     ` Satya Tangirala
  2020-10-28  0:17       ` Eric Biggers
  0 siblings, 1 reply; 18+ messages in thread
From: Satya Tangirala @ 2020-10-27 23:58 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Jens Axboe, Mike Snitzer, linux-kernel, linux-block, dm-devel,
	Alasdair Kergon

On Tue, Oct 27, 2020 at 02:31:51PM -0700, Eric Biggers wrote:
> On Thu, Oct 15, 2020 at 09:46:31PM +0000, Satya Tangirala wrote:
> > Update the device-mapper core to support exposing the inline crypto
> > support of the underlying device(s) through the device-mapper device.
> > 
> > This works by creating a "passthrough keyslot manager" for the dm
> > device, which declares support for encryption settings which all
> > underlying devices support.  When a supported setting is used, the bio
> > cloning code handles cloning the crypto context to the bios for all the
> > underlying devices.  When an unsupported setting is used, the blk-crypto
> > fallback is used as usual.
> > 
> > Crypto support on each underlying device is ignored unless the
> > corresponding dm target opts into exposing it.  This is needed because
> > for inline crypto to semantically operate on the original bio, the data
> > must not be transformed by the dm target.  Thus, targets like dm-linear
> > can expose crypto support of the underlying device, but targets like
> > dm-crypt can't.  (dm-crypt could use inline crypto itself, though.)
> > 
> > When a key is evicted from the dm device, it is evicted from all
> > underlying devices.
> > 
> > A DM device's table can only be changed if the "new" inline encryption
> > capabilities are a superset of the "old" inline encryption capabilities.
> > Attempts to make changes to the table that result in some inline encryption
> > capability becoming no longer supported will be rejected.
> > 
> > Co-developed-by: Eric Biggers <ebiggers@google.com>
> > Signed-off-by: Eric Biggers <ebiggers@google.com>
> > Signed-off-by: Satya Tangirala <satyat@google.com>
> > ---
> >  block/blk-crypto.c              |   1 +
> >  block/keyslot-manager.c         |  89 +++++++++++++
> >  drivers/md/dm-ioctl.c           |   8 ++
> >  drivers/md/dm.c                 | 217 +++++++++++++++++++++++++++++++-
> >  drivers/md/dm.h                 |  19 +++
> >  include/linux/device-mapper.h   |   6 +
> >  include/linux/keyslot-manager.h |  17 +++
> >  7 files changed, 356 insertions(+), 1 deletion(-)
> 
> I'm having a hard time understanding what's going on in this patch now.  Besides
> the simplifications I'm suggesting in other comments below, you should consider
> splitting this into more than one patch.  The block layer changes could be a
> separate patch, as could the key eviction support.
> 
Sure - I'll also add more details on the patch in the commit message.
> > 
> > diff --git a/block/blk-crypto.c b/block/blk-crypto.c
> > index 5da43f0973b4..c2be8f15006c 100644
> > --- a/block/blk-crypto.c
> > +++ b/block/blk-crypto.c
> > @@ -409,3 +409,4 @@ int blk_crypto_evict_key(struct request_queue *q,
> >  	 */
> >  	return blk_crypto_fallback_evict_key(key);
> >  }
> > +EXPORT_SYMBOL_GPL(blk_crypto_evict_key);
> > diff --git a/block/keyslot-manager.c b/block/keyslot-manager.c
> > index 5ad476dafeab..e16e4a074765 100644
> > --- a/block/keyslot-manager.c
> > +++ b/block/keyslot-manager.c
> > @@ -416,6 +416,95 @@ void blk_ksm_unregister(struct request_queue *q)
> >  {
> >  	q->ksm = NULL;
> >  }
> > +EXPORT_SYMBOL_GPL(blk_ksm_unregister);
> 
> blk_ksm_unregister() doesn't seem to be necessary, since it just sets a pointer
> to NULL, which the callers could easily do themselves.
> 
> > +/**
> > + * blk_ksm_intersect_modes() - restrict supported modes by child device
> > + * @parent: The keyslot manager for parent device
> > + * @child: The keyslot manager for child device, or NULL
> > + *
> > + * Clear any crypto mode support bits in @parent that aren't set in @child.
> > + * If @child is NULL, then all parent bits are cleared.
> > + *
> > + * Only use this when setting up the keyslot manager for a layered device,
> > + * before it's been exposed yet.
> > + */
> > +void blk_ksm_intersect_modes(struct blk_keyslot_manager *parent,
> > +			     const struct blk_keyslot_manager *child)
> > +{
> > +	if (child) {
> > +		unsigned int i;
> > +
> > +		parent->max_dun_bytes_supported =
> > +			min(parent->max_dun_bytes_supported,
> > +			    child->max_dun_bytes_supported);
> > +		for (i = 0; i < ARRAY_SIZE(child->crypto_modes_supported);
> > +		     i++) {
> > +			parent->crypto_modes_supported[i] &=
> > +				child->crypto_modes_supported[i];
> > +		}
> > +	} else {
> > +		parent->max_dun_bytes_supported = 0;
> > +		memset(parent->crypto_modes_supported, 0,
> > +		       sizeof(parent->crypto_modes_supported));
> > +	}
> > +}
> > +EXPORT_SYMBOL_GPL(blk_ksm_intersect_modes);
> > +
> > +/**
> > + * blk_ksm_is_superset() - Check if a KSM supports a superset of crypto modes
> > + *			   and DUN bytes that another KSM supports.
> > + * @ksm_superset: The KSM that we want to verify is a superset
> > + * @ksm_subset: The KSM that we want to verify is a subset
> > + *
> > + * Return: True if @ksm_superset supports a superset of the crypto modes and DUN
> > + *	   bytes that @ksm_subset supports.
> > + */
> > +bool blk_ksm_is_superset(struct blk_keyslot_manager *ksm_superset,
> > +			 struct blk_keyslot_manager *ksm_subset)
> 
> blk_ksm_is_superset() is confusing because it actually does "superset or the
> same", not just "superset".  That *is* the mathematical definition of superset,
> but it may not be what people expect when they read this...  Is there a better
> name, or can the comment properly explain it?
> 
A better name still eludes me; if I can't come up with one, I'll at least
make the comment explain it properly.
> > +/**
> > + * blk_ksm_update_capabilities() - Update the restrictions of a KSM to those of
> > + *				   another KSM
> > + * @target_ksm: The KSM whose restrictions to update.
> > + * @reference_ksm: The KSM whose restrictions @target_ksm's restrictions
> > + *		   will be updated to match.
> > + */
> > +void blk_ksm_update_capabilities(struct blk_keyslot_manager *target_ksm,
> > +				 struct blk_keyslot_manager *reference_ksm)
> > +{
> > +	memcpy(target_ksm->crypto_modes_supported,
> > +	       reference_ksm->crypto_modes_supported,
> > +	       sizeof(target_ksm->crypto_modes_supported));
> > +
> > +	target_ksm->max_dun_bytes_supported =
> > +				reference_ksm->max_dun_bytes_supported;
> > +}
> > +EXPORT_SYMBOL_GPL(blk_ksm_update_capabilities);
> 
> Wouldn't it be easier to replace the original blk_keyslot_manager, rather than
> modify it?  Then blk_ksm_update_capabilities() wouldn't be needed.
> 
I didn't want to replace the original blk_keyslot_manager because it's
possible that e.g. fscrypt is checking for crypto capabilities support
via blk_ksm_crypto_cfg_supported() when DM wants to replace the
blk_keyslot_manager. DM would have to free the memory used by the
blk_keyslot_manager, but blk_ksm_crypto_cfg_supported() might still
be trying to access that memory. I did it this way to avoid having to
add refcounts or something else to the blk_keyslot_manager...(And I
didn't bother adding any synchronization code since the capabilities
only ever expand, and never contract).
> > diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
> > index cd0478d44058..2b3efa9f9fae 100644
> > --- a/drivers/md/dm-ioctl.c
> > +++ b/drivers/md/dm-ioctl.c
> > @@ -1358,6 +1358,10 @@ static int table_load(struct file *filp, struct dm_ioctl *param, size_t param_si
> >  		goto err_unlock_md_type;
> >  	}
> >  
> > +	r = dm_verify_inline_encryption(md, t);
> > +	if (r)
> > +		goto err_unlock_md_type;
> > +
> >  	if (dm_get_md_type(md) == DM_TYPE_NONE) {
> >  		/* Initial table load: acquire type of table. */
> >  		dm_set_md_type(md, dm_table_get_type(t));
> > @@ -2114,6 +2118,10 @@ int __init dm_early_create(struct dm_ioctl *dmi,
> >  	if (r)
> >  		goto err_destroy_table;
> >  
> > +	r = dm_verify_inline_encryption(md, t);
> > +	if (r)
> > +		goto err_destroy_table;
> > +
> >  	md->type = dm_table_get_type(t);
> >  	/* setup md->queue to reflect md's type (may block) */
> >  	r = dm_setup_md_queue(md, t);
> 
> Both table_load() and dm_early_create() call dm_setup_md_queue().  Wouldn't it
> be simpler to handle inline encryption in dm_setup_md_queue(), instead of doing
> it in both table_load() and dm_early_create()?
> 
table_load() only calls dm_setup_md_queue() on initial table load (when
the md_type is DM_TYPE_NONE), so I can't call
dm_verify_inline_encryption() in only dm_setup_md_queue(), because
dm_verify_inline_encryption() needs to run on every table load.
> > +/*
> > + * Constructs and returns a keyslot manager that represents the crypto
> > + * capabilities of the devices described by the dm_table. However, if the
> > + * constructed keyslot manager does not support a superset of the crypto
> > + * capabilities supported by the current keyslot manager of the mapped_device,
> > + * it returns an error instead, since we don't support restricting crypto
> > + * capabilities on table changes.
> > + */
> > +static struct blk_keyslot_manager *
> > +dm_init_inline_encryption(struct mapped_device *md, struct dm_table *t)
> > +{
> > +	struct blk_keyslot_manager *ksm;
> > +	struct dm_target *ti;
> > +	unsigned int i;
> > +
> > +	ksm = kmalloc(sizeof(*ksm), GFP_KERNEL);
> > +	if (!ksm)
> > +		return ERR_PTR(-EINVAL);
> 
> ENOMEM, not EINVAL.
> 
Ahhh :(
> > +	blk_ksm_init_passthrough(ksm);
> > +	ksm->ksm_ll_ops = dm_ksm_ll_ops;
> > +	ksm->max_dun_bytes_supported = UINT_MAX;
> > +	memset(ksm->crypto_modes_supported, 0xFF,
> > +	       sizeof(ksm->crypto_modes_supported));
> > +	ksm->priv = md;
> > +
> > +	for (i = 0; i < dm_table_get_num_targets(t); i++) {
> > +		ti = dm_table_get_target(t, i);
> > +
> > +		if (!ti->may_passthrough_inline_crypto) {
> > +			blk_ksm_intersect_modes(ksm, NULL);
> > +			break;
> > +		}
> > +		if (!ti->type->iterate_devices)
> > +			continue;
> > +		ti->type->iterate_devices(ti, device_intersect_crypto_modes,
> > +					  ksm);
> > +	}
> > +
> > +	if (!blk_ksm_is_superset(ksm, md->queue->ksm)) {
> > +		DMWARN("Inline encryption capabilities of new DM table were more restrictive than the old table's. This is not supported!");
> > +		blk_ksm_destroy(ksm);
> > +		return ERR_PTR(-EOPNOTSUPP);
> 
> Missing kfree(ksm).
> 
Ah I totally forgot blk_ksm_destroy() doesn't free the memory used by
the ksm (not anymore at least, I'm getting confused by the numerous
revisions that code went through) - I'll need to fix that everywhere.
> Also it looks like other code is using EINVAL for a bad dm table.
> 
> > +	}
> > +
> > +	return ksm;
> 
> How about returning NULL if no crypto modes are actually supported?
> 
> > +/**
> > + * dm_verify_inline_encryption() - Verifies that the current keyslot manager of
> > + *				   the mapped_device can be replaced by the
> > + *				   keyslot manager of a given dm_table.
> > + * @md: The mapped_device
> > + * @t: The dm_table
> > + *
> > + * In particular, this function checks that the keyslot manager that will be
> > + * constructed for the dm_table will support a superset of the capabilities that
> > + * the current keyslot manager of the mapped_device supports.
> > + *
> > + * Return: 0 if the table's keyslot_manager can replace the current keyslot
> > + *	   manager of the mapped_device. Negative value otherwise.
> > + */
> > +int dm_verify_inline_encryption(struct mapped_device *md, struct dm_table *t)
> > +{
> > +	struct blk_keyslot_manager *ksm = dm_init_inline_encryption(md, t);
> > +
> > +	if (IS_ERR(ksm))
> > +		return PTR_ERR(ksm);
> > +	blk_ksm_destroy(ksm);
> > +
> > +	return 0;
> > +}
> 
> This function seems redundant with dm_init_inline_encryption().  Wouldn't it be
> simpler to do:
> 
> - dm_setup_md_queue() and dm_swap_table() call dm_init_inline_encryption() after
>   dm_calculate_queue_limits().
> 
> - ksm gets passed to dm_table_set_restrictions(), which calls
>   dm_update_keyslot_manager() (maybe rename to dm_update_inline_encryption()?)
>   to actually set q->ksm.
> 
> That way, the crypto capabilities would be handled similarly to how the
> queue_limits are already handled.
> 
If we call it from dm_swap_table(), we could have it pass the returned
ksm to __bind(), either as a new argument, or by adding the ksm to the
queue_limits (I'll have to check whether that's OK/a good idea in the first
place), and __bind() could then pass it on to dm_table_set_restrictions().

But the real issue is, I think we should check whether a new table is
valid (from the ksm capabilities support perspective) at the time that
table is loaded (as opposed to only checking it when DM attempts to swap
it in, which might be a lot later, when the user resumes the device) - so
I can't only call it from dm_setup_md_queue(), and I'd have to call it
from table_load() anyway. And the returned ksm that table_load() obtains
from dm_init_inline_encryption() can't really be used - because
1) the ksm constructed at dm_swap_table() might actually support more
capabilities than the ksm constructed in table_load(), because
underlying devices might get resumed, and have new tables swapped in,
and might support more capabilities than before
2) a subsequent dm_swap_table() call could fail for whatever reason, and
we'll need to revert to the current ksm.

What I'm doing right now is simply freeing the ksm returned by
dm_init_inline_encryption() whenever it's called from table_load()
(and I'm trying to make that process a little nicer by wrapping it in a
function called dm_verify_inline_encryption()) - so if we're going to
have to call dm_init_inline_encryption() and then free the returned
ksm in table_load() anyway, I think it might be better to keep
dm_verify_inline_encryption(), unless you'd prefer just open-coding the
function directly.
> > +static void dm_update_keyslot_manager(struct mapped_device *md,
> > +				      struct blk_keyslot_manager *ksm)
> > +{
> > +	bool ksm_is_empty = true;
> > +	int i;
> > +
> > +	/*
> > +	 * If the new KSM doesn't actually support any crypto modes, we may as
> > +	 * well set a NULL ksm.
> > +	 */
> > +	ksm_is_empty = true;
> > +	for (i = 0; i < ARRAY_SIZE(ksm->crypto_modes_supported); i++) {
> > +		if (ksm->crypto_modes_supported[i]) {
> > +			ksm_is_empty = false;
> > +			break;
> > +		}
> > +	}
> 
> dm_init_inline_encryption() seems like a better place for this "are no modes
> supported" logic.
> 
Alright :)
> > +	if (ksm_is_empty) {
> > +		blk_ksm_destroy(ksm);
> > +
> > +		/* At this point, md->queue->ksm must also be NULL, since we're
> > +		 * guaranteed that ksm is a superset of md->queue->ksm, and we
> > +		 * never set md->queue->ksm to a non-null empty ksm.
> > +		 */
> > +		if (WARN_ON(md->queue->ksm))
> > +			blk_ksm_register(NULL, md->queue);
> > +		return;
> > +	}
> > +
> > +	/* Make the ksm less restrictive */
> > +	if (!md->queue->ksm) {
> > +		blk_ksm_register(ksm, md->queue);
> > +	} else {
> > +		blk_ksm_update_capabilities(md->queue->ksm, ksm);
> > +		blk_ksm_destroy(ksm);
> > +	}
> > +}
> 
> Wouldn't it be simpler to just destroy (and free) the existing
> blk_keyslot_manager (if any), then set the new one (if it's not NULL)?
>
Yeah, I really wanted to do that too, but as I addressed above, I don't
think it's that straightforward :(
> > +static void dm_destroy_inline_encryption(struct mapped_device *md)
> > +{
> > +	if (!md->queue->ksm)
> > +		return;
> > +	blk_ksm_destroy(md->queue->ksm);
> 
> Missing kfree().
>
Thanks, will address this everywhere I call blk_ksm_destroy().
> > +	blk_ksm_unregister(md->queue);
> > +}
> > +
> > +#else /* CONFIG_BLK_INLINE_ENCRYPTION */
> > +
> > +static inline struct blk_keyslot_manager *
> > +dm_init_inline_encryption(struct mapped_device *md, struct dm_table *t)
> > +{
> > +	return NULL;
> > +}
> 
> Seems it would be simpler for these functions to take a request_queue instead of
> a mapped_device.
> 
> >  /*
> >   * Returns old map, which caller must destroy.
> >   */
> > @@ -1959,6 +2164,7 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
> >  	struct request_queue *q = md->queue;
> >  	bool request_based = dm_table_request_based(t);
> >  	sector_t size;
> > +	struct blk_keyslot_manager *ksm;
> >  	int ret;
> >  
> >  	lockdep_assert_held(&md->suspend_lock);
> > @@ -1994,12 +2200,21 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
> >  		md->immutable_target = dm_table_get_immutable_target(t);
> >  	}
> >  
> > +	ksm = dm_init_inline_encryption(md, t);
> > +	if (IS_ERR(ksm)) {
> > +		old_map = ERR_PTR(PTR_ERR(ksm));
> > +		goto out;
> > +	}
> 
> It seems too late to fail here, since the mapped_device already started being
> updated.  What I suggested above would address this.
>
Alright, I'll move the call to dm_init_inline_encryption() earlier, into
dm_swap_table().
> > +#ifdef CONFIG_BLK_INLINE_ENCRYPTION
> > +
> >  /**
> >   * struct blk_ksm_ll_ops - functions to manage keyslots in hardware
> >   * @keyslot_program:	Program the specified key into the specified slot in the
> > @@ -106,6 +108,21 @@ void blk_ksm_reprogram_all_keys(struct blk_keyslot_manager *ksm);
> >  
> >  void blk_ksm_destroy(struct blk_keyslot_manager *ksm);
> >  
> > +void blk_ksm_intersect_modes(struct blk_keyslot_manager *parent,
> > +			     const struct blk_keyslot_manager *child);
> > +
> >  void blk_ksm_init_passthrough(struct blk_keyslot_manager *ksm);
> >  
> > +bool blk_ksm_is_superset(struct blk_keyslot_manager *ksm_superset,
> > +			 struct blk_keyslot_manager *ksm_subset);
> > +
> > +void blk_ksm_update_capabilities(struct blk_keyslot_manager *target_ksm,
> > +				 struct blk_keyslot_manager *reference_ksm);
> > +
> > +#else /* CONFIG_BLK_INLINE_ENCRYPTION */
> > +
> > +static inline void blk_ksm_destroy(struct blk_keyslot_manager *ksm) { }
> > +
> > +#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
> 
> Is the blk_ksm_destroy() stub really needed?
>
I'm calling blk_ksm_destroy() from __bind() without any ifdefs, so I
think it's necessary - I'll check again just in case.
> - Eric


* Re: [dm-devel] [PATCH v2 3/4] dm: add support for passing through inline crypto support
  2020-10-27 23:58     ` Satya Tangirala
@ 2020-10-28  0:17       ` Eric Biggers
  2020-10-29  4:44         ` Satya Tangirala
  0 siblings, 1 reply; 18+ messages in thread
From: Eric Biggers @ 2020-10-28  0:17 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: Jens Axboe, Mike Snitzer, linux-kernel, linux-block, dm-devel,
	Alasdair Kergon

On Tue, Oct 27, 2020 at 11:58:47PM +0000, Satya Tangirala wrote:
> > > +/**
> > > + * blk_ksm_update_capabilities() - Update the restrictions of a KSM to those of
> > > + *				   another KSM
> > > + * @target_ksm: The KSM whose restrictions to update.
> > > + * @reference_ksm: The KSM whose restrictions @target_ksm's restrictions
> > > + *		   will be updated to match.
> > > + */
> > > +void blk_ksm_update_capabilities(struct blk_keyslot_manager *target_ksm,
> > > +				 struct blk_keyslot_manager *reference_ksm)
> > > +{
> > > +	memcpy(target_ksm->crypto_modes_supported,
> > > +	       reference_ksm->crypto_modes_supported,
> > > +	       sizeof(target_ksm->crypto_modes_supported));
> > > +
> > > +	target_ksm->max_dun_bytes_supported =
> > > +				reference_ksm->max_dun_bytes_supported;
> > > +}
> > > +EXPORT_SYMBOL_GPL(blk_ksm_update_capabilities);
> > 
> > Wouldn't it be easier to replace the original blk_keyslot_manager, rather than
> > modify it?  Then blk_ksm_update_capabilities() wouldn't be needed.
> > 
> I didn't want to replace the original blk_keyslot_manager because it's
> possible that e.g. fscrypt is checking for crypto capabilities support
> via blk_ksm_crypto_cfg_supported() when DM wants to replace the
> blk_keyslot_manager. DM would have to free the memory used by the
> blk_keyslot_manager, but blk_ksm_crypto_cfg_supported() might still
> be trying to access that memory. I did it this way to avoid having to
> add refcounts or something else to the blk_keyslot_manager...(And I
> didn't bother adding any synchronization code since the capabilities
> only ever expand, and never contract).

Are you sure that's possible?  That would imply that there is no synchronization
between limits/capabilities in the request_queue being changed and the
request_queue being used.  That's already buggy.  Maybe it's the sort of thing
that is gotten away with in practice, in which case avoiding a free() would
indeed be a good idea, but it's worth explicitly clarifying whether all this
code is indeed racy by design...

> > > diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
> > > index cd0478d44058..2b3efa9f9fae 100644
> > > --- a/drivers/md/dm-ioctl.c
> > > +++ b/drivers/md/dm-ioctl.c
> > > @@ -1358,6 +1358,10 @@ static int table_load(struct file *filp, struct dm_ioctl *param, size_t param_si
> > >  		goto err_unlock_md_type;
> > >  	}
> > >  
> > > +	r = dm_verify_inline_encryption(md, t);
> > > +	if (r)
> > > +		goto err_unlock_md_type;
> > > +
> > >  	if (dm_get_md_type(md) == DM_TYPE_NONE) {
> > >  		/* Initial table load: acquire type of table. */
> > >  		dm_set_md_type(md, dm_table_get_type(t));
> > > @@ -2114,6 +2118,10 @@ int __init dm_early_create(struct dm_ioctl *dmi,
> > >  	if (r)
> > >  		goto err_destroy_table;
> > >  
> > > +	r = dm_verify_inline_encryption(md, t);
> > > +	if (r)
> > > +		goto err_destroy_table;
> > > +
> > >  	md->type = dm_table_get_type(t);
> > >  	/* setup md->queue to reflect md's type (may block) */
> > >  	r = dm_setup_md_queue(md, t);
> > 
> > Both table_load() and dm_early_create() call dm_setup_md_queue().  Wouldn't it
> > be simpler to handle inline encryption in dm_setup_md_queue(), instead of doing
> > it in both table_load() and dm_early_create()?
> > 
> table_load() only calls dm_setup_md_queue() on initial table load (when
> the md_type is DM_TYPE_NONE), so I can't call
> dm_verify_inline_encryption() in only dm_setup_md_queue(), because
> dm_verify_inline_encryption() needs to run on every table load.

Where do all the other limitations and capabilities of the request_queue get
updated on non-initial table loads, then?

> > > +/**
> > > + * dm_verify_inline_encryption() - Verifies that the current keyslot manager of
> > > + *				   the mapped_device can be replaced by the
> > > + *				   keyslot manager of a given dm_table.
> > > + * @md: The mapped_device
> > > + * @t: The dm_table
> > > + *
> > > + * In particular, this function checks that the keyslot manager that will be
> > > + * constructed for the dm_table will support a superset of the capabilities that
> > > + * the current keyslot manager of the mapped_device supports.
> > > + *
> > > + * Return: 0 if the table's keyslot_manager can replace the current keyslot
> > > + *	   manager of the mapped_device. Negative value otherwise.
> > > + */
> > > +int dm_verify_inline_encryption(struct mapped_device *md, struct dm_table *t)
> > > +{
> > > +	struct blk_keyslot_manager *ksm = dm_init_inline_encryption(md, t);
> > > +
> > > +	if (IS_ERR(ksm))
> > > +		return PTR_ERR(ksm);
> > > +	blk_ksm_destroy(ksm);
> > > +
> > > +	return 0;
> > > +}
> > 
> > This function seems redundant with dm_init_inline_encryption().  Wouldn't it be
> > simpler to do:
> > 
> > - dm_setup_md_queue() and dm_swap_table() call dm_init_inline_encryption() after
> >   dm_calculate_queue_limits().
> > 
> > - ksm gets passed to dm_table_set_restrictions(), which calls
> >   dm_update_keyslot_manager() (maybe rename to dm_update_inline_encryption()?)
> >   to actually set q->ksm.
> > 
> > That way, the crypto capabilities would be handled similarly to how the
> > queue_limits are already handled.
> > 
> If we call it from dm_swap_table(), we could have it pass the returned
> ksm to __bind(), either as a new argument, or by adding the ksm to the
> queue_limits (I'll have to check if that's ok/a good idea in the first
> place), and __bind() could send the argument to
> dm_table_set_restrictions()
> 
> But the real issue is, I think we should check whether a new table is
> valid (from the ksm capabilities support perspective) at the time that
> table is loaded (as opposed to only checking it when DM attempts to swap
> it in, which might be a lot later, when the user resumes the device) - so
> I can't only call it from dm_setup_md_queue(), and I'd have to call it
> from table_load() anyway. And the returned ksm that table_load() obtains
> from dm_init_inline_encryption() can't really be used - because
> 1) the ksm constructed at dm_swap_table() might actually support more
> capabilities than the ksm constructed in table_load(), because
> underlying devices might get resumed, and have new tables swapped in,
> and might support more capabilities than before
> 2) a subsequent dm_swap_table() call could fail for whatever reason, and
> we'll need to revert to the current ksm.
> 
> What I'm doing right now is simply freeing the ksm returned by
> dm_init_inline_encryption() whenever it's called from table_load()
> (and I'm trying to make that process a little nicer by wrapping it in a
> function called dm_verify_inline_encryption()) - so if we're going to
> have to call dm_init_inline_encryption() and then freeing the returned
> ksm in table_load(), I think it might be better to continue to have
> dm_verify_inline_encryption(), unless you'd prefer just open coding the
> function directly.

I don't understand why this needs to be so complicated.  Doesn't the dm layer
have the same problem for all the other queue limits and capabilities?  What
makes inline encryption different?

- Eric


* Re: [dm-devel] [PATCH v2 3/4] dm: add support for passing through inline crypto support
  2020-10-28  0:17       ` Eric Biggers
@ 2020-10-29  4:44         ` Satya Tangirala
  0 siblings, 0 replies; 18+ messages in thread
From: Satya Tangirala @ 2020-10-29  4:44 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Jens Axboe, Mike Snitzer, linux-kernel, linux-block, dm-devel,
	Alasdair Kergon

On Tue, Oct 27, 2020 at 05:17:31PM -0700, Eric Biggers wrote:
> On Tue, Oct 27, 2020 at 11:58:47PM +0000, Satya Tangirala wrote:
> > > > +/**
> > > > + * blk_ksm_update_capabilities() - Update the restrictions of a KSM to those of
> > > > + *				   another KSM
> > > > + * @target_ksm: The KSM whose restrictions to update.
> > > > + * @reference_ksm: The KSM whose restrictions @target_ksm's restrictions
> > > > + *		   will be updated to match.
> > > > + */
> > > > +void blk_ksm_update_capabilities(struct blk_keyslot_manager *target_ksm,
> > > > +				 struct blk_keyslot_manager *reference_ksm)
> > > > +{
> > > > +	memcpy(target_ksm->crypto_modes_supported,
> > > > +	       reference_ksm->crypto_modes_supported,
> > > > +	       sizeof(target_ksm->crypto_modes_supported));
> > > > +
> > > > +	target_ksm->max_dun_bytes_supported =
> > > > +				reference_ksm->max_dun_bytes_supported;
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(blk_ksm_update_capabilities);
> > > 
> > > Wouldn't it be easier to replace the original blk_keyslot_manager, rather than
> > > modify it?  Then blk_ksm_update_capabilities() wouldn't be needed.
> > > 
> > I didn't want to replace the original blk_keyslot_manager because it's
> > possible that e.g. fscrypt is checking for crypto capabilities support
> > via blk_ksm_crypto_cfg_supported() when DM wants to replace the
> > blk_keyslot_manager. DM would have to free the memory used by the
> > blk_keyslot_manager, but blk_ksm_crypto_cfg_supported() might still
> > be trying to access that memory. I did it this way to avoid having to
> > add refcounts or something else to the blk_keyslot_manager...(And I
> > didn't bother adding any synchronization code since the capabilities
> > only ever expand, and never contract).
> 
> Are you sure that's possible?  That would imply that there is no synchronization
> between limits/capabilities in the request_queue being changed and the
> request_queue being used.  That's already buggy.  Maybe it's the sort of thing
> that is gotten away with in practice, in which case avoiding a free() would
> indeed be a good idea, but it's worth explicitly clarifying whether all this
> code is indeed racy by design...
> 
I tried checking if the two code regions are reachable at the same time
(by adding some hacky code in the middle of
blk_ksm_crypto_cfg_supported() to loop indefinitely until a certain flag
is set at the end of dm_update_keyslot_manager(), which is right after
where we'd free the old ksm when the table is swapped), and it turns out
the two regions really *can* run at the same time. OTOH, I'd imagine
dm_stop_queue() might synchronize the limits in the request_queue, but
that's only called on request-based DM devices... tl;dr, I don't know if
changing limits in the request_queue is racy, but checking for crypto
capabilities is.

In case you're interested, here's the hack I used to test that

diff --git a/block/keyslot-manager.c b/block/keyslot-manager.c
index e16e4a074765..918bdd58e6b2 100644
--- a/block/keyslot-manager.c
+++ b/block/keyslot-manager.c
@@ -34,6 +34,7 @@
 #include <linux/pm_runtime.h>
 #include <linux/wait.h>
 #include <linux/blkdev.h>
+#include <linux/delay.h>
 
 struct blk_ksm_keyslot {
 	atomic_t slot_refs;
@@ -284,6 +285,7 @@ void blk_ksm_put_slot(struct blk_ksm_keyslot *slot)
 	}
 }
 
+volatile int my_inline_var = 0;
 /**
  * blk_ksm_crypto_cfg_supported() - Find out if a crypto configuration is
  *				    supported by a ksm.
@@ -297,8 +299,18 @@ void blk_ksm_put_slot(struct blk_ksm_keyslot *slot)
 bool blk_ksm_crypto_cfg_supported(struct blk_keyslot_manager *ksm,
 				  const struct blk_crypto_config *cfg)
 {
+	int mtmp = 0;
+
 	if (!ksm)
 		return false;
+	if (my_inline_var == 0)
+		my_inline_var = 1;
+	while (my_inline_var != 3) {
+		if (mtmp % 10 == 0)
+			printk("In blk_ksm_crypto supported! %d", my_inline_var);
+		mtmp++;
+		msleep(500);
+	}
 	if (!(ksm->crypto_modes_supported[cfg->crypto_mode] &
 	      cfg->data_unit_size))
 		return false;
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index cb1191d6e945..c6733de1388c 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -2141,6 +2141,14 @@ static int loop_add(struct loop_device **l, int i)
 	if (!disk)
 		goto out_free_queue;
 
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	blk_ksm_init_passthrough(&lo->ksm);
+	lo->ksm.max_dun_bytes_supported = 16;
+	lo->ksm.crypto_modes_supported[BLK_ENCRYPTION_MODE_AES_256_XTS] = 0xFFFFFFFF;
+	lo->ksm.crypto_modes_supported[BLK_ENCRYPTION_MODE_ADIANTUM] = 0xFFFFFFFF;
+	blk_ksm_register(&lo->ksm, lo->lo_queue);
+#endif
+
 	/*
 	 * Disable partition scanning by default. The in-kernel partition
 	 * scanning can be requested individually per-device during its
diff --git a/drivers/block/loop.h b/drivers/block/loop.h
index af75a5ee4094..4fc9aa9cab94 100644
--- a/drivers/block/loop.h
+++ b/drivers/block/loop.h
@@ -12,6 +12,7 @@
 #include <linux/bio.h>
 #include <linux/blkdev.h>
 #include <linux/blk-mq.h>
+#include <linux/keyslot-manager.h>
 #include <linux/spinlock.h>
 #include <linux/mutex.h>
 #include <linux/kthread.h>
@@ -62,6 +63,9 @@ struct loop_device {
 	struct request_queue	*lo_queue;
 	struct blk_mq_tag_set	tag_set;
 	struct gendisk		*lo_disk;
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	struct blk_keyslot_manager	ksm;
+#endif
 };
 
 struct loop_cmd {
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 22bb2c90583d..165521d1ade2 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2087,7 +2087,7 @@ int dm_verify_inline_encryption(struct mapped_device *md, struct dm_table *t)
 
 	return 0;
 }
-
+extern volatile int my_inline_var;
 static void dm_update_keyslot_manager(struct mapped_device *md,
 				      struct blk_keyslot_manager *ksm)
 {
@@ -2125,6 +2125,11 @@ static void dm_update_keyslot_manager(struct mapped_device *md,
 		blk_ksm_update_capabilities(md->queue->ksm, ksm);
 		blk_ksm_destroy(ksm);
 	}
+	printk("update KSM!");
+	if (my_inline_var == 2) {
+		printk("Update to 3 in ksm update");
+		my_inline_var = 3;
+	}
 }
 
 static void dm_destroy_inline_encryption(struct mapped_device *md)
@@ -2213,6 +2218,11 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
 		goto out;
 	}
 
+	if (my_inline_var == 1) {
+		printk("Update to 2 in bind");
+		my_inline_var = 2;
+	}
+
 	dm_update_keyslot_manager(md, ksm);
 
 	old_map = rcu_dereference_protected(md->map, lockdep_is_held(&md->suspend_lock));

Using that patch, I
1) set up a loopback device on a file
2) set up a dm-linear device (dm-0) on that loopback device
3) suspended dm-0
4) loaded a new table to dm-0 (I just used the same table as the existing
   table)
5) tried to read an encrypted file from dm-0 in the background (which
   promptly started printing out "In blk_ksm_crypto supported! 1" every
   5s)
6) resumed dm-0, which causes the "new" table to be swapped in, and sets
   my_inline_var to 3, which eventually results in the read in step 5
   to run to completion.

> > > > diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
> > > > index cd0478d44058..2b3efa9f9fae 100644
> > > > --- a/drivers/md/dm-ioctl.c
> > > > +++ b/drivers/md/dm-ioctl.c
> > > > @@ -1358,6 +1358,10 @@ static int table_load(struct file *filp, struct dm_ioctl *param, size_t param_si
> > > >  		goto err_unlock_md_type;
> > > >  	}
> > > >  
> > > > +	r = dm_verify_inline_encryption(md, t);
> > > > +	if (r)
> > > > +		goto err_unlock_md_type;
> > > > +
> > > >  	if (dm_get_md_type(md) == DM_TYPE_NONE) {
> > > >  		/* Initial table load: acquire type of table. */
> > > >  		dm_set_md_type(md, dm_table_get_type(t));
> > > > @@ -2114,6 +2118,10 @@ int __init dm_early_create(struct dm_ioctl *dmi,
> > > >  	if (r)
> > > >  		goto err_destroy_table;
> > > >  
> > > > +	r = dm_verify_inline_encryption(md, t);
> > > > +	if (r)
> > > > +		goto err_destroy_table;
> > > > +
> > > >  	md->type = dm_table_get_type(t);
> > > >  	/* setup md->queue to reflect md's type (may block) */
> > > >  	r = dm_setup_md_queue(md, t);
> > > 
> > > Both table_load() and dm_early_create() call dm_setup_md_queue().  Wouldn't it
> > > be simpler to handle inline encryption in dm_setup_md_queue(), instead of doing
> > > it in both table_load() and dm_early_create()?
> > > 
> > table_load() only calls dm_setup_md_queue() on initial table load (when
> > the md_type is DM_TYPE_NONE), so I can't call
> > dm_verify_inline_encryption() in only dm_setup_md_queue(), because
> > dm_verify_inline_encryption() needs to run on every table load.
> 
> Where do all the other limitations and capabilities of the request_queue get
> updated on non-initial table loads, then?
> 
I don't think they get updated on non-initial table loads at all - they
only get updated on table swaps. Integrity is, however, an exception -
it gets updated on table loads, and verified on table swaps (and if
verification fails during the swap, it removes the integrity profile
entirely).
> > > > +/**
> > > > + * dm_verify_inline_encryption() - Verifies that the current keyslot manager of
> > > > + *				   the mapped_device can be replaced by the
> > > > + *				   keyslot manager of a given dm_table.
> > > > + * @md: The mapped_device
> > > > + * @t: The dm_table
> > > > + *
> > > > + * In particular, this function checks that the keyslot manager that will be
> > > > + * constructed for the dm_table will support a superset of the capabilities that
> > > > + * the current keyslot manager of the mapped_device supports.
> > > > + *
> > > > + * Return: 0 if the table's keyslot_manager can replace the current keyslot
> > > > + *	   manager of the mapped_device. Negative value otherwise.
> > > > + */
> > > > +int dm_verify_inline_encryption(struct mapped_device *md, struct dm_table *t)
> > > > +{
> > > > +	struct blk_keyslot_manager *ksm = dm_init_inline_encryption(md, t);
> > > > +
> > > > +	if (IS_ERR(ksm))
> > > > +		return PTR_ERR(ksm);
> > > > +	blk_ksm_destroy(ksm);
> > > > +
> > > > +	return 0;
> > > > +}
> > > 
> > > This function seems redundant with dm_init_inline_encryption().  Wouldn't it be
> > > simpler to do:
> > > 
> > > - dm_setup_md_queue() and dm_swap_table() call dm_init_inline_encryption() after
> > >   dm_calculate_queue_limits().
> > > 
> > > - ksm gets passed to dm_table_set_restrictions(), which calls
> > >   dm_update_keyslot_manager() (maybe rename to dm_update_inline_encryption()?)
> > >   to actually set q->ksm.
> > > 
> > > That way, the crypto capabilities would be handled similarly to how the
> > > queue_limits are already handled.
> > > 
> > If we call it from dm_swap_table(), we could have it pass the returned
> > ksm to __bind(), either as a new argument, or by adding the ksm to the
> > queue_limits (I'll have to check if that's ok/a good idea in the first
> > place), and __bind() could send the argument to
> > dm_table_set_restrictions()
> > 
> > But the real issue is, I think we should check whether a new table is
> > valid (from the ksm capabilities support perspective) at the time that
> > table is loaded (as opposed to only checking it when DM attempts to swap
> > it in, which might be a lot later, when the user resumes the device) - so
> > I can't only call it from dm_setup_md_queue(), and I'd have to call it
> > from table_load() anyway. And the returned ksm that table_load() obtains
> > from dm_init_inline_encryption() can't really be used - because
> > 1) the ksm constructed at dm_swap_table() might actually support more
> > capabilities than the ksm constructed in table_load(), because
> > underlying devices might get resumed, and have new tables swapped in,
> > and might support more capabilities than before
> > 2) a subsequent dm_swap_table() call could fail for whatever reason, and
> > we'll need to revert to the current ksm.
> > 
> > What I'm doing right now is simply freeing the ksm returned by
> > dm_init_inline_encryption() whenever it's called from table_load()
> > (and I'm trying to make that process a little nicer by wrapping it in a
> > function called dm_verify_inline_encryption()) - so if we're going to
> > have to call dm_init_inline_encryption() and then freeing the returned
> > ksm in table_load(), I think it might be better to continue to have
> > dm_verify_inline_encryption(), unless you'd prefer just open coding the
> > function directly.
> 
> I don't understand why this needs to be so complicated.  Doesn't the dm layer
> have the same problem for all the other queue limits and capabilities?  What
> makes inline encryption different?
> 
It's this complicated only because I wanted to verify whether the inline
crypto capabilities of the new table are acceptable at table load time,
rather than throwing an error only at table swap time. If we decide
it's alright to throw an error only at table swap time, then
dm_verify_inline_encryption() can go away completely, and we won't need
the code in table_load() and dm_early_create() that calls that
function.
> - Eric


Thread overview: 18+ messages
2020-10-15 21:46 [dm-devel] [PATCH v2 0/4] add support for inline encryption to device mapper Satya Tangirala
2020-10-15 21:46 ` [dm-devel] [PATCH v2 1/4] block: keyslot-manager: Introduce passthrough keyslot manager Satya Tangirala
2020-10-16  7:20   ` Christoph Hellwig
2020-10-21  4:44     ` Eric Biggers
2020-10-21  5:27       ` Satya Tangirala
2020-10-27 20:04   ` Eric Biggers
2020-10-15 21:46 ` [dm-devel] [PATCH v2 2/4] block: add private field to struct keyslot_manager Satya Tangirala
2020-10-16  7:19   ` Christoph Hellwig
2020-10-16  8:39     ` Satya Tangirala
2020-10-15 21:46 ` [dm-devel] [PATCH v2 3/4] dm: add support for passing through inline crypto support Satya Tangirala
2020-10-25 21:02   ` kernel test robot
2020-10-25 21:02   ` [dm-devel] [PATCH] dm: fix err_cast.cocci warnings kernel test robot
2020-10-27 21:31   ` [dm-devel] [PATCH v2 3/4] dm: add support for passing through inline crypto support Eric Biggers
2020-10-27 23:58     ` Satya Tangirala
2020-10-28  0:17       ` Eric Biggers
2020-10-29  4:44         ` Satya Tangirala
2020-10-15 21:46 ` [dm-devel] [PATCH v2 4/4] dm: enable may_passthrough_inline_crypto on some targets Satya Tangirala
2020-10-27 21:10   ` Eric Biggers
