linux-fsdevel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [RFC PATCH v2 0/8] Inline Encryption Support
@ 2019-06-05 23:28 Satya Tangirala
  2019-06-05 23:28 ` [RFC PATCH v2 1/8] block: Keyslot Manager for Inline Encryption Satya Tangirala
                   ` (7 more replies)
  0 siblings, 8 replies; 16+ messages in thread
From: Satya Tangirala @ 2019-06-05 23:28 UTC (permalink / raw)
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel, linux-f2fs-devel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang, Satya Tangirala

This patch series adds support for Inline Encryption to the block layer,
UFS, fscrypt and f2fs.

Inline Encryption hardware allows software to specify an encryption context
(an encryption key, crypto algorithm, data unit num, data unit size, etc.)
along with a data transfer request to a storage device, and the inline
encryption hardware will use that context to en/decrypt the data. The
inline encryption hardware is part of the storage device, and it
conceptually sits on the data path between system memory and the storage
device. Inline Encryption hardware has become increasingly common, and we
want to support it in the kernel.

Inline Encryption hardware implementations often function around the
concept of a limited number of "keyslots", which can hold an encryption
context each. The storage device can be directed to en/decrypt any
particular request with the encryption context stored in any particular
keyslot.

Patch 1 introduces a Keyslot Manager to efficiently manage keyslots.
The keyslot manager also functions as the interface that blk-crypto
(introduced in Path 3), will use to program keys into inline encryption
hardware. For more information on the Keyslot Manager, refer to
documentation found in block/keyslot-manager.c and linux/keyslot-manager.h.

Patch 2 introduces struct bio_crypt_ctx, and a ptr to one in struct bio,
which allows struct bio to represent an encryption context that can be
passed down the storage stack from the filesystem layer to the storage
driver.

Patch 3 introduces blk-crypto. Blk-crypto delegates crypto operations to
inline encryption hardware when available, and also contains a software
fallback to the kernel crypto API. Blk-crypto also makes it possible for
layered devices like device mapper to make use of inline encryption
hardware. Given that blk-crypto works as a software fallback, we are
considering removing file content en/decryption from fscrypt and simply
using blk-crypto in a future patch. For more details on blk-crypto, refer
to Documentation/block/blk-crypto.txt.

Patches 4-6 add support for inline encryption into the UFS driver according
to the JEDEC UFS HCI v2.1 specification. Inline encryption support for
other drivers (like eMMC) may be added in the same way - the device driver
should set up a Keyslot Manager in the device's request_queue (refer to
the UFS crypto additions in ufshcd-crypto.c and ufshcd.c for an example).

Patches 7 and 8 add support to fscrypt and f2fs, so that we have
a complete stack that can make use of inline encryption.

There have been a few patch sets addressing Inline Encryption Support in
the past. Briefly, this patch set differs from those as follows:

1) "crypto: qce: ice: Add support for Inline Crypto Engine"
is specific to certain hardware, while our patch set's Inline
Encryption support for UFS is implemented according to the JEDEC UFS
specification.

2) "scsi: ufs: UFS Host Controller crypto changes" registers inline
encryption support as a kernel crypto algorithm. Our patch views inline
encryption as being fundamentally different from a generic crypto
provider (in that inline encryption is tied to a device), and so does
not use the kernel crypto API to represent inline encryption hardware.

3) "scsi: ufs: add real time/inline crypto support to UFS HCD" requires
the device mapper to work - our patch does not.

Changes v1 => v2:
 - Block layer and UFS changes are split into 3 patches each.
 - We now only have a ptr to a struct bio_crypt_ctx in struct bio, instead
   of the struct itself.
 - struct bio_crypt_ctx no longer has flags.
 - blk-crypto now correctly handles the case when it fails to init
   (because of insufficient memory), but kernel continues to boot.
 - ufshcd-crypto now works on big endian cpus.
 - Many cleanups.

Satya Tangirala (8):
  block: Keyslot Manager for Inline Encryption
  block: Add encryption context to struct bio
  block: blk-crypto for Inline Encryption
  scsi: ufs: UFS driver v2.1 spec crypto additions
  scsi: ufs: UFS crypto API
  scsi: ufs: Add inline encryption support to UFS
  fscrypt: wire up fscrypt to use blk-crypto
  f2fs: Wire up f2fs to use inline encryption via fscrypt

 Documentation/block/blk-crypto.txt | 185 ++++++++++
 block/Kconfig                      |   8 +
 block/Makefile                     |   2 +
 block/bio.c                        |  17 +-
 block/blk-core.c                   |  11 +-
 block/blk-crypt-ctx.c              |  90 +++++
 block/blk-crypto.c                 | 557 +++++++++++++++++++++++++++++
 block/blk-merge.c                  |  34 +-
 block/bounce.c                     |   9 +-
 block/keyslot-manager.c            | 315 ++++++++++++++++
 drivers/md/dm.c                    |  15 +-
 drivers/scsi/ufs/Kconfig           |  10 +
 drivers/scsi/ufs/Makefile          |   1 +
 drivers/scsi/ufs/ufshcd-crypto.c   | 438 +++++++++++++++++++++++
 drivers/scsi/ufs/ufshcd-crypto.h   |  69 ++++
 drivers/scsi/ufs/ufshcd.c          |  84 ++++-
 drivers/scsi/ufs/ufshcd.h          |  23 ++
 drivers/scsi/ufs/ufshci.h          |  67 +++-
 fs/crypto/Kconfig                  |   7 +
 fs/crypto/bio.c                    | 159 ++++++--
 fs/crypto/crypto.c                 |   9 +
 fs/crypto/fscrypt_private.h        |  10 +
 fs/crypto/keyinfo.c                |  69 ++--
 fs/crypto/policy.c                 |  10 +
 fs/f2fs/data.c                     |  77 +++-
 fs/f2fs/super.c                    |   1 +
 include/linux/bio.h                | 180 ++++++++++
 include/linux/blk-crypto.h         |  40 +++
 include/linux/blk_types.h          |  39 ++
 include/linux/blkdev.h             |   9 +
 include/linux/fscrypt.h            |  64 ++++
 include/linux/keyslot-manager.h    | 116 ++++++
 include/uapi/linux/fs.h            |  12 +-
 33 files changed, 2668 insertions(+), 69 deletions(-)
 create mode 100644 Documentation/block/blk-crypto.txt
 create mode 100644 block/blk-crypt-ctx.c
 create mode 100644 block/blk-crypto.c
 create mode 100644 block/keyslot-manager.c
 create mode 100644 drivers/scsi/ufs/ufshcd-crypto.c
 create mode 100644 drivers/scsi/ufs/ufshcd-crypto.h
 create mode 100644 include/linux/blk-crypto.h
 create mode 100644 include/linux/keyslot-manager.h

-- 
2.22.0.rc1.311.g5d7573a151-goog


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [RFC PATCH v2 1/8] block: Keyslot Manager for Inline Encryption
  2019-06-05 23:28 [RFC PATCH v2 0/8] Inline Encryption Support Satya Tangirala
@ 2019-06-05 23:28 ` Satya Tangirala
  2019-06-07 22:28   ` Eric Biggers
  2019-06-12 18:26   ` Eric Biggers
  2019-06-05 23:28 ` [RFC PATCH v2 2/8] block: Add encryption context to struct bio Satya Tangirala
                   ` (6 subsequent siblings)
  7 siblings, 2 replies; 16+ messages in thread
From: Satya Tangirala @ 2019-06-05 23:28 UTC (permalink / raw)
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel, linux-f2fs-devel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang, Satya Tangirala

Inline Encryption hardware allows software to specify an encryption context
(an encryption key, crypto algorithm, data unit num, data unit size, etc.)
along with a data transfer request to a storage device, and the inline
encryption hardware will use that context to en/decrypt the data. The
inline encryption hardware is part of the storage device, and it
conceptually sits on the data path between system memory and the storage
device.

Inline Encryption hardware implementations often function around the
concept of "keyslots". These implementations often have a limited number
of "keyslots", each of which can hold an encryption context (we say that
an encryption context can be "programmed" into a keyslot). Requests made
to the storage device may have a keyslot associated with them, and the
inline encryption hardware will en/decrypt the data in the requests using
the encryption context programmed into that associated keyslot. As
keyslots are limited, and programming keys may be expensive in many
implementations, and multiple requests may use exactly the same encryption
contexts, we introduce a Keyslot Manager to efficiently manage keyslots.
The keyslot manager also functions as the interface that upper layers will
use to program keys into inline encryption hardware. For more information
on the Keyslot Manager, refer to documentation found in
block/keyslot-manager.c and linux/keyslot-manager.h.

Known issues:
1) Keyslot Manager has a performance bug where the same encryption
   context may be programmed into multiple keyslots at the same time in
   certain situations when all keyslots are being used.

Signed-off-by: Satya Tangirala <satyat@google.com>
---
 block/keyslot-manager.c         | 315 ++++++++++++++++++++++++++++++++
 include/linux/blk_types.h       |  11 ++
 include/linux/blkdev.h          |   9 +
 include/linux/keyslot-manager.h | 116 ++++++++++++
 4 files changed, 451 insertions(+)
 create mode 100644 block/keyslot-manager.c
 create mode 100644 include/linux/keyslot-manager.h

diff --git a/block/keyslot-manager.c b/block/keyslot-manager.c
new file mode 100644
index 000000000000..d4a5d6d78d2c
--- /dev/null
+++ b/block/keyslot-manager.c
@@ -0,0 +1,315 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * keyslot-manager.c
+ *
+ * Copyright 2019 Google LLC
+ */
+
+/**
+ * DOC: The Keyslot Manager
+ *
+ * Many devices with inline encryption support have a limited number of "slots"
+ * into which encryption contexts may be programmed, and requests can be tagged
+ * with a slot number to specify the key to use for en/decryption.
+ *
+ * As the number of slots are limited, and programming keys is expensive on
+ * many inline encryption hardware, we don't want to program the same key into
+ * multiple slots - if multiple requests are using the same key, we want to
+ * program just one slot with that key and use that slot for all requests.
+ *
+ * The keyslot manager manages these keyslots appropriately, and also acts as
+ * an abstraction between the inline encryption hardware and the upper layers.
+ *
+ * Lower layer devices will set up a keyslot manager in their request queue
+ * and tell it how to perform device specific operations like programming/
+ * evicting keys from keyslots.
+ *
+ * Upper layers will call keyslot_manager_get_slot_for_key() to program a
+ * key into some slot in the inline encryption hardware.
+ */
+#include <linux/slab.h>
+#include <linux/keyslot-manager.h>
+#include <linux/atomic.h>
+#include <linux/mutex.h>
+#include <linux/sched.h>
+#include <linux/wait.h>
+
+struct keyslot_manager {
+	unsigned int num_slots;
+	atomic_t num_idle_slots;
+	struct keyslot_mgmt_ll_ops ksm_ll_ops;
+	void *ll_priv_data;
+	struct mutex lock;
+	wait_queue_head_t wait_queue;
+	u64 seq_num;
+	u64 *last_used_seq_nums;
+	atomic_t slot_refs[];
+};
+
+/**
+ * keyslot_manager_create() - Create a keyslot manager
+ * @num_slots: The number of key slots to manage.
+ * @ksm_ll_ops: The struct keyslot_mgmt_ll_ops for the device that this keyslot
+ *		manager will use to perform operations like programming and
+ *		evicting keys.
+ * @ll_priv_data: Private data passed as is to the functions in ksm_ll_ops.
+ *
+ * Allocate memory for and initialize a keyslot manager. Called by for e.g.
+ * storage drivers to set up a keyslot manager in their request_queue.
+ *
+ * Context: May sleep
+ * Return: Pointer to constructed keyslot manager or NULL on error.
+ */
+struct keyslot_manager *keyslot_manager_create(unsigned int num_slots,
+				const struct keyslot_mgmt_ll_ops *ksm_ll_ops,
+				void *ll_priv_data)
+{
+	struct keyslot_manager *ksm;
+
+	if (num_slots == 0)
+		return NULL;
+
+	/* Check that all ops are specified */
+	if (ksm_ll_ops->keyslot_program == NULL ||
+	    ksm_ll_ops->keyslot_evict == NULL ||
+	    ksm_ll_ops->crypt_mode_supported == NULL ||
+	    ksm_ll_ops->keyslot_find == NULL)
+		return NULL;
+
+	ksm = kzalloc(struct_size(ksm, slot_refs, num_slots), GFP_KERNEL);
+	if (!ksm)
+		return NULL;
+
+	ksm->num_slots = num_slots;
+	atomic_set(&ksm->num_idle_slots, num_slots);
+	ksm->ksm_ll_ops = *ksm_ll_ops;
+	ksm->ll_priv_data = ll_priv_data;
+
+	mutex_init(&ksm->lock);
+	init_waitqueue_head(&ksm->wait_queue);
+
+	ksm->last_used_seq_nums = kcalloc(num_slots, sizeof(u64), GFP_KERNEL);
+	if (!ksm->last_used_seq_nums) {
+		kzfree(ksm);
+		ksm = NULL;
+	}
+
+	return ksm;
+}
+EXPORT_SYMBOL(keyslot_manager_create);
+
+/**
+ * keyslot_manager_get_slot_for_key() - Program a key into a keyslot.
+ * @ksm: The keyslot manager to program the key into.
+ * @key: Pointer to the bytes of the key to program. Must be the correct length
+ *      for the chosen @crypt_mode; see blk_crypt_modes in blk-crypto.c.
+ * @crypt_mode: Identifier for the encryption algorithm to use.
+ * @data_unit_size: The data unit size to use for en/decryption.
+ *
+ * Get a keyslot that's been programmed with the specified key, crypt_mode, and
+ * data_unit_size.  If one already exists, return it with incremented refcount.
+ * Otherwise, wait for a keyslot to become idle and program it.
+ *
+ * Context: Process context. Takes and releases ksm->lock.
+ * Return: The keyslot on success, else a -errno value.
+ */
+int keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm,
+				     const u8 *key,
+				     enum blk_crypt_mode_num crypt_mode,
+				     unsigned int data_unit_size)
+{
+	int slot;
+	int err;
+	int c;
+	int lru_idle_slot;
+	u64 min_seq_num;
+
+	mutex_lock(&ksm->lock);
+	slot = ksm->ksm_ll_ops.keyslot_find(ksm->ll_priv_data, key,
+					    crypt_mode,
+					    data_unit_size);
+
+	if (slot < 0 && slot != -ENOKEY) {
+		mutex_unlock(&ksm->lock);
+		return slot;
+	}
+
+	if (WARN_ON(slot >= (int)ksm->num_slots)) {
+		mutex_unlock(&ksm->lock);
+		return -EINVAL;
+	}
+
+	/* Try to use the returned slot */
+	if (slot != -ENOKEY) {
+		/*
+		 * NOTE: We may fail to get a slot if the number of refs
+		 * overflows UINT_MAX. I don't think we care enough about
+		 * that possibility to make the refcounts u64, considering
+		 * the only way for that to happen is for at least UINT_MAX
+		 * requests to be in flight at the same time.
+		 */
+		if ((unsigned int)atomic_read(&ksm->slot_refs[slot]) ==
+		    UINT_MAX) {
+			mutex_unlock(&ksm->lock);
+			return -EBUSY;
+		}
+
+		if (atomic_fetch_inc(&ksm->slot_refs[slot]) == 0)
+			atomic_dec(&ksm->num_idle_slots);
+
+		ksm->last_used_seq_nums[slot] = ++ksm->seq_num;
+
+		mutex_unlock(&ksm->lock);
+		return slot;
+	}
+
+	/*
+	 * If we're here, that means there wasn't a slot that
+	 * was already programmed with the key
+	 */
+
+	/* Wait till there is a free slot available */
+	while (atomic_read(&ksm->num_idle_slots) == 0) {
+		mutex_unlock(&ksm->lock);
+		wait_event(ksm->wait_queue,
+			   (atomic_read(&ksm->num_idle_slots) > 0));
+		mutex_lock(&ksm->lock);
+	}
+
+	/* Todo: fix linear scan? */
+	/* Find least recently used idle slot (i.e. slot with minimum number) */
+	lru_idle_slot  = -1;
+	min_seq_num = 0;
+	for (c = 0; c < ksm->num_slots; c++) {
+		if (atomic_read(&ksm->slot_refs[c]) != 0)
+			continue;
+
+		if (lru_idle_slot == -1 ||
+		    ksm->last_used_seq_nums[c] < min_seq_num) {
+			lru_idle_slot = c;
+			min_seq_num = ksm->last_used_seq_nums[c];
+		}
+	}
+
+	if (WARN_ON(lru_idle_slot == -1)) {
+		mutex_unlock(&ksm->lock);
+		return -EBUSY;
+	}
+
+	atomic_dec(&ksm->num_idle_slots);
+	atomic_inc(&ksm->slot_refs[lru_idle_slot]);
+	err = ksm->ksm_ll_ops.keyslot_program(ksm->ll_priv_data, key,
+					      crypt_mode,
+					      data_unit_size,
+					      lru_idle_slot);
+
+	if (err) {
+		atomic_dec(&ksm->slot_refs[lru_idle_slot]);
+		atomic_inc(&ksm->num_idle_slots);
+		wake_up(&ksm->wait_queue);
+		mutex_unlock(&ksm->lock);
+		return err;
+	}
+
+	ksm->seq_num++;
+	ksm->last_used_seq_nums[lru_idle_slot] = ksm->seq_num;
+
+	mutex_unlock(&ksm->lock);
+	return lru_idle_slot;
+}
+EXPORT_SYMBOL(keyslot_manager_get_slot_for_key);
+
+/**
+ * keyslot_manager_get_slot() - Increment the refcount on the specified slot.
+ * @ksm - The keyslot manager that we want to modify.
+ * @slot - The slot to increment the refcount of.
+ *
+ * This function assumes that there is already an active reference to that slot
+ * and simply increments the refcount. This is useful when cloning a bio that
+ * already has a reference to a keyslot, and we want the cloned bio to also have
+ * its own reference.
+ *
+ * Context: Any context.
+ */
+void keyslot_manager_get_slot(struct keyslot_manager *ksm, unsigned int slot)
+{
+	if (WARN_ON(slot >= ksm->num_slots))
+		return;
+
+	WARN_ON(atomic_inc_return(&ksm->slot_refs[slot]) < 2);
+}
+EXPORT_SYMBOL(keyslot_manager_get_slot);
+
+/**
+ * keyslot_manager_put_slot() - Release a reference to a slot
+ * @ksm: The keyslot manager to release the reference from.
+ * @slot: The slot to release the reference from.
+ *
+ * Context: Any context.
+ */
+void keyslot_manager_put_slot(struct keyslot_manager *ksm, unsigned int slot)
+{
+	if (WARN_ON(slot >= ksm->num_slots))
+		return;
+
+	if (atomic_dec_and_test(&ksm->slot_refs[slot])) {
+		atomic_inc(&ksm->num_idle_slots);
+		wake_up(&ksm->wait_queue);
+	}
+}
+EXPORT_SYMBOL(keyslot_manager_put_slot);
+
+/**
+ * keyslot_manager_evict_key() - Evict a key from the lower layer device.
+ * @ksm - The keyslot manager to evict from
+ * @key - The key to evict
+ * @crypt_mode - The crypto algorithm the key was programmed with.
+ * @data_unit_size - The data_unit_size the key was programmed with.
+ *
+ * Finds the slot that the specified key, crypt_mode, data_unit_size combo
+ * was programmed into, and evicts that slot from the lower layer device if
+ * the refcount on the slot is 0. Returns -EBUSY if the refcount is not 0, and
+ * -errno on error.
+ *
+ * Context: Process context. Takes and releases ksm->lock.
+ */
+int keyslot_manager_evict_key(struct keyslot_manager *ksm,
+			      const u8 *key,
+			      enum blk_crypt_mode_num crypt_mode,
+			      unsigned int data_unit_size)
+{
+	int slot;
+	int err = 0;
+
+	mutex_lock(&ksm->lock);
+	slot = ksm->ksm_ll_ops.keyslot_find(ksm->ll_priv_data, key,
+					    crypt_mode,
+					    data_unit_size);
+
+	if (slot < 0) {
+		mutex_unlock(&ksm->lock);
+		return slot;
+	}
+
+	if (atomic_read(&ksm->slot_refs[slot]) == 0) {
+		err = ksm->ksm_ll_ops.keyslot_evict(ksm->ll_priv_data, key,
+						    crypt_mode,
+						    data_unit_size,
+						    slot);
+	} else {
+		err = -EBUSY;
+	}
+
+	mutex_unlock(&ksm->lock);
+	return err;
+}
+EXPORT_SYMBOL(keyslot_manager_evict_key);
+
+void keyslot_manager_destroy(struct keyslot_manager *ksm)
+{
+	if (!ksm)
+		return;
+	kzfree(ksm->last_used_seq_nums);
+	kzfree(ksm);
+}
+EXPORT_SYMBOL(keyslot_manager_destroy);
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 95202f80676c..aafa96839f95 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -137,6 +137,17 @@ static inline void bio_issue_init(struct bio_issue *issue,
 			((u64)size << BIO_ISSUE_SIZE_SHIFT));
 }
 
+enum blk_crypt_mode_num {
+	BLK_ENCRYPTION_MODE_AES_256_XTS	= 0,
+	/*
+	 * TODO: Support these too
+	 * BLK_ENCRYPTION_MODE_AES_256_CTS	= 1,
+	 * BLK_ENCRYPTION_MODE_AES_128_CBC	= 2,
+	 * BLK_ENCRYPTION_MODE_AES_128_CTS	= 3,
+	 * BLK_ENCRYPTION_MODE_ADIANTUM		= 4,
+	 */
+};
+
 /*
  * main unit of I/O for the block layer and lower layers (ie drivers and
  * stacking drivers)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 592669bcc536..f76d5dff27fe 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -385,6 +385,10 @@ static inline int blkdev_reset_zones_ioctl(struct block_device *bdev,
 
 #endif /* CONFIG_BLK_DEV_ZONED */
 
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+struct keyslot_manager;
+#endif
+
 struct request_queue {
 	/*
 	 * Together with queue_head for cacheline sharing
@@ -473,6 +477,11 @@ struct request_queue {
 	unsigned int		dma_pad_mask;
 	unsigned int		dma_alignment;
 
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	/* Inline crypto capabilities */
+	struct keyslot_manager *ksm;
+#endif
+
 	unsigned int		rq_timeout;
 	int			poll_nsec;
 
diff --git a/include/linux/keyslot-manager.h b/include/linux/keyslot-manager.h
new file mode 100644
index 000000000000..76a9c255cb7e
--- /dev/null
+++ b/include/linux/keyslot-manager.h
@@ -0,0 +1,116 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#include <linux/types.h>
+#include <linux/blk_types.h>
+
+#ifndef __LINUX_KEYSLOT_MANAGER_H
+#define __LINUX_KEYSLOT_MANAGER_H
+
+/**
+ * struct keyslot_mgmt_ll_ops - functions to manage keyslots in hardware
+ * @keyslot_program:	Program the specified key and algorithm into the
+ *			specified slot in the inline encryption hardware.
+ * @keyslot_evict:	Evict key from the specified keyslot in the hardware.
+ *			The key, crypt_mode and data_unit_size are also passed
+ *			down so that for e.g. dm layers can evict keys from
+ *			the devices that they map over.
+ *			Returns 0 on success, -errno otherwise.
+ * @crypt_mode_supported:	Check whether a crypt_mode and data_unit_size
+ *				combo is supported.
+ * @keyslot_find:	Returns the slot number that matches the key,
+ *			or -ENOKEY if no match found, or -errno on
+ *			error.
+ *
+ * This structure should be provided by storage device drivers when they set up
+ * a keyslot manager - this structure holds the function ptrs that the keyslot
+ * manager will use to manipulate keyslots in the hardware.
+ */
+struct keyslot_mgmt_ll_ops {
+	int (*keyslot_program)(void *ll_priv_data, const u8 *key,
+			       enum blk_crypt_mode_num crypt_mode,
+			       unsigned int data_unit_size,
+			       unsigned int slot);
+	int (*keyslot_evict)(void *ll_priv_data, const u8 *key,
+			     enum blk_crypt_mode_num crypt_mode,
+			     unsigned int data_unit_size,
+			     unsigned int slot);
+	bool (*crypt_mode_supported)(void *ll_priv_data,
+				      enum blk_crypt_mode_num crypt_mode,
+				      unsigned int data_unit_size);
+	int (*keyslot_find)(void *ll_priv_data, const u8 *key,
+			    enum blk_crypt_mode_num crypt_mode,
+			    unsigned int data_unit_size);
+};
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+struct keyslot_manager;
+
+extern struct keyslot_manager *keyslot_manager_create(unsigned int num_slots,
+				const struct keyslot_mgmt_ll_ops *ksm_ops,
+				void *ll_priv_data);
+
+extern int
+keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm,
+				 const u8 *key,
+				 enum blk_crypt_mode_num crypt_mode,
+				 unsigned int data_unit_size);
+
+extern void keyslot_manager_get_slot(struct keyslot_manager *ksm,
+				     unsigned int slot);
+
+extern void keyslot_manager_put_slot(struct keyslot_manager *ksm,
+				     unsigned int slot);
+
+extern int keyslot_manager_evict_key(struct keyslot_manager *ksm,
+				     const u8 *key,
+				     enum blk_crypt_mode_num crypt_mode,
+				     unsigned int data_unit_size);
+
+extern void keyslot_manager_destroy(struct keyslot_manager *ksm);
+
+#else /* CONFIG_BLK_INLINE_ENCRYPTION */
+struct keyslot_manager {};
+
+static inline struct keyslot_manager *
+keyslot_manager_create(unsigned int num_slots,
+		       const struct keyslot_mgmt_ll_ops *ksm_ops,
+		       void *ll_priv_data)
+{
+	return NULL;
+}
+
+static inline int
+keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm,
+				 const u8 *key,
+				 enum blk_crypt_mode_num crypt_mode,
+				 unsigned int data_unit_size)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline void keyslot_manager_get_slot(struct keyslot_manager *ksm,
+					    unsigned int slot) { }
+
+static inline int keyslot_manager_put_slot(struct keyslot_manager *ksm,
+					   unsigned int slot)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int keyslot_manager_evict_key(struct keyslot_manager *ksm,
+				     const u8 *key,
+				     enum blk_crypt_mode_num crypt_mode,
+				     unsigned int data_unit_size)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline void keyslot_manager_destroy(struct keyslot_manager *ksm)
+{ }
+
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
+
+#endif /* __LINUX_KEYSLOT_MANAGER_H */
-- 
2.22.0.rc1.311.g5d7573a151-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH v2 2/8] block: Add encryption context to struct bio
  2019-06-05 23:28 [RFC PATCH v2 0/8] Inline Encryption Support Satya Tangirala
  2019-06-05 23:28 ` [RFC PATCH v2 1/8] block: Keyslot Manager for Inline Encryption Satya Tangirala
@ 2019-06-05 23:28 ` Satya Tangirala
  2019-06-12 18:10   ` Eric Biggers
  2019-06-05 23:28 ` [RFC PATCH v2 3/8] block: blk-crypto for Inline Encryption Satya Tangirala
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 16+ messages in thread
From: Satya Tangirala @ 2019-06-05 23:28 UTC (permalink / raw)
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel, linux-f2fs-devel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang, Satya Tangirala

We must have some way of letting a storage device driver know what
encryption context it should use for en/decrypting a request. However,
it's the filesystem/fscrypt that knows about and manages encryption
contexts. As such, when the filesystem layer submits a bio to the block
layer, and this bio eventually reaches a device driver with support for
inline encryption, the device driver will need to have been told the
encryption context for that bio.

We want to communicate the encryption context from the filesystem layer
to the storage device along with the bio, when the bio is submitted to the
block layer. To do this, we add a struct bio_crypt_ctx to struct bio, which
can represent an encryption context (note that we can't use the bi_private
field in struct bio to do this because that field does not function to pass
information across layers in the storage stack). We also introduce various
functions to manipulate the bio_crypt_ctx and make the bio/request merging
logic aware of the bio_crypt_ctx.

Signed-off-by: Satya Tangirala <satyat@google.com>
---
 block/bio.c               |  12 ++-
 block/blk-crypt-ctx.c     |  90 +++++++++++++++++++
 block/blk-merge.c         |  34 ++++++-
 block/bounce.c            |   9 +-
 drivers/md/dm.c           |  15 ++--
 include/linux/bio.h       | 180 ++++++++++++++++++++++++++++++++++++++
 include/linux/blk_types.h |  28 ++++++
 7 files changed, 355 insertions(+), 13 deletions(-)
 create mode 100644 block/blk-crypt-ctx.c

diff --git a/block/bio.c b/block/bio.c
index 683cbb40f051..87aa87288b39 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -16,6 +16,7 @@
 #include <linux/workqueue.h>
 #include <linux/cgroup.h>
 #include <linux/blk-cgroup.h>
+#include <linux/keyslot-manager.h>
 
 #include <trace/events/block.h>
 #include "blk.h"
@@ -240,6 +241,7 @@ static void bio_free(struct bio *bio)
 	struct bio_set *bs = bio->bi_pool;
 	void *p;
 
+	bio_crypt_free_ctx(bio);
 	bio_uninit(bio);
 
 	if (bs) {
@@ -612,6 +614,7 @@ EXPORT_SYMBOL(__bio_clone_fast);
 struct bio *bio_clone_fast(struct bio *bio, gfp_t gfp_mask, struct bio_set *bs)
 {
 	struct bio *b;
+	int ret;
 
 	b = bio_alloc_bioset(gfp_mask, 0, bs);
 	if (!b)
@@ -619,9 +622,13 @@ struct bio *bio_clone_fast(struct bio *bio, gfp_t gfp_mask, struct bio_set *bs)
 
 	__bio_clone_fast(b, bio);
 
-	if (bio_integrity(bio)) {
-		int ret;
+	ret = bio_clone_crypt_context(b, bio, gfp_mask);
+	if (ret < 0) {
+		bio_put(b);
+		return NULL;
+	}
 
+	if (bio_integrity(bio)) {
 		ret = bio_integrity_clone(b, bio, gfp_mask);
 
 		if (ret < 0) {
@@ -1019,6 +1026,7 @@ void bio_advance(struct bio *bio, unsigned bytes)
 		bio_integrity_advance(bio, bytes);
 
 	bio_advance_iter(bio, &bio->bi_iter, bytes);
+	bio_crypt_advance(bio, bytes);
 }
 EXPORT_SYMBOL(bio_advance);
 
diff --git a/block/blk-crypt-ctx.c b/block/blk-crypt-ctx.c
new file mode 100644
index 000000000000..174c058ab0c6
--- /dev/null
+++ b/block/blk-crypt-ctx.c
@@ -0,0 +1,90 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#include <linux/bio.h>
+#include <linux/blkdev.h>
+#include <linux/slab.h>
+#include <linux/keyslot-manager.h>
+
+struct bio_crypt_ctx *bio_crypt_alloc_ctx(gfp_t gfp_mask)
+{
+	return kzalloc(sizeof(struct bio_crypt_ctx), gfp_mask);
+}
+
+void bio_crypt_free_ctx(struct bio *bio)
+{
+	kzfree(bio->bi_crypt_context);
+	bio->bi_crypt_context = NULL;
+}
+
+int bio_clone_crypt_context(struct bio *dst, struct bio *src, gfp_t gfp_mask)
+{
+	if (!bio_is_encrypted(src) || bio_crypt_swhandled(src))
+		return 0;
+
+	dst->bi_crypt_context = bio_crypt_alloc_ctx(gfp_mask);
+	if (!dst->bi_crypt_context)
+		return -ENOMEM;
+
+	*dst->bi_crypt_context = *src->bi_crypt_context;
+
+	if (!bio_crypt_has_keyslot(src))
+		return 0;
+
+	keyslot_manager_get_slot(src->bi_crypt_context->processing_ksm,
+				 src->bi_crypt_context->keyslot);
+
+	return 0;
+}
+
+bool bio_crypt_should_process(struct bio *bio, struct request_queue *q)
+{
+	if (!bio_is_encrypted(bio))
+		return false;
+
+	WARN_ON(!bio_crypt_has_keyslot(bio));
+	return q->ksm == bio->bi_crypt_context->processing_ksm;
+}
+EXPORT_SYMBOL(bio_crypt_should_process);
+
+/*
+ * Checks that two bio crypt contexts are compatible - i.e. that
+ * they are mergeable except for data_unit_num continuity.
+ */
+bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2)
+{
+	struct bio_crypt_ctx *bc1 = b_1->bi_crypt_context;
+	struct bio_crypt_ctx *bc2 = b_2->bi_crypt_context;
+
+	if (bio_is_encrypted(b_1) != bio_is_encrypted(b_2))
+		return false;
+
+	if (!bio_is_encrypted(b_1))
+		return true;
+
+	return bc1->keyslot != bc2->keyslot &&
+	       bc1->data_unit_size_bits == bc2->data_unit_size_bits;
+}
+
+/*
+ * Checks that two bio crypt contexts are compatible, and also
+ * that their data_unit_nums are continuous (and can hence be merged)
+ */
+bool bio_crypt_ctx_back_mergeable(struct bio *b_1,
+				  unsigned int b1_sectors,
+				  struct bio *b_2)
+{
+	struct bio_crypt_ctx *bc1 = b_1->bi_crypt_context;
+	struct bio_crypt_ctx *bc2 = b_2->bi_crypt_context;
+
+	if (!bio_crypt_ctx_compatible(b_1, b_2))
+		return false;
+
+	return !bio_is_encrypted(b_1) ||
+		(bc1->data_unit_num +
+		(b1_sectors >> (bc1->data_unit_size_bits - 9)) ==
+		bc2->data_unit_num);
+}
+
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 17713d7d98d5..f416e7f38270 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -531,6 +531,9 @@ static inline int ll_new_hw_segment(struct request_queue *q,
 	if (blk_integrity_merge_bio(q, req, bio) == false)
 		goto no_merge;
 
+	if (WARN_ON(!bio_crypt_ctx_compatible(bio, req->bio)))
+		goto no_merge;
+
 	/*
 	 * This will form the start of a new hw segment.  Bump both
 	 * counters.
@@ -696,8 +699,13 @@ static enum elv_merge blk_try_req_merge(struct request *req,
 {
 	if (blk_discard_mergable(req))
 		return ELEVATOR_DISCARD_MERGE;
-	else if (blk_rq_pos(req) + blk_rq_sectors(req) == blk_rq_pos(next))
+	else if (blk_rq_pos(req) + blk_rq_sectors(req) == blk_rq_pos(next)) {
+		if (!bio_crypt_ctx_back_mergeable(
+			req->bio, blk_rq_sectors(req), next->bio)) {
+			return ELEVATOR_NO_MERGE;
+		}
 		return ELEVATOR_BACK_MERGE;
+	}
 
 	return ELEVATOR_NO_MERGE;
 }
@@ -733,6 +741,9 @@ static struct request *attempt_merge(struct request_queue *q,
 	if (req->ioprio != next->ioprio)
 		return NULL;
 
+	if (!bio_crypt_ctx_compatible(req->bio, next->bio))
+		return NULL;
+
 	/*
 	 * If we are allowed to merge, then append bio list
 	 * from next to rq and release next. merge_requests_fn
@@ -865,16 +876,31 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
 	if (rq->ioprio != bio_prio(bio))
 		return false;
 
+	/* Only merge if the crypt contexts are compatible */
+	if (!bio_crypt_ctx_compatible(bio, rq->bio))
+		return false;
+
 	return true;
 }
 
 enum elv_merge blk_try_merge(struct request *rq, struct bio *bio)
 {
-	if (blk_discard_mergable(rq))
+	if (blk_discard_mergable(rq)) {
 		return ELEVATOR_DISCARD_MERGE;
-	else if (blk_rq_pos(rq) + blk_rq_sectors(rq) == bio->bi_iter.bi_sector)
+	} else if (blk_rq_pos(rq) + blk_rq_sectors(rq) ==
+		   bio->bi_iter.bi_sector) {
+		if (!bio_crypt_ctx_back_mergeable(rq->bio,
+						  blk_rq_sectors(rq), bio)) {
+			return ELEVATOR_NO_MERGE;
+		}
 		return ELEVATOR_BACK_MERGE;
-	else if (blk_rq_pos(rq) - bio_sectors(bio) == bio->bi_iter.bi_sector)
+	} else if (blk_rq_pos(rq) - bio_sectors(bio) ==
+		   bio->bi_iter.bi_sector) {
+		if (!bio_crypt_ctx_back_mergeable(bio,
+						  bio_sectors(bio), rq->bio)) {
+			return ELEVATOR_NO_MERGE;
+		}
 		return ELEVATOR_FRONT_MERGE;
+	}
 	return ELEVATOR_NO_MERGE;
 }
diff --git a/block/bounce.c b/block/bounce.c
index f8ed677a1bf7..238278c13af5 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -220,6 +220,7 @@ static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask,
 	struct bvec_iter iter;
 	struct bio_vec bv;
 	struct bio *bio;
+	int ret;
 
 	/*
 	 * Pre immutable biovecs, __bio_clone() used to just do a memcpy from
@@ -267,9 +268,13 @@ static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask,
 		break;
 	}
 
-	if (bio_integrity(bio_src)) {
-		int ret;
+	ret = bio_clone_crypt_context(bio, bio_src, gfp_mask);
+	if (ret < 0) {
+		bio_put(bio);
+		return NULL;
+	}
 
+	if (bio_integrity(bio_src)) {
 		ret = bio_integrity_clone(bio, bio_src, gfp_mask);
 		if (ret < 0) {
 			bio_put(bio);
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 5475081dcbd6..3da81e4cddde 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1324,12 +1324,15 @@ static int clone_bio(struct dm_target_io *tio, struct bio *bio,
 		     sector_t sector, unsigned len)
 {
 	struct bio *clone = &tio->clone;
+	int ret;
 
 	__bio_clone_fast(clone, bio);
 
-	if (bio_integrity(bio)) {
-		int r;
+	ret = bio_clone_crypt_context(clone, bio, GFP_NOIO);
+	if (ret < 0)
+		return ret;
 
+	if (bio_integrity(bio)) {
 		if (unlikely(!dm_target_has_integrity(tio->ti->type) &&
 			     !dm_target_passes_integrity(tio->ti->type))) {
 			DMWARN("%s: the target %s doesn't support integrity data.",
@@ -1338,9 +1341,11 @@ static int clone_bio(struct dm_target_io *tio, struct bio *bio,
 			return -EIO;
 		}
 
-		r = bio_integrity_clone(clone, bio, GFP_NOIO);
-		if (r < 0)
-			return r;
+		ret = bio_integrity_clone(clone, bio, GFP_NOIO);
+		if (ret < 0) {
+			bio_crypt_free_ctx(clone);
+			return ret;
+		}
 	}
 
 	bio_advance(clone, to_bytes(sector - clone->bi_iter.bi_sector));
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 0f23b5682640..ba9552932571 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -561,6 +561,186 @@ static inline void bvec_kunmap_irq(char *buffer, unsigned long *flags)
 }
 #endif
 
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+extern int bio_clone_crypt_context(struct bio *dst, struct bio *src,
+				   gfp_t gfp_mask);
+
+static inline bool bio_is_encrypted(struct bio *bio)
+{
+	return bio && bio->bi_crypt_context;
+}
+
+static inline void bio_crypt_advance(struct bio *bio, unsigned int bytes)
+{
+	if (bio_is_encrypted(bio)) {
+		bio->bi_crypt_context->data_unit_num +=
+			bytes >> bio->bi_crypt_context->data_unit_size_bits;
+	}
+}
+
+extern bool bio_crypt_swhandled(struct bio *bio);
+
+static inline bool bio_crypt_has_keyslot(struct bio *bio)
+{
+	return bio_is_encrypted(bio) &&
+	       bio->bi_crypt_context->keyslot >= 0;
+}
+
+extern struct bio_crypt_ctx *bio_crypt_alloc_ctx(gfp_t gfp_mask);
+
+extern void bio_crypt_free_ctx(struct bio *bio);
+
+static inline int bio_crypt_set_ctx(struct bio *bio,
+				     u8 *raw_key,
+				     enum blk_crypt_mode_num crypt_mode,
+				     u64 dun,
+				     unsigned int dun_bits)
+{
+	struct bio_crypt_ctx *crypt_ctx;
+
+	crypt_ctx = bio_crypt_alloc_ctx(GFP_NOIO);
+	if (!crypt_ctx)
+		return -ENOMEM;
+
+	crypt_ctx->raw_key = raw_key;
+	crypt_ctx->data_unit_num = dun;
+	crypt_ctx->data_unit_size_bits = dun_bits;
+	crypt_ctx->crypt_mode = crypt_mode;
+	crypt_ctx->processing_ksm = NULL;
+	crypt_ctx->keyslot = -1;
+	bio->bi_crypt_context = crypt_ctx;
+
+	return 0;
+}
+
+static inline int bio_crypt_get_slot(struct bio *bio)
+{
+	return bio->bi_crypt_context->keyslot;
+}
+
+static inline void bio_crypt_set_keyslot(struct bio *bio,
+					 unsigned int keyslot,
+					 struct keyslot_manager *ksm)
+{
+	bio->bi_crypt_context->keyslot = keyslot;
+	bio->bi_crypt_context->processing_ksm = ksm;
+
+	bio->bi_crypt_context->crypt_iter = bio->bi_iter;
+	bio->bi_crypt_context->sw_data_unit_num =
+		bio->bi_crypt_context->data_unit_num;
+}
+
+static inline void bio_crypt_unset_keyslot(struct bio *bio)
+{
+	bio->bi_crypt_context->processing_ksm = NULL;
+	bio->bi_crypt_context->keyslot = -1;
+}
+
+static inline u8 *bio_crypt_raw_key(struct bio *bio)
+{
+	return bio->bi_crypt_context->raw_key;
+}
+
+static inline enum blk_crypt_mode_num bio_crypt_mode(struct bio *bio)
+{
+	return bio->bi_crypt_context->crypt_mode;
+}
+
+static inline u64 bio_crypt_data_unit_num(struct bio *bio)
+{
+	WARN_ON(!bio_is_encrypted(bio));
+	return bio->bi_crypt_context->data_unit_num;
+}
+
+static inline u64 bio_crypt_sw_data_unit_num(struct bio *bio)
+{
+	WARN_ON(!bio_is_encrypted(bio));
+	return bio->bi_crypt_context->sw_data_unit_num;
+}
+
+extern bool bio_crypt_should_process(struct bio *bio, struct request_queue *q);
+
+extern bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2);
+
+extern bool bio_crypt_ctx_back_mergeable(struct bio *b_1,
+					 unsigned int b1_sectors,
+					 struct bio *b_2);
+
+#else /* CONFIG_BLK_INLINE_ENCRYPTION */
+static inline int bio_clone_crypt_context(struct bio *dst, struct bio *src,
+					  gfp_t gfp_mask)
+{
+	return 0;
+}
+
+static inline void bio_crypt_advance(struct bio *bio,
+				     unsigned int bytes) { }
+
+static inline bool bio_is_encrypted(struct bio *bio)
+{
+	return false;
+}
+
+static inline void bio_crypt_free_ctx(struct bio *bio) { }
+
+static inline void bio_crypt_set_ctx(struct bio *bio,
+				     u8 *raw_key,
+				     enum blk_crypt_mode_num crypt_mode,
+				     u64 dun,
+				     unsigned int dun_bits) { }
+
+static inline bool bio_crypt_swhandled(struct bio *bio)
+{
+	return false;
+}
+
+static inline bool bio_crypt_has_keyslot(struct bio *bio)
+{
+	return false;
+}
+
+static inline void bio_crypt_set_keyslot(struct bio *bio,
+					 unsigned int keyslot,
+					 struct keyslot_manager *ksm) { }
+
+static inline void bio_crypt_unset_keyslot(struct bio *bio) { }
+
+static inline int bio_crypt_get_slot(struct bio *bio)
+{
+	return -1;
+}
+
+static inline u8 *bio_crypt_raw_key(struct bio *bio)
+{
+	return NULL;
+}
+
+static inline u64 bio_crypt_data_unit_num(struct bio *bio)
+{
+	WARN_ON(1);
+	return 0;
+}
+
+static inline bool bio_crypt_should_process(struct bio *bio,
+					    struct request_queue *q)
+{
+	return false;
+}
+
+static inline bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2)
+{
+	return true;
+}
+
+static inline bool bio_crypt_ctx_back_mergeable(struct bio *b_1,
+						unsigned int b1_sectors,
+						struct bio *b_2)
+{
+	return true;
+}
+
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
+
 /*
  * BIO list management for use by remapping drivers (e.g. DM or MD) and loop.
  *
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index aafa96839f95..c111b1ce8d24 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -148,6 +148,29 @@ enum blk_crypt_mode_num {
 	 */
 };
 
+struct bio_crypt_ctx {
+	int keyslot;
+	u8 *raw_key;
+	enum blk_crypt_mode_num crypt_mode;
+	u64 data_unit_num;
+	unsigned int data_unit_size_bits;
+
+	/*
+	 * The keyslot manager where the key has been programmed
+	 * with keyslot.
+	 */
+	struct keyslot_manager *processing_ksm;
+
+	/*
+	 * Copy of the bvec_iter when this bio was submitted.
+	 * We only want to en/decrypt the part of the bio
+	 * as described by the bvec_iter upon submission because
+	 * bio might be split before being resubmitted
+	 */
+	struct bvec_iter crypt_iter;
+	u64 sw_data_unit_num;
+};
+
 /*
  * main unit of I/O for the block layer and lower layers (ie drivers and
  * stacking drivers)
@@ -186,6 +209,11 @@ struct bio {
 	struct blkcg_gq		*bi_blkg;
 	struct bio_issue	bi_issue;
 #endif
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	struct bio_crypt_ctx	*bi_crypt_context;
+#endif
+
 	union {
 #if defined(CONFIG_BLK_DEV_INTEGRITY)
 		struct bio_integrity_payload *bi_integrity; /* data integrity */
-- 
2.22.0.rc1.311.g5d7573a151-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH v2 3/8] block: blk-crypto for Inline Encryption
  2019-06-05 23:28 [RFC PATCH v2 0/8] Inline Encryption Support Satya Tangirala
  2019-06-05 23:28 ` [RFC PATCH v2 1/8] block: Keyslot Manager for Inline Encryption Satya Tangirala
  2019-06-05 23:28 ` [RFC PATCH v2 2/8] block: Add encryption context to struct bio Satya Tangirala
@ 2019-06-05 23:28 ` Satya Tangirala
  2019-06-12 23:34   ` Eric Biggers
  2019-06-05 23:28 ` [RFC PATCH v2 4/8] scsi: ufs: UFS driver v2.1 spec crypto additions Satya Tangirala
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 16+ messages in thread
From: Satya Tangirala @ 2019-06-05 23:28 UTC (permalink / raw)
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel, linux-f2fs-devel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang, Satya Tangirala

We introduce blk-crypto, which manages programming keyslots for struct
bios. With blk-crypto, filesystems only need to call bio_crypt_set_ctx with
the encryption key, algorithm and data_unit_num; they don't have to worry
about getting a keyslot for each encryption context, as blk-crypto handles
that. Blk-crypto also makes it possible for layered devices like device
mapper to make use of inline encryption hardware.

Blk-crypto delegates crypto operations to inline encryption hardware when
available, and also contains a software fallback to the kernel crypto API.
For more details, refer to Documentation/block/blk-crypto.txt.

Known issues:
1) We're allocating crypto_skcipher in blk_crypto_keyslot_program, which
   uses GFP_KERNEL to allocate memory, but this function is on the write
   path for IO - we need to add support for specifying a different flags
   to the crypto API.

Signed-off-by: Satya Tangirala <satyat@google.com>
---
 Documentation/block/blk-crypto.txt | 185 ++++++++++
 block/Kconfig                      |   8 +
 block/Makefile                     |   2 +
 block/bio.c                        |   5 +
 block/blk-core.c                   |  11 +-
 block/blk-crypto.c                 | 558 +++++++++++++++++++++++++++++
 include/linux/blk-crypto.h         |  40 +++
 7 files changed, 808 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/block/blk-crypto.txt
 create mode 100644 block/blk-crypto.c
 create mode 100644 include/linux/blk-crypto.h

diff --git a/Documentation/block/blk-crypto.txt b/Documentation/block/blk-crypto.txt
new file mode 100644
index 000000000000..96a7983a117d
--- /dev/null
+++ b/Documentation/block/blk-crypto.txt
@@ -0,0 +1,185 @@
+BLK-CRYPTO and KEYSLOT MANAGER
+===========================
+
+CONTENTS
+1. Objective
+2. Constraints and notes
+3. Design
+4. Blk-crypto
+ 4-1 What does blk-crypto do on bio submission
+5. Layered Devices
+6. Future optimizations for layered devices
+
+1. Objective
+============
+
+We want to support inline encryption (IE) in the kernel.
+To allow for testing, we also want a software fallback when actual
+IE hardware is absent. We also want IE to work with layered devices
+like dm and loopback (i.e. we want to be able to use the IE hardware
+of the underlying devices if present, or else fall back to software
+en/decryption).
+
+
+2. Constraints and notes
+========================
+
+1) IE hardware have a limited number of “keyslots” that can be programmed
+with an encryption context (key, algorithm, data unit size, etc.) at any time.
+One can specify a keyslot in a data request made to the device, and the
+device will en/decrypt the data using the encryption context programmed into
+that specified keyslot. When possible, we want to make multiple requests with
+the same encryption context share the same keyslot.
+
+2) We need a way for filesystems to specify an encryption context to use for
+en/decrypting a struct bio, and a device driver (like UFS) needs to be able
+to use that encryption context when it processes the bio.
+
+3) We need a way for device drivers to expose their capabilities in a unified
+way to the upper layers.
+
+
+3. Design
+=========
+
+We add a struct bio_crypt_context to struct bio that can represent an
+encryption context, because we need to able to pass this encryption context
+from the FS layer to the device driver to act upon.
+
+While IE hardware works on the notion of keyslots, the FS layer has no
+knowledge of keyslots - it simply wants to specify an encryption context to
+use while en/decrypting a bio.
+
+We introduce a keyslot manager (KSM) that handles the translation from
+encryption contexts specified by the FS to keyslots on the IE hardware.
+This KSM also serves as the way IE hardware can expose their capabilities to
+upper layers. The generic mode of operation is: each device driver that wants
+to support IE will construct a KSM and set it up in its struct request_queue.
+Upper layers that want to use IE on this device can then use this KSM in
+the device’s struct request_queue to translate an encryption context into
+a keyslot. The presence of the KSM in the request queue shall be used to mean
+that the device supports IE.
+
+On the device driver end of the interface, the device driver needs to tell the
+KSM how to actually manipulate the IE hardware in the device to do things like
+programming the crypto key into the IE hardware into a particular keyslot. All
+this is achieved through the struct keyslot_mgmt_ll_ops that the device driver
+passes to the KSM when creating it.
+
+It uses refcounts to track which keyslots are idle (either they have no
+encryption context programmed, or there are no in flight struct bios
+referencing that keyslot). When a new encryption context needs a keyslot, it
+tries to find a keyslot that has already been programmed with the same
+encryption context, and if there is no such keyslot, it evicts the least
+recently used idle keyslot and programs the new encryption context into that
+one. If no idle keyslots are available, then the caller will sleep until there
+is at least one.
+
+
+4. Blk-crypto
+=============
+
+The above is sufficient for simple cases, but does not work if there is a
+need for a software fallback, or if we are want to use IE with layered devices.
+To these ends, we introduce blk-crypto. Blk-crypto allows us to present a
+unified view of encryption to the FS (so FS only needs to specify an
+encryption context and not worry about keyslots at all), and blk-crypto can
+decide whether to delegate the en/decryption to IE hardware or to software
+(i.e. to the kernel crypto API). Blk-crypto maintains an internal KSM that
+serves as the software fallback to the kernel crypto API.
+
+Blk-crypto needs to ensure that the encryption context is programmed into the
+"correct" keyslot manager for IE. If a bio is submitted to a layered device
+that eventually passes the bio down to a device that really does support IE, we
+want the encryption context to be programmed into a keyslot for the KSM of the
+device with IE support. However, blk-crypto does not know a priori whether a
+particular device is the final device in the layering structure for a bio or
+not. So in the case that a particular device does not support IE, since it is
+possibly the final destination device for the bio, if the bio requires
+encryption (i.e. the bio is doing a write operation), blk-crypto must fallback
+to software *before* sending the bio to the device.
+
+Blk-crypto ensures that
+1) The bio’s encryption context is programmed into a keyslot in the KSM of the
+request queue that the bio is being submitted to (or the software fallback KSM
+if the request queue doesn’t have a KSM), and that the processing_ksm in the
+bi_crypt_context is set to this KSM
+
+2) That the bio has its own individual reference to the keyslot in this KSM.
+Once the bio passes through blk-crypto, its encryption context is programmed
+in some KSM. The “its own individual reference to the keyslot” ensures that
+keyslots can be released by each bio independently of other bios while ensuring
+that the bio has a valid reference to the keyslot when, for e.g., the software
+fallback KSM in blk-crypto performs crypto for on the device’s behalf. The
+individual references are ensured by increasing the refcount for the keyslot in
+the processing_ksm when a bio with a programmed encryption context is cloned.
+
+
+4-1. What blk-crypto does on bio submission
+-------------------------------------------
+
+Case 1: blk-crypto is given a bio with only an encryption context that hasn’t
+been programmed into any keyslot in any KSM (for e.g. a bio from the FS). In
+this case, blk-crypto will program the encryption context into the KSM of the
+request queue the bio is being submitted to (and if this KSM does not exist,
+then it will program it into blk-crypto’s internal KSM for software fallback).
+The KSM that this encryption context was programmed into is stored as the
+processing_ksm in the bio’s bi_crypt_context.
+
+Case 2: blk-crypto is given a bio whose encryption context has already been
+programmed into a keyslot in the *software fallback KSM*. In this case,
+blk-crypto does nothing; it treats the bio as not having specified an
+encryption context. Note that we cannot do what we will do in Case 3 here
+because we would have already encrypted the bio in software by this point.
+
+Case 3: blk-crypto is given a bio whose encryption context has already been
+programmed into a keyslot in some KSM (that is *not* the software fallback
+KSM). In this case, blk-crypto first releases that keyslot from that KSM and
+then treats the bio as in Case 1.
+
+This way, when a device driver is processing a bio, it can be sure that
+the bio’s encryption context has been programmed into some KSM (either the
+device driver’s request queue’s KSM, or blk-crypto’s software fallback KSM).
+It then simply needs to check if the bio’s processing_ksm is the device’s
+request queue’s KSM. If so, then it should proceed with IE. If not, it should
+simply do nothing with respect to crypto, because some other KSM (perhaps the
+blk-crypto software fallback KSM) is handling the en/decryption.
+
+Blk-crypto will release the keyslot that is being held by the bio (and also
+decrypt it if the bio is using the software fallback KSM) once
+bio_remaining_done returns true for the bio.
+
+
+5. Layered Devices
+==================
+
+Layered devices that wish to support IE need to create their own keyslot
+manager for their request queue, and expose whatever functionality they choose.
+When a layered device wants to pass a bio to another layer (either by
+resubmitting the same bio, or by submitting a clone), it doesn’t need to do
+anything special because the bio (or the clone) will once again pass through
+blk-crypto, which will work as described in Case 3. If a layered device wants
+for some reason to do the IO by itself instead of passing it on to a child
+device, but it also chose to expose IE capabilities by setting up a KSM in its
+request queue, it is then responsible for en/decrypting the data itself. In
+such cases, the device can choose to call the blk-crypto function
+blk_crypto_fallback_to_software (TODO: Not yet implemented), which will
+cause the en/decryption to be done via software fallback.
+
+
+6. Future Optimizations for layered devices
+===========================================
+
+Creating a keyslot manager for the layered device uses up memory for each
+keyslot, and in general, a layered device (like dm-linear) merely passes the
+request on to a “child” device, so the keyslots in the layered device itself
+might be completely unused. We can instead define a new type of KSM; the
+“passthrough KSM”, that layered devices can use to let blk-crypto know that
+this layered device *will* pass the bio to some child device (and hence
+through blk-crypto again, at which point blk-crypto can program the encryption
+context, instead of programming it into the layered device’s KSM). Again, if
+the device “lies” and decides to do the IO itself instead of passing it on to
+a child device, it is responsible for doing the en/decryption (and can choose
+to call blk_crypto_fallback_to_software). Another use case for the
+"passthrough KSM" is for IE devices that want to manage their own keyslots/do
+not have a limited number of keyslots.
diff --git a/block/Kconfig b/block/Kconfig
index 1b220101a9cb..0bd4b5060bf8 100644
--- a/block/Kconfig
+++ b/block/Kconfig
@@ -163,6 +163,14 @@ config BLK_SED_OPAL
 	Enabling this option enables users to setup/unlock/lock
 	Locking ranges for SED devices using the Opal protocol.
 
+config BLK_INLINE_ENCRYPTION
+	bool "Enable inline encryption support in block layer"
+	help
+	  Build the blk-crypto subsystem.
+	  Enabling this lets the block layer handle encryption,
+	  so users can take advantage of inline encryption
+	  hardware if present.
+
 menu "Partition Types"
 
 source "block/partitions/Kconfig"
diff --git a/block/Makefile b/block/Makefile
index eee1b4ceecf9..5d38ea437937 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -35,3 +35,5 @@ obj-$(CONFIG_BLK_DEBUG_FS)	+= blk-mq-debugfs.o
 obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o
 obj-$(CONFIG_BLK_SED_OPAL)	+= sed-opal.o
 obj-$(CONFIG_BLK_PM)		+= blk-pm.o
+obj-$(CONFIG_BLK_INLINE_ENCRYPTION)	+= blk-crypt-ctx.o blk-crypto.o \
+					     keyslot-manager.o
diff --git a/block/bio.c b/block/bio.c
index 87aa87288b39..711b026d5159 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -17,6 +17,7 @@
 #include <linux/cgroup.h>
 #include <linux/blk-cgroup.h>
 #include <linux/keyslot-manager.h>
+#include <linux/blk-crypto.h>
 
 #include <trace/events/block.h>
 #include "blk.h"
@@ -1829,6 +1830,10 @@ void bio_endio(struct bio *bio)
 again:
 	if (!bio_remaining_done(bio))
 		return;
+
+	if (!blk_crypto_endio(bio))
+		return;
+
 	if (!bio_integrity_endio(bio))
 		return;
 
diff --git a/block/blk-core.c b/block/blk-core.c
index ee1b35fe8572..1892c3904b8c 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -36,6 +36,7 @@
 #include <linux/blk-cgroup.h>
 #include <linux/debugfs.h>
 #include <linux/bpf.h>
+#include <linux/blk-crypto.h>
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/block.h>
@@ -1005,7 +1006,9 @@ blk_qc_t generic_make_request(struct bio *bio)
 			/* Create a fresh bio_list for all subordinate requests */
 			bio_list_on_stack[1] = bio_list_on_stack[0];
 			bio_list_init(&bio_list_on_stack[0]);
-			ret = q->make_request_fn(q, bio);
+
+			if (!blk_crypto_submit_bio(&bio))
+				ret = q->make_request_fn(q, bio);
 
 			blk_queue_exit(q);
 
@@ -1058,6 +1061,9 @@ blk_qc_t direct_make_request(struct bio *bio)
 	if (!generic_make_request_checks(bio))
 		return BLK_QC_T_NONE;
 
+	if (blk_crypto_submit_bio(&bio))
+		return BLK_QC_T_NONE;
+
 	if (unlikely(blk_queue_enter(q, nowait ? BLK_MQ_REQ_NOWAIT : 0))) {
 		if (nowait && !blk_queue_dying(q))
 			bio->bi_status = BLK_STS_AGAIN;
@@ -1737,5 +1743,8 @@ int __init blk_dev_init(void)
 	blk_debugfs_root = debugfs_create_dir("block", NULL);
 #endif
 
+	if (blk_crypto_init() < 0)
+		panic("Failed to init blk-crypto\n");
+
 	return 0;
 }
diff --git a/block/blk-crypto.c b/block/blk-crypto.c
new file mode 100644
index 000000000000..5adb5251ae7e
--- /dev/null
+++ b/block/blk-crypto.c
@@ -0,0 +1,558 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2019 Google LLC
+ */
+#include <linux/blk-crypto.h>
+#include <linux/keyslot-manager.h>
+#include <linux/mempool.h>
+#include <linux/blk-cgroup.h>
+#include <crypto/skcipher.h>
+#include <crypto/algapi.h>
+
+struct blk_crypt_mode {
+	const char *friendly_name;
+	const char *cipher_str;
+	size_t keysize;
+	size_t ivsize;
+	bool needs_essiv;
+};
+
+static const struct blk_crypt_mode blk_crypt_modes[] = {
+	[BLK_ENCRYPTION_MODE_AES_256_XTS] = {
+		.friendly_name = "AES-256-XTS",
+		.cipher_str = "xts(aes)",
+		.keysize = 64,
+		.ivsize = 16,
+	},
+	/* TODO: the rest of the algs that fscrypt supports */
+};
+
+#define BLK_CRYPTO_MAX_KEY_SIZE 64
+/* TODO: Do we want to make this user configurable somehow? */
+#define BLK_CRYPTO_NUM_KEYSLOTS 100
+
+static struct blk_crypto_keyslot {
+	struct crypto_skcipher *tfm;
+	enum blk_crypt_mode_num crypt_mode;
+	u8 key[BLK_CRYPTO_MAX_KEY_SIZE];
+} *blk_crypto_keyslots;
+
+struct work_mem {
+	struct work_struct crypto_work;
+	struct bio *bio;
+};
+
+static struct keyslot_manager *blk_crypto_ksm;
+static struct workqueue_struct *blk_crypto_wq;
+static mempool_t *blk_crypto_page_pool;
+static struct kmem_cache *blk_crypto_work_mem_cache;
+
+static unsigned int num_prealloc_bounce_pg = 32;
+
+bool bio_crypt_swhandled(struct bio *bio)
+{
+	return bio_crypt_has_keyslot(bio) &&
+	       bio->bi_crypt_context->processing_ksm == blk_crypto_ksm;
+}
+
+/* TODO: handle modes that need essiv */
+static int blk_crypto_keyslot_program(void *priv, const u8 *key,
+				      enum blk_crypt_mode_num crypt_mode,
+				      unsigned int data_unit_size,
+				      unsigned int slot)
+{
+	struct blk_crypto_keyslot *slotp = &blk_crypto_keyslots[slot];
+	struct crypto_skcipher *tfm = slotp->tfm;
+	const struct blk_crypt_mode *mode = &blk_crypt_modes[crypt_mode];
+	size_t keysize = mode->keysize;
+	int err;
+
+	if (crypt_mode != slotp->crypt_mode || !tfm) {
+		crypto_free_skcipher(slotp->tfm);
+		slotp->tfm = NULL;
+		memset(slotp->key, 0, BLK_CRYPTO_MAX_KEY_SIZE);
+		tfm = crypto_alloc_skcipher(
+			mode->cipher_str, 0, 0);
+		if (IS_ERR(tfm))
+			return PTR_ERR(tfm);
+
+		crypto_skcipher_set_flags(tfm,
+					  CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
+		slotp->crypt_mode = crypt_mode;
+		slotp->tfm = tfm;
+	}
+
+
+	err = crypto_skcipher_setkey(tfm, key, keysize);
+
+	if (err) {
+		crypto_free_skcipher(tfm);
+		slotp->tfm = NULL;
+		return err;
+	}
+
+	memcpy(slotp->key, key, keysize);
+
+	return 0;
+}
+
+static int blk_crypto_keyslot_evict(void *priv, const u8 *key,
+				    enum blk_crypt_mode_num crypt_mode,
+				    unsigned int data_unit_size,
+				    unsigned int slot)
+{
+	crypto_free_skcipher(blk_crypto_keyslots[slot].tfm);
+	blk_crypto_keyslots[slot].tfm = NULL;
+	memset(blk_crypto_keyslots[slot].key, 0, BLK_CRYPTO_MAX_KEY_SIZE);
+
+	return 0;
+}
+
+static int blk_crypto_keyslot_find(void *priv,
+				   const u8 *key,
+				   enum blk_crypt_mode_num crypt_mode,
+				   unsigned int data_unit_size_bytes)
+{
+	int slot;
+	const size_t keysize = blk_crypt_modes[crypt_mode].keysize;
+
+	/* TODO: hashmap? */
+	for (slot = 0; slot < BLK_CRYPTO_NUM_KEYSLOTS; slot++) {
+		if (blk_crypto_keyslots[slot].crypt_mode == crypt_mode &&
+		    !crypto_memneq(blk_crypto_keyslots[slot].key, key,
+				   keysize)) {
+			return slot;
+		}
+	}
+
+	return -ENOKEY;
+}
+
+static bool blk_crypt_mode_supported(void *priv,
+				     enum blk_crypt_mode_num crypt_mode,
+				     unsigned int data_unit_size)
+{
+	// Of course, blk-crypto supports all blk_crypt_modes.
+	return true;
+}
+
+static const struct keyslot_mgmt_ll_ops blk_crypto_ksm_ll_ops = {
+	.keyslot_program	= blk_crypto_keyslot_program,
+	.keyslot_evict		= blk_crypto_keyslot_evict,
+	.keyslot_find		= blk_crypto_keyslot_find,
+	.crypt_mode_supported	= blk_crypt_mode_supported,
+};
+
+static void blk_crypto_put_keyslot(struct bio *bio)
+{
+	struct bio_crypt_ctx *crypt_ctx = bio->bi_crypt_context;
+
+	keyslot_manager_put_slot(crypt_ctx->processing_ksm, crypt_ctx->keyslot);
+	bio_crypt_unset_keyslot(bio);
+}
+
+static int blk_crypto_get_keyslot(struct bio *bio,
+				      struct keyslot_manager *ksm)
+{
+	int slot;
+	enum blk_crypt_mode_num crypt_mode = bio_crypt_mode(bio);
+
+	if (!ksm)
+		return -ENOMEM;
+
+	slot = keyslot_manager_get_slot_for_key(ksm,
+						bio_crypt_raw_key(bio),
+						crypt_mode, PAGE_SIZE);
+	if (slot < 0)
+		return slot;
+
+	bio_crypt_set_keyslot(bio, slot, ksm);
+	return 0;
+}
+
+static void blk_crypto_encrypt_endio(struct bio *enc_bio)
+{
+	struct bio *src_bio = enc_bio->bi_private;
+	struct bio_vec *enc_bvec, *enc_bvec_end;
+
+	enc_bvec = enc_bio->bi_io_vec;
+	enc_bvec_end = enc_bvec + enc_bio->bi_vcnt;
+	for (; enc_bvec != enc_bvec_end; enc_bvec++)
+		mempool_free(enc_bvec->bv_page, blk_crypto_page_pool);
+
+	src_bio->bi_status = enc_bio->bi_status;
+
+	bio_put(enc_bio);
+	bio_endio(src_bio);
+}
+
+static struct bio *blk_crypto_clone_bio(struct bio *bio_src)
+{
+	struct bvec_iter iter;
+	struct bio_vec bv;
+	struct bio *bio;
+
+	bio = bio_alloc_bioset(GFP_NOIO, bio_segments(bio_src), NULL);
+	if (!bio)
+		return NULL;
+	bio->bi_disk		= bio_src->bi_disk;
+	bio->bi_opf		= bio_src->bi_opf;
+	bio->bi_ioprio		= bio_src->bi_ioprio;
+	bio->bi_write_hint	= bio_src->bi_write_hint;
+	bio->bi_iter.bi_sector	= bio_src->bi_iter.bi_sector;
+	bio->bi_iter.bi_size	= bio_src->bi_iter.bi_size;
+
+	bio_for_each_segment(bv, bio_src, iter)
+		bio->bi_io_vec[bio->bi_vcnt++] = bv;
+
+	if (bio_integrity(bio_src)) {
+		int ret;
+
+		ret = bio_integrity_clone(bio, bio_src, GFP_NOIO);
+		if (ret < 0) {
+			bio_put(bio);
+			return NULL;
+		}
+	}
+
+	bio_clone_blkg_association(bio, bio_src);
+	blkcg_bio_issue_init(bio);
+
+	return bio;
+}
+
+static int blk_crypto_encrypt_bio(struct bio **bio_ptr)
+{
+	struct bio *src_bio = *bio_ptr;
+	int slot;
+	struct skcipher_request *ciph_req = NULL;
+	DECLARE_CRYPTO_WAIT(wait);
+	struct bio_vec bv;
+	struct bvec_iter iter;
+	int err = 0;
+	u64 curr_dun;
+	union {
+		__le64 dun;
+		u8 bytes[16];
+	} iv;
+	struct scatterlist src, dst;
+	struct bio *enc_bio;
+	struct bio_vec *enc_bvec;
+	int i, j;
+	unsigned int num_sectors;
+
+	if (!blk_crypto_keyslots)
+		return -ENOMEM;
+
+	/* Split the bio if it's too big for single page bvec */
+	i = 0;
+	num_sectors = 0;
+	bio_for_each_segment(bv, src_bio, iter) {
+		num_sectors += bv.bv_len >> 9;
+		if (++i == BIO_MAX_PAGES)
+			break;
+	}
+	if (num_sectors < bio_sectors(src_bio)) {
+		struct bio *split_bio;
+
+		split_bio = bio_split(src_bio, num_sectors, GFP_NOIO, NULL);
+		if (!split_bio) {
+			src_bio->bi_status = BLK_STS_RESOURCE;
+			return -ENOMEM;
+		}
+		bio_chain(split_bio, src_bio);
+		generic_make_request(src_bio);
+		*bio_ptr = split_bio;
+	}
+
+	src_bio = *bio_ptr;
+
+	enc_bio = blk_crypto_clone_bio(src_bio);
+	if (!enc_bio) {
+		src_bio->bi_status = BLK_STS_RESOURCE;
+		return -ENOMEM;
+	}
+
+	err = blk_crypto_get_keyslot(src_bio, blk_crypto_ksm);
+	if (err) {
+		src_bio->bi_status = BLK_STS_IOERR;
+		bio_put(enc_bio);
+		return err;
+	}
+	slot = bio_crypt_get_slot(src_bio);
+
+	ciph_req = skcipher_request_alloc(blk_crypto_keyslots[slot].tfm,
+					  GFP_NOIO);
+	if (!ciph_req) {
+		src_bio->bi_status = BLK_STS_RESOURCE;
+		err = -ENOMEM;
+		bio_put(enc_bio);
+		goto out_release_keyslot;
+	}
+
+	skcipher_request_set_callback(ciph_req,
+				      CRYPTO_TFM_REQ_MAY_BACKLOG |
+				      CRYPTO_TFM_REQ_MAY_SLEEP,
+				      crypto_req_done, &wait);
+
+	curr_dun = bio_crypt_sw_data_unit_num(src_bio);
+	sg_init_table(&src, 1);
+	sg_init_table(&dst, 1);
+	for (i = 0, enc_bvec = enc_bio->bi_io_vec; i < enc_bio->bi_vcnt;
+	     enc_bvec++, i++) {
+		struct page *page = enc_bvec->bv_page;
+		struct page *ciphertext_page =
+			mempool_alloc(blk_crypto_page_pool, GFP_NOFS);
+
+		enc_bvec->bv_page = ciphertext_page;
+
+		if (!ciphertext_page)
+			goto no_mem_for_ciph_page;
+
+		memset(&iv, 0, sizeof(iv));
+		iv.dun = cpu_to_le64(curr_dun);
+
+		sg_set_page(&src, page, enc_bvec->bv_len, enc_bvec->bv_offset);
+		sg_set_page(&dst, ciphertext_page, enc_bvec->bv_len,
+			    enc_bvec->bv_offset);
+
+		skcipher_request_set_crypt(ciph_req, &src, &dst,
+					   enc_bvec->bv_len, iv.bytes);
+		err = crypto_wait_req(crypto_skcipher_encrypt(ciph_req), &wait);
+		if (err)
+			goto no_mem_for_ciph_page;
+
+		curr_dun++;
+		continue;
+no_mem_for_ciph_page:
+		err = -ENOMEM;
+		for (j = i - 1; j >= 0; j--) {
+			mempool_free(enc_bio->bi_io_vec->bv_page,
+				     blk_crypto_page_pool);
+		}
+		bio_put(enc_bio);
+		goto out_release_cipher;
+	}
+
+	enc_bio->bi_private = src_bio;
+	enc_bio->bi_end_io = blk_crypto_encrypt_endio;
+
+	*bio_ptr = enc_bio;
+out_release_cipher:
+	skcipher_request_free(ciph_req);
+out_release_keyslot:
+	blk_crypto_put_keyslot(src_bio);
+	return err;
+}
+
+/*
+ * TODO: assumption right now is:
+ * each segment in bio has length == the data_unit_size
+ */
+static void blk_crypto_decrypt_bio(struct work_struct *w)
+{
+	struct work_mem *work_mem =
+		container_of(w, struct work_mem, crypto_work);
+	struct bio *bio = work_mem->bio;
+	int slot = bio_crypt_get_slot(bio);
+	struct skcipher_request *ciph_req;
+	DECLARE_CRYPTO_WAIT(wait);
+	struct bio_vec bv;
+	struct bvec_iter iter;
+	u64 curr_dun;
+	union {
+		__le64 dun;
+		u8 bytes[16];
+	} iv;
+	struct scatterlist sg;
+
+	curr_dun = bio_crypt_sw_data_unit_num(bio);
+
+	kmem_cache_free(blk_crypto_work_mem_cache, work_mem);
+	ciph_req = skcipher_request_alloc(blk_crypto_keyslots[slot].tfm,
+					  GFP_NOFS);
+	if (!ciph_req) {
+		bio->bi_status = BLK_STS_RESOURCE;
+		goto out;
+	}
+
+	skcipher_request_set_callback(ciph_req,
+				      CRYPTO_TFM_REQ_MAY_BACKLOG |
+				      CRYPTO_TFM_REQ_MAY_SLEEP,
+				      crypto_req_done, &wait);
+
+	sg_init_table(&sg, 1);
+	__bio_for_each_segment(bv, bio, iter,
+			       bio->bi_crypt_context->crypt_iter) {
+		struct page *page = bv.bv_page;
+		int err;
+
+		memset(&iv, 0, sizeof(iv));
+		iv.dun = cpu_to_le64(curr_dun);
+
+		sg_set_page(&sg, page, bv.bv_len, bv.bv_offset);
+		skcipher_request_set_crypt(ciph_req, &sg, &sg,
+					   bv.bv_len, iv.bytes);
+		err = crypto_wait_req(crypto_skcipher_decrypt(ciph_req), &wait);
+		if (err) {
+			bio->bi_status = BLK_STS_IOERR;
+			goto out;
+		}
+		curr_dun++;
+	}
+
+out:
+	skcipher_request_free(ciph_req);
+	blk_crypto_put_keyslot(bio);
+	bio_endio(bio);
+}
+
+static void blk_crypto_queue_decrypt_bio(struct bio *bio)
+{
+	struct work_mem *work_mem =
+		kmem_cache_zalloc(blk_crypto_work_mem_cache, GFP_ATOMIC);
+
+	if (!work_mem) {
+		bio->bi_status = BLK_STS_RESOURCE;
+		return bio_endio(bio);
+	}
+
+	INIT_WORK(&work_mem->crypto_work, blk_crypto_decrypt_bio);
+	work_mem->bio = bio;
+	queue_work(blk_crypto_wq, &work_mem->crypto_work);
+}
+
+/*
+ * Ensures that:
+ * 1) The bio’s encryption context is programmed into a keyslot in the
+ * keyslot manager (KSM) of the request queue that the bio is being submitted
+ * to (or the software fallback KSM if the request queue doesn’t have a KSM),
+ * and that the processing_ksm in the bi_crypt_context of this bio is set to
+ * this KSM.
+ *
+ * 2) That the bio has a reference to this keyslot in this KSM.
+ */
+int blk_crypto_submit_bio(struct bio **bio_ptr)
+{
+	struct bio *bio = *bio_ptr;
+	struct request_queue *q;
+	int err;
+	enum blk_crypt_mode_num crypt_mode;
+	struct bio_crypt_ctx *crypt_ctx;
+
+	if (!bio_has_data(bio))
+		return 0;
+
+	if (!bio_is_encrypted(bio) || bio_crypt_swhandled(bio))
+		return 0;
+
+	crypt_ctx = bio->bi_crypt_context;
+	q = bio->bi_disk->queue;
+	crypt_mode = bio_crypt_mode(bio);
+
+	if (bio_crypt_has_keyslot(bio)) {
+		/* Key already programmed into device? */
+		if (q->ksm == crypt_ctx->processing_ksm)
+			return 0;
+
+		/* Nope, release the existing keyslot. */
+		blk_crypto_put_keyslot(bio);
+	}
+
+	/* Get device keyslot if supported */
+	if (q->ksm) {
+		err = blk_crypto_get_keyslot(bio, q->ksm);
+		if (!err)
+			return 0;
+	}
+
+	/* Fallback to software crypto */
+	if (bio_data_dir(bio) == WRITE) {
+		/* Encrypt the data now */
+		err = blk_crypto_encrypt_bio(bio_ptr);
+		if (err)
+			goto out_encrypt_err;
+	} else {
+		err = blk_crypto_get_keyslot(bio, blk_crypto_ksm);
+		if (err)
+			goto out_err;
+	}
+	return 0;
+out_err:
+	bio->bi_status = BLK_STS_IOERR;
+out_encrypt_err:
+	bio_endio(bio);
+	return err;
+}
+
+/*
+ * If the bio is not en/decrypted in software, this function releases the
+ * reference to the keyslot that blk_crypto_submit_bio got.
+ * If blk_crypto_submit_bio decided to fallback to software crypto for this
+ * bio, then if the bio is doing a write, we free the allocated bounce pages,
+ * and if the bio is doing a read, we queue the bio for decryption into a
+ * workqueue and return -EAGAIN. After the bio has been decrypted, we release
+ * the keyslot before we call bio_endio(bio).
+ */
+bool blk_crypto_endio(struct bio *bio)
+{
+	if (!bio_crypt_has_keyslot(bio))
+		return true;
+
+	if (!bio_crypt_swhandled(bio)) {
+		blk_crypto_put_keyslot(bio);
+		return true;
+	}
+
+	/* bio_data_dir(bio) == READ. So decrypt bio */
+	blk_crypto_queue_decrypt_bio(bio);
+	return false;
+}
+
+int __init blk_crypto_init(void)
+{
+	blk_crypto_ksm = keyslot_manager_create(BLK_CRYPTO_NUM_KEYSLOTS,
+						&blk_crypto_ksm_ll_ops,
+						NULL);
+	if (!blk_crypto_ksm)
+		goto out_ksm;
+
+	blk_crypto_wq = alloc_workqueue("blk_crypto_wq",
+					WQ_UNBOUND | WQ_HIGHPRI,
+					num_online_cpus());
+	if (!blk_crypto_wq)
+		goto out_wq;
+
+	blk_crypto_keyslots = kzalloc(sizeof(*blk_crypto_keyslots) *
+				      BLK_CRYPTO_NUM_KEYSLOTS,
+				      GFP_KERNEL);
+	if (!blk_crypto_keyslots)
+		goto out_blk_crypto_keyslots;
+
+	blk_crypto_page_pool =
+		mempool_create_page_pool(num_prealloc_bounce_pg, 0);
+	if (!blk_crypto_page_pool)
+		goto out_bounce_pool;
+
+	blk_crypto_work_mem_cache = KMEM_CACHE(work_mem, SLAB_RECLAIM_ACCOUNT);
+	if (!blk_crypto_work_mem_cache)
+		goto out_work_mem_cache;
+
+	return 0;
+
+out_work_mem_cache:
+	mempool_destroy(blk_crypto_page_pool);
+	blk_crypto_page_pool = NULL;
+out_bounce_pool:
+	kzfree(blk_crypto_keyslots);
+	blk_crypto_keyslots = NULL;
+out_blk_crypto_keyslots:
+	destroy_workqueue(blk_crypto_wq);
+	blk_crypto_wq = NULL;
+out_wq:
+	keyslot_manager_destroy(blk_crypto_ksm);
+	blk_crypto_ksm = NULL;
+out_ksm:
+	pr_warn("No memory for blk-crypto software fallback.");
+	return -ENOMEM;
+}
diff --git a/include/linux/blk-crypto.h b/include/linux/blk-crypto.h
new file mode 100644
index 000000000000..cbb5bea6dcdb
--- /dev/null
+++ b/include/linux/blk-crypto.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#ifndef __LINUX_BLK_CRYPTO_H
+#define __LINUX_BLK_CRYPTO_H
+
+#include <linux/types.h>
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+
+struct bio;
+
+int blk_crypto_init(void);
+
+int blk_crypto_submit_bio(struct bio **bio_ptr);
+
+bool blk_crypto_endio(struct bio *bio);
+
+#else /* CONFIG_BLK_INLINE_ENCRYPTION */
+
+static inline int blk_crypto_init(void)
+{
+	return 0;
+}
+
+static inline int blk_crypto_submit_bio(struct bio **bio)
+{
+	return 0;
+}
+
+static inline bool blk_crypto_endio(struct bio *bio)
+{
+	return true;
+}
+
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
+
+#endif /* __LINUX_BLK_CRYPTO_H */
-- 
2.22.0.rc1.311.g5d7573a151-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH v2 4/8] scsi: ufs: UFS driver v2.1 spec crypto additions
  2019-06-05 23:28 [RFC PATCH v2 0/8] Inline Encryption Support Satya Tangirala
                   ` (2 preceding siblings ...)
  2019-06-05 23:28 ` [RFC PATCH v2 3/8] block: blk-crypto for Inline Encryption Satya Tangirala
@ 2019-06-05 23:28 ` Satya Tangirala
  2019-06-05 23:28 ` [RFC PATCH v2 5/8] scsi: ufs: UFS crypto API Satya Tangirala
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 16+ messages in thread
From: Satya Tangirala @ 2019-06-05 23:28 UTC (permalink / raw)
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel, linux-f2fs-devel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang, Satya Tangirala

Add the crypto registers and structs defined in v2.1 of the JEDEC UFSHCI
specification in preparation to add support for inline encryption to
UFS.

Signed-off-by: Satya Tangirala <satyat@google.com>
---
 drivers/scsi/ufs/ufshcd.h |  5 +++
 drivers/scsi/ufs/ufshci.h | 67 +++++++++++++++++++++++++++++++++++++--
 2 files changed, 70 insertions(+), 2 deletions(-)

diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
index ecfa898b9ccc..d3b6a6b57a37 100644
--- a/drivers/scsi/ufs/ufshcd.h
+++ b/drivers/scsi/ufs/ufshcd.h
@@ -692,6 +692,11 @@ struct ufs_hba {
 	 * the performance of ongoing read/write operations.
 	 */
 #define UFSHCD_CAP_KEEP_AUTO_BKOPS_ENABLED_EXCEPT_SUSPEND (1 << 5)
+	/*
+	 * This capability allows the host controller driver to use the
+	 * inline crypto engine, if it is present
+	 */
+#define UFSHCD_CAP_CRYPTO (1 << 6)
 
 	struct devfreq *devfreq;
 	struct ufs_clk_scaling clk_scaling;
diff --git a/drivers/scsi/ufs/ufshci.h b/drivers/scsi/ufs/ufshci.h
index 6fa889de5ee5..a757eaf99a19 100644
--- a/drivers/scsi/ufs/ufshci.h
+++ b/drivers/scsi/ufs/ufshci.h
@@ -90,6 +90,7 @@ enum {
 	MASK_64_ADDRESSING_SUPPORT		= 0x01000000,
 	MASK_OUT_OF_ORDER_DATA_DELIVERY_SUPPORT	= 0x02000000,
 	MASK_UIC_DME_TEST_MODE_SUPPORT		= 0x04000000,
+	MASK_CRYPTO_SUPPORT			= 0x10000000,
 };
 
 #define UFS_MASK(mask, offset)		((mask) << (offset))
@@ -143,6 +144,7 @@ enum {
 #define DEVICE_FATAL_ERROR			0x800
 #define CONTROLLER_FATAL_ERROR			0x10000
 #define SYSTEM_BUS_FATAL_ERROR			0x20000
+#define CRYPTO_ENGINE_FATAL_ERROR		0x40000
 
 #define UFSHCD_UIC_PWR_MASK	(UIC_HIBERNATE_ENTER |\
 				UIC_HIBERNATE_EXIT |\
@@ -153,11 +155,13 @@ enum {
 #define UFSHCD_ERROR_MASK	(UIC_ERROR |\
 				DEVICE_FATAL_ERROR |\
 				CONTROLLER_FATAL_ERROR |\
-				SYSTEM_BUS_FATAL_ERROR)
+				SYSTEM_BUS_FATAL_ERROR |\
+				CRYPTO_ENGINE_FATAL_ERROR)
 
 #define INT_FATAL_ERRORS	(DEVICE_FATAL_ERROR |\
 				CONTROLLER_FATAL_ERROR |\
-				SYSTEM_BUS_FATAL_ERROR)
+				SYSTEM_BUS_FATAL_ERROR |\
+				CRYPTO_ENGINE_FATAL_ERROR)
 
 /* HCS - Host Controller Status 30h */
 #define DEVICE_PRESENT				0x1
@@ -316,6 +320,61 @@ enum {
 	INTERRUPT_MASK_ALL_VER_21	= 0x71FFF,
 };
 
+/* CCAP - Crypto Capability 100h */
+union ufs_crypto_capabilities {
+	__le32 reg_val;
+	struct {
+		u8 num_crypto_cap;
+		u8 config_count;
+		u8 reserved;
+		u8 config_array_ptr;
+	};
+};
+
+enum ufs_crypto_key_size {
+	UFS_CRYPTO_KEY_SIZE_INVALID	= 0x0,
+	UFS_CRYPTO_KEY_SIZE_128		= 0x1,
+	UFS_CRYPTO_KEY_SIZE_192		= 0x2,
+	UFS_CRYPTO_KEY_SIZE_256		= 0x3,
+	UFS_CRYPTO_KEY_SIZE_512		= 0x4,
+};
+
+enum ufs_crypto_alg {
+	UFS_CRYPTO_ALG_AES_XTS			= 0x0,
+	UFS_CRYPTO_ALG_BITLOCKER_AES_CBC	= 0x1,
+	UFS_CRYPTO_ALG_AES_ECB			= 0x2,
+	UFS_CRYPTO_ALG_ESSIV_AES_CBC		= 0x3,
+};
+
+/* x-CRYPTOCAP - Crypto Capability X */
+union ufs_crypto_cap_entry {
+	__le32 reg_val;
+	struct {
+		u8 algorithm_id;
+		u8 sdus_mask; /* Supported data unit size mask */
+		u8 key_size;
+		u8 reserved;
+	};
+};
+
+#define UFS_CRYPTO_CONFIGURATION_ENABLE (1 << 7)
+#define UFS_CRYPTO_KEY_MAX_SIZE 64
+/* x-CRYPTOCFG - Crypto Configuration X */
+union ufs_crypto_cfg_entry {
+	__le32 reg_val[32];
+	struct {
+		u8 crypto_key[UFS_CRYPTO_KEY_MAX_SIZE];
+		u8 data_unit_size;
+		u8 crypto_cap_idx;
+		u8 reserved_1;
+		u8 config_enable;
+		u8 reserved_multi_host;
+		u8 reserved_2;
+		u8 vsb[2];
+		u8 reserved_3[56];
+	};
+};
+
 /*
  * Request Descriptor Definitions
  */
@@ -337,6 +396,7 @@ enum {
 	UTP_NATIVE_UFS_COMMAND		= 0x10000000,
 	UTP_DEVICE_MANAGEMENT_FUNCTION	= 0x20000000,
 	UTP_REQ_DESC_INT_CMD		= 0x01000000,
+	UTP_REQ_DESC_CRYPTO_ENABLE_CMD	= 0x00800000,
 };
 
 /* UTP Transfer Request Data Direction (DD) */
@@ -356,6 +416,9 @@ enum {
 	OCS_PEER_COMM_FAILURE		= 0x5,
 	OCS_ABORTED			= 0x6,
 	OCS_FATAL_ERROR			= 0x7,
+	OCS_DEVICE_FATAL_ERROR		= 0x8,
+	OCS_INVALID_CRYPTO_CONFIG	= 0x9,
+	OCS_GENERAL_CRYPTO_ERROR	= 0xA,
 	OCS_INVALID_COMMAND_STATUS	= 0x0F,
 	MASK_OCS			= 0x0F,
 };
-- 
2.22.0.rc1.311.g5d7573a151-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH v2 5/8] scsi: ufs: UFS crypto API
  2019-06-05 23:28 [RFC PATCH v2 0/8] Inline Encryption Support Satya Tangirala
                   ` (3 preceding siblings ...)
  2019-06-05 23:28 ` [RFC PATCH v2 4/8] scsi: ufs: UFS driver v2.1 spec crypto additions Satya Tangirala
@ 2019-06-05 23:28 ` Satya Tangirala
  2019-06-13 17:11   ` Eric Biggers
  2019-06-05 23:28 ` [RFC PATCH v2 6/8] scsi: ufs: Add inline encryption support to UFS Satya Tangirala
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 16+ messages in thread
From: Satya Tangirala @ 2019-06-05 23:28 UTC (permalink / raw)
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel, linux-f2fs-devel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang, Satya Tangirala

Introduce functions to manipulate UFS inline encryption hardware
in line with the JEDEC UFSHCI v2.1 specification and to work with the
block keyslot manager.

Signed-off-by: Satya Tangirala <satyat@google.com>
---
 drivers/scsi/ufs/Kconfig         |  10 +
 drivers/scsi/ufs/Makefile        |   1 +
 drivers/scsi/ufs/ufshcd-crypto.c | 438 +++++++++++++++++++++++++++++++
 drivers/scsi/ufs/ufshcd-crypto.h |  69 +++++
 4 files changed, 518 insertions(+)
 create mode 100644 drivers/scsi/ufs/ufshcd-crypto.c
 create mode 100644 drivers/scsi/ufs/ufshcd-crypto.h

diff --git a/drivers/scsi/ufs/Kconfig b/drivers/scsi/ufs/Kconfig
index 0b845ab7c3bf..861aabfe791b 100644
--- a/drivers/scsi/ufs/Kconfig
+++ b/drivers/scsi/ufs/Kconfig
@@ -150,3 +150,13 @@ config SCSI_UFS_BSG
 
 	  Select this if you need a bsg device node for your UFS controller.
 	  If unsure, say N.
+
+config SCSI_UFS_CRYPTO
+	bool "UFS Crypto Engine Support"
+	depends on SCSI_UFSHCD && BLK_INLINE_ENCRYPTION
+	help
+	  Enable Crypto Engine Support in UFS.
+	  Enabling this makes it possible for the kernel to use the crypto
+	  capabilities of the UFS device (if present) to perform crypto
+	  operations on data being transferred to/from the device.
+
diff --git a/drivers/scsi/ufs/Makefile b/drivers/scsi/ufs/Makefile
index 2a9097939bcb..094c39989a37 100644
--- a/drivers/scsi/ufs/Makefile
+++ b/drivers/scsi/ufs/Makefile
@@ -11,3 +11,4 @@ obj-$(CONFIG_SCSI_UFSHCD_PCI) += ufshcd-pci.o
 obj-$(CONFIG_SCSI_UFSHCD_PLATFORM) += ufshcd-pltfrm.o
 obj-$(CONFIG_SCSI_UFS_HISI) += ufs-hisi.o
 obj-$(CONFIG_SCSI_UFS_MEDIATEK) += ufs-mediatek.o
+ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO) += ufshcd-crypto.o
diff --git a/drivers/scsi/ufs/ufshcd-crypto.c b/drivers/scsi/ufs/ufshcd-crypto.c
new file mode 100644
index 000000000000..678866d15b8e
--- /dev/null
+++ b/drivers/scsi/ufs/ufshcd-crypto.c
@@ -0,0 +1,438 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#include <crypto/algapi.h>
+
+#include "ufshcd.h"
+#include "ufshcd-crypto.h"
+
+bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba)
+{
+	return hba->crypto_capabilities.reg_val != 0;
+}
+
+bool ufshcd_is_crypto_enabled(struct ufs_hba *hba)
+{
+	return hba->caps & UFSHCD_CAP_CRYPTO;
+}
+
+static bool ufshcd_cap_idx_valid(struct ufs_hba *hba, unsigned int cap_idx)
+{
+	return cap_idx < hba->crypto_capabilities.num_crypto_cap;
+}
+
+#define NUM_KEYSLOTS(hba) (hba->crypto_capabilities.config_count + 1)
+
+bool ufshcd_keyslot_valid(struct ufs_hba *hba, unsigned int slot)
+{
+	/*
+	 * The actual number of configurations supported is (CFGC+1), so slot
+	 * numbers range from 0 to config_count inclusive.
+	 */
+	return slot < NUM_KEYSLOTS(hba);
+}
+
+static u8 get_data_unit_size_mask(unsigned int data_unit_size)
+{
+	if (data_unit_size < 512 || data_unit_size > 65536 ||
+	    !is_power_of_2(data_unit_size))
+		return 0;
+
+	return data_unit_size / 512;
+}
+
+static size_t get_keysize_bytes(enum ufs_crypto_key_size size)
+{
+	switch (size) {
+	case UFS_CRYPTO_KEY_SIZE_128: return 16;
+	case UFS_CRYPTO_KEY_SIZE_192: return 24;
+	case UFS_CRYPTO_KEY_SIZE_256: return 32;
+	case UFS_CRYPTO_KEY_SIZE_512: return 64;
+	default: return 0;
+	}
+}
+
+static int ufshcd_crypto_alg_find(void *hba_p,
+			   enum blk_crypt_mode_num crypt_mode,
+			   unsigned int data_unit_size)
+{
+	struct ufs_hba *hba = hba_p;
+	enum ufs_crypto_alg ufs_alg;
+	u8 data_unit_mask;
+	int cap_idx;
+	enum ufs_crypto_key_size ufs_key_size;
+	union ufs_crypto_cap_entry *ccap_array = hba->crypto_cap_array;
+
+	if (!ufshcd_hba_is_crypto_supported(hba))
+		return -EINVAL;
+
+	switch (crypt_mode) {
+	case BLK_ENCRYPTION_MODE_AES_256_XTS:
+		ufs_alg = UFS_CRYPTO_ALG_AES_XTS;
+		ufs_key_size = UFS_CRYPTO_KEY_SIZE_256;
+		break;
+	/*
+	 * case BLK_CRYPTO_ALG_BITLOCKER_AES_CBC:
+	 *	ufs_alg = UFS_CRYPTO_ALG_BITLOCKER_AES_CBC;
+	 *	break;
+	 * case INLINECRYPT_ALG_AES_ECB:
+	 *	ufs_alg = UFS_CRYPTO_ALG_AES_ECB;
+	 *	break;
+	 * case INLINECRYPT_ALG_ESSIV_AES_CBC:
+	 *	ufs_alg = UFS_CRYPTO_ALG_ESSIV_AES_CBC;
+	 *	break;
+	 */
+	default: return -EINVAL;
+	}
+
+	data_unit_mask = get_data_unit_size_mask(data_unit_size);
+
+	/*
+	 * TODO: We can replace this for loop entirely by constructing
+	 * a table on init that translates blk_crypt_mode to
+	 * ufs crypt alg numbers. (By assuming that each alg/keysize combo
+	 * appears only once in the ufs crypto caps array.)
+	 */
+	for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap;
+	     cap_idx++) {
+		if (ccap_array[cap_idx].algorithm_id == ufs_alg &&
+		    (ccap_array[cap_idx].sdus_mask & data_unit_mask) &&
+		    ccap_array[cap_idx].key_size == ufs_key_size)
+			return cap_idx;
+	}
+
+	return -EINVAL;
+}
+
+/**
+ * ufshcd_crypto_cfg_entry_write_key - Write a key into a crypto_cfg_entry
+ *
+ *	Writes the key with the appropriate format - for AES_XTS,
+ *	the first half of the key is copied as is, the second half is
+ *	copied with an offset halfway into the cfg->crypto_key array.
+ *	For the other supported crypto algs, the key is just copied.
+ *
+ * @cfg: The crypto config to write to
+ * @key: The key to write
+ * @cap: The crypto capability (which specifies the crypto alg and key size)
+ *
+ * Returns 0 on success, or -EINVAL
+ */
+static int ufshcd_crypto_cfg_entry_write_key(union ufs_crypto_cfg_entry *cfg,
+					     const u8 *key,
+					     union ufs_crypto_cap_entry cap)
+{
+	size_t key_size_bytes = get_keysize_bytes(cap.key_size);
+
+	if (key_size_bytes == 0)
+		return -EINVAL;
+
+	switch (cap.algorithm_id) {
+	case UFS_CRYPTO_ALG_AES_XTS:
+		key_size_bytes *= 2;
+		if (key_size_bytes > UFS_CRYPTO_KEY_MAX_SIZE)
+			return -EINVAL;
+
+		memcpy(cfg->crypto_key, key, key_size_bytes/2);
+		memcpy(cfg->crypto_key + UFS_CRYPTO_KEY_MAX_SIZE/2,
+		       key + key_size_bytes/2, key_size_bytes/2);
+		return 0;
+	case UFS_CRYPTO_ALG_BITLOCKER_AES_CBC: // fallthrough
+	case UFS_CRYPTO_ALG_AES_ECB: // fallthrough
+	case UFS_CRYPTO_ALG_ESSIV_AES_CBC:
+		memcpy(cfg->crypto_key, key, key_size_bytes);
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static void program_key(struct ufs_hba *hba,
+			const union ufs_crypto_cfg_entry *cfg,
+			int slot)
+{
+	int i;
+	u32 slot_offset = hba->crypto_cfg_register + slot * sizeof(*cfg);
+
+	/* Clear the dword 16 */
+	ufshcd_writel(hba, 0, slot_offset + 16 * sizeof(cfg->reg_val[0]));
+	/* Ensure that CFGE is cleared before programming the key */
+	wmb();
+	for (i = 0; i < 16; i++) {
+		ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[i]),
+			      slot_offset + i * sizeof(cfg->reg_val[0]));
+		/* Spec says each dword in key must be written sequentially */
+		wmb();
+	}
+	/* Write dword 17 */
+	ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[17]),
+		      slot_offset + 17 * sizeof(cfg->reg_val[0]));
+	/* Dword 16 must be written last */
+	wmb();
+	/* Write dword 16 */
+	ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[16]),
+		      slot_offset + 16 * sizeof(cfg->reg_val[0]));
+	wmb();
+}
+
+static int ufshcd_crypto_keyslot_program(void *hba_p, const u8 *key,
+					 enum blk_crypt_mode_num crypt_mode,
+					 unsigned int data_unit_size,
+					 unsigned int slot)
+{
+	struct ufs_hba *hba = hba_p;
+	int err = 0;
+	u8 data_unit_mask;
+	union ufs_crypto_cfg_entry cfg;
+	union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs;
+	int crypto_alg_id;
+
+	crypto_alg_id = ufshcd_crypto_alg_find(hba_p, crypt_mode,
+					       data_unit_size);
+
+	if (!ufshcd_is_crypto_enabled(hba) ||
+	    !ufshcd_keyslot_valid(hba, slot) ||
+	    !ufshcd_cap_idx_valid(hba, crypto_alg_id))
+		return -EINVAL;
+
+	data_unit_mask = get_data_unit_size_mask(data_unit_size);
+
+	if (!(data_unit_mask &
+	      hba->crypto_cap_array[crypto_alg_id].sdus_mask))
+		return -EINVAL;
+
+	memset(&cfg, 0, sizeof(cfg));
+	cfg.data_unit_size = data_unit_mask;
+	cfg.crypto_cap_idx = crypto_alg_id;
+	cfg.config_enable |= UFS_CRYPTO_CONFIGURATION_ENABLE;
+
+	err = ufshcd_crypto_cfg_entry_write_key(&cfg, key,
+				hba->crypto_cap_array[crypto_alg_id]);
+	if (err)
+		return err;
+
+	program_key(hba, &cfg, slot);
+
+	memcpy(&cfg_arr[slot], &cfg, sizeof(cfg));
+	memzero_explicit(&cfg, sizeof(cfg));
+
+	return 0;
+}
+
+static int ufshcd_crypto_keyslot_find(void *hba_p,
+				      const u8 *key,
+				      enum blk_crypt_mode_num crypt_mode,
+				      unsigned int data_unit_size)
+{
+	struct ufs_hba *hba = hba_p;
+	int err = 0;
+	int slot;
+	u8 data_unit_mask;
+	union ufs_crypto_cfg_entry cfg;
+	union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs;
+	int crypto_alg_id;
+
+	crypto_alg_id = ufshcd_crypto_alg_find(hba_p, crypt_mode,
+					       data_unit_size);
+
+	if (!ufshcd_is_crypto_enabled(hba) ||
+	    !ufshcd_cap_idx_valid(hba, crypto_alg_id))
+		return -EINVAL;
+
+	data_unit_mask = get_data_unit_size_mask(data_unit_size);
+
+	if (!(data_unit_mask &
+	      hba->crypto_cap_array[crypto_alg_id].sdus_mask))
+		return -EINVAL;
+
+	memset(&cfg, 0, sizeof(cfg));
+	err = ufshcd_crypto_cfg_entry_write_key(&cfg, key,
+					hba->crypto_cap_array[crypto_alg_id]);
+
+	if (err)
+		return -EINVAL;
+
+	for (slot = 0; slot < NUM_KEYSLOTS(hba); slot++) {
+		if ((cfg_arr[slot].config_enable &
+		     UFS_CRYPTO_CONFIGURATION_ENABLE) &&
+		    data_unit_mask == cfg_arr[slot].data_unit_size &&
+		    crypto_alg_id == cfg_arr[slot].crypto_cap_idx &&
+		    crypto_memneq(&cfg.crypto_key, cfg_arr[slot].crypto_key,
+				  UFS_CRYPTO_KEY_MAX_SIZE) == 0) {
+			memzero_explicit(&cfg, sizeof(cfg));
+			return slot;
+		}
+	}
+
+	memzero_explicit(&cfg, sizeof(cfg));
+	return -ENOKEY;
+}
+
+static int ufshcd_crypto_keyslot_evict(void *hba_p, const u8 *key,
+				       enum blk_crypt_mode_num crypt_mode,
+				       unsigned int data_unit_size,
+				       unsigned int slot)
+{
+	struct ufs_hba *hba = hba_p;
+	int i = 0;
+	u32 reg_base;
+	union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs;
+
+	if (!ufshcd_is_crypto_enabled(hba) ||
+	    !ufshcd_keyslot_valid(hba, slot))
+		return -EINVAL;
+
+	memset(&cfg_arr[slot], 0, sizeof(cfg_arr[slot]));
+	reg_base = hba->crypto_cfg_register + slot * sizeof(cfg_arr[0]);
+
+	/*
+	 * Clear the crypto cfg on the device. Clearing CFGE
+	 * might not be sufficient, so just clear the entire cfg.
+	 */
+	for (i = 0; i < sizeof(cfg_arr[0]); i += sizeof(__le32))
+		ufshcd_writel(hba, 0, reg_base + i);
+	wmb();
+
+	return 0;
+}
+
+static bool ufshcd_crypt_mode_supported(void *hba_p,
+					 enum blk_crypt_mode_num crypt_mode,
+					 unsigned int data_unit_size)
+{
+	return ufshcd_crypto_alg_find(hba_p, crypt_mode, data_unit_size) >= 0;
+}
+
+void ufshcd_crypto_enable(struct ufs_hba *hba)
+{
+	union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs;
+	int slot;
+
+	if (!ufshcd_hba_is_crypto_supported(hba))
+		return;
+
+	hba->caps |= UFSHCD_CAP_CRYPTO;
+	/*
+	 * Reset might clear all keys, so reprogram all the keys.
+	 * Also serves to clear keys on driver init.
+	 */
+	for (slot = 0; slot < NUM_KEYSLOTS(hba); slot++)
+		program_key(hba, &cfg_arr[slot], slot);
+}
+
+void ufshcd_crypto_disable(struct ufs_hba *hba)
+{
+	hba->caps &= ~UFSHCD_CAP_CRYPTO;
+}
+
+
+/**
+ * ufshcd_hba_init_crypto - Read crypto capabilities, init crypto fields in hba
+ * @hba: Per adapter instance
+ *
+ * Returns 0 on success. Returns -ENODEV if such capabilties don't exist, and
+ * -ENOMEM upon OOM.
+ */
+int ufshcd_hba_init_crypto(struct ufs_hba *hba)
+{
+	int cap_idx = 0;
+	int err = 0;
+
+	/* Default to disabling crypto */
+	hba->caps &= ~UFSHCD_CAP_CRYPTO;
+
+	if (!(hba->capabilities & MASK_CRYPTO_SUPPORT)) {
+		err = -ENODEV;
+		goto out;
+	}
+
+	/*
+	 * Crypto Capabilities should never be 0, because the
+	 * config_array_ptr > 04h. So we use a 0 value to indicate that
+	 * crypto init failed, and can't be enabled.
+	 */
+	hba->crypto_capabilities.reg_val =
+			cpu_to_le32(ufshcd_readl(hba, REG_UFS_CCAP));
+	hba->crypto_cfg_register =
+		(u32)hba->crypto_capabilities.config_array_ptr * 0x100;
+	hba->crypto_cap_array =
+		devm_kcalloc(hba->dev,
+			     hba->crypto_capabilities.num_crypto_cap,
+			     sizeof(hba->crypto_cap_array[0]),
+			     GFP_KERNEL);
+	if (!hba->crypto_cap_array) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	hba->crypto_cfgs =
+		devm_kcalloc(hba->dev,
+			     hba->crypto_capabilities.config_count + 1,
+			     sizeof(union ufs_crypto_cfg_entry),
+			     GFP_KERNEL);
+	if (!hba->crypto_cfgs) {
+		err = -ENOMEM;
+		goto out_cfg_mem;
+	}
+
+	/*
+	 * Store all the capabilities now so that we don't need to repeatedly
+	 * access the device each time we want to know its capabilities
+	 */
+	for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap;
+	     cap_idx++) {
+		hba->crypto_cap_array[cap_idx].reg_val =
+			cpu_to_le32(ufshcd_readl(hba,
+						 REG_UFS_CRYPTOCAP +
+						 cap_idx * sizeof(__le32)));
+	}
+
+	return 0;
+out_cfg_mem:
+	devm_kfree(hba->dev, hba->crypto_cap_array);
+out:
+	// TODO: print error?
+	/* Indicate that init failed by setting crypto_capabilities to 0 */
+	hba->crypto_capabilities.reg_val = 0;
+	return err;
+}
+
+static const struct keyslot_mgmt_ll_ops ufshcd_ksm_ops = {
+	.keyslot_program	= ufshcd_crypto_keyslot_program,
+	.keyslot_evict		= ufshcd_crypto_keyslot_evict,
+	.keyslot_find		= ufshcd_crypto_keyslot_find,
+	.crypt_mode_supported	= ufshcd_crypt_mode_supported,
+};
+
+void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
+					    struct request_queue *q)
+{
+	if (!ufshcd_hba_is_crypto_supported(hba))
+		return;
+
+	if (q) {
+		q->ksm = keyslot_manager_create(
+				hba->crypto_capabilities.config_count + 1,
+				&ufshcd_ksm_ops, hba);
+	}
+	/*
+	 * If we fail we make it look like
+	 * crypto is not supported, which will avoid issues
+	 * with reset
+	 */
+	if (!q || !q->ksm) {
+		ufshcd_crypto_disable(hba);
+		hba->crypto_capabilities.reg_val = 0;
+		devm_kfree(hba->dev, hba->crypto_cap_array);
+		devm_kfree(hba->dev, hba->crypto_cfgs);
+	}
+}
+
+void ufshcd_crypto_destroy_rq_keyslot_manager(struct request_queue *q)
+{
+	if (q && q->ksm)
+		keyslot_manager_destroy(q->ksm);
+}
+
diff --git a/drivers/scsi/ufs/ufshcd-crypto.h b/drivers/scsi/ufs/ufshcd-crypto.h
new file mode 100644
index 000000000000..7790e99477b9
--- /dev/null
+++ b/drivers/scsi/ufs/ufshcd-crypto.h
@@ -0,0 +1,69 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#ifndef _UFSHCD_CRYPTO_H
+#define _UFSHCD_CRYPTO_H
+
+struct ufs_hba;
+
+#ifdef CONFIG_SCSI_UFS_CRYPTO
+#include <linux/keyslot-manager.h>
+
+#include "ufshci.h"
+
+bool ufshcd_keyslot_valid(struct ufs_hba *hba, unsigned int slot);
+
+bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba);
+
+bool ufshcd_is_crypto_enabled(struct ufs_hba *hba);
+
+void ufshcd_crypto_enable(struct ufs_hba *hba);
+
+void ufshcd_crypto_disable(struct ufs_hba *hba);
+
+int ufshcd_hba_init_crypto(struct ufs_hba *hba);
+
+void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
+					    struct request_queue *q);
+
+void ufshcd_crypto_destroy_rq_keyslot_manager(struct request_queue *q);
+
+#else /* CONFIG_SCSI_UFS_CRYPTO */
+
+static inline bool ufshcd_keyslot_valid(struct ufs_hba *hba,
+					unsigned int slot)
+{
+	return false;
+}
+
+static inline bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba)
+{
+	return false;
+}
+
+static inline bool ufshcd_is_crypto_enabled(struct ufs_hba *hba)
+{
+	return false;
+}
+
+static inline void ufshcd_crypto_enable(struct ufs_hba *hba) { }
+
+static inline void ufshcd_crypto_disable(struct ufs_hba *hba) { }
+
+static inline int ufshcd_hba_init_crypto(struct ufs_hba *hba)
+{
+	return 0;
+}
+
+static inline void ufshcd_crypto_setup_rq_keyslot_manager(
+					struct ufs_hba *hba,
+					struct request_queue *q) { }
+
+static inline void ufshcd_crypto_destroy_rq_keyslot_manager(
+				struct request_queue *q) { }
+
+#endif /* CONFIG_SCSI_UFS_CRYPTO */
+
+#endif /* _UFSHCD_CRYPTO_H */
-- 
2.22.0.rc1.311.g5d7573a151-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH v2 6/8] scsi: ufs: Add inline encryption support to UFS
  2019-06-05 23:28 [RFC PATCH v2 0/8] Inline Encryption Support Satya Tangirala
                   ` (4 preceding siblings ...)
  2019-06-05 23:28 ` [RFC PATCH v2 5/8] scsi: ufs: UFS crypto API Satya Tangirala
@ 2019-06-05 23:28 ` Satya Tangirala
  2019-06-13 17:22   ` Eric Biggers
  2019-06-05 23:28 ` [RFC PATCH v2 7/8] fscrypt: wire up fscrypt to use blk-crypto Satya Tangirala
  2019-06-05 23:28 ` [RFC PATCH v2 8/8] f2fs: Wire up f2fs to use inline encryption via fscrypt Satya Tangirala
  7 siblings, 1 reply; 16+ messages in thread
From: Satya Tangirala @ 2019-06-05 23:28 UTC (permalink / raw)
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel, linux-f2fs-devel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang, Satya Tangirala

Wire up ufshcd.c with the UFS Crypto API, the block layer inline
encryption additions and the keyslot manager.

Signed-off-by: Satya Tangirala <satyat@google.com>
---
 drivers/scsi/ufs/ufshcd.c | 84 ++++++++++++++++++++++++++++++++++++---
 drivers/scsi/ufs/ufshcd.h | 18 +++++++++
 2 files changed, 97 insertions(+), 5 deletions(-)

diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 8c1c551f2b42..c5ba141ce0cf 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -47,6 +47,7 @@
 #include "unipro.h"
 #include "ufs-sysfs.h"
 #include "ufs_bsg.h"
+#include "ufshcd-crypto.h"
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/ufs.h>
@@ -855,7 +856,14 @@ static void ufshcd_enable_run_stop_reg(struct ufs_hba *hba)
  */
 static inline void ufshcd_hba_start(struct ufs_hba *hba)
 {
-	ufshcd_writel(hba, CONTROLLER_ENABLE, REG_CONTROLLER_ENABLE);
+	u32 val = CONTROLLER_ENABLE;
+
+	if (ufshcd_hba_is_crypto_supported(hba)) {
+		ufshcd_crypto_enable(hba);
+		val |= CRYPTO_GENERAL_ENABLE;
+	}
+
+	ufshcd_writel(hba, val, REG_CONTROLLER_ENABLE);
 }
 
 /**
@@ -2208,9 +2216,21 @@ static void ufshcd_prepare_req_desc_hdr(struct ufshcd_lrb *lrbp,
 		dword_0 |= UTP_REQ_DESC_INT_CMD;
 
 	/* Transfer request descriptor header fields */
+	if (lrbp->crypto_enable) {
+		dword_0 |= UTP_REQ_DESC_CRYPTO_ENABLE_CMD;
+		dword_0 |= lrbp->crypto_key_slot;
+		req_desc->header.dword_1 =
+			cpu_to_le32((u32)lrbp->data_unit_num);
+		req_desc->header.dword_3 =
+			cpu_to_le32((u32)(lrbp->data_unit_num >> 32));
+	} else {
+		/* dword_1 and dword_3 are reserved, hence they are set to 0 */
+		req_desc->header.dword_1 = 0;
+		req_desc->header.dword_3 = 0;
+	}
+
 	req_desc->header.dword_0 = cpu_to_le32(dword_0);
-	/* dword_1 is reserved, hence it is set to 0 */
-	req_desc->header.dword_1 = 0;
+
 	/*
 	 * assigning invalid value for command status. Controller
 	 * updates OCS on command completion, with the command
@@ -2218,8 +2238,6 @@ static void ufshcd_prepare_req_desc_hdr(struct ufshcd_lrb *lrbp,
 	 */
 	req_desc->header.dword_2 =
 		cpu_to_le32(OCS_INVALID_COMMAND_STATUS);
-	/* dword_3 is reserved, hence it is set to 0 */
-	req_desc->header.dword_3 = 0;
 
 	req_desc->prd_table_length = 0;
 }
@@ -2379,6 +2397,37 @@ static inline u16 ufshcd_upiu_wlun_to_scsi_wlun(u8 upiu_wlun_id)
 	return (upiu_wlun_id & ~UFS_UPIU_WLUN_ID) | SCSI_W_LUN_BASE;
 }
 
+static inline int ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba,
+					     struct scsi_cmnd *cmd,
+					     struct ufshcd_lrb *lrbp)
+{
+	int key_slot;
+
+	if (!bio_crypt_should_process(cmd->request->bio,
+					cmd->request->q)) {
+		lrbp->crypto_enable = false;
+		return 0;
+	}
+
+	if (WARN_ON(!ufshcd_is_crypto_enabled(hba))) {
+		/*
+		 * Upper layer asked us to do inline encryption
+		 * but that isn't enabled, so we fail this request.
+		 */
+		return -EINVAL;
+	}
+	key_slot = bio_crypt_get_slot(cmd->request->bio);
+	if (!ufshcd_keyslot_valid(hba, key_slot))
+		return -EINVAL;
+
+	lrbp->crypto_enable = true;
+	lrbp->crypto_key_slot = key_slot;
+	lrbp->data_unit_num = bio_crypt_data_unit_num(cmd->request->bio);
+
+	return 0;
+}
+
+
 /**
  * ufshcd_queuecommand - main entry point for SCSI requests
  * @host: SCSI host pointer
@@ -2466,6 +2515,13 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
 	lrbp->task_tag = tag;
 	lrbp->lun = ufshcd_scsi_to_upiu_lun(cmd->device->lun);
 	lrbp->intr_cmd = !ufshcd_is_intr_aggr_allowed(hba) ? true : false;
+
+	err = ufshcd_prepare_lrbp_crypto(hba, cmd, lrbp);
+	if (err) {
+		lrbp->cmd = NULL;
+		clear_bit_unlock(tag, &hba->lrb_in_use);
+		goto out;
+	}
 	lrbp->req_abort_skip = false;
 
 	ufshcd_comp_scsi_upiu(hba, lrbp);
@@ -2499,6 +2555,7 @@ static int ufshcd_compose_dev_cmd(struct ufs_hba *hba,
 	lrbp->task_tag = tag;
 	lrbp->lun = 0; /* device management cmd is not specific to any LUN */
 	lrbp->intr_cmd = true; /* No interrupt aggregation */
+	lrbp->crypto_enable = false; /* No crypto operations */
 	hba->dev_cmd.type = cmd_type;
 
 	return ufshcd_comp_devman_upiu(hba, lrbp);
@@ -4191,6 +4248,8 @@ static inline void ufshcd_hba_stop(struct ufs_hba *hba, bool can_sleep)
 {
 	int err;
 
+	ufshcd_crypto_disable(hba);
+
 	ufshcd_writel(hba, CONTROLLER_DISABLE,  REG_CONTROLLER_ENABLE);
 	err = ufshcd_wait_for_register(hba, REG_CONTROLLER_ENABLE,
 					CONTROLLER_ENABLE, CONTROLLER_DISABLE,
@@ -4584,10 +4643,13 @@ static int ufshcd_change_queue_depth(struct scsi_device *sdev, int depth)
 static int ufshcd_slave_configure(struct scsi_device *sdev)
 {
 	struct request_queue *q = sdev->request_queue;
+	struct ufs_hba *hba = shost_priv(sdev->host);
 
 	blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1);
 	blk_queue_max_segment_size(q, PRDT_DATA_BYTE_COUNT_MAX);
 
+	ufshcd_crypto_setup_rq_keyslot_manager(hba, q);
+
 	return 0;
 }
 
@@ -4598,6 +4660,7 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
 static void ufshcd_slave_destroy(struct scsi_device *sdev)
 {
 	struct ufs_hba *hba;
+	struct request_queue *q = sdev->request_queue;
 
 	hba = shost_priv(sdev->host);
 	/* Drop the reference as it won't be needed anymore */
@@ -4608,6 +4671,8 @@ static void ufshcd_slave_destroy(struct scsi_device *sdev)
 		hba->sdev_ufs_device = NULL;
 		spin_unlock_irqrestore(hba->host->host_lock, flags);
 	}
+
+	ufshcd_crypto_destroy_rq_keyslot_manager(q);
 }
 
 /**
@@ -4723,6 +4788,8 @@ ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
 	case OCS_MISMATCH_RESP_UPIU_SIZE:
 	case OCS_PEER_COMM_FAILURE:
 	case OCS_FATAL_ERROR:
+	case OCS_INVALID_CRYPTO_CONFIG:
+	case OCS_GENERAL_CRYPTO_ERROR:
 	default:
 		result |= DID_ERROR << 16;
 		dev_err(hba->dev,
@@ -8290,6 +8357,13 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
 		goto exit_gating;
 	}
 
+	/* Init crypto */
+	err = ufshcd_hba_init_crypto(hba);
+	if (err) {
+		dev_err(hba->dev, "crypto setup failed\n");
+		goto out_remove_scsi_host;
+	}
+
 	/* Host controller enable */
 	err = ufshcd_hba_enable(hba);
 	if (err) {
diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
index d3b6a6b57a37..283014e0924f 100644
--- a/drivers/scsi/ufs/ufshcd.h
+++ b/drivers/scsi/ufs/ufshcd.h
@@ -167,6 +167,9 @@ struct ufs_pm_lvl_states {
  * @intr_cmd: Interrupt command (doesn't participate in interrupt aggregation)
  * @issue_time_stamp: time stamp for debug purposes
  * @compl_time_stamp: time stamp for statistics
+ * @crypto_enable: whether or not the request needs inline crypto operations
+ * @crypto_key_slot: the key slot to use for inline crypto
+ * @data_unit_num: the data unit number for the first block for inline crypto
  * @req_abort_skip: skip request abort task flag
  */
 struct ufshcd_lrb {
@@ -191,6 +194,9 @@ struct ufshcd_lrb {
 	bool intr_cmd;
 	ktime_t issue_time_stamp;
 	ktime_t compl_time_stamp;
+	bool crypto_enable;
+	u8 crypto_key_slot;
+	u64 data_unit_num;
 
 	bool req_abort_skip;
 };
@@ -501,6 +507,10 @@ struct ufs_stats {
  * @is_urgent_bkops_lvl_checked: keeps track if the urgent bkops level for
  *  device is known or not.
  * @scsi_block_reqs_cnt: reference counting for scsi block requests
+ * @crypto_capabilities: Content of crypto capabilities register (0x100)
+ * @crypto_cap_array: Array of crypto capabilities
+ * @crypto_cfg_register: Start of the crypto cfg array
+ * @crypto_cfgs: Array of crypto configurations (i.e. config for each slot)
  */
 struct ufs_hba {
 	void __iomem *mmio_base;
@@ -711,6 +721,14 @@ struct ufs_hba {
 
 	struct device		bsg_dev;
 	struct request_queue	*bsg_queue;
+
+#ifdef CONFIG_SCSI_UFS_CRYPTO
+	/* crypto */
+	union ufs_crypto_capabilities crypto_capabilities;
+	union ufs_crypto_cap_entry *crypto_cap_array;
+	u32 crypto_cfg_register;
+	union ufs_crypto_cfg_entry *crypto_cfgs;
+#endif /* CONFIG_SCSI_UFS_CRYPTO */
 };
 
 /* Returns true if clocks can be gated. Otherwise false */
-- 
2.22.0.rc1.311.g5d7573a151-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH v2 7/8] fscrypt: wire up fscrypt to use blk-crypto
  2019-06-05 23:28 [RFC PATCH v2 0/8] Inline Encryption Support Satya Tangirala
                   ` (5 preceding siblings ...)
  2019-06-05 23:28 ` [RFC PATCH v2 6/8] scsi: ufs: Add inline encryption support to UFS Satya Tangirala
@ 2019-06-05 23:28 ` Satya Tangirala
  2019-06-13 18:55   ` Eric Biggers
  2019-06-05 23:28 ` [RFC PATCH v2 8/8] f2fs: Wire up f2fs to use inline encryption via fscrypt Satya Tangirala
  7 siblings, 1 reply; 16+ messages in thread
From: Satya Tangirala @ 2019-06-05 23:28 UTC (permalink / raw)
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel, linux-f2fs-devel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang, Satya Tangirala

Introduce fscrypt_set_bio_crypt_ctx for filesystems to call to set up
encryption contexts in bios, and fscrypt_evict_crypt_key to evict
the encryption context associated with an inode.

Inline encryption is controlled by a policy flag in the fscrypt_info
in the inode, and filesystems may check if an inode should use inline
encryption by calling fscrypt_inode_is_hw_encrypted. Files can be marked
as inline encrypted from userspace by appropriately modifying the flags
(OR-ing FS_POLICY_FLAGS_HW_ENCRYPTION to it) in the fscrypt_policy
passed to fscrypt_ioctl_set_policy.

To test inline encryption with the fscrypt dummy context, add
ctx.flags |= FS_POLICY_FLAGS_HW_ENCRYPTION
when setting up the dummy context in fs/crypto/keyinfo.c.

Note that blk-crypto will fall back to software en/decryption in the
absence of inline crypto hardware, so setting up the ctx.flags in the
dummy context without inline crypto hardware serves as a test for
the software fallback in blk-crypto.

Signed-off-by: Satya Tangirala <satyat@google.com>
---
 block/blk-crypto.c          |   1 -
 fs/crypto/Kconfig           |   7 ++
 fs/crypto/bio.c             | 159 +++++++++++++++++++++++++++++++-----
 fs/crypto/crypto.c          |   9 ++
 fs/crypto/fscrypt_private.h |  10 +++
 fs/crypto/keyinfo.c         |  69 +++++++++++-----
 fs/crypto/policy.c          |  10 +++
 include/linux/fscrypt.h     |  64 +++++++++++++++
 include/uapi/linux/fs.h     |  12 ++-
 9 files changed, 296 insertions(+), 45 deletions(-)

diff --git a/block/blk-crypto.c b/block/blk-crypto.c
index 5adb5251ae7e..7e98acd2b963 100644
--- a/block/blk-crypto.c
+++ b/block/blk-crypto.c
@@ -82,7 +82,6 @@ static int blk_crypto_keyslot_program(void *priv, const u8 *key,
 		slotp->tfm = tfm;
 	}
 
-
 	err = crypto_skcipher_setkey(tfm, key, keysize);
 
 	if (err) {
diff --git a/fs/crypto/Kconfig b/fs/crypto/Kconfig
index 24ed99e2eca0..aa5b2bc6c8dd 100644
--- a/fs/crypto/Kconfig
+++ b/fs/crypto/Kconfig
@@ -15,3 +15,10 @@ config FS_ENCRYPTION
 	  efficient since it avoids caching the encrypted and
 	  decrypted pages in the page cache.  Currently Ext4,
 	  F2FS and UBIFS make use of this feature.
+
+config FS_ENCRYPTION_HW_CRYPT
+	tristate "Enable fscrypt to use inline crypto"
+	default n
+	depends on FS_ENCRYPTION && BLK_INLINE_ENCRYPTION
+	help
+	  Enables fscrypt to use inline crypto hardware if available.
diff --git a/fs/crypto/bio.c b/fs/crypto/bio.c
index b46021ebde85..b06b1a2be99b 100644
--- a/fs/crypto/bio.c
+++ b/fs/crypto/bio.c
@@ -24,17 +24,24 @@
 #include <linux/module.h>
 #include <linux/bio.h>
 #include <linux/namei.h>
+#include <linux/keyslot-manager.h>
+#include <linux/blkdev.h>
+#include <crypto/algapi.h>
 #include "fscrypt_private.h"
 
-static void __fscrypt_decrypt_bio(struct bio *bio, bool done)
+static void __fscrypt_decrypt_bio(struct bio *bio, bool done, bool decrypt)
 {
 	struct bio_vec *bv;
 	struct bvec_iter_all iter_all;
 
 	bio_for_each_segment_all(bv, bio, iter_all) {
 		struct page *page = bv->bv_page;
-		int ret = fscrypt_decrypt_page(page->mapping->host, page,
-				PAGE_SIZE, 0, page->index);
+		int ret = 0;
+
+		if (decrypt) {
+			ret = fscrypt_decrypt_page(page->mapping->host, page,
+						   PAGE_SIZE, 0, page->index);
+		}
 
 		if (ret)
 			SetPageError(page);
@@ -47,7 +54,7 @@ static void __fscrypt_decrypt_bio(struct bio *bio, bool done)
 
 void fscrypt_decrypt_bio(struct bio *bio)
 {
-	__fscrypt_decrypt_bio(bio, false);
+	__fscrypt_decrypt_bio(bio, false, true);
 }
 EXPORT_SYMBOL(fscrypt_decrypt_bio);
 
@@ -57,16 +64,27 @@ static void completion_pages(struct work_struct *work)
 		container_of(work, struct fscrypt_ctx, r.work);
 	struct bio *bio = ctx->r.bio;
 
-	__fscrypt_decrypt_bio(bio, true);
+	__fscrypt_decrypt_bio(bio, true, true);
+	fscrypt_release_ctx(ctx);
+	bio_put(bio);
+}
+
+static void decrypt_bio_hwcrypt(struct fscrypt_ctx *ctx, struct bio *bio)
+{
+	__fscrypt_decrypt_bio(bio, true, false);
 	fscrypt_release_ctx(ctx);
 	bio_put(bio);
 }
 
 void fscrypt_enqueue_decrypt_bio(struct fscrypt_ctx *ctx, struct bio *bio)
 {
-	INIT_WORK(&ctx->r.work, completion_pages);
-	ctx->r.bio = bio;
-	fscrypt_enqueue_decrypt_work(&ctx->r.work);
+	if (bio_is_encrypted(bio)) {
+		decrypt_bio_hwcrypt(ctx, bio);
+	} else {
+		INIT_WORK(&ctx->r.work, completion_pages);
+		ctx->r.bio = bio;
+		fscrypt_enqueue_decrypt_work(&ctx->r.work);
+	}
 }
 EXPORT_SYMBOL(fscrypt_enqueue_decrypt_bio);
 
@@ -94,29 +112,33 @@ EXPORT_SYMBOL(fscrypt_pullback_bio_page);
 int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
 				sector_t pblk, unsigned int len)
 {
-	struct fscrypt_ctx *ctx;
+	struct fscrypt_ctx *ctx = NULL;
 	struct page *ciphertext_page = NULL;
 	struct bio *bio;
 	int ret, err = 0;
 
 	BUG_ON(inode->i_sb->s_blocksize != PAGE_SIZE);
 
-	ctx = fscrypt_get_ctx(GFP_NOFS);
-	if (IS_ERR(ctx))
-		return PTR_ERR(ctx);
+	if (!fscrypt_inode_is_hw_encrypted(inode)) {
+		ctx = fscrypt_get_ctx(GFP_NOFS);
+		if (IS_ERR(ctx))
+			return PTR_ERR(ctx);
 
-	ciphertext_page = fscrypt_alloc_bounce_page(ctx, GFP_NOWAIT);
-	if (IS_ERR(ciphertext_page)) {
-		err = PTR_ERR(ciphertext_page);
-		goto errout;
+		ciphertext_page = fscrypt_alloc_bounce_page(ctx, GFP_NOWAIT);
+		if (IS_ERR(ciphertext_page)) {
+			err = PTR_ERR(ciphertext_page);
+			goto errout;
+		}
 	}
 
 	while (len--) {
-		err = fscrypt_do_page_crypto(inode, FS_ENCRYPT, lblk,
+		if (!fscrypt_inode_is_hw_encrypted(inode)) {
+			err = fscrypt_do_page_crypto(inode, FS_ENCRYPT, lblk,
 					     ZERO_PAGE(0), ciphertext_page,
 					     PAGE_SIZE, 0, GFP_NOFS);
-		if (err)
-			goto errout;
+			if (err)
+				goto errout;
+		}
 
 		bio = bio_alloc(GFP_NOWAIT, 1);
 		if (!bio) {
@@ -127,8 +149,14 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
 		bio->bi_iter.bi_sector =
 			pblk << (inode->i_sb->s_blocksize_bits - 9);
 		bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
-		ret = bio_add_page(bio, ciphertext_page,
-					inode->i_sb->s_blocksize, 0);
+		if (!fscrypt_inode_is_hw_encrypted(inode)) {
+			ret = bio_add_page(bio, ciphertext_page,
+						inode->i_sb->s_blocksize, 0);
+		} else {
+			ret = bio_add_page(bio, ZERO_PAGE(0),
+						inode->i_sb->s_blocksize, 0);
+		}
+
 		if (ret != inode->i_sb->s_blocksize) {
 			/* should never happen! */
 			WARN_ON(1);
@@ -136,6 +164,7 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
 			err = -EIO;
 			goto errout;
 		}
+		fscrypt_set_bio_crypt_ctx(inode, bio, pblk);
 		err = submit_bio_wait(bio);
 		if (err == 0 && bio->bi_status)
 			err = -EIO;
@@ -147,7 +176,93 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
 	}
 	err = 0;
 errout:
-	fscrypt_release_ctx(ctx);
+	if (!fscrypt_inode_is_hw_encrypted(inode))
+		fscrypt_release_ctx(ctx);
 	return err;
 }
 EXPORT_SYMBOL(fscrypt_zeroout_range);
+
+#if IS_ENABLED(CONFIG_FS_ENCRYPTION_HW_CRYPT)
+static enum blk_crypt_mode_num
+get_blk_crypto_alg_for_fscryptalg(u8 fscrypt_alg)
+{
+	switch (fscrypt_alg) {
+	case FS_ENCRYPTION_MODE_AES_256_XTS:
+		return BLK_ENCRYPTION_MODE_AES_256_XTS;
+	default: return -EINVAL;
+	}
+}
+
+int fscrypt_set_bio_crypt_ctx(const struct inode *inode,
+				 struct bio *bio, u64 data_unit_num)
+{
+	struct fscrypt_info *ci = inode->i_crypt_info;
+
+	/* If inode is not hw encrypted, nothing to do. */
+	if (!fscrypt_inode_is_hw_encrypted(inode))
+		return 0;
+
+	return bio_crypt_set_ctx(bio, ci->ci_master_key->mk_raw,
+			get_blk_crypto_alg_for_fscryptalg(ci->ci_data_mode),
+			data_unit_num,
+			PAGE_SHIFT);
+}
+EXPORT_SYMBOL(fscrypt_set_bio_crypt_ctx);
+
+void fscrypt_unset_bio_crypt_ctx(struct bio *bio)
+{
+	bio_crypt_free_ctx(bio);
+}
+EXPORT_SYMBOL(fscrypt_unset_bio_crypt_ctx);
+
+int fscrypt_evict_crypt_key(struct inode *inode)
+{
+	struct request_queue *q;
+	struct fscrypt_info *ci;
+
+	if (!inode)
+		return 0;
+
+	q = inode->i_sb->s_bdev->bd_queue;
+	ci = inode->i_crypt_info;
+
+	if (!q || !q->ksm || !ci ||
+	    !fscrypt_inode_is_hw_encrypted(inode)) {
+		return 0;
+	}
+
+	return keyslot_manager_evict_key(q->ksm,
+					 ci->ci_master_key->mk_raw,
+					 get_blk_crypto_alg_for_fscryptalg(
+						ci->ci_data_mode),
+					 PAGE_SIZE);
+}
+EXPORT_SYMBOL(fscrypt_evict_crypt_key);
+
+bool fscrypt_inode_crypt_mergeable(const struct inode *inode_1,
+				   const struct inode *inode_2)
+{
+	struct fscrypt_info *ci_1, *ci_2;
+	bool enc_1 = fscrypt_inode_is_hw_encrypted(inode_1);
+	bool enc_2 = fscrypt_inode_is_hw_encrypted(inode_2);
+
+	if (enc_1 != enc_2)
+		return false;
+
+	if (!enc_1)
+		return true;
+
+	if (inode_1 == inode_2)
+		return true;
+
+	ci_1 = inode_1->i_crypt_info;
+	ci_2 = inode_2->i_crypt_info;
+
+	return ci_1->ci_data_mode == ci_2->ci_data_mode &&
+	       crypto_memneq(ci_1->ci_master_key->mk_raw,
+			     ci_2->ci_master_key->mk_raw,
+			     ci_1->ci_master_key->mk_mode->keysize) == 0;
+}
+EXPORT_SYMBOL(fscrypt_inode_crypt_mergeable);
+
+#endif /* FS_ENCRYPTION_HW_CRYPT */
diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c
index 335a362ee446..1bf195ef82c8 100644
--- a/fs/crypto/crypto.c
+++ b/fs/crypto/crypto.c
@@ -240,6 +240,11 @@ struct page *fscrypt_encrypt_page(const struct inode *inode,
 
 	BUG_ON(len % FS_CRYPTO_BLOCK_SIZE != 0);
 
+	/* If HW encryption, then pretend we did in place encryption */
+	if (fscrypt_inode_is_hw_encrypted(inode)) {
+		return ciphertext_page;
+	}
+
 	if (inode->i_sb->s_cop->flags & FS_CFLG_OWN_PAGES) {
 		/* with inplace-encryption we just encrypt the page */
 		err = fscrypt_do_page_crypto(inode, FS_ENCRYPT, lblk_num, page,
@@ -302,6 +307,10 @@ int fscrypt_decrypt_page(const struct inode *inode, struct page *page,
 	if (!(inode->i_sb->s_cop->flags & FS_CFLG_OWN_PAGES))
 		BUG_ON(!PageLocked(page));
 
+	/* If we have HW encryption, then this page is already decrypted */
+	if (fscrypt_inode_is_hw_encrypted(inode))
+		return 0;
+
 	return fscrypt_do_page_crypto(inode, FS_DECRYPT, lblk_num, page, page,
 				      len, offs, GFP_NOFS);
 }
diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h
index 7da276159593..d6d65c88a629 100644
--- a/fs/crypto/fscrypt_private.h
+++ b/fs/crypto/fscrypt_private.h
@@ -49,6 +49,16 @@ struct fscrypt_symlink_data {
 	char encrypted_path[1];
 } __packed;
 
+/* Master key referenced by FS_POLICY_FLAG_DIRECT_KEY policy */
+struct fscrypt_master_key {
+	struct hlist_node mk_node;
+	refcount_t mk_refcount;
+	const struct fscrypt_mode *mk_mode;
+	struct crypto_skcipher *mk_ctfm;
+	u8 mk_descriptor[FS_KEY_DESCRIPTOR_SIZE];
+	u8 mk_raw[FS_MAX_KEY_SIZE];
+};
+
 /*
  * fscrypt_info - the "encryption key" for an inode
  *
diff --git a/fs/crypto/keyinfo.c b/fs/crypto/keyinfo.c
index dcd91a3fbe49..c00d986799d5 100644
--- a/fs/crypto/keyinfo.c
+++ b/fs/crypto/keyinfo.c
@@ -25,6 +25,21 @@ static struct crypto_shash *essiv_hash_tfm;
 static DEFINE_HASHTABLE(fscrypt_master_keys, 6); /* 6 bits = 64 buckets */
 static DEFINE_SPINLOCK(fscrypt_master_keys_lock);
 
+#if IS_ENABLED(CONFIG_FS_ENCRYPTION_HW_CRYPT)
+static inline bool __flags_hw_encrypted(u8 flags,
+					const struct inode *inode)
+{
+	return inode && (flags & FS_POLICY_FLAGS_HW_ENCRYPTION) &&
+	       S_ISREG(inode->i_mode);
+}
+#else
+static inline bool __flags_hw_encrypted(u8 flags,
+					const struct inode *inode)
+{
+	return false;
+}
+#endif /* CONFIG_FS_ENCRYPTION_HW_CRYPT */
+
 /*
  * Key derivation function.  This generates the derived key by encrypting the
  * master key with AES-128-ECB using the inode's nonce as the AES key.
@@ -220,6 +235,9 @@ static int find_and_derive_key(const struct inode *inode,
 			memcpy(derived_key, payload->raw, mode->keysize);
 			err = 0;
 		}
+	} else if (__flags_hw_encrypted(ctx->flags, inode)) {
+		memcpy(derived_key, payload->raw, mode->keysize);
+		err = 0;
 	} else {
 		err = derive_key_aes(payload->raw, ctx, derived_key,
 				     mode->keysize);
@@ -269,16 +287,6 @@ allocate_skcipher_for_mode(struct fscrypt_mode *mode, const u8 *raw_key,
 	return ERR_PTR(err);
 }
 
-/* Master key referenced by FS_POLICY_FLAG_DIRECT_KEY policy */
-struct fscrypt_master_key {
-	struct hlist_node mk_node;
-	refcount_t mk_refcount;
-	const struct fscrypt_mode *mk_mode;
-	struct crypto_skcipher *mk_ctfm;
-	u8 mk_descriptor[FS_KEY_DESCRIPTOR_SIZE];
-	u8 mk_raw[FS_MAX_KEY_SIZE];
-};
-
 static void free_master_key(struct fscrypt_master_key *mk)
 {
 	if (mk) {
@@ -287,13 +295,15 @@ static void free_master_key(struct fscrypt_master_key *mk)
 	}
 }
 
-static void put_master_key(struct fscrypt_master_key *mk)
+static void put_master_key(struct fscrypt_master_key *mk,
+			   struct inode *inode)
 {
 	if (!refcount_dec_and_lock(&mk->mk_refcount, &fscrypt_master_keys_lock))
 		return;
 	hash_del(&mk->mk_node);
 	spin_unlock(&fscrypt_master_keys_lock);
 
+	fscrypt_evict_crypt_key(inode);
 	free_master_key(mk);
 }
 
@@ -360,11 +370,13 @@ fscrypt_get_master_key(const struct fscrypt_info *ci, struct fscrypt_mode *mode,
 		return ERR_PTR(-ENOMEM);
 	refcount_set(&mk->mk_refcount, 1);
 	mk->mk_mode = mode;
-	mk->mk_ctfm = allocate_skcipher_for_mode(mode, raw_key, inode);
-	if (IS_ERR(mk->mk_ctfm)) {
-		err = PTR_ERR(mk->mk_ctfm);
-		mk->mk_ctfm = NULL;
-		goto err_free_mk;
+	if (!__flags_hw_encrypted(ci->ci_flags, inode)) {
+		mk->mk_ctfm = allocate_skcipher_for_mode(mode, raw_key, inode);
+		if (IS_ERR(mk->mk_ctfm)) {
+			err = PTR_ERR(mk->mk_ctfm);
+			mk->mk_ctfm = NULL;
+			goto err_free_mk;
+		}
 	}
 	memcpy(mk->mk_descriptor, ci->ci_master_key_descriptor,
 	       FS_KEY_DESCRIPTOR_SIZE);
@@ -456,7 +468,8 @@ static int setup_crypto_transform(struct fscrypt_info *ci,
 	struct crypto_skcipher *ctfm;
 	int err;
 
-	if (ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY) {
+	if ((ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY) ||
+	    __flags_hw_encrypted(ci->ci_flags, inode)) {
 		mk = fscrypt_get_master_key(ci, mode, raw_key, inode);
 		if (IS_ERR(mk))
 			return PTR_ERR(mk);
@@ -486,13 +499,13 @@ static int setup_crypto_transform(struct fscrypt_info *ci,
 	return 0;
 }
 
-static void put_crypt_info(struct fscrypt_info *ci)
+static void put_crypt_info(struct fscrypt_info *ci, struct inode *inode)
 {
 	if (!ci)
 		return;
 
 	if (ci->ci_master_key) {
-		put_master_key(ci->ci_master_key);
+		put_master_key(ci->ci_master_key, inode);
 	} else {
 		crypto_free_skcipher(ci->ci_ctfm);
 		crypto_free_cipher(ci->ci_essiv_tfm);
@@ -577,7 +590,7 @@ int fscrypt_get_encryption_info(struct inode *inode)
 out:
 	if (res == -ENOKEY)
 		res = 0;
-	put_crypt_info(crypt_info);
+	put_crypt_info(crypt_info, NULL);
 	kzfree(raw_key);
 	return res;
 }
@@ -591,7 +604,7 @@ EXPORT_SYMBOL(fscrypt_get_encryption_info);
  */
 void fscrypt_put_encryption_info(struct inode *inode)
 {
-	put_crypt_info(inode->i_crypt_info);
+	put_crypt_info(inode->i_crypt_info, inode);
 	inode->i_crypt_info = NULL;
 }
 EXPORT_SYMBOL(fscrypt_put_encryption_info);
@@ -610,3 +623,17 @@ void fscrypt_free_inode(struct inode *inode)
 	}
 }
 EXPORT_SYMBOL(fscrypt_free_inode);
+
+#if IS_ENABLED(CONFIG_FS_ENCRYPTION_HW_CRYPT)
+bool fscrypt_inode_is_hw_encrypted(const struct inode *inode)
+{
+	struct fscrypt_info *ci;
+
+	if (!inode)
+		return false;
+	ci = inode->i_crypt_info;
+
+	return ci && __flags_hw_encrypted(ci->ci_flags, inode);
+}
+EXPORT_SYMBOL(fscrypt_inode_is_hw_encrypted);
+#endif /* CONFIG_FS_ENCRYPTION_HW_CRYPT */
diff --git a/fs/crypto/policy.c b/fs/crypto/policy.c
index d536889ac31b..556e9da7a427 100644
--- a/fs/crypto/policy.c
+++ b/fs/crypto/policy.c
@@ -36,6 +36,7 @@ static int create_encryption_context_from_policy(struct inode *inode,
 	struct fscrypt_context ctx;
 
 	ctx.format = FS_ENCRYPTION_CONTEXT_FORMAT_V1;
+
 	memcpy(ctx.master_key_descriptor, policy->master_key_descriptor,
 					FS_KEY_DESCRIPTOR_SIZE);
 
@@ -46,8 +47,17 @@ static int create_encryption_context_from_policy(struct inode *inode,
 	if (policy->flags & ~FS_POLICY_FLAGS_VALID)
 		return -EINVAL;
 
+	/*
+	 * TODO: expose HW encryption via some toggleable knob
+	 * instead of as a policy?
+	 */
+	if (!inode->i_sb->s_cop->hw_crypt_supp &&
+	    (policy->flags & FS_POLICY_FLAGS_HW_ENCRYPTION))
+		return -EINVAL;
+
 	ctx.contents_encryption_mode = policy->contents_encryption_mode;
 	ctx.filenames_encryption_mode = policy->filenames_encryption_mode;
+
 	ctx.flags = policy->flags;
 	BUILD_BUG_ON(sizeof(ctx.nonce) != FS_KEY_DERIVATION_NONCE_SIZE);
 	get_random_bytes(ctx.nonce, FS_KEY_DERIVATION_NONCE_SIZE);
diff --git a/include/linux/fscrypt.h b/include/linux/fscrypt.h
index f7680ef1abd2..f0e1589f32bd 100644
--- a/include/linux/fscrypt.h
+++ b/include/linux/fscrypt.h
@@ -61,6 +61,7 @@ struct fscrypt_operations {
 	bool (*dummy_context)(struct inode *);
 	bool (*empty_dir)(struct inode *);
 	unsigned int max_namelen;
+	bool hw_crypt_supp;
 };
 
 struct fscrypt_ctx {
@@ -130,6 +131,22 @@ extern int fscrypt_get_encryption_info(struct inode *);
 extern void fscrypt_put_encryption_info(struct inode *);
 extern void fscrypt_free_inode(struct inode *);
 
+#if IS_ENABLED(CONFIG_FS_ENCRYPTION_HW_CRYPT)
+extern bool fscrypt_inode_is_hw_encrypted(const struct inode *inode);
+extern bool fscrypt_inode_crypt_mergeable(const struct inode *inode_1,
+					  const struct inode *inode_2);
+#else
+static inline bool fscrypt_inode_is_hw_encrypted(const struct inode *inode)
+{
+	return false;
+}
+static inline bool fscrypt_inode_crypt_mergeable(const struct inode *inode_1,
+						 const struct inode *inode_2)
+{
+	return true;
+}
+#endif /* CONFIG_FS_ENCRYPTION_HW_CRYPT */
+
 /* fname.c */
 extern int fscrypt_setup_filename(struct inode *, const struct qstr *,
 				int lookup, struct fscrypt_name *);
@@ -226,6 +243,25 @@ extern void fscrypt_enqueue_decrypt_bio(struct fscrypt_ctx *ctx,
 extern void fscrypt_pullback_bio_page(struct page **, bool);
 extern int fscrypt_zeroout_range(const struct inode *, pgoff_t, sector_t,
 				 unsigned int);
+#if IS_ENABLED(CONFIG_FS_ENCRYPTION_HW_CRYPT)
+extern int fscrypt_set_bio_crypt_ctx(const struct inode *inode,
+				     struct bio *bio, u64 data_unit_num);
+extern void fscrypt_unset_bio_crypt_ctx(struct bio *bio);
+extern int fscrypt_evict_crypt_key(struct inode *inode);
+#else
+static inline int fscrypt_set_bio_crypt_ctx(const struct inode *inode,
+					    struct bio *bio, u64 data_unit_num)
+{
+	return 0;
+}
+
+static inline void fscrypt_unset_bio_crypt_ctx(struct bio *bio) { }
+
+static inline int fscrypt_evict_crypt_key(struct inode *inode)
+{
+	return 0;
+}
+#endif
 
 /* hooks.c */
 extern int fscrypt_file_open(struct inode *inode, struct file *filp);
@@ -351,6 +387,17 @@ static inline void fscrypt_free_inode(struct inode *inode)
 {
 }
 
+static inline bool fscrypt_inode_is_hw_encrypted(const struct inode *inode)
+{
+	return false;
+}
+
+static inline bool fscrypt_inode_crypt_mergeable(const struct inode *inode_1,
+						 const struct inode *inode_2)
+{
+	return true;
+}
+
  /* fname.c */
 static inline int fscrypt_setup_filename(struct inode *dir,
 					 const struct qstr *iname,
@@ -421,6 +468,23 @@ static inline int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
 	return -EOPNOTSUPP;
 }
 
+static inline int fscrypt_set_bio_crypt_ctx(const struct inode *inode,
+					    struct bio *bio,
+					    u64 data_unit_num)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline void fscrypt_unset_bio_crypt_ctx(struct bio *bio)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int fscrypt_evict_crypt_key(struct inode *inode)
+{
+	return 0;
+}
+
 /* hooks.c */
 
 static inline int fscrypt_file_open(struct inode *inode, struct file *filp)
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index 59c71fa8c553..e8bbdaea4a49 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -224,7 +224,17 @@ struct fsxattr {
 #define FS_POLICY_FLAGS_PAD_32		0x03
 #define FS_POLICY_FLAGS_PAD_MASK	0x03
 #define FS_POLICY_FLAG_DIRECT_KEY	0x04	/* use master key directly */
-#define FS_POLICY_FLAGS_VALID		0x07
+#define FS_POLICY_FLAGS_VALID_BASE	0x07
+
+#if IS_ENABLED(CONFIG_FS_ENCRYPTION_HW_CRYPT)
+#define FS_POLICY_FLAGS_HW_ENCRYPTION	0x08
+#else
+#define FS_POLICY_FLAGS_HW_ENCRYPTION	0x00
+#endif
+
+
+#define FS_POLICY_FLAGS_VALID (FS_POLICY_FLAGS_VALID_BASE | \
+			       FS_POLICY_FLAGS_HW_ENCRYPTION)
 
 /* Encryption algorithms */
 #define FS_ENCRYPTION_MODE_INVALID		0
-- 
2.22.0.rc1.311.g5d7573a151-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH v2 8/8] f2fs: Wire up f2fs to use inline encryption via fscrypt
  2019-06-05 23:28 [RFC PATCH v2 0/8] Inline Encryption Support Satya Tangirala
                   ` (6 preceding siblings ...)
  2019-06-05 23:28 ` [RFC PATCH v2 7/8] fscrypt: wire up fscrypt to use blk-crypto Satya Tangirala
@ 2019-06-05 23:28 ` Satya Tangirala
  7 siblings, 0 replies; 16+ messages in thread
From: Satya Tangirala @ 2019-06-05 23:28 UTC (permalink / raw)
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel, linux-f2fs-devel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang, Satya Tangirala

Signed-off-by: Satya Tangirala <satyat@google.com>
---
 fs/f2fs/data.c  | 77 ++++++++++++++++++++++++++++++++++++++++++++++---
 fs/f2fs/super.c |  1 +
 2 files changed, 74 insertions(+), 4 deletions(-)

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index eda4181d2092..25e691947fb4 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -142,6 +142,8 @@ static bool f2fs_bio_post_read_required(struct bio *bio)
 
 static void f2fs_read_end_io(struct bio *bio)
 {
+	fscrypt_unset_bio_crypt_ctx(bio);
+
 	if (time_to_inject(F2FS_P_SB(bio_first_page_all(bio)),
 						FAULT_READ_IO)) {
 		f2fs_show_injection_info(FAULT_READ_IO);
@@ -165,6 +167,8 @@ static void f2fs_write_end_io(struct bio *bio)
 	struct bio_vec *bvec;
 	struct bvec_iter_all iter_all;
 
+	fscrypt_unset_bio_crypt_ctx(bio);
+
 	if (time_to_inject(sbi, FAULT_WRITE_IO)) {
 		f2fs_show_injection_info(FAULT_WRITE_IO);
 		bio->bi_status = BLK_STS_IOERR;
@@ -282,9 +286,18 @@ static struct bio *__bio_alloc(struct f2fs_sb_info *sbi, block_t blk_addr,
 	return bio;
 }
 
+static inline u64 hw_crypt_dun(struct inode *inode, pgoff_t offset)
+{
+	return (((u64)inode->i_ino) << 32) | lower_32_bits(offset);
+}
+
 static inline void __submit_bio(struct f2fs_sb_info *sbi,
 				struct bio *bio, enum page_type type)
 {
+	struct page *page;
+	struct inode *inode;
+	int err = 0;
+
 	if (!is_read_io(bio_op(bio))) {
 		unsigned int start;
 
@@ -326,7 +339,22 @@ static inline void __submit_bio(struct f2fs_sb_info *sbi,
 		trace_f2fs_submit_read_bio(sbi->sb, type, bio);
 	else
 		trace_f2fs_submit_write_bio(sbi->sb, type, bio);
-	submit_bio(bio);
+
+	if (bio_has_data(bio)) {
+		page = bio_page(bio);
+		if (page && page->mapping && page->mapping->host) {
+			inode = page->mapping->host;
+			err = fscrypt_set_bio_crypt_ctx(inode, bio,
+						hw_crypt_dun(inode,
+							     page->index));
+		}
+	}
+	if (err) {
+		bio->bi_status = BLK_STS_IOERR;
+		bio_endio(bio);
+	} else {
+		submit_bio(bio);
+	}
 }
 
 static void __submit_merged_bio(struct f2fs_bio_info *io)
@@ -487,6 +515,9 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
 	enum page_type btype = PAGE_TYPE_OF_BIO(fio->type);
 	struct f2fs_bio_info *io = sbi->write_io[btype] + fio->temp;
 	struct page *bio_page;
+	struct inode *fio_inode, *bio_inode;
+	struct page *first_page;
+	u64 next_dun = 0;
 
 	f2fs_bug_on(sbi, is_read_io(fio->op));
 
@@ -513,10 +544,27 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
 
 	inc_page_count(sbi, WB_DATA_TYPE(bio_page));
 
+	fio_inode = fio->page->mapping->host;
+	bio_inode = NULL;
+	first_page = NULL;
+	next_dun = 0;
+	if (io->bio && bio_page(io->bio)->mapping) {
+		first_page = bio_page(io->bio);
+		bio_inode = first_page->mapping->host;
+		if (fscrypt_inode_is_hw_encrypted(bio_inode)) {
+			next_dun = hw_crypt_dun(bio_inode, first_page->index) +
+				   (io->bio->bi_iter.bi_size >> PAGE_SHIFT);
+		}
+	}
 	if (io->bio && (io->last_block_in_bio != fio->new_blkaddr - 1 ||
 	    (io->fio.op != fio->op || io->fio.op_flags != fio->op_flags) ||
-			!__same_bdev(sbi, fio->new_blkaddr, io->bio)))
+			!__same_bdev(sbi, fio->new_blkaddr, io->bio) ||
+			!fscrypt_inode_crypt_mergeable(bio_inode, fio_inode) ||
+			(fscrypt_inode_is_hw_encrypted(bio_inode) &&
+			 next_dun != hw_crypt_dun(fio_inode,
+						  fio->page->index))))
 		__submit_merged_bio(io);
+
 alloc_new:
 	if (io->bio == NULL) {
 		if ((fio->type == DATA || fio->type == NODE) &&
@@ -568,7 +616,7 @@ static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr,
 	bio->bi_end_io = f2fs_read_end_io;
 	bio_set_op_attrs(bio, REQ_OP_READ, op_flag);
 
-	if (f2fs_encrypted_file(inode))
+	if (f2fs_encrypted_file(inode) && !fscrypt_inode_is_hw_encrypted(inode))
 		post_read_steps |= 1 << STEP_DECRYPT;
 	if (post_read_steps) {
 		ctx = mempool_alloc(bio_post_read_ctx_pool, GFP_NOFS);
@@ -1519,6 +1567,7 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page,
 					struct f2fs_map_blocks *map,
 					struct bio **bio_ret,
 					sector_t *last_block_in_bio,
+					u64 *next_dun,
 					bool is_readahead)
 {
 	struct bio *bio = *bio_ret;
@@ -1592,6 +1641,13 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page,
 		__submit_bio(F2FS_I_SB(inode), bio, DATA);
 		bio = NULL;
 	}
+
+	if (bio && fscrypt_inode_is_hw_encrypted(inode) &&
+	    *next_dun != hw_crypt_dun(inode, page->index)) {
+		__submit_bio(F2FS_I_SB(inode), bio, DATA);
+		bio = NULL;
+	}
+
 	if (bio == NULL) {
 		bio = f2fs_grab_read_bio(inode, block_nr, nr_pages,
 				is_readahead ? REQ_RAHEAD : 0);
@@ -1611,6 +1667,9 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page,
 	if (bio_add_page(bio, page, blocksize, 0) < blocksize)
 		goto submit_and_realloc;
 
+	if (fscrypt_inode_is_hw_encrypted(inode))
+		*next_dun = hw_crypt_dun(inode, page->index) + 1;
+
 	inc_page_count(F2FS_I_SB(inode), F2FS_RD_DATA);
 	ClearPageError(page);
 	*last_block_in_bio = block_nr;
@@ -1644,6 +1703,7 @@ static int f2fs_mpage_readpages(struct address_space *mapping,
 	struct inode *inode = mapping->host;
 	struct f2fs_map_blocks map;
 	int ret = 0;
+	u64 next_dun = 0;
 
 	map.m_pblk = 0;
 	map.m_lblk = 0;
@@ -1667,7 +1727,8 @@ static int f2fs_mpage_readpages(struct address_space *mapping,
 		}
 
 		ret = f2fs_read_single_page(inode, page, nr_pages, &map, &bio,
-					&last_block_in_bio, is_readahead);
+					&last_block_in_bio, &next_dun,
+					is_readahead);
 		if (ret) {
 			SetPageError(page);
 			zero_user_segment(page, 0, PAGE_SIZE);
@@ -2617,6 +2678,8 @@ static void f2fs_dio_end_io(struct bio *bio)
 {
 	struct f2fs_private_dio *dio = bio->bi_private;
 
+	fscrypt_unset_bio_crypt_ctx(bio);
+
 	dec_page_count(F2FS_I_SB(dio->inode),
 			dio->write ? F2FS_DIO_WRITE : F2FS_DIO_READ);
 
@@ -2633,12 +2696,18 @@ static void f2fs_dio_submit_bio(struct bio *bio, struct inode *inode,
 {
 	struct f2fs_private_dio *dio;
 	bool write = (bio_op(bio) == REQ_OP_WRITE);
+	u64 data_unit_num = hw_crypt_dun(inode, file_offset >> PAGE_SHIFT);
 
 	dio = f2fs_kzalloc(F2FS_I_SB(inode),
 			sizeof(struct f2fs_private_dio), GFP_NOFS);
 	if (!dio)
 		goto out;
 
+	if (fscrypt_set_bio_crypt_ctx(inode, bio, data_unit_num) != 0) {
+		kvfree(dio);
+		goto out;
+	}
+
 	dio->inode = inode;
 	dio->orig_end_io = bio->bi_end_io;
 	dio->orig_private = bio->bi_private;
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index 6b959bbb336a..6399373777ce 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -2229,6 +2229,7 @@ static const struct fscrypt_operations f2fs_cryptops = {
 	.dummy_context	= f2fs_dummy_context,
 	.empty_dir	= f2fs_empty_dir,
 	.max_namelen	= F2FS_NAME_LEN,
+	.hw_crypt_supp	= true,
 };
 #endif
 
-- 
2.22.0.rc1.311.g5d7573a151-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH v2 1/8] block: Keyslot Manager for Inline Encryption
  2019-06-05 23:28 ` [RFC PATCH v2 1/8] block: Keyslot Manager for Inline Encryption Satya Tangirala
@ 2019-06-07 22:28   ` Eric Biggers
  2019-06-12 18:26   ` Eric Biggers
  1 sibling, 0 replies; 16+ messages in thread
From: Eric Biggers @ 2019-06-07 22:28 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, Parshuram Raju Thombare, Ladvine D Almeida,
	Barani Muthukumaran, Kuohong Wang

Hi Satya,

On Wed, Jun 05, 2019 at 04:28:30PM -0700, Satya Tangirala wrote:
> 
> Known issues:
> 1) Keyslot Manager has a performance bug where the same encryption
>    context may be programmed into multiple keyslots at the same time in
>    certain situations when all keyslots are being used.

This is also a correctness bug, since keyslot_manager_evict_key() only evicts
one copy of the key.  It can be fixed by looking for the key again after waiting
for an idle slot.

> +
> +struct keyslot_manager {
> +	unsigned int num_slots;
> +	atomic_t num_idle_slots;
> +	struct keyslot_mgmt_ll_ops ksm_ll_ops;
> +	void *ll_priv_data;
> +	struct mutex lock;
> +	wait_queue_head_t wait_queue;
> +	u64 seq_num;
> +	u64 *last_used_seq_nums;
> +	atomic_t slot_refs[];
> +};

slot_refs[] and last_used_seq_nums[] both contain per-keyslot data.  It would be
cleaner to combine them into a single 'slots' array of 'struct keyslot'.  That
would also make it much easier to add or change per-keyslot data in the future.

> /**
>  * keyslot_manager_create() - Create a keyslot manager
>  * @num_slots: The number of key slots to manage.
>  * @ksm_ll_ops: The struct keyslot_mgmt_ll_ops for the device that this keyslot
>  *		manager will use to perform operations like programming and
>  *		evicting keys.
>  * @ll_priv_data: Private data passed as is to the functions in ksm_ll_ops.
>  *
>  * Allocate memory for and initialize a keyslot manager. Called by for e.g.
>  * storage drivers to set up a keyslot manager in their request_queue.
>  *
>  * Context: May sleep
>  * Return: Pointer to constructed keyslot manager or NULL on error.
>  */
> struct keyslot_manager *keyslot_manager_create(unsigned int num_slots,
> 				const struct keyslot_mgmt_ll_ops *ksm_ll_ops,
> 				void *ll_priv_data)
> {
> 	struct keyslot_manager *ksm;
> 
> 	if (num_slots == 0)
> 		return NULL;
> 
> 	/* Check that all ops are specified */
> 	if (ksm_ll_ops->keyslot_program == NULL ||
> 	    ksm_ll_ops->keyslot_evict == NULL ||
> 	    ksm_ll_ops->crypt_mode_supported == NULL ||
> 	    ksm_ll_ops->keyslot_find == NULL)
> 		return NULL;
> 
> 	ksm = kzalloc(struct_size(ksm, slot_refs, num_slots), GFP_KERNEL);
> 	if (!ksm)
> 		return NULL;

This should probably be kvzalloc(), just in case the number of keyslots is too
large to fit comfortably into kmalloc memory.  We don't need physically
contiguous memory here.

> +/**
> + * keyslot_manager_get_slot_for_key() - Program a key into a keyslot.
> + * @ksm: The keyslot manager to program the key into.
> + * @key: Pointer to the bytes of the key to program. Must be the correct length
> + *      for the chosen @crypt_mode; see blk_crypt_modes in blk-crypto.c.
> + * @crypt_mode: Identifier for the encryption algorithm to use.
> + * @data_unit_size: The data unit size to use for en/decryption.
> + *
> + * Get a keyslot that's been programmed with the specified key, crypt_mode, and
> + * data_unit_size.  If one already exists, return it with incremented refcount.
> + * Otherwise, wait for a keyslot to become idle and program it.
> + *
> + * Context: Process context. Takes and releases ksm->lock.
> + * Return: The keyslot on success, else a -errno value.
> + */
> +int keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm,
> +				     const u8 *key,
> +				     enum blk_crypt_mode_num crypt_mode,
> +				     unsigned int data_unit_size)
> +{
> +	int slot;
> +	int err;
> +	int c;
> +	int lru_idle_slot;
> +	u64 min_seq_num;
> +
> +	mutex_lock(&ksm->lock);
> +	slot = ksm->ksm_ll_ops.keyslot_find(ksm->ll_priv_data, key,
> +					    crypt_mode,
> +					    data_unit_size);
> +
> +	if (slot < 0 && slot != -ENOKEY) {
> +		mutex_unlock(&ksm->lock);
> +		return slot;
> +	}

This is the fast path: taking a reference to an existing key slot.  There could
be many processes issuing I/O concurrently, so I'm worried that the per-device
mutex here will be a bottleneck.  How about using a rw_semaphore instead?
->keyslot_find() would be called with (at least) the read lock, while
->keyslot_program() and ->keyslot_evict() would be called with the write lock.

> +	/* Todo: fix linear scan? */
> +	/* Find least recently used idle slot (i.e. slot with minimum number) */
> +	lru_idle_slot  = -1;
> +	min_seq_num = 0;
> +	for (c = 0; c < ksm->num_slots; c++) {
> +		if (atomic_read(&ksm->slot_refs[c]) != 0)
> +			continue;
> +
> +		if (lru_idle_slot == -1 ||
> +		    ksm->last_used_seq_nums[c] < min_seq_num) {
> +			lru_idle_slot = c;
> +			min_seq_num = ksm->last_used_seq_nums[c];
> +		}
> +	}

How about using a real LRU list instead?  I.e., a linked list containing all
keyslots with refs == 0 in order of last use.  Then you could just grab the head
of the list here, which would be much more efficient than iterating through
every keyslot as the code does now.

The current LRU implementation is also broken since it orders the entries by
when they were last removed from the LRU list (i.e. last *started* to be used),
not by when they were last added to the LRU list (i.e. last used).

To better show what I mean, here's an incremental patch (compile-tested only!)
that implements these changes:

diff --git a/block/keyslot-manager.c b/block/keyslot-manager.c
index d4a5d6d78d2c..dd2fad8319c2 100644
--- a/block/keyslot-manager.c
+++ b/block/keyslot-manager.c
@@ -34,16 +34,26 @@
 #include <linux/sched.h>
 #include <linux/wait.h>
 
+struct keyslot {
+	atomic_t refs;		/* Number of users of this keyslot */
+	struct list_head lru;	/* Link in idle_slots LRU list (if refs == 0) */
+};
+
 struct keyslot_manager {
 	unsigned int num_slots;
-	atomic_t num_idle_slots;
 	struct keyslot_mgmt_ll_ops ksm_ll_ops;
 	void *ll_priv_data;
-	struct mutex lock;
-	wait_queue_head_t wait_queue;
-	u64 seq_num;
-	u64 *last_used_seq_nums;
-	atomic_t slot_refs[];
+
+	/* Protects programming and evicting keys from the device */
+	struct rw_semaphore lock;
+
+	/* List of slots with refs == 0, with least recently used at front */
+	struct list_head idle_slots;
+	spinlock_t idle_slots_lock;
+	wait_queue_head_t idle_slots_wait_queue;
+
+	/* Per-keyslot data */
+	struct keyslot slots[];
 };
 
 /**
@@ -65,6 +75,7 @@ struct keyslot_manager *keyslot_manager_create(unsigned int num_slots,
 				void *ll_priv_data)
 {
 	struct keyslot_manager *ksm;
+	int slot;
 
 	if (num_slots == 0)
 		return NULL;
@@ -76,28 +87,47 @@ struct keyslot_manager *keyslot_manager_create(unsigned int num_slots,
 	    ksm_ll_ops->keyslot_find == NULL)
 		return NULL;
 
-	ksm = kzalloc(struct_size(ksm, slot_refs, num_slots), GFP_KERNEL);
+	ksm = kvzalloc(struct_size(ksm, slots, num_slots), GFP_KERNEL);
 	if (!ksm)
 		return NULL;
 
 	ksm->num_slots = num_slots;
-	atomic_set(&ksm->num_idle_slots, num_slots);
 	ksm->ksm_ll_ops = *ksm_ll_ops;
 	ksm->ll_priv_data = ll_priv_data;
 
-	mutex_init(&ksm->lock);
-	init_waitqueue_head(&ksm->wait_queue);
+	init_rwsem(&ksm->lock);
 
-	ksm->last_used_seq_nums = kcalloc(num_slots, sizeof(u64), GFP_KERNEL);
-	if (!ksm->last_used_seq_nums) {
-		kzfree(ksm);
-		ksm = NULL;
-	}
+	INIT_LIST_HEAD(&ksm->idle_slots);
+	spin_lock_init(&ksm->idle_slots_lock);
+	init_waitqueue_head(&ksm->idle_slots_wait_queue);
+	for (slot = 0; slot < num_slots; slot++)
+		list_add_tail(&ksm->slots[slot].lru, &ksm->idle_slots);
 
 	return ksm;
 }
 EXPORT_SYMBOL(keyslot_manager_create);
 
+static int find_and_grab_keyslot(struct keyslot_manager *ksm, const u8 *key,
+				 enum blk_crypt_mode_num crypt_mode,
+				 unsigned int data_unit_size)
+{
+	int slot;
+
+	slot = ksm->ksm_ll_ops.keyslot_find(ksm->ll_priv_data, key,
+					    crypt_mode, data_unit_size);
+	if (slot < 0)
+		return slot;
+	if (WARN_ON(slot >= ksm->num_slots))
+		return -EINVAL;
+	if (atomic_inc_return(&ksm->slots[slot].refs) == 1) {
+		/* Took first reference to this slot; remove it from LRU list */
+		spin_lock(&ksm->idle_slots_lock);
+		list_del(&ksm->slots[slot].lru);
+		spin_unlock(&ksm->idle_slots_lock);
+	}
+	return slot;
+}
+
 /**
  * keyslot_manager_get_slot_for_key() - Program a key into a keyslot.
  * @ksm: The keyslot manager to program the key into.
@@ -110,7 +140,7 @@ EXPORT_SYMBOL(keyslot_manager_create);
  * data_unit_size.  If one already exists, return it with incremented refcount.
  * Otherwise, wait for a keyslot to become idle and program it.
  *
- * Context: Process context. Takes and releases ksm->lock.
+ * Context: Process context.
  * Return: The keyslot on success, else a -errno value.
  */
 int keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm,
@@ -119,103 +149,60 @@ int keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm,
 				     unsigned int data_unit_size)
 {
 	int slot;
+	struct keyslot *slotp;
 	int err;
-	int c;
-	int lru_idle_slot;
-	u64 min_seq_num;
-
-	mutex_lock(&ksm->lock);
-	slot = ksm->ksm_ll_ops.keyslot_find(ksm->ll_priv_data, key,
-					    crypt_mode,
-					    data_unit_size);
-
-	if (slot < 0 && slot != -ENOKEY) {
-		mutex_unlock(&ksm->lock);
-		return slot;
-	}
 
-	if (WARN_ON(slot >= (int)ksm->num_slots)) {
-		mutex_unlock(&ksm->lock);
-		return -EINVAL;
-	}
-
-	/* Try to use the returned slot */
-	if (slot != -ENOKEY) {
-		/*
-		 * NOTE: We may fail to get a slot if the number of refs
-		 * overflows UINT_MAX. I don't think we care enough about
-		 * that possibility to make the refcounts u64, considering
-		 * the only way for that to happen is for at least UINT_MAX
-		 * requests to be in flight at the same time.
-		 */
-		if ((unsigned int)atomic_read(&ksm->slot_refs[slot]) ==
-		    UINT_MAX) {
-			mutex_unlock(&ksm->lock);
-			return -EBUSY;
-		}
-
-		if (atomic_fetch_inc(&ksm->slot_refs[slot]) == 0)
-			atomic_dec(&ksm->num_idle_slots);
-
-		ksm->last_used_seq_nums[slot] = ++ksm->seq_num;
-
-		mutex_unlock(&ksm->lock);
+	/*
+	 * Fast path: take reference to existing keyslot, if there is one.
+	 * For this we only need the read lock.
+	 */
+	down_read(&ksm->lock);
+	slot = find_and_grab_keyslot(ksm, key, crypt_mode, data_unit_size);
+	up_read(&ksm->lock);
+	if (slot != -ENOKEY)
 		return slot;
-	}
 
 	/*
-	 * If we're here, that means there wasn't a slot that
-	 * was already programmed with the key
+	 * Slow path: wait for a slot to become idle, *or* for someone else to
+	 * have programmed the key while we dropped the lock.
 	 */
+	for (;;) {
+		down_write(&ksm->lock);
+		slot = find_and_grab_keyslot(ksm, key, crypt_mode,
+					     data_unit_size);
+		if (slot != -ENOKEY) {
+			up_write(&ksm->lock);
+			return slot;
+		}
 
-	/* Wait till there is a free slot available */
-	while (atomic_read(&ksm->num_idle_slots) == 0) {
-		mutex_unlock(&ksm->lock);
-		wait_event(ksm->wait_queue,
-			   (atomic_read(&ksm->num_idle_slots) > 0));
-		mutex_lock(&ksm->lock);
-	}
+		spin_lock(&ksm->idle_slots_lock);
+		if (!list_empty(&ksm->idle_slots))
+			break;
+		spin_unlock(&ksm->idle_slots_lock);
 
-	/* Todo: fix linear scan? */
-	/* Find least recently used idle slot (i.e. slot with minimum number) */
-	lru_idle_slot  = -1;
-	min_seq_num = 0;
-	for (c = 0; c < ksm->num_slots; c++) {
-		if (atomic_read(&ksm->slot_refs[c]) != 0)
-			continue;
-
-		if (lru_idle_slot == -1 ||
-		    ksm->last_used_seq_nums[c] < min_seq_num) {
-			lru_idle_slot = c;
-			min_seq_num = ksm->last_used_seq_nums[c];
-		}
+		up_write(&ksm->lock);
+		wait_event(ksm->idle_slots_wait_queue,
+			   !list_empty(&ksm->idle_slots));
 	}
 
-	if (WARN_ON(lru_idle_slot == -1)) {
-		mutex_unlock(&ksm->lock);
-		return -EBUSY;
-	}
+	/* Remove least recently used idle slot from LRU list. */
+	slotp = list_first_entry(&ksm->idle_slots, struct keyslot, lru);
+	list_del(&slotp->lru);
+	atomic_set(&slotp->refs, 1);
+	spin_unlock(&ksm->idle_slots_lock);
+	slot = slotp - ksm->slots;
 
-	atomic_dec(&ksm->num_idle_slots);
-	atomic_inc(&ksm->slot_refs[lru_idle_slot]);
+	/* Program the key into it. */
 	err = ksm->ksm_ll_ops.keyslot_program(ksm->ll_priv_data, key,
-					      crypt_mode,
-					      data_unit_size,
-					      lru_idle_slot);
+					      crypt_mode, data_unit_size, slot);
+	up_write(&ksm->lock);
 
 	if (err) {
-		atomic_dec(&ksm->slot_refs[lru_idle_slot]);
-		atomic_inc(&ksm->num_idle_slots);
-		wake_up(&ksm->wait_queue);
-		mutex_unlock(&ksm->lock);
+		/* Oops, programming the key failed.  Return slot to LRU list */
+		keyslot_manager_put_slot(ksm, slot);
 		return err;
 	}
-
-	ksm->seq_num++;
-	ksm->last_used_seq_nums[lru_idle_slot] = ksm->seq_num;
-
-	mutex_unlock(&ksm->lock);
-	return lru_idle_slot;
+	return slot;
 }
 EXPORT_SYMBOL(keyslot_manager_get_slot_for_key);
 
@@ -236,7 +223,7 @@ void keyslot_manager_get_slot(struct keyslot_manager *ksm, unsigned int slot)
 	if (WARN_ON(slot >= ksm->num_slots))
 		return;
 
-	WARN_ON(atomic_inc_return(&ksm->slot_refs[slot]) < 2);
+	WARN_ON(atomic_inc_return(&ksm->slots[slot].refs) < 2);
 }
 EXPORT_SYMBOL(keyslot_manager_get_slot);
 
@@ -252,9 +239,12 @@ void keyslot_manager_put_slot(struct keyslot_manager *ksm, unsigned int slot)
 	if (WARN_ON(slot >= ksm->num_slots))
 		return;
 
-	if (atomic_dec_and_test(&ksm->slot_refs[slot])) {
-		atomic_inc(&ksm->num_idle_slots);
-		wake_up(&ksm->wait_queue);
+	if (atomic_dec_and_lock(&ksm->slots[slot].refs,
+				&ksm->idle_slots_lock)) {
+		/* Dropped last reference to this slot; add it to LRU list */
+		list_add_tail(&ksm->slots[slot].lru, &ksm->idle_slots);
+		spin_unlock(&ksm->idle_slots_lock);
+		wake_up(&ksm->idle_slots_wait_queue);
 	}
 }
 EXPORT_SYMBOL(keyslot_manager_put_slot);
@@ -271,7 +261,7 @@ EXPORT_SYMBOL(keyslot_manager_put_slot);
  * the refcount on the slot is 0. Returns -EBUSY if the refcount is not 0, and
  * -errno on error.
  *
- * Context: Process context. Takes and releases ksm->lock.
+ * Context: Process context.
  */
 int keyslot_manager_evict_key(struct keyslot_manager *ksm,
 			      const u8 *key,
@@ -279,37 +269,30 @@ int keyslot_manager_evict_key(struct keyslot_manager *ksm,
 			      unsigned int data_unit_size)
 {
 	int slot;
-	int err = 0;
+	int err;
 
-	mutex_lock(&ksm->lock);
+	down_write(&ksm->lock);
 	slot = ksm->ksm_ll_ops.keyslot_find(ksm->ll_priv_data, key,
-					    crypt_mode,
-					    data_unit_size);
-
+					    crypt_mode, data_unit_size);
 	if (slot < 0) {
-		mutex_unlock(&ksm->lock);
+		up_write(&ksm->lock);
 		return slot;
 	}
 
-	if (atomic_read(&ksm->slot_refs[slot]) == 0) {
+	if (atomic_read(&ksm->slots[slot].refs) == 0) {
 		err = ksm->ksm_ll_ops.keyslot_evict(ksm->ll_priv_data, key,
-						    crypt_mode,
-						    data_unit_size,
+						    crypt_mode, data_unit_size,
 						    slot);
 	} else {
 		err = -EBUSY;
 	}
-
-	mutex_unlock(&ksm->lock);
+	up_write(&ksm->lock);
 	return err;
 }
 EXPORT_SYMBOL(keyslot_manager_evict_key);
 
 void keyslot_manager_destroy(struct keyslot_manager *ksm)
 {
-	if (!ksm)
-		return;
-	kzfree(ksm->last_used_seq_nums);
-	kzfree(ksm);
+	kvfree(ksm);
 }
 EXPORT_SYMBOL(keyslot_manager_destroy);

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH v2 2/8] block: Add encryption context to struct bio
  2019-06-05 23:28 ` [RFC PATCH v2 2/8] block: Add encryption context to struct bio Satya Tangirala
@ 2019-06-12 18:10   ` Eric Biggers
  0 siblings, 0 replies; 16+ messages in thread
From: Eric Biggers @ 2019-06-12 18:10 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, Parshuram Raju Thombare, Ladvine D Almeida,
	Barani Muthukumaran, Kuohong Wang

Hi Satya,

On Wed, Jun 05, 2019 at 04:28:31PM -0700, Satya Tangirala wrote:
> We must have some way of letting a storage device driver know what
> encryption context it should use for en/decrypting a request. However,
> it's the filesystem/fscrypt that knows about and manages encryption
> contexts. As such, when the filesystem layer submits a bio to the block
> layer, and this bio eventually reaches a device driver with support for
> inline encryption, the device driver will need to have been told the
> encryption context for that bio.
> 
> We want to communicate the encryption context from the filesystem layer
> to the storage device along with the bio, when the bio is submitted to the
> block layer. To do this, we add a struct bio_crypt_ctx to struct bio, which
> can represent an encryption context (note that we can't use the bi_private
> field in struct bio to do this because that field does not function to pass
> information across layers in the storage stack). We also introduce various
> functions to manipulate the bio_crypt_ctx and make the bio/request merging
> logic aware of the bio_crypt_ctx.
> 
> Signed-off-by: Satya Tangirala <satyat@google.com>
> ---
>  block/bio.c               |  12 ++-
>  block/blk-crypt-ctx.c     |  90 +++++++++++++++++++
>  block/blk-merge.c         |  34 ++++++-
>  block/bounce.c            |   9 +-
>  drivers/md/dm.c           |  15 ++--
>  include/linux/bio.h       | 180 ++++++++++++++++++++++++++++++++++++++
>  include/linux/blk_types.h |  28 ++++++
>  7 files changed, 355 insertions(+), 13 deletions(-)
>  create mode 100644 block/blk-crypt-ctx.c
> 
> diff --git a/block/bio.c b/block/bio.c
> index 683cbb40f051..87aa87288b39 100644
> --- a/block/bio.c
> +++ b/block/bio.c
> @@ -16,6 +16,7 @@
>  #include <linux/workqueue.h>
>  #include <linux/cgroup.h>
>  #include <linux/blk-cgroup.h>
> +#include <linux/keyslot-manager.h>

No need to include keyslot-manager.h here.

> @@ -1019,6 +1026,7 @@ void bio_advance(struct bio *bio, unsigned bytes)
>  		bio_integrity_advance(bio, bytes);
>  
>  	bio_advance_iter(bio, &bio->bi_iter, bytes);
> +	bio_crypt_advance(bio, bytes);
>  }
>  EXPORT_SYMBOL(bio_advance);

It would be more logical to do bio_crypt_advance() before bio_advance_iter(), so
that the special features (encryption and integrity) are grouped together.

>  
> diff --git a/block/blk-crypt-ctx.c b/block/blk-crypt-ctx.c
> new file mode 100644
> index 000000000000..174c058ab0c6
> --- /dev/null
> +++ b/block/blk-crypt-ctx.c

It would be more logical for this file to be named "bio-crypt-ctx.c", as that
would match 'struct bio_crypt_ctx' and help distinguish it from "blk-crypto".

> @@ -0,0 +1,90 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright 2019 Google LLC
> + */
> +
> +#include <linux/bio.h>
> +#include <linux/blkdev.h>
> +#include <linux/slab.h>
> +#include <linux/keyslot-manager.h>
> +
> +struct bio_crypt_ctx *bio_crypt_alloc_ctx(gfp_t gfp_mask)
> +{
> +	return kzalloc(sizeof(struct bio_crypt_ctx), gfp_mask);
> +}

This needs EXPORT_SYMBOL(), since it's called by bio_crypt_set_ctx() which is an
inline function that will be called by places that submit bios.

> +
> +void bio_crypt_free_ctx(struct bio *bio)
> +{
> +	kzfree(bio->bi_crypt_context);
> +	bio->bi_crypt_context = NULL;
> +}
> +
> +int bio_clone_crypt_context(struct bio *dst, struct bio *src, gfp_t gfp_mask)
> +{

How about naming this function bio_crypt_clone(), for consistency with
bio_integrity_clone()?

> +	if (!bio_is_encrypted(src) || bio_crypt_swhandled(src))
> +		return 0;

Why isn't cloning needed when bio_crypt_swhandled(src)?

> +
> +	dst->bi_crypt_context = bio_crypt_alloc_ctx(gfp_mask);
> +	if (!dst->bi_crypt_context)
> +		return -ENOMEM;
> +
> +	*dst->bi_crypt_context = *src->bi_crypt_context;
> +
> +	if (!bio_crypt_has_keyslot(src))
> +		return 0;
> +
> +	keyslot_manager_get_slot(src->bi_crypt_context->processing_ksm,
> +				 src->bi_crypt_context->keyslot);
> +
> +	return 0;
> +}

Nit: a conditional get would be cleaner than an early return here.

	if (bio_crypt_has_keyslot(src))
		keyslot_manager_get_slot(src->bi_crypt_context->processing_ksm,
					 src->bi_crypt_context->keyslot);

Also, this function needs EXPORT_SYMBOL(), since it's called by drivers/md/dm.c,
which can be a loadable module.

> +/*
> + * Checks that two bio crypt contexts are compatible - i.e. that
> + * they are mergeable except for data_unit_num continuity.
> + */
> +bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2)
> +{
> +	struct bio_crypt_ctx *bc1 = b_1->bi_crypt_context;
> +	struct bio_crypt_ctx *bc2 = b_2->bi_crypt_context;
> +
> +	if (bio_is_encrypted(b_1) != bio_is_encrypted(b_2))
> +		return false;
> +
> +	if (!bio_is_encrypted(b_1))
> +		return true;
> +
> +	return bc1->keyslot != bc2->keyslot &&
> +	       bc1->data_unit_size_bits == bc2->data_unit_size_bits;
> +}

It needs to be 'bc1->keyslot == bc2->keyslot'.

> +
> +/*
> + * Checks that two bio crypt contexts are compatible, and also
> + * that their data_unit_nums are continuous (and can hence be merged)
> + */
> +bool bio_crypt_ctx_back_mergeable(struct bio *b_1,
> +				  unsigned int b1_sectors,
> +				  struct bio *b_2)
> +{
> +	struct bio_crypt_ctx *bc1 = b_1->bi_crypt_context;
> +	struct bio_crypt_ctx *bc2 = b_2->bi_crypt_context;
> +
> +	if (!bio_crypt_ctx_compatible(b_1, b_2))
> +		return false;
> +
> +	return !bio_is_encrypted(b_1) ||
> +		(bc1->data_unit_num +
> +		(b1_sectors >> (bc1->data_unit_size_bits - 9)) ==
> +		bc2->data_unit_num);
> +}
> +

Unnecessary blank line at end of file.

> diff --git a/include/linux/bio.h b/include/linux/bio.h
> index 0f23b5682640..ba9552932571 100644
> --- a/include/linux/bio.h
> +++ b/include/linux/bio.h
> @@ -561,6 +561,186 @@ static inline void bvec_kunmap_irq(char *buffer, unsigned long *flags)
>  }
>  #endif
>  
> +#ifdef CONFIG_BLK_INLINE_ENCRYPTION
> +extern int bio_clone_crypt_context(struct bio *dst, struct bio *src,
> +				   gfp_t gfp_mask);
> +
> +static inline bool bio_is_encrypted(struct bio *bio)
> +{
> +	return bio && bio->bi_crypt_context;
> +}

Is the 'bio != NULL' check actually needed?  Most bio helper functions don't
check for NULL, as it's not a meaningful case.

> +
> +static inline bool bio_crypt_has_keyslot(struct bio *bio)
> +{
> +	return bio_is_encrypted(bio) &&
> +	       bio->bi_crypt_context->keyslot >= 0;
> +}
> +

I think the bio_is_encrypted() check here should be dropped, since all callers
check it beforehand anyway.  It doesn't really make sense for someone to call
functions that are meant to access fields of the bio_crypt_ctx, before verifying
that there actually is a bio_crypt_ctx.  Other bio_crypt_* functions don't check
for NULL, so it seems inconsistent that this one does.

> +
> +static inline int bio_crypt_get_slot(struct bio *bio)
> +{
> +	return bio->bi_crypt_context->keyslot;
> +}

For consistency this should be named *_get_keyslot(), not *_get_slot().

> +
> +static inline void bio_crypt_set_keyslot(struct bio *bio,
> +					 unsigned int keyslot,
> +					 struct keyslot_manager *ksm)
> +{
> +	bio->bi_crypt_context->keyslot = keyslot;
> +	bio->bi_crypt_context->processing_ksm = ksm;
> +
> +	bio->bi_crypt_context->crypt_iter = bio->bi_iter;
> +	bio->bi_crypt_context->sw_data_unit_num =
> +		bio->bi_crypt_context->data_unit_num;
> +}
> +
> +static inline void bio_crypt_unset_keyslot(struct bio *bio)
> +{
> +	bio->bi_crypt_context->processing_ksm = NULL;
> +	bio->bi_crypt_context->keyslot = -1;
> +}
> +
> +static inline u8 *bio_crypt_raw_key(struct bio *bio)
> +{
> +	return bio->bi_crypt_context->raw_key;
> +}
> +
> +static inline enum blk_crypt_mode_num bio_crypt_mode(struct bio *bio)
> +{
> +	return bio->bi_crypt_context->crypt_mode;
> +}

bio_crypt_unset_keyslot(), bio_crypt_raw_key(), and bio_crypt_mode() are only
used in blk-crypto.c.  Is there any reason for block users or drivers to need to
call them?  If not, these fields should really just be accessed directly in
blk-crypto.c.  It's not needed to provide these functions in bio.h where they
are available to everyone.

> diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
> index aafa96839f95..c111b1ce8d24 100644
> --- a/include/linux/blk_types.h
> +++ b/include/linux/blk_types.h
> @@ -148,6 +148,29 @@ enum blk_crypt_mode_num {
>  	 */
>  };
>  
> +struct bio_crypt_ctx {
> +	int keyslot;
> +	u8 *raw_key;
> +	enum blk_crypt_mode_num crypt_mode;
> +	u64 data_unit_num;
> +	unsigned int data_unit_size_bits;
> +
> +	/*
> +	 * The keyslot manager where the key has been programmed
> +	 * with keyslot.
> +	 */
> +	struct keyslot_manager *processing_ksm;
> +
> +	/*
> +	 * Copy of the bvec_iter when this bio was submitted.
> +	 * We only want to en/decrypt the part of the bio
> +	 * as described by the bvec_iter upon submission because
> +	 * bio might be split before being resubmitted
> +	 */
> +	struct bvec_iter crypt_iter;
> +	u64 sw_data_unit_num;
> +};
> +

How about making this struct definition conditional on
CONFIG_BLK_INLINE_ENCRYPTION?  When !CONFIG_BLK_INLINE_ENCRYPTION, no code is
compiled that dereferences any pointer to this struct.

For consistency with bio_integrity_payload and to avoid an extra #ifdef, I think
this should also be moved to bio.h.

blk_crypt_mode_num can be moved to bio.h too, but it will need to be
unconditional since it's used as a parameter to bio_crypt_set_ctx().

>  /*
>   * main unit of I/O for the block layer and lower layers (ie drivers and
>   * stacking drivers)
> @@ -186,6 +209,11 @@ struct bio {
>  	struct blkcg_gq		*bi_blkg;
>  	struct bio_issue	bi_issue;
>  #endif
> +
> +#ifdef CONFIG_BLK_INLINE_ENCRYPTION
> +	struct bio_crypt_ctx	*bi_crypt_context;
> +#endif
> +
>  	union {
>  #if defined(CONFIG_BLK_DEV_INTEGRITY)
>  		struct bio_integrity_payload *bi_integrity; /* data integrity */
> -- 
> 2.22.0.rc1.311.g5d7573a151-goog
> 

Is it actually meaningful to use the blk_integrity feature in combination with
inline encryption?  How might this be tested?  If the features actually conflict
anyway, bi_crypt_context and bi_integrity could share the same union.

- Eric

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH v2 1/8] block: Keyslot Manager for Inline Encryption
  2019-06-05 23:28 ` [RFC PATCH v2 1/8] block: Keyslot Manager for Inline Encryption Satya Tangirala
  2019-06-07 22:28   ` Eric Biggers
@ 2019-06-12 18:26   ` Eric Biggers
  1 sibling, 0 replies; 16+ messages in thread
From: Eric Biggers @ 2019-06-12 18:26 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, Parshuram Raju Thombare, Ladvine D Almeida,
	Barani Muthukumaran, Kuohong Wang

On Wed, Jun 05, 2019 at 04:28:30PM -0700, Satya Tangirala wrote:
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index 592669bcc536..f76d5dff27fe 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -385,6 +385,10 @@ static inline int blkdev_reset_zones_ioctl(struct block_device *bdev,
>  
>  #endif /* CONFIG_BLK_DEV_ZONED */
>  
> +#ifdef CONFIG_BLK_INLINE_ENCRYPTION
> +struct keyslot_manager;
> +#endif
> +

This should be placed with the other forward declarations at the beginning of
the file.  It also doesn't need to be behind an #ifdef.  See e.g. struct
blkcg_gq which is another conditional field in struct request_queue.

> diff --git a/include/linux/keyslot-manager.h b/include/linux/keyslot-manager.h
> new file mode 100644
> index 000000000000..76a9c255cb7e
> --- /dev/null
> +++ b/include/linux/keyslot-manager.h
[...]
> +#ifdef CONFIG_BLK_INLINE_ENCRYPTION
> +struct keyslot_manager;
> +
> +extern struct keyslot_manager *keyslot_manager_create(unsigned int num_slots,
> +				const struct keyslot_mgmt_ll_ops *ksm_ops,
> +				void *ll_priv_data);
> +
> +extern int
> +keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm,
> +				 const u8 *key,
> +				 enum blk_crypt_mode_num crypt_mode,
> +				 unsigned int data_unit_size);
> +
> +extern void keyslot_manager_get_slot(struct keyslot_manager *ksm,
> +				     unsigned int slot);
> +
> +extern void keyslot_manager_put_slot(struct keyslot_manager *ksm,
> +				     unsigned int slot);
> +
> +extern int keyslot_manager_evict_key(struct keyslot_manager *ksm,
> +				     const u8 *key,
> +				     enum blk_crypt_mode_num crypt_mode,
> +				     unsigned int data_unit_size);
> +
> +extern void keyslot_manager_destroy(struct keyslot_manager *ksm);
> +
> +#else /* CONFIG_BLK_INLINE_ENCRYPTION */
> +struct keyslot_manager {};

This is actually a struct definition, not a declaration.  This doesn't make
sense, since the CONFIG_BLK_INLINE_ENCRYPTION case only needs a forward
declaration here.  Both cases should just use a forward declaration.

> +
> +static inline struct keyslot_manager *
> +keyslot_manager_create(unsigned int num_slots,
> +		       const struct keyslot_mgmt_ll_ops *ksm_ops,
> +		       void *ll_priv_data)
> +{
> +	return NULL;
> +}
> +
> +static inline int
> +keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm,
> +				 const u8 *key,
> +				 enum blk_crypt_mode_num crypt_mode,
> +				 unsigned int data_unit_size)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline void keyslot_manager_get_slot(struct keyslot_manager *ksm,
> +					    unsigned int slot) { }
> +
> +static inline int keyslot_manager_put_slot(struct keyslot_manager *ksm,
> +					   unsigned int slot)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline int keyslot_manager_evict_key(struct keyslot_manager *ksm,
> +				     const u8 *key,
> +				     enum blk_crypt_mode_num crypt_mode,
> +				     unsigned int data_unit_size)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline void keyslot_manager_destroy(struct keyslot_manager *ksm)
> +{ }
> +
> +#endif /* CONFIG_BLK_INLINE_ENCRYPTION */

However, it seems we don't actually need these stub functions, since the
keyslot_manager_ functions are only called from .c files that are only compiled
when CONFIG_BLK_INLINE_ENCRYPTION, except for the call to
keyslot_manager_evict_key() in fscrypt_evict_crypt_key().  But it would make
more sense to stub out fscrypt_evict_crypt_key() instead.

So I suggest removing the keyslot_manager_* stubs for now.

- Eric

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH v2 3/8] block: blk-crypto for Inline Encryption
  2019-06-05 23:28 ` [RFC PATCH v2 3/8] block: blk-crypto for Inline Encryption Satya Tangirala
@ 2019-06-12 23:34   ` Eric Biggers
  0 siblings, 0 replies; 16+ messages in thread
From: Eric Biggers @ 2019-06-12 23:34 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, Parshuram Raju Thombare, Ladvine D Almeida,
	Barani Muthukumaran, Kuohong Wang

On Wed, Jun 05, 2019 at 04:28:32PM -0700, Satya Tangirala wrote:
> We introduce blk-crypto, which manages programming keyslots for struct
> bios. With blk-crypto, filesystems only need to call bio_crypt_set_ctx with
> the encryption key, algorithm and data_unit_num; they don't have to worry
> about getting a keyslot for each encryption context, as blk-crypto handles
> that. Blk-crypto also makes it possible for layered devices like device
> mapper to make use of inline encryption hardware.
> 
> Blk-crypto delegates crypto operations to inline encryption hardware when
> available, and also contains a software fallback to the kernel crypto API.
> For more details, refer to Documentation/block/blk-crypto.txt.
> 
> Known issues:
> 1) We're allocating crypto_skcipher in blk_crypto_keyslot_program, which
>    uses GFP_KERNEL to allocate memory, but this function is on the write
>    path for IO - we need to add support for specifying a different flags
>    to the crypto API.
> 
> Signed-off-by: Satya Tangirala <satyat@google.com>
> ---
>  Documentation/block/blk-crypto.txt | 185 ++++++++++
>  block/Kconfig                      |   8 +
>  block/Makefile                     |   2 +
>  block/bio.c                        |   5 +
>  block/blk-core.c                   |  11 +-
>  block/blk-crypto.c                 | 558 +++++++++++++++++++++++++++++
>  include/linux/blk-crypto.h         |  40 +++
>  7 files changed, 808 insertions(+), 1 deletion(-)
>  create mode 100644 Documentation/block/blk-crypto.txt
>  create mode 100644 block/blk-crypto.c
>  create mode 100644 include/linux/blk-crypto.h
> 
> diff --git a/Documentation/block/blk-crypto.txt b/Documentation/block/blk-crypto.txt
> new file mode 100644
> index 000000000000..96a7983a117d
> --- /dev/null
> +++ b/Documentation/block/blk-crypto.txt
> @@ -0,0 +1,185 @@
> +BLK-CRYPTO and KEYSLOT MANAGER
> +===========================

How about renaming this documentation file to inline-encryption.txt and making
sure it covers the inline encryption feature as a whole?  "blk-crypto" is just
part of it.

> diff --git a/block/Makefile b/block/Makefile
> index eee1b4ceecf9..5d38ea437937 100644
> --- a/block/Makefile
> +++ b/block/Makefile
> @@ -35,3 +35,5 @@ obj-$(CONFIG_BLK_DEBUG_FS)	+= blk-mq-debugfs.o
>  obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o
>  obj-$(CONFIG_BLK_SED_OPAL)	+= sed-opal.o
>  obj-$(CONFIG_BLK_PM)		+= blk-pm.o
> +obj-$(CONFIG_BLK_INLINE_ENCRYPTION)	+= blk-crypt-ctx.o blk-crypto.o \
> +					     keyslot-manager.o

Two of these .c files were added by earlier patches, but they're not compiled
until now.  The usual practice is to make the code actually compiled after each
patch, e.g. by introducing the kconfig option first.  Otherwise there can be
build errors that don't show up until suddenly all the code is enabled at once.

> diff --git a/block/blk-crypto.c b/block/blk-crypto.c
> new file mode 100644
> index 000000000000..5adb5251ae7e
> --- /dev/null
> +++ b/block/blk-crypto.c
> @@ -0,0 +1,558 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright 2019 Google LLC
> + */
> +#include <linux/blk-crypto.h>
> +#include <linux/keyslot-manager.h>
> +#include <linux/mempool.h>
> +#include <linux/blk-cgroup.h>
> +#include <crypto/skcipher.h>
> +#include <crypto/algapi.h>
> +
> +struct blk_crypt_mode {
> +	const char *friendly_name;
> +	const char *cipher_str;
> +	size_t keysize;
> +	size_t ivsize;
> +	bool needs_essiv;
> +};

'friendly_name', 'ivsize', and 'needs_essiv' are unused.  So they should be
removed until they're actually needed.

> +
> +static const struct blk_crypt_mode blk_crypt_modes[] = {
> +	[BLK_ENCRYPTION_MODE_AES_256_XTS] = {
> +		.friendly_name = "AES-256-XTS",
> +		.cipher_str = "xts(aes)",
> +		.keysize = 64,
> +		.ivsize = 16,
> +	},
> +	/* TODO: the rest of the algs that fscrypt supports */
> +};

It's arguably a layering violation to mention fscrypt specifically here.  There
will eventually be other users of this too.

> +/* TODO: Do we want to make this user configurable somehow? */
> +#define BLK_CRYPTO_NUM_KEYSLOTS 100

This should be a kernel command line parameter.

> +
> +static unsigned int num_prealloc_bounce_pg = 32;

This should be a kernel command line parameter too.

> +
> +bool bio_crypt_swhandled(struct bio *bio)
> +{
> +	return bio_crypt_has_keyslot(bio) &&
> +	       bio->bi_crypt_context->processing_ksm == blk_crypto_ksm;
> +}

processing_ksm is NULL when there isn't a keyslot, so calling
bio_crypt_has_keyslot() isn't necessary here.

> +
> +/* TODO: handle modes that need essiv */
> +static int blk_crypto_keyslot_program(void *priv, const u8 *key,
> +				      enum blk_crypt_mode_num crypt_mode,
> +				      unsigned int data_unit_size,
> +				      unsigned int slot)
> +{
> +	struct blk_crypto_keyslot *slotp = &blk_crypto_keyslots[slot];
> +	struct crypto_skcipher *tfm = slotp->tfm;
> +	const struct blk_crypt_mode *mode = &blk_crypt_modes[crypt_mode];
> +	size_t keysize = mode->keysize;
> +	int err;
> +
> +	if (crypt_mode != slotp->crypt_mode || !tfm) {
> +		crypto_free_skcipher(slotp->tfm);
> +		slotp->tfm = NULL;
> +		memset(slotp->key, 0, BLK_CRYPTO_MAX_KEY_SIZE);
> +		tfm = crypto_alloc_skcipher(
> +			mode->cipher_str, 0, 0);
> +		if (IS_ERR(tfm))
> +			return PTR_ERR(tfm);
> +
> +		crypto_skcipher_set_flags(tfm,
> +					  CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
> +		slotp->crypt_mode = crypt_mode;
> +		slotp->tfm = tfm;
> +	}
> +
> +
> +	err = crypto_skcipher_setkey(tfm, key, keysize);
> +
> +	if (err) {
> +		crypto_free_skcipher(tfm);
> +		slotp->tfm = NULL;
> +		return err;
> +	}
> +
> +	memcpy(slotp->key, key, keysize);
> +
> +	return 0;
> +}
> +
> +static int blk_crypto_keyslot_evict(void *priv, const u8 *key,
> +				    enum blk_crypt_mode_num crypt_mode,
> +				    unsigned int data_unit_size,
> +				    unsigned int slot)
> +{
> +	crypto_free_skcipher(blk_crypto_keyslots[slot].tfm);
> +	blk_crypto_keyslots[slot].tfm = NULL;
> +	memset(blk_crypto_keyslots[slot].key, 0, BLK_CRYPTO_MAX_KEY_SIZE);
> +
> +	return 0;
> +}

If the call to crypto_skcipher_setkey() fails, then the ->tfm is set to NULL as
if the keyslot were free, but the raw key isn't wiped.  The state should be kept
consistent: the raw key of a free keyslot should always be zeroed.

The easiest way to handle this would be to add a helper function:

static void evict_keyslot(unsigned int slot)
{
	struct blk_crypto_keyslot *slotp = &blk_crypto_keyslots[slot];

	crypto_free_skcipher(slotp->tfm);
	slotp->tfm = NULL;
	memzero_explicit(slotp->key, BLK_CRYPTO_MAX_KEY_SIZE);
}

Then call this from the two places in blk_crypto_keyslot_program(), and from
blk_crypto_keyslot_evict().

(It doesn't really need to be memzero_explicit() instead of memset() here, but
it's good to make the intent of "this is wiping a crypto key" clear.)

> +
> +static int blk_crypto_keyslot_find(void *priv,
> +				   const u8 *key,
> +				   enum blk_crypt_mode_num crypt_mode,
> +				   unsigned int data_unit_size_bytes)
> +{
> +	int slot;
> +	const size_t keysize = blk_crypt_modes[crypt_mode].keysize;
> +
> +	/* TODO: hashmap? */
> +	for (slot = 0; slot < BLK_CRYPTO_NUM_KEYSLOTS; slot++) {
> +		if (blk_crypto_keyslots[slot].crypt_mode == crypt_mode &&
> +		    !crypto_memneq(blk_crypto_keyslots[slot].key, key,
> +				   keysize)) {
> +			return slot;
> +		}

Nit: can drop the braces here and fit the crypto_memneq() parameters on one
line.

> +static bool blk_crypt_mode_supported(void *priv,
> +				     enum blk_crypt_mode_num crypt_mode,
> +				     unsigned int data_unit_size)
> +{
> +	// Of course, blk-crypto supports all blk_crypt_modes.
> +	return true;
> +}

This actually isn't obvious, since there could be modes that are only supported
by particular hardware drivers.  It would be more helpful if the comment was:

	/* All blk_crypt_modes are required to have a software fallback. */

> +static void blk_crypto_put_keyslot(struct bio *bio)
> +{
> +	struct bio_crypt_ctx *crypt_ctx = bio->bi_crypt_context;
> +
> +	keyslot_manager_put_slot(crypt_ctx->processing_ksm, crypt_ctx->keyslot);
> +	bio_crypt_unset_keyslot(bio);
> +}
> +
> +static int blk_crypto_get_keyslot(struct bio *bio,
> +				      struct keyslot_manager *ksm)
> +{
> +	int slot;
> +	enum blk_crypt_mode_num crypt_mode = bio_crypt_mode(bio);
> +
> +	if (!ksm)
> +		return -ENOMEM;
> +
> +	slot = keyslot_manager_get_slot_for_key(ksm,
> +						bio_crypt_raw_key(bio),
> +						crypt_mode, PAGE_SIZE);

Needs to be '1 << crypt_ctx->data_unit_size_bits', not PAGE_SIZE.

> +	if (slot < 0)
> +		return slot;
> +
> +	bio_crypt_set_keyslot(bio, slot, ksm);
> +	return 0;
> +}

Since blk_crypto_{get,put}_keyslot() support any keyslot manager, naming them
blk_crypto is a bit confusing, since it suggests they might only be relevant to
the software fallback (blk_crypto_keyslots).  Maybe they should be renamed to
bio_crypt_{acquire,release}_keyslot() and moved to bio-crypt-ctx.c?

> +static int blk_crypto_encrypt_bio(struct bio **bio_ptr)
> +{
> +	struct bio *src_bio = *bio_ptr;
> +	int slot;
> +	struct skcipher_request *ciph_req = NULL;
> +	DECLARE_CRYPTO_WAIT(wait);
> +	struct bio_vec bv;
> +	struct bvec_iter iter;
> +	int err = 0;
> +	u64 curr_dun;
> +	union {
> +		__le64 dun;
> +		u8 bytes[16];
> +	} iv;
> +	struct scatterlist src, dst;
> +	struct bio *enc_bio;
> +	struct bio_vec *enc_bvec;
> +	int i, j;
> +	unsigned int num_sectors;
> +
> +	if (!blk_crypto_keyslots)
> +		return -ENOMEM;

Why the NULL check for blk_crypto_keyslots?  The kernel already panics if
blk_crypto_init() fails.

> +
> +	/* Split the bio if it's too big for single page bvec */
> +	i = 0;
> +	num_sectors = 0;
> +	bio_for_each_segment(bv, src_bio, iter) {
> +		num_sectors += bv.bv_len >> 9;
> +		if (++i == BIO_MAX_PAGES)
> +			break;
> +	}
> +	if (num_sectors < bio_sectors(src_bio)) {
> +		struct bio *split_bio;
> +
> +		split_bio = bio_split(src_bio, num_sectors, GFP_NOIO, NULL);
> +		if (!split_bio) {
> +			src_bio->bi_status = BLK_STS_RESOURCE;
> +			return -ENOMEM;
> +		}
> +		bio_chain(split_bio, src_bio);
> +		generic_make_request(src_bio);
> +		*bio_ptr = split_bio;
> +	}
> +
> +	src_bio = *bio_ptr;

This line can be moved into the previous 'if' block.

> +
> +	enc_bio = blk_crypto_clone_bio(src_bio);
> +	if (!enc_bio) {
> +		src_bio->bi_status = BLK_STS_RESOURCE;
> +		return -ENOMEM;
> +	}
> +
> +	err = blk_crypto_get_keyslot(src_bio, blk_crypto_ksm);
> +	if (err) {
> +		src_bio->bi_status = BLK_STS_IOERR;
> +		bio_put(enc_bio);
> +		return err;
> +	}
> +	slot = bio_crypt_get_slot(src_bio);
> +
> +	ciph_req = skcipher_request_alloc(blk_crypto_keyslots[slot].tfm,
> +					  GFP_NOIO);
> +	if (!ciph_req) {
> +		src_bio->bi_status = BLK_STS_RESOURCE;
> +		err = -ENOMEM;
> +		bio_put(enc_bio);
> +		goto out_release_keyslot;
> +	}
> +
> +	skcipher_request_set_callback(ciph_req,
> +				      CRYPTO_TFM_REQ_MAY_BACKLOG |
> +				      CRYPTO_TFM_REQ_MAY_SLEEP,
> +				      crypto_req_done, &wait);

This function and blk_crypto_decrypt_bio() are getting long.  To help a tiny
bit, maybe add a helper function blk_crypto_alloc_skcipher_request(bio) and call
it from both places?

> +
> +	curr_dun = bio_crypt_sw_data_unit_num(src_bio);
> +	sg_init_table(&src, 1);
> +	sg_init_table(&dst, 1);
> +	for (i = 0, enc_bvec = enc_bio->bi_io_vec; i < enc_bio->bi_vcnt;
> +	     enc_bvec++, i++) {
> +		struct page *page = enc_bvec->bv_page;
> +		struct page *ciphertext_page =
> +			mempool_alloc(blk_crypto_page_pool, GFP_NOFS);

GFP_NOIO, not GFP_NOFS.

> +
> +		enc_bvec->bv_page = ciphertext_page;
> +
> +		if (!ciphertext_page)
> +			goto no_mem_for_ciph_page;
> +
> +		memset(&iv, 0, sizeof(iv));
> +		iv.dun = cpu_to_le64(curr_dun);
> +
> +		sg_set_page(&src, page, enc_bvec->bv_len, enc_bvec->bv_offset);
> +		sg_set_page(&dst, ciphertext_page, enc_bvec->bv_len,
> +			    enc_bvec->bv_offset);
> +
> +		skcipher_request_set_crypt(ciph_req, &src, &dst,
> +					   enc_bvec->bv_len, iv.bytes);
> +		err = crypto_wait_req(crypto_skcipher_encrypt(ciph_req), &wait);
> +		if (err)
> +			goto no_mem_for_ciph_page;
> +
> +		curr_dun++;
> +		continue;
> +no_mem_for_ciph_page:
> +		err = -ENOMEM;
> +		for (j = i - 1; j >= 0; j--) {
> +			mempool_free(enc_bio->bi_io_vec->bv_page,
> +				     blk_crypto_page_pool);
> +		}

The error path needs to free bi_io_vec[j], not bi_io_vec.

> +/*
> + * TODO: assumption right now is:
> + * each segment in bio has length == the data_unit_size
> + */

This needs to be fixed, or else blk-crypto needs to reject using unsupported
data unit sizes.  But it seems it can be supported pretty easily by just looping
through each data unit in each bio segment.  To get some ideas you could look at
my patches queued in fscrypt.git that handle encrypting/decrypting filesystem
blocks smaller than PAGE_SIZE, e.g.
https://git.kernel.org/pub/scm/fs/fscrypt/fscrypt.git/commit/?id=53bc1d854c64c20d967dab15b111baca02a6d99e

> +static void blk_crypto_decrypt_bio(struct work_struct *w)
> +{
> +	struct work_mem *work_mem =
> +		container_of(w, struct work_mem, crypto_work);
> +	struct bio *bio = work_mem->bio;
> +	int slot = bio_crypt_get_slot(bio);
> +	struct skcipher_request *ciph_req;
> +	DECLARE_CRYPTO_WAIT(wait);
> +	struct bio_vec bv;
> +	struct bvec_iter iter;
> +	u64 curr_dun;
> +	union {
> +		__le64 dun;
> +		u8 bytes[16];
> +	} iv;
> +	struct scatterlist sg;
> +
> +	curr_dun = bio_crypt_sw_data_unit_num(bio);
> +
> +	kmem_cache_free(blk_crypto_work_mem_cache, work_mem);
> +	ciph_req = skcipher_request_alloc(blk_crypto_keyslots[slot].tfm,
> +					  GFP_NOFS);
> +	if (!ciph_req) {
> +		bio->bi_status = BLK_STS_RESOURCE;
> +		goto out;
> +	}
> +
> +	skcipher_request_set_callback(ciph_req,
> +				      CRYPTO_TFM_REQ_MAY_BACKLOG |
> +				      CRYPTO_TFM_REQ_MAY_SLEEP,
> +				      crypto_req_done, &wait);
> +
> +	sg_init_table(&sg, 1);
> +	__bio_for_each_segment(bv, bio, iter,
> +			       bio->bi_crypt_context->crypt_iter) {
> +		struct page *page = bv.bv_page;
> +		int err;
> +
> +		memset(&iv, 0, sizeof(iv));
> +		iv.dun = cpu_to_le64(curr_dun);
> +
> +		sg_set_page(&sg, page, bv.bv_len, bv.bv_offset);
> +		skcipher_request_set_crypt(ciph_req, &sg, &sg,
> +					   bv.bv_len, iv.bytes);
> +		err = crypto_wait_req(crypto_skcipher_decrypt(ciph_req), &wait);
> +		if (err) {
> +			bio->bi_status = BLK_STS_IOERR;
> +			goto out;
> +		}
> +		curr_dun++;
> +	}
> +
> +out:
> +	skcipher_request_free(ciph_req);
> +	blk_crypto_put_keyslot(bio);
> +	bio_endio(bio);
> +}
> +
> +static void blk_crypto_queue_decrypt_bio(struct bio *bio)
> +{
> +	struct work_mem *work_mem =
> +		kmem_cache_zalloc(blk_crypto_work_mem_cache, GFP_ATOMIC);
> +
> +	if (!work_mem) {
> +		bio->bi_status = BLK_STS_RESOURCE;
> +		return bio_endio(bio);
> +	}

The keyslot needs to be released if allocating the work_mem fails.

However, I'm wondering: for software fallback decryption, why is the keyslot
allocated before the bio is submitted, rather than in the workqueue work after
the bio completes?  The actual decryption is already sleepable, so why not just
allocate the keyslot then too?  It would also make it more similar to the
software fallback encryption, which doesn't hold the keyslot during I/O.

> +
> +	INIT_WORK(&work_mem->crypto_work, blk_crypto_decrypt_bio);
> +	work_mem->bio = bio;
> +	queue_work(blk_crypto_wq, &work_mem->crypto_work);
> +}
> +
> +/*
> + * Ensures that:
> + * 1) The bio’s encryption context is programmed into a keyslot in the
> + * keyslot manager (KSM) of the request queue that the bio is being submitted
> + * to (or the software fallback KSM if the request queue doesn’t have a KSM),
> + * and that the processing_ksm in the bi_crypt_context of this bio is set to
> + * this KSM.
> + *
> + * 2) That the bio has a reference to this keyslot in this KSM.
> + */

Make this into a proper kerneldoc comment that has a one-line function overview
and documents the return value?  For example:

/**
 * blk_crypto_submit_bio - handle submitting bio for inline encryption
 *
 * @bio_ptr: pointer to original bio pointer
 *
 * If the bio doesn't have inline encryption enabled or the submitter already
 * specified a keyslot for the target device, do nothing.  Else, a raw key must
 * have been provided, so acquire a device keyslot for it if supported.  Else,
 * use the software crypto fallback.
 * 
 * [Something about the software crypto fallback and how it may update
 * *bio_ptr.]
 *
 * Return: 0 if bio submission should continue; nonzero if bio_endio() was
 *        already called so bio submission should abort.
  */

> +int blk_crypto_submit_bio(struct bio **bio_ptr)
> +{
> +	struct bio *bio = *bio_ptr;
> +	struct request_queue *q;
> +	int err;
> +	enum blk_crypt_mode_num crypt_mode;

The 'crypt_mode' variable is never used.

> +	struct bio_crypt_ctx *crypt_ctx;
> +
> +	if (!bio_has_data(bio))
> +		return 0;
> +
> +	if (!bio_is_encrypted(bio) || bio_crypt_swhandled(bio))
> +		return 0;

Why is bio_crypt_swhandled() checked here?

Also consider reordering these checks to:

	if (!bio_is_encrypted(bio) || !bio_has_data(bio))
		return 0;

	/* comment */
	if (bio_crypt_swhandled(bio))
		return 0;

!bio_is_encrypted() is the most common case, so for efficiency should be checked
first.  !bio_is_encrypted() and !bio_has_data() are also easy to understand and
kind of go together, while bio_crypt_swhandled() seems different; it's harder to
understand and might need a comment.

> +
> +	crypt_ctx = bio->bi_crypt_context;
> +	q = bio->bi_disk->queue;
> +	crypt_mode = bio_crypt_mode(bio);
> +
> +	if (bio_crypt_has_keyslot(bio)) {
> +		/* Key already programmed into device? */
> +		if (q->ksm == crypt_ctx->processing_ksm)
> +			return 0;
> +
> +		/* Nope, release the existing keyslot. */
> +		blk_crypto_put_keyslot(bio);
> +	}
> +
> +	/* Get device keyslot if supported */
> +	if (q->ksm) {
> +		err = blk_crypto_get_keyslot(bio, q->ksm);
> +		if (!err)
> +			return 0;

Perhaps there should be a warning message here, since it may be unexpected for
the software fallback encryption to be used, and it may perform poorly.  E.g.

	pr_warn_once("blk-crypto: failed to acquire keyslot for %s (err=%d).  Falling back to software crypto.\n",
		      bio->bi_disk->disk_name, err);

> +	}
> +
> +	/* Fallback to software crypto */
> +	if (bio_data_dir(bio) == WRITE) {
> +		/* Encrypt the data now */
> +		err = blk_crypto_encrypt_bio(bio_ptr);
> +		if (err)
> +			goto out_encrypt_err;
> +	} else {
> +		err = blk_crypto_get_keyslot(bio, blk_crypto_ksm);
> +		if (err)
> +			goto out_err;
> +	}
> +	return 0;
> +out_err:
> +	bio->bi_status = BLK_STS_IOERR;
> +out_encrypt_err:
> +	bio_endio(bio);
> +	return err;
> +}
> +
> +/*
> + * If the bio is not en/decrypted in software, this function releases the
> + * reference to the keyslot that blk_crypto_submit_bio got.
> + * If blk_crypto_submit_bio decided to fallback to software crypto for this
> + * bio, then if the bio is doing a write, we free the allocated bounce pages,
> + * and if the bio is doing a read, we queue the bio for decryption into a
> + * workqueue and return -EAGAIN. After the bio has been decrypted, we release
> + * the keyslot before we call bio_endio(bio).
> + */
> +bool blk_crypto_endio(struct bio *bio)
> +{
> +	if (!bio_crypt_has_keyslot(bio))
> +		return true;
> +
> +	if (!bio_crypt_swhandled(bio)) {
> +		blk_crypto_put_keyslot(bio);
> +		return true;
> +	}
> +
> +	/* bio_data_dir(bio) == READ. So decrypt bio */
> +	blk_crypto_queue_decrypt_bio(bio);
> +	return false;
> +}
> +
> +int __init blk_crypto_init(void)
> +{
> +	blk_crypto_ksm = keyslot_manager_create(BLK_CRYPTO_NUM_KEYSLOTS,
> +						&blk_crypto_ksm_ll_ops,
> +						NULL);
> +	if (!blk_crypto_ksm)
> +		goto out_ksm;
> +
> +	blk_crypto_wq = alloc_workqueue("blk_crypto_wq",
> +					WQ_UNBOUND | WQ_HIGHPRI,
> +					num_online_cpus());
> +	if (!blk_crypto_wq)
> +		goto out_wq;

WQ_MEM_RECLAIM might be needed here.

- Eric

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH v2 5/8] scsi: ufs: UFS crypto API
  2019-06-05 23:28 ` [RFC PATCH v2 5/8] scsi: ufs: UFS crypto API Satya Tangirala
@ 2019-06-13 17:11   ` Eric Biggers
  0 siblings, 0 replies; 16+ messages in thread
From: Eric Biggers @ 2019-06-13 17:11 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, Parshuram Raju Thombare, Ladvine D Almeida,
	Barani Muthukumaran, Kuohong Wang

Hi Satya,

On Wed, Jun 05, 2019 at 04:28:34PM -0700, Satya Tangirala wrote:
> Introduce functions to manipulate UFS inline encryption hardware
> in line with the JEDEC UFSHCI v2.1 specification and to work with the
> block keyslot manager.
> 
> Signed-off-by: Satya Tangirala <satyat@google.com>
> ---
>  drivers/scsi/ufs/Kconfig         |  10 +
>  drivers/scsi/ufs/Makefile        |   1 +
>  drivers/scsi/ufs/ufshcd-crypto.c | 438 +++++++++++++++++++++++++++++++
>  drivers/scsi/ufs/ufshcd-crypto.h |  69 +++++
>  4 files changed, 518 insertions(+)
>  create mode 100644 drivers/scsi/ufs/ufshcd-crypto.c
>  create mode 100644 drivers/scsi/ufs/ufshcd-crypto.h
> 

There is a build error after this patch because it adds code that uses the
crypto fields in struct ufs_hba, but those aren't added until the next patch.

It needs to be possible to compile a working kernel after each patch.
Otherwise it breaks bisection.

So, perhaps add the fields in this patch instead.

> +++ b/drivers/scsi/ufs/ufshcd-crypto.c
> @@ -0,0 +1,438 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright 2019 Google LLC
> + */
> +
> +#include <crypto/algapi.h>
> +
> +#include "ufshcd.h"
> +#include "ufshcd-crypto.h"
> +
> +bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba)
> +{
> +	return hba->crypto_capabilities.reg_val != 0;
> +}
> +
> +bool ufshcd_is_crypto_enabled(struct ufs_hba *hba)
> +{
> +	return hba->caps & UFSHCD_CAP_CRYPTO;
> +}
> +
> +static bool ufshcd_cap_idx_valid(struct ufs_hba *hba, unsigned int cap_idx)
> +{
> +	return cap_idx < hba->crypto_capabilities.num_crypto_cap;
> +}
> +
> +#define NUM_KEYSLOTS(hba) (hba->crypto_capabilities.config_count + 1)
> +
> +bool ufshcd_keyslot_valid(struct ufs_hba *hba, unsigned int slot)
> +{
> +	/*
> +	 * The actual number of configurations supported is (CFGC+1), so slot
> +	 * numbers range from 0 to config_count inclusive.
> +	 */
> +	return slot < NUM_KEYSLOTS(hba);
> +}

Since ufshcd_hba_is_crypto_supported(), ufshcd_is_crypto_enabled(), and
ufshcd_keyslot_valid() are one-liners, don't access any private structures, and
are used outside this file including on the command submission path, how about
making them inline functions in ufshcd-crypto.h?

> +
> +static int ufshcd_crypto_alg_find(void *hba_p,
> +			   enum blk_crypt_mode_num crypt_mode,
> +			   unsigned int data_unit_size)
> +{

Now that the concept of "crypto alg IDs" is gone, rename this to
ufshcd_crypto_cap_find() and rename the crypto_alg_id variables to cap_idx.

This would make it consistent with using cap_idx elsewhere in the code and avoid
confusion with ufs_crypto_cap_entry::algorithm_id.

> +
> +static int ufshcd_crypto_keyslot_program(void *hba_p, const u8 *key,
> +					 enum blk_crypt_mode_num crypt_mode,
> +					 unsigned int data_unit_size,
> +					 unsigned int slot)
> +{
> +	struct ufs_hba *hba = hba_p;
> +	int err = 0;
> +	u8 data_unit_mask;
> +	union ufs_crypto_cfg_entry cfg;
> +	union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs;
> +	int crypto_alg_id;
> +
> +	crypto_alg_id = ufshcd_crypto_alg_find(hba_p, crypt_mode,
> +					       data_unit_size);
> +
> +	if (!ufshcd_is_crypto_enabled(hba) ||
> +	    !ufshcd_keyslot_valid(hba, slot) ||
> +	    !ufshcd_cap_idx_valid(hba, crypto_alg_id))
> +		return -EINVAL;
> +
> +	data_unit_mask = get_data_unit_size_mask(data_unit_size);
> +
> +	if (!(data_unit_mask &
> +	      hba->crypto_cap_array[crypto_alg_id].sdus_mask))
> +		return -EINVAL;

Nit: the 'if' expression with data_unit_mask fits on one line.

> +static int ufshcd_crypto_keyslot_find(void *hba_p,
> +				      const u8 *key,
> +				      enum blk_crypt_mode_num crypt_mode,
> +				      unsigned int data_unit_size)
> +{
> +	struct ufs_hba *hba = hba_p;
> +	int err = 0;
> +	int slot;
> +	u8 data_unit_mask;
> +	union ufs_crypto_cfg_entry cfg;
> +	union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs;
> +	int crypto_alg_id;
> +
> +	crypto_alg_id = ufshcd_crypto_alg_find(hba_p, crypt_mode,
> +					       data_unit_size);
> +
> +	if (!ufshcd_is_crypto_enabled(hba) ||
> +	    !ufshcd_cap_idx_valid(hba, crypto_alg_id))
> +		return -EINVAL;
> +
> +	data_unit_mask = get_data_unit_size_mask(data_unit_size);
> +
> +	if (!(data_unit_mask &
> +	      hba->crypto_cap_array[crypto_alg_id].sdus_mask))
> +		return -EINVAL;

Same here.

> +	for (slot = 0; slot < NUM_KEYSLOTS(hba); slot++) {
> +		if ((cfg_arr[slot].config_enable &
> +		     UFS_CRYPTO_CONFIGURATION_ENABLE) &&
> +		    data_unit_mask == cfg_arr[slot].data_unit_size &&
> +		    crypto_alg_id == cfg_arr[slot].crypto_cap_idx &&
> +		    crypto_memneq(&cfg.crypto_key, cfg_arr[slot].crypto_key,
> +				  UFS_CRYPTO_KEY_MAX_SIZE) == 0) {
> +			memzero_explicit(&cfg, sizeof(cfg));
> +			return slot;
> +		}
> +	}

Nit: as I've mentioned before, I think !crypto_memneq() is easier to read than
'crypto_memneq() == 0'.

> +	hba->crypto_cap_array =
> +		devm_kcalloc(hba->dev,
> +			     hba->crypto_capabilities.num_crypto_cap,
> +			     sizeof(hba->crypto_cap_array[0]),
> +			     GFP_KERNEL);
> +	if (!hba->crypto_cap_array) {
> +		err = -ENOMEM;
> +		goto out;
> +	}
> +
> +	hba->crypto_cfgs =
> +		devm_kcalloc(hba->dev,
> +			     hba->crypto_capabilities.config_count + 1,
> +			     sizeof(union ufs_crypto_cfg_entry),
> +			     GFP_KERNEL);
> +	if (!hba->crypto_cfgs) {
> +		err = -ENOMEM;
> +		goto out_cfg_mem;
> +	}

Nit: use 'sizeof(hba->crypto_cfgs[0])' rather than 'sizeof(union
ufs_crypto_cfg_entry)', for consistency with the other array allocation.

Thanks,

- Eric

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH v2 6/8] scsi: ufs: Add inline encryption support to UFS
  2019-06-05 23:28 ` [RFC PATCH v2 6/8] scsi: ufs: Add inline encryption support to UFS Satya Tangirala
@ 2019-06-13 17:22   ` Eric Biggers
  0 siblings, 0 replies; 16+ messages in thread
From: Eric Biggers @ 2019-06-13 17:22 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, Parshuram Raju Thombare, Ladvine D Almeida,
	Barani Muthukumaran, Kuohong Wang

On Wed, Jun 05, 2019 at 04:28:35PM -0700, Satya Tangirala wrote:
> +static inline int ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba,
> +					     struct scsi_cmnd *cmd,
> +					     struct ufshcd_lrb *lrbp)
> +{
> +	int key_slot;
> +
> +	if (!bio_crypt_should_process(cmd->request->bio,
> +					cmd->request->q)) {
> +		lrbp->crypto_enable = false;
> +		return 0;
> +	}

Nit: this 'if' expression fits on one line.

>  static int ufshcd_slave_configure(struct scsi_device *sdev)
>  {
>  	struct request_queue *q = sdev->request_queue;
> +	struct ufs_hba *hba = shost_priv(sdev->host);
>  
>  	blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1);
>  	blk_queue_max_segment_size(q, PRDT_DATA_BYTE_COUNT_MAX);
>  
> +	ufshcd_crypto_setup_rq_keyslot_manager(hba, q);
> +
>  	return 0;
>  }
>  
> @@ -4598,6 +4660,7 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
>  static void ufshcd_slave_destroy(struct scsi_device *sdev)
>  {
>  	struct ufs_hba *hba;
> +	struct request_queue *q = sdev->request_queue;
>  
>  	hba = shost_priv(sdev->host);
>  	/* Drop the reference as it won't be needed anymore */
> @@ -4608,6 +4671,8 @@ static void ufshcd_slave_destroy(struct scsi_device *sdev)
>  		hba->sdev_ufs_device = NULL;
>  		spin_unlock_irqrestore(hba->host->host_lock, flags);
>  	}
> +
> +	ufshcd_crypto_destroy_rq_keyslot_manager(q);
>  }

Each scsi_device is still getting its own keyslot manager.  As discussed before,
this is wrong because the keyslots are per-host controller, not per-device.

So the keyslot manager needs to be a property of the ufs_hba instead, and each
device's request_queue needs to reference that same keyslot manager.

> diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
> index d3b6a6b57a37..283014e0924f 100644
> --- a/drivers/scsi/ufs/ufshcd.h
> +++ b/drivers/scsi/ufs/ufshcd.h
> @@ -167,6 +167,9 @@ struct ufs_pm_lvl_states {
>   * @intr_cmd: Interrupt command (doesn't participate in interrupt aggregation)
>   * @issue_time_stamp: time stamp for debug purposes
>   * @compl_time_stamp: time stamp for statistics
> + * @crypto_enable: whether or not the request needs inline crypto operations
> + * @crypto_key_slot: the key slot to use for inline crypto
> + * @data_unit_num: the data unit number for the first block for inline crypto
>   * @req_abort_skip: skip request abort task flag
>   */
>  struct ufshcd_lrb {
> @@ -191,6 +194,9 @@ struct ufshcd_lrb {
>  	bool intr_cmd;
>  	ktime_t issue_time_stamp;
>  	ktime_t compl_time_stamp;
> +	bool crypto_enable;
> +	u8 crypto_key_slot;
> +	u64 data_unit_num;

Maybe these fields should be conditional on CONFIG_SCSI_UFS_CRYPTO too?

>  
>  	bool req_abort_skip;
>  };
> @@ -501,6 +507,10 @@ struct ufs_stats {
>   * @is_urgent_bkops_lvl_checked: keeps track if the urgent bkops level for
>   *  device is known or not.
>   * @scsi_block_reqs_cnt: reference counting for scsi block requests
> + * @crypto_capabilities: Content of crypto capabilities register (0x100)
> + * @crypto_cap_array: Array of crypto capabilities
> + * @crypto_cfg_register: Start of the crypto cfg array
> + * @crypto_cfgs: Array of crypto configurations (i.e. config for each slot)
>   */
>  struct ufs_hba {
>  	void __iomem *mmio_base;
> @@ -711,6 +721,14 @@ struct ufs_hba {
>  
>  	struct device		bsg_dev;
>  	struct request_queue	*bsg_queue;
> +
> +#ifdef CONFIG_SCSI_UFS_CRYPTO
> +	/* crypto */
> +	union ufs_crypto_capabilities crypto_capabilities;
> +	union ufs_crypto_cap_entry *crypto_cap_array;
> +	u32 crypto_cfg_register;
> +	union ufs_crypto_cfg_entry *crypto_cfgs;
> +#endif /* CONFIG_SCSI_UFS_CRYPTO */
>  };
>  
>  /* Returns true if clocks can be gated. Otherwise false */
> -- 
> 2.22.0.rc1.311.g5d7573a151-goog
> 

- Eric

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH v2 7/8] fscrypt: wire up fscrypt to use blk-crypto
  2019-06-05 23:28 ` [RFC PATCH v2 7/8] fscrypt: wire up fscrypt to use blk-crypto Satya Tangirala
@ 2019-06-13 18:55   ` Eric Biggers
  0 siblings, 0 replies; 16+ messages in thread
From: Eric Biggers @ 2019-06-13 18:55 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, Parshuram Raju Thombare, Ladvine D Almeida,
	Barani Muthukumaran, Kuohong Wang

Hi Satya,

On Wed, Jun 05, 2019 at 04:28:36PM -0700, Satya Tangirala wrote:
> Introduce fscrypt_set_bio_crypt_ctx for filesystems to call to set up
> encryption contexts in bios, and fscrypt_evict_crypt_key to evict
> the encryption context associated with an inode.
> 
> Inline encryption is controlled by a policy flag in the fscrypt_info
> in the inode, and filesystems may check if an inode should use inline
> encryption by calling fscrypt_inode_is_hw_encrypted. Files can be marked
> as inline encrypted from userspace by appropriately modifying the flags
> (OR-ing FS_POLICY_FLAGS_HW_ENCRYPTION to it) in the fscrypt_policy
> passed to fscrypt_ioctl_set_policy.
> 
> To test inline encryption with the fscrypt dummy context, add
> ctx.flags |= FS_POLICY_FLAGS_HW_ENCRYPTION
> when setting up the dummy context in fs/crypto/keyinfo.c.
> 
> Note that blk-crypto will fall back to software en/decryption in the
> absence of inline crypto hardware, so setting up the ctx.flags in the
> dummy context without inline crypto hardware serves as a test for
> the software fallback in blk-crypto.
> 
> Signed-off-by: Satya Tangirala <satyat@google.com>

I think a fscrypt_policy flag is the right approach, but the "HW_ENCRYPTION"
name is really confusing and wrong.

What it really enables is a cryptosystem and on-disk format change where, for
the purpose of working better with inline encryption, file contents are
encrypted with the master key directly (or for v2 encryption policies it will be
a per-mode derived key as it really should be, once we can actually get the v2
encryption policy support reviewed and merged), and the inode numbers are added
to the IVs.  As we know, when ext4 support is added, this will also preclude the
filesystem from being resized.

It's not necessarily "hardware encryption".  You're implementing it using the
block layer encryption, but that can fallback to the crypto API in blk-crypto.

Moreover fscrypt already supports hardware encryption via the crypto API, just
not *inline* hardware encryption.

So calling it "hardware encryption" is wrong.

I think a much better name would be something like
FS_POLICY_FLAG_DIRECT_CONTENTS or FS_POLICY_FLAG_INLINECRYPT_OPTIMIZED.

Similarly for everywhere else in this patch that references "hardware
encryption" -- usually it should be "inline encryption".

> diff --git a/block/blk-crypto.c b/block/blk-crypto.c
> index 5adb5251ae7e..7e98acd2b963 100644
> --- a/block/blk-crypto.c
> +++ b/block/blk-crypto.c
> @@ -82,7 +82,6 @@ static int blk_crypto_keyslot_program(void *priv, const u8 *key,
>  		slotp->tfm = tfm;
>  	}
>  
> -
>  	err = crypto_skcipher_setkey(tfm, key, keysize);
>  
>  	if (err) {

This should be folded into an earlier patch.

> diff --git a/fs/crypto/Kconfig b/fs/crypto/Kconfig
> index 24ed99e2eca0..aa5b2bc6c8dd 100644
> --- a/fs/crypto/Kconfig
> +++ b/fs/crypto/Kconfig
> @@ -15,3 +15,10 @@ config FS_ENCRYPTION
>  	  efficient since it avoids caching the encrypted and
>  	  decrypted pages in the page cache.  Currently Ext4,
>  	  F2FS and UBIFS make use of this feature.
> +
> +config FS_ENCRYPTION_HW_CRYPT
> +	tristate "Enable fscrypt to use inline crypto"
> +	default n
> +	depends on FS_ENCRYPTION && BLK_INLINE_ENCRYPTION
> +	help
> +	  Enables fscrypt to use inline crypto hardware if available.

"Enable fscrypt to use inline crypto" isn't a loadable module, so it needs to be
a bool, not a tristate.

That also means use '#ifdef CONFIG_...', not '#if IS_ENABLED(CONFIG...)'.

Also no need for 'default n', since 'n' is already the default.

> diff --git a/fs/crypto/bio.c b/fs/crypto/bio.c
> index b46021ebde85..b06b1a2be99b 100644
> --- a/fs/crypto/bio.c
> +++ b/fs/crypto/bio.c
> @@ -24,17 +24,24 @@
>  #include <linux/module.h>
>  #include <linux/bio.h>
>  #include <linux/namei.h>
> +#include <linux/keyslot-manager.h>
> +#include <linux/blkdev.h>
> +#include <crypto/algapi.h>
>  #include "fscrypt_private.h"
>  
> -static void __fscrypt_decrypt_bio(struct bio *bio, bool done)
> +static void __fscrypt_decrypt_bio(struct bio *bio, bool done, bool decrypt)
>  {
>  	struct bio_vec *bv;
>  	struct bvec_iter_all iter_all;
>  
>  	bio_for_each_segment_all(bv, bio, iter_all) {
>  		struct page *page = bv->bv_page;
> -		int ret = fscrypt_decrypt_page(page->mapping->host, page,
> -				PAGE_SIZE, 0, page->index);
> +		int ret = 0;
> +
> +		if (decrypt) {
> +			ret = fscrypt_decrypt_page(page->mapping->host, page,
> +						   PAGE_SIZE, 0, page->index);
> +		}
>  
>  		if (ret)
>  			SetPageError(page);
> @@ -47,7 +54,7 @@ static void __fscrypt_decrypt_bio(struct bio *bio, bool done)
>  
>  void fscrypt_decrypt_bio(struct bio *bio)
>  {
> -	__fscrypt_decrypt_bio(bio, false);
> +	__fscrypt_decrypt_bio(bio, false, true);
>  }
>  EXPORT_SYMBOL(fscrypt_decrypt_bio);
>  
> @@ -57,16 +64,27 @@ static void completion_pages(struct work_struct *work)
>  		container_of(work, struct fscrypt_ctx, r.work);
>  	struct bio *bio = ctx->r.bio;
>  
> -	__fscrypt_decrypt_bio(bio, true);
> +	__fscrypt_decrypt_bio(bio, true, true);
> +	fscrypt_release_ctx(ctx);
> +	bio_put(bio);
> +}
> +
> +static void decrypt_bio_hwcrypt(struct fscrypt_ctx *ctx, struct bio *bio)
> +{
> +	__fscrypt_decrypt_bio(bio, true, false);
>  	fscrypt_release_ctx(ctx);
>  	bio_put(bio);
>  }
>  
>  void fscrypt_enqueue_decrypt_bio(struct fscrypt_ctx *ctx, struct bio *bio)
>  {
> -	INIT_WORK(&ctx->r.work, completion_pages);
> -	ctx->r.bio = bio;
> -	fscrypt_enqueue_decrypt_work(&ctx->r.work);
> +	if (bio_is_encrypted(bio)) {
> +		decrypt_bio_hwcrypt(ctx, bio);
> +	} else {
> +		INIT_WORK(&ctx->r.work, completion_pages);
> +		ctx->r.bio = bio;
> +		fscrypt_enqueue_decrypt_work(&ctx->r.work);
> +	}
>  }
>  EXPORT_SYMBOL(fscrypt_enqueue_decrypt_bio);

I don't think we should be repurposing the normal fscrypt decryption path like
this.  In the case of inline decryption, the decryption *already happened*, so
there's no need for the filesystem to call into fs/crypto/ after the I/O
completes.  It's also inefficient since there is a fscrypt_ctx being allocated
for every bio, but it's useless for inline decryption since the only purpose of
fscrypt_ctx is to hold the bio decryption workqueue item.

Instead I think we should just make the filesystems' ->readpages() correctly
check whether post-read decryption is needed.  I.e. instead of checking...

	IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode)

add a helper function...

	fscrypt_needs_fs_layer_crypto()

(there may be a better name for it)

that returns true if IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) &&
FS_POLICY_FLAG_INLINECRYPT_OPTIMIZED (or whatever we decide to call it) is unset
in the inode's fscrypt policy.

Likewise for encryption: only call fscrypt_encrypt_page() when
fscrypt_needs_fs_layer_crypto().

> +int fscrypt_set_bio_crypt_ctx(const struct inode *inode,
> +				 struct bio *bio, u64 data_unit_num)
> +{
> +	struct fscrypt_info *ci = inode->i_crypt_info;
> +
> +	/* If inode is not hw encrypted, nothing to do. */
> +	if (!fscrypt_inode_is_hw_encrypted(inode))
> +		return 0;
> +
> +	return bio_crypt_set_ctx(bio, ci->ci_master_key->mk_raw,
> +			get_blk_crypto_alg_for_fscryptalg(ci->ci_data_mode),
> +			data_unit_num,
> +			PAGE_SHIFT);
> +}
> +EXPORT_SYMBOL(fscrypt_set_bio_crypt_ctx);

To be ready for ext4 encryption with block_size < PAGE_SIZE this needs to pass
inode->i_blkbits for the dun_bits, not inode->i_blkbits.

> diff --git a/fs/crypto/keyinfo.c b/fs/crypto/keyinfo.c
> index dcd91a3fbe49..c00d986799d5 100644
> --- a/fs/crypto/keyinfo.c
> +++ b/fs/crypto/keyinfo.c
> @@ -25,6 +25,21 @@ static struct crypto_shash *essiv_hash_tfm;
>  static DEFINE_HASHTABLE(fscrypt_master_keys, 6); /* 6 bits = 64 buckets */
>  static DEFINE_SPINLOCK(fscrypt_master_keys_lock);
>  
> +#if IS_ENABLED(CONFIG_FS_ENCRYPTION_HW_CRYPT)
> +static inline bool __flags_hw_encrypted(u8 flags,
> +					const struct inode *inode)
> +{
> +	return inode && (flags & FS_POLICY_FLAGS_HW_ENCRYPTION) &&
> +	       S_ISREG(inode->i_mode);
> +}
> +#else
> +static inline bool __flags_hw_encrypted(u8 flags,
> +					const struct inode *inode)
> +{
> +	return false;
> +}
> +#endif /* CONFIG_FS_ENCRYPTION_HW_CRYPT */
> +
>  /*
>   * Key derivation function.  This generates the derived key by encrypting the
>   * master key with AES-128-ECB using the inode's nonce as the AES key.
> @@ -220,6 +235,9 @@ static int find_and_derive_key(const struct inode *inode,
>  			memcpy(derived_key, payload->raw, mode->keysize);
>  			err = 0;
>  		}
> +	} else if (__flags_hw_encrypted(ctx->flags, inode)) {
> +		memcpy(derived_key, payload->raw, mode->keysize);
> +		err = 0;
>  	} else {
>  		err = derive_key_aes(payload->raw, ctx, derived_key,
>  				     mode->keysize);
> @@ -269,16 +287,6 @@ allocate_skcipher_for_mode(struct fscrypt_mode *mode, const u8 *raw_key,
>  	return ERR_PTR(err);
>  }

As I mentioned, to follow crypto best practices this really should use a
per-mode key and not the master key directly...  We can do that pretty easily
after the v2 encryption policy support is merged.

> -static void put_master_key(struct fscrypt_master_key *mk)
> +static void put_master_key(struct fscrypt_master_key *mk,
> +			   struct inode *inode)
>  {
>  	if (!refcount_dec_and_lock(&mk->mk_refcount, &fscrypt_master_keys_lock))
>  		return;
>  	hash_del(&mk->mk_node);
>  	spin_unlock(&fscrypt_master_keys_lock);
>  
> +	fscrypt_evict_crypt_key(inode);
>  	free_master_key(mk);
>  }
>  
> @@ -360,11 +370,13 @@ fscrypt_get_master_key(const struct fscrypt_info *ci, struct fscrypt_mode *mode,
>  		return ERR_PTR(-ENOMEM);
>  	refcount_set(&mk->mk_refcount, 1);
>  	mk->mk_mode = mode;
> -	mk->mk_ctfm = allocate_skcipher_for_mode(mode, raw_key, inode);
> -	if (IS_ERR(mk->mk_ctfm)) {
> -		err = PTR_ERR(mk->mk_ctfm);
> -		mk->mk_ctfm = NULL;
> -		goto err_free_mk;
> +	if (!__flags_hw_encrypted(ci->ci_flags, inode)) {
> +		mk->mk_ctfm = allocate_skcipher_for_mode(mode, raw_key, inode);
> +		if (IS_ERR(mk->mk_ctfm)) {
> +			err = PTR_ERR(mk->mk_ctfm);
> +			mk->mk_ctfm = NULL;
> +			goto err_free_mk;
> +		}
>  	}
>  	memcpy(mk->mk_descriptor, ci->ci_master_key_descriptor,
>  	       FS_KEY_DESCRIPTOR_SIZE);
> @@ -456,7 +468,8 @@ static int setup_crypto_transform(struct fscrypt_info *ci,
>  	struct crypto_skcipher *ctfm;
>  	int err;
>  
> -	if (ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY) {
> +	if ((ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY) ||
> +	    __flags_hw_encrypted(ci->ci_flags, inode)) {
>  		mk = fscrypt_get_master_key(ci, mode, raw_key, inode);
>  		if (IS_ERR(mk))
>  			return PTR_ERR(mk);
> @@ -486,13 +499,13 @@ static int setup_crypto_transform(struct fscrypt_info *ci,
>  	return 0;
>  }
>  
> -static void put_crypt_info(struct fscrypt_info *ci)
> +static void put_crypt_info(struct fscrypt_info *ci, struct inode *inode)
>  {
>  	if (!ci)
>  		return;
>  
>  	if (ci->ci_master_key) {
> -		put_master_key(ci->ci_master_key);
> +		put_master_key(ci->ci_master_key, inode);
>  	} else {
>  		crypto_free_skcipher(ci->ci_ctfm);
>  		crypto_free_cipher(ci->ci_essiv_tfm);

Using the struct fscrypt_master_key in this way is wrong because they are
identified by (ci_mode, ci_descriptor, and ci_raw_key).  That means:

- The same fscrypt_master_key might be used by both inodes with and without the
  FS_POLICY_FLAGS_HW_ENCRYPTION (or whatever it's renamed to) flag set.  This
  will cause a NULL dereference on ->ci_ctfm if the fscrypt_master_key was
  initially created for a policy with FS_POLICY_FLAGS_HW_ENCRYPTION, and then
  later used by a policy without FS_POLICY_FLAGS_HW_ENCRYPTION.

- The same fscrypt_master_key can be used by inodes on multiple filesystems.
  This patch only makes the key be evicted from the keyslots on the last device
  to be used.

We can fix this by extending the identifier for fscrypt_master_key to (ci_mode,
ci_descriptor, ci_raw_key, super_block, ci_ctfm != NULL).  So you'd get separate
fscrypt_master_key's for different filesystems, and for policies with and
without FS_POLICY_FLAGS_HW_ENCRYPTION.

Of course, you are screwed if you use the same master key for inline encryption
on multiple filesystems anyway, since IVs will be reused.  What we really should
be doing is use HKDF to derive the inline encryption key from (master_key,
contents_encryption_mode, filesystem_uuid).  Which again, depends on v2 policy
support, which hopefully I can convince people should be merged so we don't have
to keep piling on these cryptographically questionable hacks :-)

> --- a/include/uapi/linux/fs.h
> +++ b/include/uapi/linux/fs.h
> @@ -224,7 +224,17 @@ struct fsxattr {
>  #define FS_POLICY_FLAGS_PAD_32		0x03
>  #define FS_POLICY_FLAGS_PAD_MASK	0x03
>  #define FS_POLICY_FLAG_DIRECT_KEY	0x04	/* use master key directly */
> -#define FS_POLICY_FLAGS_VALID		0x07
> +#define FS_POLICY_FLAGS_VALID_BASE	0x07
> +
> +#if IS_ENABLED(CONFIG_FS_ENCRYPTION_HW_CRYPT)
> +#define FS_POLICY_FLAGS_HW_ENCRYPTION	0x08
> +#else
> +#define FS_POLICY_FLAGS_HW_ENCRYPTION	0x00
> +#endif
> +
> +
> +#define FS_POLICY_FLAGS_VALID (FS_POLICY_FLAGS_VALID_BASE | \
> +			       FS_POLICY_FLAGS_HW_ENCRYPTION)
>  
>  /* Encryption algorithms */
>  #define FS_ENCRYPTION_MODE_INVALID		0
> -- 

Checking the kernel config is meaningless in UAPI headers.  Everyone gets the
same <linux/fs.h> header, and they can build programs with it and run them on
any arbitrary kernel.  So flag needs to be unconditional.

Thanks,

- Eric

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2019-06-13 18:56 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-06-05 23:28 [RFC PATCH v2 0/8] Inline Encryption Support Satya Tangirala
2019-06-05 23:28 ` [RFC PATCH v2 1/8] block: Keyslot Manager for Inline Encryption Satya Tangirala
2019-06-07 22:28   ` Eric Biggers
2019-06-12 18:26   ` Eric Biggers
2019-06-05 23:28 ` [RFC PATCH v2 2/8] block: Add encryption context to struct bio Satya Tangirala
2019-06-12 18:10   ` Eric Biggers
2019-06-05 23:28 ` [RFC PATCH v2 3/8] block: blk-crypto for Inline Encryption Satya Tangirala
2019-06-12 23:34   ` Eric Biggers
2019-06-05 23:28 ` [RFC PATCH v2 4/8] scsi: ufs: UFS driver v2.1 spec crypto additions Satya Tangirala
2019-06-05 23:28 ` [RFC PATCH v2 5/8] scsi: ufs: UFS crypto API Satya Tangirala
2019-06-13 17:11   ` Eric Biggers
2019-06-05 23:28 ` [RFC PATCH v2 6/8] scsi: ufs: Add inline encryption support to UFS Satya Tangirala
2019-06-13 17:22   ` Eric Biggers
2019-06-05 23:28 ` [RFC PATCH v2 7/8] fscrypt: wire up fscrypt to use blk-crypto Satya Tangirala
2019-06-13 18:55   ` Eric Biggers
2019-06-05 23:28 ` [RFC PATCH v2 8/8] f2fs: Wire up f2fs to use inline encryption via fscrypt Satya Tangirala

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).