linux-fscrypt.vger.kernel.org archive mirror
* [PATCH v7 0/9] Inline Encryption Support
@ 2020-02-21 11:50 Satya Tangirala
  2020-02-21 11:50 ` [PATCH v7 1/9] block: Keyslot Manager for Inline Encryption Satya Tangirala
                   ` (9 more replies)
  0 siblings, 10 replies; 40+ messages in thread
From: Satya Tangirala @ 2020-02-21 11:50 UTC (permalink / raw)
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4
  Cc: Barani Muthukumaran, Kuohong Wang, Kim Boojin, Satya Tangirala

This patch series adds support for Inline Encryption to the block layer,
UFS, fscrypt, f2fs and ext4.

Note that the patches in this series for the block layer (i.e. patches 1, 2
and 3) can be applied independently of the subsequent patches in this
series.

Inline Encryption hardware allows software to specify an encryption context
(an encryption key, crypto algorithm, data unit num, data unit size, etc.)
along with a data transfer request to a storage device, and the inline
encryption hardware will use that context to en/decrypt the data. The
inline encryption hardware is part of the storage device, and it
conceptually sits on the data path between system memory and the storage
device. Inline Encryption hardware has become increasingly common, and we
want to support it in the kernel.

Inline Encryption hardware implementations often function around the
concept of a limited number of "keyslots", which can hold an encryption
context each. The storage device can be directed to en/decrypt any
particular request with the encryption context stored in any particular
keyslot.

Patch 1 introduces a Keyslot Manager to efficiently manage keyslots.
The keyslot manager also functions as the interface that blk-crypto
(introduced in Patch 2), will use to program keys into inline encryption
hardware. For more information on the Keyslot Manager, refer to
documentation found in block/keyslot-manager.c and linux/keyslot-manager.h.
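
To make the driver-facing side of this concrete, here is a rough sketch of
how a storage driver might set up a keyslot manager and expose it through its
request_queue, using only the interfaces added in patch 1 (the driver
structure, function names, and the advertised capabilities below are made up
for illustration; the real UFS wiring is in patches 4-6):

	/* Sketch only; assumes CONFIG_BLK_INLINE_ENCRYPTION=y. */
	#include <linux/keyslot-manager.h>
	#include <linux/blkdev.h>

	/* Hypothetical driver state embedding a keyslot manager, like ufs_hba. */
	struct my_hba {
		struct device *dev;
		struct keyslot_manager ksm;
	};

	static int my_keyslot_program(struct keyslot_manager *ksm,
				      const struct blk_crypto_key *key,
				      unsigned int slot)
	{
		/* Write key->raw (key->size bytes) into hardware keyslot 'slot'. */
		return 0;
	}

	static int my_keyslot_evict(struct keyslot_manager *ksm,
				    const struct blk_crypto_key *key,
				    unsigned int slot)
	{
		/* Clear hardware keyslot 'slot'. */
		return 0;
	}

	static const struct keyslot_mgmt_ll_ops my_ksm_ops = {
		.keyslot_program	= my_keyslot_program,
		.keyslot_evict		= my_keyslot_evict,
	};

	static int my_hba_init_crypto(struct my_hba *hba, struct request_queue *q)
	{
		/* 32 keyslots is an arbitrary example value. */
		int err = blk_ksm_init(&hba->ksm, hba->dev, 32);

		if (err)
			return err;

		hba->ksm.ksm_ll_ops = my_ksm_ops;
		/* Bitmask of supported data unit sizes: here 512-byte and 4K units. */
		hba->ksm.crypto_modes_supported[BLK_ENCRYPTION_MODE_AES_256_XTS] =
			512 | 4096;
		hba->ksm.max_dun_bytes_supported[BLK_ENCRYPTION_MODE_AES_256_XTS] = 8;
		q->ksm = &hba->ksm;
		return 0;
	}

Once q->ksm is set, blk-crypto (patch 2) has everything it needs to route
encryption contexts to this hardware.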

Patch 2 adds the block layer changes for inline encryption support. It
introduces struct bio_crypt_ctx, and a pointer to one in struct bio, which
allows a bio to carry an encryption context that can be passed down the
storage stack from the filesystem layer to the storage driver.

Patch 3 introduces blk-crypto-fallback - a kernel crypto API fallback for
blk-crypto to use when inline encryption hardware isn't present. This
allows filesystems to specify encryption contexts for bios without
having to worry about whether the underlying hardware has inline
encryption support, and allows for testing without real hardware inline
encryption support. This fallback is separately configurable from
blk-crypto, and can be disabled if desired while keeping inline
encryption support. It may also be possible to remove file content
en/decryption from fscrypt and simply use blk-crypto-fallback in a future
patch. For more details on blk-crypto and the fallback, refer to
Documentation/block/inline-encryption.rst.

Patches 4-6 add support for inline encryption into the UFS driver according
to the JEDEC UFS HCI v2.1 specification. Inline encryption support for
other drivers (like eMMC) may be added in the same way - the device driver
should set up a Keyslot Manager in the device's request_queue (refer to
the UFS crypto additions in ufshcd-crypto.c and ufshcd.c for an example).

Patch 7 adds support to fscrypt. To use inline encryption with fscrypt, the
filesystem must be mounted with '-o inlinecrypt'; when this option is
specified, the contents of any AES-256-XTS encrypted file will be
encrypted using blk-crypto.

Patches 8 and 9 add support to f2fs and ext4 respectively, so that we have
a complete stack that can make use of inline encryption.

The patches were tested running kvm-xfstests, by specifying the introduced
"inlinecrypt" mount option, so that en/decryption happens with the
blk-crypto fallback. The patches were also tested on a Pixel 4 with UFS
hardware that has support for inline encryption.

There have been a few patch sets addressing Inline Encryption Support in
the past. Briefly, this patch set differs from those as follows:

1) "crypto: qce: ice: Add support for Inline Crypto Engine"
is specific to certain hardware, while our patch set's Inline
Encryption support for UFS is implemented according to the JEDEC UFS
specification.

2) "scsi: ufs: UFS Host Controller crypto changes" registers inline
encryption support as a kernel crypto algorithm. Our patch views inline
encryption as being fundamentally different from a generic crypto
provider (in that inline encryption is tied to a device), and so does
not use the kernel crypto API to represent inline encryption hardware.

3) "scsi: ufs: add real time/inline crypto support to UFS HCD" requires
the device mapper to work - our patch does not.

Changes v6 => v7:
 - Keyslot management is now done on a per-request basis rather than a
   per-bio basis.
 - Storage drivers can now specify the maximum number of bytes they
   can accept for the data unit number (DUN) for each crypto algorithm,
   and upper layers can specify the minimum number of bytes of DUN they
   want with the blk_crypto_key they send with the bio - a driver is
   only considered to support a blk_crypto_key if the driver supports at
   least as many DUN bytes as the upper layer wants (see the sketch after
   this list). This is necessary because storage drivers may not support
   as many bytes as the algorithm specification dictates (e.g. UFS only
   supports 8 byte DUNs for AES-256-XTS, even though the algorithm
   specification says DUNs are 16 bytes long).
 - Introduce SB_INLINECRYPT to keep track of whether inline encryption
   is enabled for a filesystem (instead of using an fscrypt_operation).
 - Expose keyslot manager declaration and embed it within ufs_hba to
   clean up code.
 - Make blk-crypto preclude blk-integrity.
 - Some bug fixes
 - Introduce UFSHCD_QUIRK_BROKEN_CRYPTO for UFS drivers that don't
   support inline encryption (yet)
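
As a worked example of the DUN size negotiation mentioned above (a sketch
only; the real check lives in blk_ksm_crypto_mode_supported() in patch 1):

	/*
	 * A key is only usable with a keyslot manager if, among other things,
	 * the hardware can accept at least as many DUN bytes as the key
	 * requires.  E.g. UFS advertises max_dun_bytes_supported = 8 for
	 * AES-256-XTS, so a key initialized with dun_bytes = 8 is accepted,
	 * while one requiring the full 16-byte XTS tweak would be rejected.
	 */
	static bool ksm_dun_size_ok(const struct keyslot_manager *ksm,
				    const struct blk_crypto_key *key)
	{
		return ksm->max_dun_bytes_supported[key->crypto_mode] >=
			key->dun_bytes;
	}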

Changes v5 => v6:
 - Blk-crypto's kernel crypto API fallback is no longer restricted to
   8-byte DUNs. It's also now separately configurable from blk-crypto, and
   can be disabled entirely, while still allowing the kernel to use inline
   encryption hardware. Further, struct bio_crypt_ctx takes up less space,
   and no longer contains the information needed by the crypto API
   fallback - the fallback allocates the required memory when necessary.
 - Blk-crypto now supports all file content encryption modes supported by
   fscrypt.
 - Fixed bio merging logic in blk-merge.c
 - Fscrypt now supports inline encryption with the direct key policy, since
   blk-crypto now has support for larger DUNs.
 - Keyslot manager now uses a hashtable to lookup which keyslot contains
   any particular key (thanks Eric!)
 - Fscrypt support for inline encryption now handles filesystems with
   multiple underlying block devices (thanks Eric!)
 - Numerous cleanups

Changes v4 => v5:
 - The fscrypt patch has been separated into 2. The first adds support
   for the IV_INO_LBLK_64 policy (which was called INLINE_CRYPT_OPTIMIZED
   in past versions of this series). This policy is now purely an on disk
   format, and doesn't dictate whether blk-crypto is used for file content
   encryption or not. Instead, this is now decided based on the
   "inlinecrypt" mount option.
 - Inline crypto key eviction is now handled by blk-crypto instead of
   fscrypt.
 - More refactoring.

Changes v3 => v4:
 - Fixed the issue with allocating crypto_skcipher in
   blk_crypto_keyslot_program.
 - bio_crypto_alloc_ctx is now mempool backed.
 - In f2fs, a bio's bi_crypt_context is now set up when the
   bio is allocated, rather than just before the bio is
   submitted - this fixes bugs in certain cases, like when an
   encrypted block is being moved without decryption.
 - Lots of refactoring and cleanup of blk-crypto - thanks Eric!

Changes v2 => v3:
 - Overhauled keyslot manager's get keyslot logic and optimized LRU.
 - Block crypto en/decryption fallback now supports data unit sizes
   that divide the bvec length (instead of requiring each bvec's length
   to be the same as the data unit size).
 - fscrypt master key is now keyed additionally by super_block and
   ci_ctfm != NULL.
 - all references to "hw encryption" are replaced with "inline encryption".
 - address various other review comments from Eric.

Changes v1 => v2:
 - Block layer and UFS changes are split into 3 patches each.
 - We now only have a ptr to a struct bio_crypt_ctx in struct bio, instead
   of the struct itself.
 - struct bio_crypt_ctx no longer has flags.
 - blk-crypto now correctly handles the case where it fails to initialize
   (because of insufficient memory) but the kernel continues to boot.
 - ufshcd-crypto now works on big endian cpus.
 - Many cleanups.

Eric Biggers (1):
  ext4: add inline encryption support

Satya Tangirala (8):
  block: Keyslot Manager for Inline Encryption
  block: Inline encryption support for blk-mq
  block: blk-crypto-fallback for Inline Encryption
  scsi: ufs: UFS driver v2.1 spec crypto additions
  scsi: ufs: UFS crypto API
  scsi: ufs: Add inline encryption support to UFS
  fscrypt: add inline encryption support
  f2fs: add inline encryption support

 Documentation/admin-guide/ext4.rst        |   4 +
 Documentation/block/index.rst             |   1 +
 Documentation/block/inline-encryption.rst | 162 ++++++
 Documentation/filesystems/f2fs.txt        |   4 +
 block/Kconfig                             |  17 +
 block/Makefile                            |   2 +
 block/bio-integrity.c                     |   5 +
 block/bio.c                               |   7 +-
 block/blk-core.c                          |  16 +
 block/blk-crypto-fallback.c               | 673 ++++++++++++++++++++++
 block/blk-crypto-internal.h               |  51 ++
 block/blk-crypto.c                        | 441 ++++++++++++++
 block/blk-map.c                           |   1 +
 block/blk-merge.c                         |  11 +
 block/blk-mq.c                            |  16 +
 block/blk.h                               |   3 +
 block/bounce.c                            |   2 +
 block/keyslot-manager.c                   | 426 ++++++++++++++
 drivers/md/dm.c                           |   2 +
 drivers/scsi/ufs/Kconfig                  |   9 +
 drivers/scsi/ufs/Makefile                 |   1 +
 drivers/scsi/ufs/ufs-hisi.c               |   8 +
 drivers/scsi/ufs/ufs-qcom.c               |   7 +
 drivers/scsi/ufs/ufshcd-crypto.c          | 393 +++++++++++++
 drivers/scsi/ufs/ufshcd-crypto.h          |  68 +++
 drivers/scsi/ufs/ufshcd.c                 |  61 +-
 drivers/scsi/ufs/ufshcd.h                 |  33 ++
 drivers/scsi/ufs/ufshci.h                 |  67 ++-
 fs/buffer.c                               |   7 +-
 fs/crypto/Kconfig                         |   6 +
 fs/crypto/Makefile                        |   1 +
 fs/crypto/bio.c                           |  55 ++
 fs/crypto/crypto.c                        |   2 +-
 fs/crypto/fname.c                         |   4 +-
 fs/crypto/fscrypt_private.h               | 121 +++-
 fs/crypto/inline_crypt.c                  | 324 +++++++++++
 fs/crypto/keyring.c                       |   4 +-
 fs/crypto/keysetup.c                      |  95 ++-
 fs/crypto/keysetup_v1.c                   |  16 +-
 fs/ext4/ext4.h                            |   1 +
 fs/ext4/inode.c                           |   4 +-
 fs/ext4/page-io.c                         |   6 +-
 fs/ext4/readpage.c                        |  11 +-
 fs/ext4/super.c                           |  19 +
 fs/f2fs/compress.c                        |   2 +-
 fs/f2fs/data.c                            |  67 ++-
 fs/f2fs/f2fs.h                            |   3 +
 fs/f2fs/super.c                           |  35 ++
 include/linux/bio.h                       |   1 +
 include/linux/blk-crypto.h                | 271 +++++++++
 include/linux/blk_types.h                 |  12 +
 include/linux/blkdev.h                    |  16 +
 include/linux/fs.h                        |   1 +
 include/linux/fscrypt.h                   |  57 ++
 include/linux/keyslot-manager.h           | 108 ++++
 55 files changed, 3654 insertions(+), 86 deletions(-)
 create mode 100644 Documentation/block/inline-encryption.rst
 create mode 100644 block/blk-crypto-fallback.c
 create mode 100644 block/blk-crypto-internal.h
 create mode 100644 block/blk-crypto.c
 create mode 100644 block/keyslot-manager.c
 create mode 100644 drivers/scsi/ufs/ufshcd-crypto.c
 create mode 100644 drivers/scsi/ufs/ufshcd-crypto.h
 create mode 100644 fs/crypto/inline_crypt.c
 create mode 100644 include/linux/blk-crypto.h
 create mode 100644 include/linux/keyslot-manager.h

-- 
2.25.0.265.gbab2e86ba0-goog



* [PATCH v7 1/9] block: Keyslot Manager for Inline Encryption
  2020-02-21 11:50 [PATCH v7 0/9] Inline Encryption Support Satya Tangirala
@ 2020-02-21 11:50 ` Satya Tangirala
  2020-02-21 17:04   ` Christoph Hellwig
  2020-02-27 18:48   ` Eric Biggers
  2020-02-21 11:50 ` [PATCH v7 2/9] block: Inline encryption support for blk-mq Satya Tangirala
                   ` (8 subsequent siblings)
  9 siblings, 2 replies; 40+ messages in thread
From: Satya Tangirala @ 2020-02-21 11:50 UTC (permalink / raw)
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4
  Cc: Barani Muthukumaran, Kuohong Wang, Kim Boojin, Satya Tangirala

Inline Encryption hardware allows software to specify an encryption context
(an encryption key, crypto algorithm, data unit num, data unit size) along
with a data transfer request to a storage device, and the inline encryption
hardware will use that context to en/decrypt the data. The inline
encryption hardware is part of the storage device, and it conceptually sits
on the data path between system memory and the storage device.

Inline Encryption hardware implementations often function around the
concept of "keyslots". These implementations often have a limited number
of "keyslots", each of which can hold an encryption context (we say that
an encryption context can be "programmed" into a keyslot). Requests made
to the storage device may have a keyslot associated with them, and the
inline encryption hardware will en/decrypt the data in the requests using
the encryption context programmed into that associated keyslot.

As keyslots are limited, programming keys may be expensive in many
implementations, and multiple requests may use exactly the same encryption
context, we introduce a Keyslot Manager to efficiently manage keyslots.

We also introduce a blk_crypto_key, which will represent the key that's
programmed into keyslots managed by keyslot managers. The keyslot manager
also functions as the interface that upper layers will use to program keys
into inline encryption hardware. For more information on the Keyslot
Manager, refer to documentation found in block/keyslot-manager.c and
linux/keyslot-manager.h.
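
For reference, the lifecycle of a keyslot from a caller's point of view looks
roughly like the sketch below (blk-crypto, added later in this series, is the
real caller; the function here is purely illustrative):

	#include <linux/keyslot-manager.h>

	static int use_key_for_one_request(struct keyslot_manager *ksm,
					   const struct blk_crypto_key *key)
	{
		/* Finds a slot already holding this key, or programs an idle one. */
		int slot = blk_ksm_get_slot_for_key(ksm, key);

		if (slot < 0)
			return slot;	/* -errno, e.g. the driver failed to program */

		/* ... tag the data transfer with 'slot' and submit it to the device ... */

		/* Drop the reference; at refcount 0 the slot goes back on the LRU. */
		blk_ksm_put_slot(ksm, slot);
		return 0;
	}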

Signed-off-by: Satya Tangirala <satyat@google.com>
---
 block/Kconfig                   |   7 +
 block/Makefile                  |   1 +
 block/keyslot-manager.c         | 426 ++++++++++++++++++++++++++++++++
 include/linux/bio.h             |   1 +
 include/linux/blk-crypto.h      |  45 ++++
 include/linux/blkdev.h          |   6 +
 include/linux/keyslot-manager.h | 108 ++++++++
 7 files changed, 594 insertions(+)
 create mode 100644 block/keyslot-manager.c
 create mode 100644 include/linux/blk-crypto.h
 create mode 100644 include/linux/keyslot-manager.h

diff --git a/block/Kconfig b/block/Kconfig
index 3bc76bb113a0..c04a1d500842 100644
--- a/block/Kconfig
+++ b/block/Kconfig
@@ -185,6 +185,13 @@ config BLK_SED_OPAL
 	Enabling this option enables users to setup/unlock/lock
 	Locking ranges for SED devices using the Opal protocol.
 
+config BLK_INLINE_ENCRYPTION
+	bool "Enable inline encryption support in block layer"
+	help
+	  Build the blk-crypto subsystem. Enabling this lets the
+	  block layer handle encryption, so users can take
+	  advantage of inline encryption hardware if present.
+
 menu "Partition Types"
 
 source "block/partitions/Kconfig"
diff --git a/block/Makefile b/block/Makefile
index 1a43750f4b01..ef3a05dcf1f2 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -37,3 +37,4 @@ obj-$(CONFIG_BLK_DEBUG_FS)	+= blk-mq-debugfs.o
 obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o
 obj-$(CONFIG_BLK_SED_OPAL)	+= sed-opal.o
 obj-$(CONFIG_BLK_PM)		+= blk-pm.o
+obj-$(CONFIG_BLK_INLINE_ENCRYPTION)	+= keyslot-manager.o
diff --git a/block/keyslot-manager.c b/block/keyslot-manager.c
new file mode 100644
index 000000000000..15dfc0c12c7f
--- /dev/null
+++ b/block/keyslot-manager.c
@@ -0,0 +1,426 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2019 Google LLC
+ */
+
+/**
+ * DOC: The Keyslot Manager
+ *
+ * Many devices with inline encryption support have a limited number of "slots"
+ * into which encryption contexts may be programmed, and requests can be tagged
+ * with a slot number to specify the key to use for en/decryption.
+ *
+ * As the number of slots is limited, and programming keys is expensive on
+ * many inline encryption hardware implementations, we don't want to program
+ * the same key into multiple slots - if multiple requests use the same key,
+ * we want to program just one slot with that key and use it for all of them.
+ *
+ * The keyslot manager manages these keyslots appropriately, and also acts as
+ * an abstraction between the inline encryption hardware and the upper layers.
+ *
+ * Lower layer devices will set up a keyslot manager in their request queue
+ * and tell it how to perform device specific operations like programming/
+ * evicting keys from keyslots.
+ *
+ * Upper layers will call blk_ksm_get_slot_for_key() to program a
+ * key into some slot in the inline encryption hardware.
+ */
+#include <crypto/algapi.h>
+#include <linux/keyslot-manager.h>
+#include <linux/atomic.h>
+#include <linux/mutex.h>
+#include <linux/pm_runtime.h>
+#include <linux/wait.h>
+#include <linux/blkdev.h>
+
+struct keyslot {
+	atomic_t slot_refs;
+	struct list_head idle_slot_node;
+	struct hlist_node hash_node;
+	struct blk_crypto_key key;
+};
+
+#ifdef CONFIG_PM
+static inline void blk_ksm_set_dev(struct keyslot_manager *ksm,
+				   struct device *dev)
+{
+	ksm->dev = dev;
+}
+
+/* If there's an underlying device and it's suspended, resume it. */
+static inline void blk_ksm_pm_get(struct keyslot_manager *ksm)
+{
+	if (ksm->dev)
+		pm_runtime_get_sync(ksm->dev);
+}
+
+static inline void blk_ksm_pm_put(struct keyslot_manager *ksm)
+{
+	if (ksm->dev)
+		pm_runtime_put_sync(ksm->dev);
+}
+#else /* CONFIG_PM */
+static inline void blk_ksm_set_dev(struct keyslot_manager *ksm,
+				   struct device *dev)
+{
+}
+
+static inline void blk_ksm_pm_get(struct keyslot_manager *ksm)
+{
+}
+
+static inline void blk_ksm_pm_put(struct keyslot_manager *ksm)
+{
+}
+#endif /* !CONFIG_PM */
+
+static inline void blk_ksm_hw_enter(struct keyslot_manager *ksm)
+{
+	/*
+	 * Calling into the driver requires ksm->lock held and the device
+	 * resumed.  But we must resume the device first, since that can acquire
+	 * and release ksm->lock via blk_ksm_reprogram_all_keys().
+	 */
+	blk_ksm_pm_get(ksm);
+	down_write(&ksm->lock);
+}
+
+static inline void blk_ksm_hw_exit(struct keyslot_manager *ksm)
+{
+	up_write(&ksm->lock);
+	blk_ksm_pm_put(ksm);
+}
+
+/**
+ * blk_ksm_init() - Initialize a keyslot manager
+ * @ksm: The keyslot_manager to initialize.
+ * @dev: Device for runtime power management (NULL if none)
+ * @num_slots: The number of key slots to manage.
+ *
+ * Allocate memory for keyslots and initialize a keyslot manager. Called by
+ * e.g. storage drivers to set up a keyslot manager in their request_queue.
+ *
+ * Return: 0 on success, or else a negative error code.
+ */
+int blk_ksm_init(struct keyslot_manager *ksm, struct device *dev,
+		 unsigned int num_slots)
+{
+	unsigned int slot;
+	unsigned int i;
+
+	memset(ksm, 0, sizeof(*ksm));
+
+	if (num_slots == 0)
+		return -EINVAL;
+
+	ksm->slots = kvzalloc(sizeof(ksm->slots[0]) * num_slots, GFP_KERNEL);
+	if (!ksm->slots)
+		return -ENOMEM;
+
+	ksm->num_slots = num_slots;
+	blk_ksm_set_dev(ksm, dev);
+
+	init_rwsem(&ksm->lock);
+
+	init_waitqueue_head(&ksm->idle_slots_wait_queue);
+	INIT_LIST_HEAD(&ksm->idle_slots);
+
+	for (slot = 0; slot < num_slots; slot++) {
+		list_add_tail(&ksm->slots[slot].idle_slot_node,
+			      &ksm->idle_slots);
+	}
+
+	spin_lock_init(&ksm->idle_slots_lock);
+
+	ksm->slot_hashtable_size = roundup_pow_of_two(num_slots);
+	ksm->slot_hashtable = kvmalloc_array(ksm->slot_hashtable_size,
+					     sizeof(ksm->slot_hashtable[0]),
+					     GFP_KERNEL);
+	if (!ksm->slot_hashtable)
+		goto err_free_ksm;
+	for (i = 0; i < ksm->slot_hashtable_size; i++)
+		INIT_HLIST_HEAD(&ksm->slot_hashtable[i]);
+
+	return 0;
+
+err_free_ksm:
+	blk_ksm_destroy(ksm);
+	return -ENOMEM;
+}
+EXPORT_SYMBOL_GPL(blk_ksm_init);
+
+static inline struct hlist_head *
+blk_ksm_hash_bucket_for_key(struct keyslot_manager *ksm,
+			    const struct blk_crypto_key *key)
+{
+	return &ksm->slot_hashtable[key->hash & (ksm->slot_hashtable_size - 1)];
+}
+
+static void blk_ksm_remove_slot_from_lru_list(struct keyslot_manager *ksm,
+					      struct keyslot *slot)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&ksm->idle_slots_lock, flags);
+	list_del(&slot->idle_slot_node);
+	spin_unlock_irqrestore(&ksm->idle_slots_lock, flags);
+}
+
+static struct keyslot *blk_ksm_find_keyslot(struct keyslot_manager *ksm,
+					    const struct blk_crypto_key *key)
+{
+	const struct hlist_head *head = blk_ksm_hash_bucket_for_key(ksm, key);
+	struct keyslot *slotp;
+
+	hlist_for_each_entry(slotp, head, hash_node) {
+		if (slotp->key.hash == key->hash &&
+		    slotp->key.crypto_mode == key->crypto_mode &&
+		    slotp->key.data_unit_size == key->data_unit_size &&
+		    !crypto_memneq(slotp->key.raw, key->raw, key->size))
+			return slotp;
+	}
+	return NULL;
+}
+
+static struct keyslot *blk_ksm_find_and_grab_keyslot(
+					struct keyslot_manager *ksm,
+					const struct blk_crypto_key *key)
+{
+	struct keyslot *slot;
+
+	slot = blk_ksm_find_keyslot(ksm, key);
+	if (!slot)
+		return NULL;
+	if (atomic_inc_return(&slot->slot_refs) == 1) {
+		/* Took first reference to this slot; remove it from LRU list */
+		blk_ksm_remove_slot_from_lru_list(ksm, slot);
+	}
+	return slot;
+}
+
+static unsigned int blk_ksm_get_slot_idx(struct keyslot *slot,
+					 struct keyslot_manager *ksm)
+{
+	return slot - ksm->slots;
+}
+
+/**
+ * blk_ksm_get_slot_for_key() - Program a key into a keyslot.
+ * @ksm: The keyslot manager to program the key into.
+ * @key: Pointer to the key object to program, including the raw key, crypto
+ *	 mode, and data unit size.
+ *
+ * Get a keyslot that's been programmed with the specified key.  If one already
+ * exists, return it with incremented refcount.  Otherwise, wait for a keyslot
+ * to become idle and program it.
+ *
+ * Context: Process context. Takes and releases ksm->lock.
+ * Return: The keyslot index on success, else a -errno value.
+ */
+int blk_ksm_get_slot_for_key(struct keyslot_manager *ksm,
+			     const struct blk_crypto_key *key)
+{
+	struct keyslot *slot;
+	int slot_idx;
+	int err;
+
+	down_read(&ksm->lock);
+	slot = blk_ksm_find_and_grab_keyslot(ksm, key);
+	up_read(&ksm->lock);
+	if (slot != NULL)
+		return blk_ksm_get_slot_idx(slot, ksm);
+
+	for (;;) {
+		blk_ksm_hw_enter(ksm);
+		slot = blk_ksm_find_and_grab_keyslot(ksm, key);
+		if (slot != NULL) {
+			blk_ksm_hw_exit(ksm);
+			return blk_ksm_get_slot_idx(slot, ksm);
+		}
+
+		/*
+		 * If we're here, that means there wasn't a slot that was
+		 * already programmed with the key. So try to program it.
+		 */
+		if (!list_empty(&ksm->idle_slots))
+			break;
+
+		blk_ksm_hw_exit(ksm);
+		wait_event(ksm->idle_slots_wait_queue,
+			   !list_empty(&ksm->idle_slots));
+	}
+
+	slot = list_first_entry(&ksm->idle_slots, struct keyslot,
+				idle_slot_node);
+	slot_idx = blk_ksm_get_slot_idx(slot, ksm);
+
+	err = ksm->ksm_ll_ops.keyslot_program(ksm, key, slot_idx);
+	if (err) {
+		wake_up(&ksm->idle_slots_wait_queue);
+		blk_ksm_hw_exit(ksm);
+		return err;
+	}
+
+	/* Move this slot to the hash list for the new key. */
+	if (slot->key.crypto_mode != BLK_ENCRYPTION_MODE_INVALID)
+		hlist_del(&slot->hash_node);
+	hlist_add_head(&slot->hash_node, blk_ksm_hash_bucket_for_key(ksm, key));
+
+	atomic_set(&slot->slot_refs, 1);
+	slot->key = *key;
+
+	blk_ksm_remove_slot_from_lru_list(ksm, slot);
+
+	blk_ksm_hw_exit(ksm);
+	return slot_idx;
+}
+
+/**
+ * blk_ksm_get_slot() - Increment the refcount on the specified slot.
+ * @ksm: The keyslot manager that we want to modify.
+ * @slot: The slot to increment the refcount of.
+ *
+ * This function assumes that there is already an active reference to that slot
+ * and simply increments the refcount. This is useful when cloning a bio that
+ * already has a reference to a keyslot, and we want the cloned bio to also have
+ * its own reference.
+ *
+ * Context: Any context.
+ */
+void blk_ksm_get_slot(struct keyslot_manager *ksm, unsigned int slot)
+{
+	if (WARN_ON(slot >= ksm->num_slots))
+		return;
+
+	WARN_ON(atomic_inc_return(&ksm->slots[slot].slot_refs) < 2);
+}
+
+/**
+ * blk_ksm_put_slot() - Release a reference to a slot
+ * @ksm: The keyslot manager to release the reference from.
+ * @slot: The slot to release the reference from.
+ *
+ * Context: Any context.
+ */
+void blk_ksm_put_slot(struct keyslot_manager *ksm, unsigned int slot)
+{
+	unsigned long flags;
+
+	if (WARN_ON(slot >= ksm->num_slots))
+		return;
+
+	if (atomic_dec_and_lock_irqsave(&ksm->slots[slot].slot_refs,
+					&ksm->idle_slots_lock, flags)) {
+		list_add_tail(&ksm->slots[slot].idle_slot_node,
+			      &ksm->idle_slots);
+		spin_unlock_irqrestore(&ksm->idle_slots_lock, flags);
+		wake_up(&ksm->idle_slots_wait_queue);
+	}
+}
+
+/**
+ * blk_ksm_crypto_mode_supported() - Find out if a key's crypto mode, DUN size,
+ *				     and data unit size are supported by a ksm.
+ * @ksm: The keyslot manager to check.
+ * @key: The blk_crypto_key to check for; its crypto_mode, dun_bytes, and
+ *	 data_unit_size fields are what get checked.
+ *
+ * Checks whether this ksm supports the given key's crypto mode and data unit
+ * size, and whether it supports at least as many DUN bytes as the key needs.
+ *
+ * Return: Whether or not this ksm supports the crypto mode, DUN size, and data
+ *	   unit size of @key.
+ */
+bool blk_ksm_crypto_mode_supported(struct keyslot_manager *ksm,
+				   const struct blk_crypto_key *key)
+{
+	if (!ksm)
+		return false;
+	if (WARN_ON((unsigned int)key->crypto_mode >= BLK_ENCRYPTION_MODE_MAX))
+		return false;
+	if (WARN_ON(!is_power_of_2(key->data_unit_size)))
+		return false;
+	return (ksm->crypto_modes_supported[key->crypto_mode] &
+		key->data_unit_size) &&
+	       (ksm->max_dun_bytes_supported[key->crypto_mode] >=
+		key->dun_bytes);
+}
+
+/**
+ * blk_ksm_evict_key() - Evict a key from the lower layer device.
+ * @ksm: The keyslot manager to evict from
+ * @key: The key to evict
+ *
+ * Find the keyslot that the specified key was programmed into, and evict that
+ * slot from the lower layer device if that slot is not currently in use.
+ *
+ * Context: Process context. Takes and releases ksm->lock.
+ * Return: 0 on success or if there's no keyslot with the specified key, -EBUSY
+ *	   if the key is still in use, or another -errno value on other error.
+ */
+int blk_ksm_evict_key(struct keyslot_manager *ksm,
+		      const struct blk_crypto_key *key)
+{
+	struct keyslot *slot;
+	int err = 0;
+
+	blk_ksm_hw_enter(ksm);
+	slot = blk_ksm_find_keyslot(ksm, key);
+	if (!slot)
+		goto out_unlock;
+
+	if (atomic_read(&slot->slot_refs) != 0) {
+		err = -EBUSY;
+		goto out_unlock;
+	}
+	err = ksm->ksm_ll_ops.keyslot_evict(ksm, key,
+					    blk_ksm_get_slot_idx(slot, ksm));
+	if (err)
+		goto out_unlock;
+
+	hlist_del(&slot->hash_node);
+	memzero_explicit(&slot->key, sizeof(slot->key));
+	err = 0;
+out_unlock:
+	blk_ksm_hw_exit(ksm);
+	return err;
+}
+
+/**
+ * blk_ksm_reprogram_all_keys() - Re-program all keyslots.
+ * @ksm: The keyslot manager
+ *
+ * Re-program all keyslots that are supposed to have a key programmed.  This is
+ * intended only for use by drivers for hardware that loses its keys on reset.
+ *
+ * Context: Process context. Takes and releases ksm->lock.
+ */
+void blk_ksm_reprogram_all_keys(struct keyslot_manager *ksm)
+{
+	unsigned int slot;
+
+	/* This is for device initialization, so don't resume the device */
+	down_write(&ksm->lock);
+	for (slot = 0; slot < ksm->num_slots; slot++) {
+		const struct keyslot *slotp = &ksm->slots[slot];
+		int err;
+
+		if (slotp->key.crypto_mode == BLK_ENCRYPTION_MODE_INVALID)
+			continue;
+
+		err = ksm->ksm_ll_ops.keyslot_program(ksm, &slotp->key, slot);
+		WARN_ON(err);
+	}
+	up_write(&ksm->lock);
+}
+EXPORT_SYMBOL_GPL(blk_ksm_reprogram_all_keys);
+
+void blk_ksm_destroy(struct keyslot_manager *ksm)
+{
+	if (!ksm)
+		return;
+	kvfree(ksm->slot_hashtable);
+	kvfree(ksm->slots);
+	memzero_explicit(ksm, sizeof(*ksm));
+}
+EXPORT_SYMBOL_GPL(blk_ksm_destroy);
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 853d92ceee64..b2103e207ed5 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -8,6 +8,7 @@
 #include <linux/highmem.h>
 #include <linux/mempool.h>
 #include <linux/ioprio.h>
+#include <linux/blk-crypto.h>
 
 #ifdef CONFIG_BLOCK
 /* struct bio, bio_vec and BIO_* flags are defined in blk_types.h */
diff --git a/include/linux/blk-crypto.h b/include/linux/blk-crypto.h
new file mode 100644
index 000000000000..b8d54eca1c0d
--- /dev/null
+++ b/include/linux/blk-crypto.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#ifndef __LINUX_BLK_CRYPTO_H
+#define __LINUX_BLK_CRYPTO_H
+
+enum blk_crypto_mode_num {
+	BLK_ENCRYPTION_MODE_INVALID,
+	BLK_ENCRYPTION_MODE_AES_256_XTS,
+	BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV,
+	BLK_ENCRYPTION_MODE_ADIANTUM,
+	BLK_ENCRYPTION_MODE_MAX,
+};
+
+#define BLK_CRYPTO_MAX_KEY_SIZE		64
+
+/**
+ * struct blk_crypto_key - an inline encryption key
+ * @crypto_mode: encryption algorithm this key is for
+ * @data_unit_size: the data unit size for all encryption/decryptions with this
+ *	key.  This is the size in bytes of each individual plaintext and
+ *	ciphertext.  This is always a power of 2.  It might be e.g. the
+ *	filesystem block size or the disk sector size.
+ * @data_unit_size_bits: log2 of data_unit_size
+ * @dun_bytes: the number of bytes of DUN used when using this key
+ * @size: size of this key in bytes (determined by @crypto_mode)
+ * @hash: hash of this key, for keyslot manager use only
+ * @raw: the raw bytes of this key.  Only the first @size bytes are used.
+ *
+ * A blk_crypto_key is immutable once created, and many bios can reference it at
+ * the same time.  It must not be freed until all bios using it have completed.
+ */
+struct blk_crypto_key {
+	enum blk_crypto_mode_num crypto_mode;
+	unsigned int data_unit_size;
+	unsigned int data_unit_size_bits;
+	unsigned int dun_bytes;
+	unsigned int size;
+	unsigned int hash;
+	u8 raw[BLK_CRYPTO_MAX_KEY_SIZE];
+};
+
+#endif /* __LINUX_BLK_CRYPTO_H */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 053ea4b51988..8881b25ef58b 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -43,6 +43,7 @@ struct pr_ops;
 struct rq_qos;
 struct blk_queue_stats;
 struct blk_stat_callback;
+struct keyslot_manager;
 
 #define BLKDEV_MIN_RQ	4
 #define BLKDEV_MAX_RQ	128	/* Default maximum */
@@ -474,6 +475,11 @@ struct request_queue {
 	unsigned int		dma_pad_mask;
 	unsigned int		dma_alignment;
 
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	/* Inline crypto capabilities */
+	struct keyslot_manager *ksm;
+#endif
+
 	unsigned int		rq_timeout;
 	int			poll_nsec;
 
diff --git a/include/linux/keyslot-manager.h b/include/linux/keyslot-manager.h
new file mode 100644
index 000000000000..c26a3477f6dc
--- /dev/null
+++ b/include/linux/keyslot-manager.h
@@ -0,0 +1,108 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#ifndef __LINUX_KEYSLOT_MANAGER_H
+#define __LINUX_KEYSLOT_MANAGER_H
+
+#include <linux/bio.h>
+
+struct keyslot_manager;
+
+/**
+ * struct keyslot_mgmt_ll_ops - functions to manage keyslots in hardware
+ * @keyslot_program:	Program the specified key into the specified slot in the
+ *			inline encryption hardware.
+ * @keyslot_evict:	Evict key from the specified keyslot in the hardware.
+ *			The key is provided so that e.g. dm layers can evict
+ *			keys from the devices that they map over.
+ *			Returns 0 on success, -errno otherwise.
+ *
+ * This structure should be provided by storage device drivers when they set up
+ * a keyslot manager - this structure holds the function ptrs that the keyslot
+ * manager will use to manipulate keyslots in the hardware.
+ */
+struct keyslot_mgmt_ll_ops {
+	int (*keyslot_program)(struct keyslot_manager *ksm,
+			       const struct blk_crypto_key *key,
+			       unsigned int slot);
+	int (*keyslot_evict)(struct keyslot_manager *ksm,
+			     const struct blk_crypto_key *key,
+			     unsigned int slot);
+};
+
+struct keyslot_manager {
+	unsigned int num_slots;
+
+	/*
+	 * The struct keyslot_mgmt_ll_ops that this keyslot manager will use
+	 * to perform operations like programming and evicting keys on the
+	 * device
+	 */
+	struct keyslot_mgmt_ll_ops ksm_ll_ops;
+
+	/*
+	 * Array of size BLK_ENCRYPTION_MODE_MAX of bitmasks that represent
+	 * whether a crypto mode and data unit size are supported. The i'th
+	 * bit of crypto_modes_supported[crypto_mode] is set iff a data unit
+	 * size of (1 << i) is supported. We only support data unit sizes
+	 * that are powers of 2.
+	 */
+	unsigned int crypto_modes_supported[BLK_ENCRYPTION_MODE_MAX];
+	/*
+	 * Array of size BLK_ENCRYPTION_MODE_MAX. The i'th entry specifies the
+	 * maximum number of dun bytes supported by the i'th crypto mode.
+	 */
+	unsigned int max_dun_bytes_supported[BLK_ENCRYPTION_MODE_MAX];
+
+	/* Private data unused by keyslot_manager. */
+	void *ll_priv_data;
+
+#ifdef CONFIG_PM
+	/* Device for runtime power management (NULL if none) */
+	struct device *dev;
+#endif
+
+	/* Protects programming and evicting keys from the device */
+	struct rw_semaphore lock;
+
+	/* List of idle slots, with least recently used slot at front */
+	wait_queue_head_t idle_slots_wait_queue;
+	struct list_head idle_slots;
+	spinlock_t idle_slots_lock;
+
+	/*
+	 * Hash table which maps key hashes to keyslots, so that we can find a
+	 * key's keyslot in O(1) time rather than O(num_slots).  Protected by
+	 * 'lock'.  A cryptographic hash function is used so that timing attacks
+	 * can't leak information about the raw keys.
+	 */
+	struct hlist_head *slot_hashtable;
+	unsigned int slot_hashtable_size;
+
+	/* Per-keyslot data */
+	struct keyslot *slots;
+};
+
+int blk_ksm_init(struct keyslot_manager *ksm, struct device *dev,
+		 unsigned int num_slots);
+
+int blk_ksm_get_slot_for_key(struct keyslot_manager *ksm,
+			     const struct blk_crypto_key *key);
+
+void blk_ksm_get_slot(struct keyslot_manager *ksm, unsigned int slot);
+
+void blk_ksm_put_slot(struct keyslot_manager *ksm, unsigned int slot);
+
+bool blk_ksm_crypto_mode_supported(struct keyslot_manager *ksm,
+				   const struct blk_crypto_key *key);
+
+int blk_ksm_evict_key(struct keyslot_manager *ksm,
+			      const struct blk_crypto_key *key);
+
+void blk_ksm_reprogram_all_keys(struct keyslot_manager *ksm);
+
+void blk_ksm_destroy(struct keyslot_manager *ksm);
+
+#endif /* __LINUX_KEYSLOT_MANAGER_H */
-- 
2.25.0.265.gbab2e86ba0-goog



* [PATCH v7 2/9] block: Inline encryption support for blk-mq
  2020-02-21 11:50 [PATCH v7 0/9] Inline Encryption Support Satya Tangirala
  2020-02-21 11:50 ` [PATCH v7 1/9] block: Keyslot Manager for Inline Encryption Satya Tangirala
@ 2020-02-21 11:50 ` Satya Tangirala
  2020-02-21 17:22   ` Christoph Hellwig
  2020-02-21 11:50 ` [PATCH v7 3/9] block: blk-crypto-fallback for Inline Encryption Satya Tangirala
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 40+ messages in thread
From: Satya Tangirala @ 2020-02-21 11:50 UTC (permalink / raw)
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4
  Cc: Barani Muthukumaran, Kuohong Wang, Kim Boojin, Satya Tangirala

We must have some way of letting a storage device driver know what
encryption context it should use for en/decrypting a request. However,
it's the upper layers (like the filesystem/fscrypt) that know about
and manage encryption contexts. As such, when the upper layer submits
a bio to the block layer, and this bio eventually reaches a device
driver with support for inline encryption, the device driver will need
to have been told the encryption context for that bio.

We want to communicate the encryption context from the upper layer
to the storage device along with the bio, when the bio is submitted to the
block layer. To do this, we add a pointer to a struct bio_crypt_ctx, which
represents an encryption context, to struct bio (we can't use the bi_private
field in struct bio for this because that field is not meant to pass
information across layers in the storage stack). We also introduce various
functions to manipulate the bio_crypt_ctx and make the bio/request merging
logic aware of the bio_crypt_ctx.

We also make changes to blk-mq to make it handle bios with encryption
contexts. blk-mq can merge many bios into the same request. These bios need
to have contiguous data unit numbers (the necessary changes to blk-merge
are also made to ensure this) - as such, it suffices to keep the data unit
number of just the first bio, since that's all a storage driver needs to
infer the data unit number to use for each data block in each bio in a
request. blk-mq keeps track of the encryption context to be used for all
the bios in a request with the request's rq_crypt_ctx. When the first bio
is added to an empty request, blk-mq will program the encryption context
of that bio into the request_queue's keyslot manager, and store the
returned keyslot in the request's rq_crypt_ctx. All the functions to
operate on encryption contexts are in blk-crypto.c.
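
To make the "contiguous data unit numbers" requirement concrete: assuming for
simplicity that a DUN fits in a single u64 (the real helpers support wider
DUNs), the back-merge condition is roughly the sketch below.

	#include <linux/types.h>

	/*
	 * Simplified sketch (single-u64 DUN assumed): a bio can be merged at
	 * the back of a request only if its DUN starts exactly where the
	 * request's data ends, counted in data units.
	 */
	static bool duns_are_contiguous(u64 req_dun, unsigned int req_bytes,
					unsigned int data_unit_size_bits,
					u64 bio_dun)
	{
		u64 nr_units = req_bytes >> data_unit_size_bits;

		return bio_dun == req_dun + nr_units;
	}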

For now, blk-crypto and blk-integrity can't be used with each other
(and any attempts to do both on a bio will cause the bio to fail).

Upper layers only need to call bio_crypt_set_ctx with the encryption key,
algorithm and data_unit_num; they don't have to worry about getting a
keyslot for each encryption context, as blk-mq/blk-crypto handles that.
Blk-crypto also makes it possible for request-based layered devices like
dm-rq to make use of inline encryption hardware by cloning the
rq_crypt_ctx and programming a keyslot in the new request_queue when
necessary.
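
Roughly, the filesystem-side flow looks like the sketch below.
blk_crypto_init_key() is as added by this series; the bio_crypt_set_ctx()
call and its exact argument list are an assumption for illustration (in
particular, it is shown taking a single u64 DUN), and the key must outlive
every bio that references it:

	#include <linux/bio.h>
	#include <linux/blk-crypto.h>

	/* Sketch only; bio_crypt_set_ctx()'s signature is assumed, not quoted. */
	static int fs_setup_key(struct blk_crypto_key *key, const u8 *raw_key)
	{
		/* AES-256-XTS, 8-byte DUNs, 4096-byte data units. */
		return blk_crypto_init_key(key, raw_key,
					   BLK_ENCRYPTION_MODE_AES_256_XTS,
					   8, 4096);
	}

	static void fs_submit_encrypted_bio(struct bio *bio,
					    const struct blk_crypto_key *key,
					    u64 dun)
	{
		/* Attach a bio_crypt_ctx referencing 'key'; blk-mq handles keyslots. */
		bio_crypt_set_ctx(bio, key, dun, GFP_NOIO);
		submit_bio(bio);
	}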

Note that any user of the block layer can submit bios with an
encryption context, such as filesystems, device-mapper targets, etc.

Signed-off-by: Satya Tangirala <satyat@google.com>
---
 block/Makefile              |   2 +-
 block/bio-integrity.c       |   5 +
 block/bio.c                 |   7 +-
 block/blk-core.c            |  16 ++
 block/blk-crypto-internal.h |  19 ++
 block/blk-crypto.c          | 412 ++++++++++++++++++++++++++++++++++++
 block/blk-map.c             |   1 +
 block/blk-merge.c           |  11 +
 block/blk-mq.c              |  16 ++
 block/blk.h                 |   3 +
 block/bounce.c              |   2 +
 drivers/md/dm.c             |   2 +
 include/linux/blk-crypto.h  | 211 ++++++++++++++++++
 include/linux/blk_types.h   |   6 +
 include/linux/blkdev.h      |  10 +
 15 files changed, 721 insertions(+), 2 deletions(-)
 create mode 100644 block/blk-crypto-internal.h
 create mode 100644 block/blk-crypto.c

diff --git a/block/Makefile b/block/Makefile
index ef3a05dcf1f2..82f42ca3f769 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -37,4 +37,4 @@ obj-$(CONFIG_BLK_DEBUG_FS)	+= blk-mq-debugfs.o
 obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o
 obj-$(CONFIG_BLK_SED_OPAL)	+= sed-opal.o
 obj-$(CONFIG_BLK_PM)		+= blk-pm.o
-obj-$(CONFIG_BLK_INLINE_ENCRYPTION)	+= keyslot-manager.o
+obj-$(CONFIG_BLK_INLINE_ENCRYPTION)	+= keyslot-manager.o blk-crypto.o
diff --git a/block/bio-integrity.c b/block/bio-integrity.c
index bf62c25cde8f..bce563031e7c 100644
--- a/block/bio-integrity.c
+++ b/block/bio-integrity.c
@@ -42,6 +42,11 @@ struct bio_integrity_payload *bio_integrity_alloc(struct bio *bio,
 	struct bio_set *bs = bio->bi_pool;
 	unsigned inline_vecs;
 
+	if (bio_has_crypt_ctx(bio)) {
+		pr_warn("blk-integrity can't be used together with blk-crypto en/decryption.\n");
+		return ERR_PTR(-EOPNOTSUPP);
+	}
+
 	if (!bs || !mempool_initialized(&bs->bio_integrity_pool)) {
 		bip = kmalloc(struct_size(bip, bip_inline_vecs, nr_vecs), gfp_mask);
 		inline_vecs = nr_vecs;
diff --git a/block/bio.c b/block/bio.c
index 94d697217887..115fd5960508 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -17,6 +17,7 @@
 #include <linux/cgroup.h>
 #include <linux/blk-cgroup.h>
 #include <linux/highmem.h>
+#include <linux/blk-crypto.h>
 
 #include <trace/events/block.h>
 #include "blk.h"
@@ -236,6 +237,8 @@ void bio_uninit(struct bio *bio)
 
 	if (bio_integrity(bio))
 		bio_integrity_free(bio);
+
+	bio_crypt_free_ctx(bio);
 }
 EXPORT_SYMBOL(bio_uninit);
 
@@ -664,11 +667,12 @@ struct bio *bio_clone_fast(struct bio *bio, gfp_t gfp_mask, struct bio_set *bs)
 
 	__bio_clone_fast(b, bio);
 
+	bio_crypt_clone(b, bio, gfp_mask);
+
 	if (bio_integrity(bio)) {
 		int ret;
 
 		ret = bio_integrity_clone(b, bio, gfp_mask);
-
 		if (ret < 0) {
 			bio_put(b);
 			return NULL;
@@ -1046,6 +1050,7 @@ void bio_advance(struct bio *bio, unsigned bytes)
 	if (bio_integrity(bio))
 		bio_integrity_advance(bio, bytes);
 
+	bio_crypt_advance(bio, bytes);
 	bio_advance_iter(bio, &bio->bi_iter, bytes);
 }
 EXPORT_SYMBOL(bio_advance);
diff --git a/block/blk-core.c b/block/blk-core.c
index 089e890ab208..1d8f3fa5cb6c 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -38,6 +38,7 @@
 #include <linux/debugfs.h>
 #include <linux/bpf.h>
 #include <linux/psi.h>
+#include <linux/blk-crypto.h>
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/block.h>
@@ -120,6 +121,9 @@ void blk_rq_init(struct request_queue *q, struct request *rq)
 	rq->start_time_ns = ktime_get_ns();
 	rq->part = NULL;
 	refcount_set(&rq->ref, 1);
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	rq->rq_crypt_ctx.keyslot = -EINVAL;
+#endif
 }
 EXPORT_SYMBOL(blk_rq_init);
 
@@ -617,6 +621,8 @@ bool bio_attempt_back_merge(struct request *req, struct bio *bio,
 	req->biotail = bio;
 	req->__data_len += bio->bi_iter.bi_size;
 
+	bio_crypt_free_ctx(bio);
+
 	blk_account_io_start(req, false);
 	return true;
 }
@@ -641,6 +647,8 @@ bool bio_attempt_front_merge(struct request *req, struct bio *bio,
 	req->__sector = bio->bi_iter.bi_sector;
 	req->__data_len += bio->bi_iter.bi_size;
 
+	blk_crypto_rq_bio_prep(req, bio);
+
 	blk_account_io_start(req, false);
 	return true;
 }
@@ -1258,6 +1266,9 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
 	    should_fail_request(&rq->rq_disk->part0, blk_rq_bytes(rq)))
 		return BLK_STS_IOERR;
 
+	if (blk_crypto_insert_cloned_request(q, rq))
+		return BLK_STS_IOERR;
+
 	if (blk_queue_io_stat(q))
 		blk_account_io_start(rq, true);
 
@@ -1646,6 +1657,8 @@ int blk_rq_prep_clone(struct request *rq, struct request *rq_src,
 
 	__blk_rq_prep_clone(rq, rq_src);
 
+	blk_crypto_rq_prep_clone(rq, rq_src);
+
 	return 0;
 
 free_and_out:
@@ -1813,5 +1826,8 @@ int __init blk_dev_init(void)
 	blk_debugfs_root = debugfs_create_dir("block", NULL);
 #endif
 
+	if (bio_crypt_ctx_init() < 0)
+		panic("Failed to allocate mem for bio crypt ctxs\n");
+
 	return 0;
 }
diff --git a/block/blk-crypto-internal.h b/block/blk-crypto-internal.h
new file mode 100644
index 000000000000..2452b5dee140
--- /dev/null
+++ b/block/blk-crypto-internal.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#ifndef __LINUX_BLK_CRYPTO_INTERNAL_H
+#define __LINUX_BLK_CRYPTO_INTERNAL_H
+
+#include <linux/bio.h>
+#include <linux/blkdev.h>
+
+/* Represents a crypto mode supported by blk-crypto  */
+struct blk_crypto_mode {
+	const char *cipher_str; /* crypto API name (for fallback case) */
+	unsigned int keysize; /* key size in bytes */
+	unsigned int ivsize; /* iv size in bytes */
+};
+
+#endif /* __LINUX_BLK_CRYPTO_INTERNAL_H */
diff --git a/block/blk-crypto.c b/block/blk-crypto.c
new file mode 100644
index 000000000000..b10b01c83907
--- /dev/null
+++ b/block/blk-crypto.c
@@ -0,0 +1,412 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#define pr_fmt(fmt) "blk-crypto: " fmt
+
+#include <linux/bio.h>
+#include <linux/blkdev.h>
+#include <linux/keyslot-manager.h>
+#include <linux/module.h>
+#include <linux/random.h>
+#include <linux/siphash.h>
+#include <linux/slab.h>
+
+#include "blk-crypto-internal.h"
+
+const struct blk_crypto_mode blk_crypto_modes[] = {
+	[BLK_ENCRYPTION_MODE_AES_256_XTS] = {
+		.cipher_str = "xts(aes)",
+		.keysize = 64,
+		.ivsize = 16,
+	},
+	[BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV] = {
+		.cipher_str = "essiv(cbc(aes),sha256)",
+		.keysize = 16,
+		.ivsize = 16,
+	},
+	[BLK_ENCRYPTION_MODE_ADIANTUM] = {
+		.cipher_str = "adiantum(xchacha12,aes)",
+		.keysize = 32,
+		.ivsize = 32,
+	},
+};
+
+static int num_prealloc_crypt_ctxs = 128;
+
+module_param(num_prealloc_crypt_ctxs, int, 0444);
+MODULE_PARM_DESC(num_prealloc_crypt_ctxs,
+		"Number of bio crypto contexts to preallocate");
+
+static struct kmem_cache *bio_crypt_ctx_cache;
+static mempool_t *bio_crypt_ctx_pool;
+
+int __init bio_crypt_ctx_init(void)
+{
+	size_t i;
+
+	bio_crypt_ctx_cache = KMEM_CACHE(bio_crypt_ctx, 0);
+	if (!bio_crypt_ctx_cache)
+		return -ENOMEM;
+
+	bio_crypt_ctx_pool = mempool_create_slab_pool(num_prealloc_crypt_ctxs,
+						      bio_crypt_ctx_cache);
+	if (!bio_crypt_ctx_pool)
+		return -ENOMEM;
+
+	/* This is assumed in various places. */
+	BUILD_BUG_ON(BLK_ENCRYPTION_MODE_INVALID != 0);
+
+	/* Sanity check that no algorithm exceeds the defined limits. */
+	for (i = 0; i < BLK_ENCRYPTION_MODE_MAX; i++) {
+		BUG_ON(blk_crypto_modes[i].keysize > BLK_CRYPTO_MAX_KEY_SIZE);
+		BUG_ON(blk_crypto_modes[i].ivsize > BLK_CRYPTO_MAX_IV_SIZE);
+	}
+
+	return 0;
+}
+
+struct bio_crypt_ctx *bio_crypt_alloc_ctx(gfp_t gfp_mask)
+{
+	return mempool_alloc(bio_crypt_ctx_pool, gfp_mask);
+}
+
+void bio_crypt_free_ctx(struct bio *bio)
+{
+	mempool_free(bio->bi_crypt_context, bio_crypt_ctx_pool);
+	bio->bi_crypt_context = NULL;
+}
+
+void bio_crypt_clone(struct bio *dst, struct bio *src, gfp_t gfp_mask)
+{
+	const struct bio_crypt_ctx *src_bc = src->bi_crypt_context;
+
+	if (!src_bc)
+		return;
+
+	dst->bi_crypt_context = bio_crypt_alloc_ctx(gfp_mask);
+	*dst->bi_crypt_context = *src_bc;
+}
+EXPORT_SYMBOL_GPL(bio_crypt_clone);
+
+/*
+ * Checks that two bio crypt contexts are compatible - i.e. that
+ * they are mergeable except for data_unit_num continuity.
+ */
+static bool bio_crypt_ctx_compatible(struct bio_crypt_ctx *bc1,
+				     struct bio_crypt_ctx *bc2)
+{
+	if (!bc1)
+		return !bc2;
+
+	return bc2 && bc1->bc_key == bc2->bc_key;
+}
+
+bool bio_crypt_rq_ctx_compatible(struct request *rq, struct bio *bio)
+{
+	return bio_crypt_ctx_compatible(rq->rq_crypt_ctx.bc,
+					bio->bi_crypt_context);
+}
+
+/*
+ * Checks that two bio crypt contexts are compatible, and also
+ * that their data_unit_nums are continuous (and can hence be merged)
+ * in the order b_1 followed by b_2.
+ */
+static bool bio_crypt_ctx_mergeable(struct bio_crypt_ctx *bc1,
+				    unsigned int bc1_bytes,
+				    struct bio_crypt_ctx *bc2)
+{
+	if (!bio_crypt_ctx_compatible(bc1, bc2))
+		return false;
+
+	return !bc1 || bio_crypt_dun_is_contiguous(bc1, bc1_bytes, bc2->bc_dun);
+}
+
+bool bio_crypt_ctx_back_mergeable(struct request *req, struct bio *bio)
+{
+	return bio_crypt_ctx_mergeable(req->rq_crypt_ctx.bc, blk_rq_bytes(req),
+				       bio->bi_crypt_context);
+}
+
+bool bio_crypt_ctx_front_mergeable(struct request *req, struct bio *bio)
+{
+	return bio_crypt_ctx_mergeable(bio->bi_crypt_context,
+				       bio->bi_iter.bi_size,
+				       req->rq_crypt_ctx.bc);
+}
+
+bool bio_crypt_ctx_merge_rq(struct request *req, struct request *next)
+{
+	return bio_crypt_ctx_mergeable(req->rq_crypt_ctx.bc, blk_rq_bytes(req),
+				       next->rq_crypt_ctx.bc);
+}
+
+/* Check that all I/O segments are data unit aligned */
+static int bio_crypt_check_alignment(struct bio *bio)
+{
+	const unsigned int data_unit_size =
+				bio->bi_crypt_context->bc_key->data_unit_size;
+	struct bvec_iter iter;
+	struct bio_vec bv;
+
+	bio_for_each_segment(bv, bio, iter) {
+		if (!IS_ALIGNED(bv.bv_len | bv.bv_offset, data_unit_size))
+			return -EIO;
+	}
+	return 0;
+}
+
+/* Return: 0 on success, negative on error */
+int rq_crypt_ctx_acquire_keyslot(struct bio_crypt_ctx *bc,
+				  struct keyslot_manager *ksm,
+				  struct rq_crypt_ctx *rc)
+{
+	rc->keyslot = blk_ksm_get_slot_for_key(ksm, bc->bc_key);
+	return rc->keyslot >= 0 ? 0 : rc->keyslot;
+}
+
+void rq_crypt_ctx_release_keyslot(struct keyslot_manager *ksm,
+				  struct rq_crypt_ctx *rc)
+{
+	if (rc->keyslot >= 0)
+		blk_ksm_put_slot(ksm, rc->keyslot);
+	rc->keyslot = -EINVAL;
+}
+
+/**
+ * blk_crypto_init_request - Initializes the request's rq_crypt_ctx based on the
+ *			     bio to be added to the request, and prepares it for
+ *			     hardware inline encryption.
+ *
+ * @rq: The request to init
+ * @q: The request queue this request will be submitted to
+ * @bio: The bio that will (eventually) be added to @rq.
+ *
+ * Initializes the request's rq_crypt_ctx to appropriate default values.
+ * Then, if the bio doesn't have an encryption context, there's nothing to do.
+ * Otherwise, we're relying on the underlying device's inline encryption HW for
+ * en/decryption, so get a keyslot for the bio crypt ctx.
+ *
+ * Return: 0 on success, and negative error code otherwise.
+ */
+int blk_crypto_init_request(struct request *rq, struct request_queue *q,
+			    struct bio *bio)
+{
+	struct rq_crypt_ctx *rc = &rq->rq_crypt_ctx;
+	struct bio_crypt_ctx *bc;
+	int err;
+
+	rc->bc = NULL;
+	rc->keyslot = -EINVAL;
+
+	if (!bio)
+		return 0;
+
+	bc = bio->bi_crypt_context;
+	if (!bc)
+		return 0;
+
+	err = rq_crypt_ctx_acquire_keyslot(bc, q->ksm, rc);
+	if (err)
+		pr_warn_once("Failed to acquire keyslot for %s (err=%d).\n",
+			     bio->bi_disk->disk_name, err);
+	return err;
+}
+
+/**
+ * blk_crypto_free_request - Uninitialize the rq_crypt_ctx of a request.
+ *
+ * @rq: The request whose rq_crypt_ctx to uninitialize.
+ *
+ * Completely uninitializes the rq_crypt_ctx of a request. If a keyslot has been
+ * programmed into some inline encryption hardware, that keyslot is released.
+ * The bio_crypt_ctx pointed to by the rq_crypt_ctx is also freed, if any.
+ */
+void blk_crypto_free_request(struct request *rq)
+{
+	struct rq_crypt_ctx *rc = &rq->rq_crypt_ctx;
+
+	rq_crypt_ctx_release_keyslot(rq->q->ksm, rc);
+	mempool_free(rc->bc, bio_crypt_ctx_pool);
+	rc->bc = NULL;
+}
+
+/**
+ * blk_crypto_bio_prep - Prepare bio for inline encryption
+ *
+ * @bio_ptr: pointer to original bio pointer
+ *
+ * Succeeds if the bio doesn't have inline encryption enabled or if the bio
+ * crypt context provided for the bio is supported by the underlying device's
+ * inline encryption hardware. Ends the bio with error otherwise.
+ *
+ * Return: 0 on success; nonzero on error (and bio_endio() will have been called
+ *	   so bio submission should abort).
+ */
+int blk_crypto_bio_prep(struct bio **bio_ptr)
+{
+	struct bio *bio = *bio_ptr;
+	struct bio_crypt_ctx *bc = bio->bi_crypt_context;
+	int err;
+
+	if (!bc)
+		return 0;
+
+	/*
+	 * If bio has no data, just pretend it didn't have an encryption
+	 * context.
+	 */
+	if (!bio_has_data(bio)) {
+		bio_crypt_free_ctx(bio);
+		return 0;
+	}
+
+	err = bio_crypt_check_alignment(bio);
+	if (err)
+		goto fail;
+
+	/* Success if device supports the encryption context */
+	if (blk_ksm_crypto_mode_supported(bio->bi_disk->queue->ksm, bc->bc_key))
+		return 0;
+	err = -EOPNOTSUPP;
+fail:
+	bio->bi_status = BLK_STS_IOERR;
+	bio_endio(*bio_ptr);
+	return err;
+}
+
+/**
+ * blk_crypto_rq_bio_prep - Prepare a request when its first bio is inserted
+ *
+ * @rq: The request to prepare
+ * @bio: The first bio being inserted into the request
+ *
+ * Frees the bio crypt context in the request's old rq_crypt_ctx, if any, and
+ * moves the bio crypt context of the bio into the request's rq_crypt_ctx.
+ */
+void blk_crypto_rq_bio_prep(struct request *rq, struct bio *bio)
+{
+	mempool_free(rq->rq_crypt_ctx.bc, bio_crypt_ctx_pool);
+	rq->rq_crypt_ctx.bc = bio->bi_crypt_context;
+	bio->bi_crypt_context = NULL;
+}
+
+void blk_crypto_rq_prep_clone(struct request *dst, struct request *src)
+{
+	dst->rq_crypt_ctx.bc = NULL;
+	dst->rq_crypt_ctx.keyslot = -EINVAL;
+
+	if (src->rq_crypt_ctx.keyslot < 0) {
+		/* src doesn't have any crypto info, so nothing to do */
+		return;
+	}
+
+	dst->rq_crypt_ctx.bc = src->rq_crypt_ctx.bc;
+}
+
+/**
+ * blk_crypto_insert_cloned_request - Prepare a cloned request to be inserted
+ *				      into a request queue.
+ * @q: the queue to submit the request
+ * @rq: the request being queued
+ *
+ * Return: 0 on success, nonzero on error.
+ */
+int blk_crypto_insert_cloned_request(struct request_queue *q,
+				     struct request *rq)
+{
+	int err;
+
+	if (!rq->bio)
+		return 0;
+
+	/*
+	 * Pretend that the bio had the encryption ctx before calling
+	 * blk_crypto_init_request
+	 */
+	rq->bio->bi_crypt_context = rq->rq_crypt_ctx.bc;
+	err = blk_crypto_init_request(rq, q, rq->bio);
+	/*
+	 * blk_crypto_init_request *always* clears the rq->rq_crypt_ctx to
+	 * defaults, so regardless of what err is, restore the rq->rq_crypt_ctx.
+	 */
+	blk_crypto_rq_bio_prep(rq, rq->bio);
+
+	return err;
+}
+
+/**
+ * blk_crypto_init_key() - Prepare a key for use with blk-crypto
+ * @blk_key: Pointer to the blk_crypto_key to initialize.
+ * @raw_key: Pointer to the raw key. Must be the correct length for the chosen
+ *	     @crypto_mode; see blk_crypto_modes[].
+ * @crypto_mode: identifier for the encryption algorithm to use
+ * @blk_crypto_dun_bytes: number of bytes that will be used to specify the DUN
+ *			  when this key is used
+ * @data_unit_size: the data unit size to use for en/decryption
+ *
+ * Return: 0 on success, -errno on failure.  The caller is responsible for
+ *	   zeroizing both blk_key and raw_key when done with them.
+ */
+int blk_crypto_init_key(struct blk_crypto_key *blk_key, const u8 *raw_key,
+			enum blk_crypto_mode_num crypto_mode,
+			unsigned int blk_crypto_dun_bytes,
+			unsigned int data_unit_size)
+{
+	const struct blk_crypto_mode *mode;
+	static siphash_key_t hash_key;
+
+	memset(blk_key, 0, sizeof(*blk_key));
+
+	if (crypto_mode >= ARRAY_SIZE(blk_crypto_modes))
+		return -EINVAL;
+
+	mode = &blk_crypto_modes[crypto_mode];
+	if (mode->keysize == 0)
+		return -EINVAL;
+
+	if (!is_power_of_2(data_unit_size))
+		return -EINVAL;
+
+	blk_key->crypto_mode = crypto_mode;
+	blk_key->dun_bytes = blk_crypto_dun_bytes;
+	blk_key->data_unit_size = data_unit_size;
+	blk_key->data_unit_size_bits = ilog2(data_unit_size);
+	blk_key->size = mode->keysize;
+	memcpy(blk_key->raw, raw_key, mode->keysize);
+
+	/*
+	 * The keyslot manager uses the SipHash of the key to implement O(1) key
+	 * lookups while avoiding leaking information about the keys.  It's
+	 * precomputed here so that it only needs to be computed once per key.
+	 */
+	get_random_once(&hash_key, sizeof(hash_key));
+	blk_key->hash = siphash(raw_key, mode->keysize, &hash_key);
+
+	return 0;
+}
+
+/**
+ * blk_crypto_evict_key() - Evict a key from any inline encryption hardware
+ *			    it may have been programmed into
+ * @q: The request queue whose keyslot manager this key might have been
+ *     programmed into
+ * @key: The key to evict
+ *
+ * Upper layers (filesystems) should call this function to ensure that a key
+ * is evicted from hardware that it might have been programmed into. This
+ * will call blk_ksm_evict_key on the queue's keyslot manager, if one
+ * exists, and supports the crypto algorithm with the specified data unit size.
+ *
+ * Return: 0 on success or if key is not present in the q's ksm, -err on error.
+ */
+int blk_crypto_evict_key(struct request_queue *q,
+			 const struct blk_crypto_key *key)
+{
+	if (q->ksm && blk_ksm_crypto_mode_supported(q->ksm, key))
+		return blk_ksm_evict_key(q->ksm, key);
+
+	return 0;
+}
diff --git a/block/blk-map.c b/block/blk-map.c
index b0790268ed9d..4484e37d316e 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -41,6 +41,7 @@ int blk_rq_append_bio(struct request *rq, struct bio **bio)
 		rq->biotail->bi_next = *bio;
 		rq->biotail = *bio;
 		rq->__data_len += (*bio)->bi_iter.bi_size;
+		bio_crypt_free_ctx(*bio);
 	}
 
 	return 0;
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 1534ed736363..a0c24b6e0eb3 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -596,6 +596,8 @@ int ll_back_merge_fn(struct request *req, struct bio *bio, unsigned int nr_segs)
 	if (blk_integrity_rq(req) &&
 	    integrity_req_gap_back_merge(req, bio))
 		return 0;
+	if (!bio_crypt_ctx_back_mergeable(req, bio))
+		return 0;
 	if (blk_rq_sectors(req) + bio_sectors(bio) >
 	    blk_rq_get_max_sectors(req, blk_rq_pos(req))) {
 		req_set_nomerge(req->q, req);
@@ -612,6 +614,8 @@ int ll_front_merge_fn(struct request *req, struct bio *bio, unsigned int nr_segs
 	if (blk_integrity_rq(req) &&
 	    integrity_req_gap_front_merge(req, bio))
 		return 0;
+	if (!bio_crypt_ctx_front_mergeable(req, bio))
+		return 0;
 	if (blk_rq_sectors(req) + bio_sectors(bio) >
 	    blk_rq_get_max_sectors(req, bio->bi_iter.bi_sector)) {
 		req_set_nomerge(req->q, req);
@@ -661,6 +665,9 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
 	if (blk_integrity_merge_rq(q, req, next) == false)
 		return 0;
 
+	if (!bio_crypt_ctx_merge_rq(req, next))
+		return 0;
+
 	/* Merge is OK... */
 	req->nr_phys_segments = total_phys_segments;
 	return 1;
@@ -885,6 +892,10 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
 	if (blk_integrity_merge_bio(rq->q, rq, bio) == false)
 		return false;
 
+	/* Only merge if the crypt contexts are compatible */
+	if (!bio_crypt_rq_ctx_compatible(rq, bio))
+		return false;
+
 	/* must be using the same buffer */
 	if (req_op(rq) == REQ_OP_WRITE_SAME &&
 	    !blk_write_same_mergeable(rq->bio, bio))
diff --git a/block/blk-mq.c b/block/blk-mq.c
index a12b1763508d..6b437637d029 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -26,6 +26,7 @@
 #include <linux/delay.h>
 #include <linux/crash_dump.h>
 #include <linux/prefetch.h>
+#include <linux/blk-crypto.h>
 
 #include <trace/events/block.h>
 
@@ -316,6 +317,10 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
 	rq->nr_phys_segments = 0;
 #if defined(CONFIG_BLK_DEV_INTEGRITY)
 	rq->nr_integrity_segments = 0;
+#endif
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	rq->rq_crypt_ctx.bc = NULL;
+	rq->rq_crypt_ctx.keyslot = -EINVAL;
 #endif
 	/* tag was already set */
 	rq->extra_len = 0;
@@ -474,6 +479,7 @@ static void __blk_mq_free_request(struct request *rq)
 	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
 	const int sched_tag = rq->internal_tag;
 
+	blk_crypto_free_request(rq);
 	blk_pm_mark_last_busy(rq);
 	rq->mq_hctx = NULL;
 	if (rq->tag != -1)
@@ -1968,6 +1974,9 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 	unsigned int nr_segs;
 	blk_qc_t cookie;
 
+	if (blk_crypto_bio_prep(&bio))
+		return BLK_QC_T_NONE;
+
 	blk_queue_bounce(q, &bio);
 	__blk_queue_split(q, &bio, &nr_segs);
 
@@ -1998,6 +2007,13 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 
 	cookie = request_to_qc_t(data.hctx, rq);
 
+	if (blk_crypto_init_request(rq, q, bio)) {
+		bio->bi_status = BLK_STS_RESOURCE;
+		bio_endio(bio);
+		blk_mq_end_request(rq, BLK_STS_RESOURCE);
+		return BLK_QC_T_NONE;
+	}
+
 	blk_mq_bio_to_request(rq, bio, nr_segs);
 
 	plug = blk_mq_plug(q, bio);
diff --git a/block/blk.h b/block/blk.h
index 0b8884353f6b..3929900fca02 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -4,6 +4,7 @@
 
 #include <linux/idr.h>
 #include <linux/blk-mq.h>
+#include <linux/blk-crypto.h>
 #include <xen/xen.h>
 #include "blk-mq.h"
 #include "blk-mq-sched.h"
@@ -117,6 +118,8 @@ static inline void blk_rq_bio_prep(struct request *rq, struct bio *bio,
 
 	if (bio->bi_disk)
 		rq->rq_disk = bio->bi_disk;
+
+	blk_crypto_rq_bio_prep(rq, bio);
 }
 
 #ifdef CONFIG_BLK_DEV_INTEGRITY
diff --git a/block/bounce.c b/block/bounce.c
index f8ed677a1bf7..c3aaed070124 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -267,6 +267,8 @@ static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask,
 		break;
 	}
 
+	bio_crypt_clone(bio, bio_src, gfp_mask);
+
 	if (bio_integrity(bio_src)) {
 		int ret;
 
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index b89f07ee2eff..e5b5ffa73619 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1304,6 +1304,8 @@ static int clone_bio(struct dm_target_io *tio, struct bio *bio,
 
 	__bio_clone_fast(clone, bio);
 
+	bio_crypt_clone(clone, bio, GFP_NOIO);
+
 	if (bio_integrity(bio)) {
 		int r;
 
diff --git a/include/linux/blk-crypto.h b/include/linux/blk-crypto.h
index b8d54eca1c0d..f855895770f6 100644
--- a/include/linux/blk-crypto.h
+++ b/include/linux/blk-crypto.h
@@ -6,6 +6,8 @@
 #ifndef __LINUX_BLK_CRYPTO_H
 #define __LINUX_BLK_CRYPTO_H
 
+#include <linux/types.h>
+
 enum blk_crypto_mode_num {
 	BLK_ENCRYPTION_MODE_INVALID,
 	BLK_ENCRYPTION_MODE_AES_256_XTS,
@@ -42,4 +44,213 @@ struct blk_crypto_key {
 	u8 raw[BLK_CRYPTO_MAX_KEY_SIZE];
 };
 
+#define BLK_CRYPTO_MAX_IV_SIZE		32
+#define BLK_CRYPTO_DUN_ARRAY_SIZE	(BLK_CRYPTO_MAX_IV_SIZE/sizeof(u64))
+
+/**
+ * struct bio_crypt_ctx - an inline encryption context
+ * @bc_key: the key, algorithm, and data unit size to use
+ * @bc_dun: the data unit number (starting IV) to use
+ *
+ * A bio_crypt_ctx specifies that the contents of the bio will be encrypted (for
+ * write requests) or decrypted (for read requests) inline by the storage device
+ * or controller.
+ */
+struct bio_crypt_ctx {
+	const struct blk_crypto_key	*bc_key;
+	u64				bc_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+};
+
+#ifdef CONFIG_BLOCK
+
+#include <linux/blk_types.h>
+
+struct request;
+struct request_queue;
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+
+int bio_crypt_ctx_init(void);
+
+struct bio_crypt_ctx *bio_crypt_alloc_ctx(gfp_t gfp_mask);
+
+void bio_crypt_free_ctx(struct bio *bio);
+
+static inline bool bio_has_crypt_ctx(struct bio *bio)
+{
+	return bio->bi_crypt_context;
+}
+
+void bio_crypt_clone(struct bio *dst, struct bio *src, gfp_t gfp_mask);
+
+static inline void bio_crypt_set_ctx(struct bio *bio,
+				     const struct blk_crypto_key *key,
+				     const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
+				     gfp_t gfp_mask)
+{
+	struct bio_crypt_ctx *bc = bio_crypt_alloc_ctx(gfp_mask);
+
+	bc->bc_key = key;
+	memcpy(bc->bc_dun, dun, sizeof(bc->bc_dun));
+
+	bio->bi_crypt_context = bc;
+}
+
+static inline bool bio_crypt_dun_is_contiguous(const struct bio_crypt_ctx *bc,
+					       unsigned int bytes,
+					u64 next_dun[BLK_CRYPTO_DUN_ARRAY_SIZE])
+{
+	int i = 0;
+	unsigned int inc = bytes >> bc->bc_key->data_unit_size_bits;
+
+	while (i < BLK_CRYPTO_DUN_ARRAY_SIZE) {
+		if (bc->bc_dun[i] + inc != next_dun[i])
+			return false;
+		inc = ((bc->bc_dun[i] + inc)  < inc);
+		i++;
+	}
+
+	return true;
+}
+
+static inline void bio_crypt_dun_increment(u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
+					   unsigned int inc)
+{
+	int i = 0;
+
+	while (inc && i < BLK_CRYPTO_DUN_ARRAY_SIZE) {
+		dun[i] += inc;
+		inc = (dun[i] < inc);
+		i++;
+	}
+}
+
+static inline void bio_crypt_advance(struct bio *bio, unsigned int bytes)
+{
+	struct bio_crypt_ctx *bc = bio->bi_crypt_context;
+
+	if (!bc)
+		return;
+
+	bio_crypt_dun_increment(bc->bc_dun,
+				bytes >> bc->bc_key->data_unit_size_bits);
+}
+
+bool bio_crypt_rq_ctx_compatible(struct request *rq, struct bio *bio);
+
+bool bio_crypt_ctx_front_mergeable(struct request *req, struct bio *bio);
+
+bool bio_crypt_ctx_back_mergeable(struct request *req, struct bio *bio);
+
+bool bio_crypt_ctx_merge_rq(struct request *req, struct request *next);
+
+void blk_crypto_bio_back_merge(struct request *req, struct bio *bio);
+
+void blk_crypto_bio_front_merge(struct request *req, struct bio *bio);
+
+void blk_crypto_free_request(struct request *rq);
+
+int blk_crypto_init_request(struct request *rq, struct request_queue *q,
+			    struct bio *bio);
+
+int blk_crypto_bio_prep(struct bio **bio_ptr);
+
+void blk_crypto_rq_bio_prep(struct request *rq, struct bio *bio);
+
+void blk_crypto_rq_prep_clone(struct request *dst, struct request *src);
+
+int blk_crypto_insert_cloned_request(struct request_queue *q,
+				     struct request *rq);
+
+int blk_crypto_init_key(struct blk_crypto_key *blk_key, const u8 *raw_key,
+			enum blk_crypto_mode_num crypto_mode,
+			unsigned int blk_crypto_dun_bytes,
+			unsigned int data_unit_size);
+
+int blk_crypto_evict_key(struct request_queue *q,
+			 const struct blk_crypto_key *key);
+
+#else /* CONFIG_BLK_INLINE_ENCRYPTION */
+
+static inline int bio_crypt_ctx_init(void)
+{
+	return 0;
+}
+
+static inline bool bio_has_crypt_ctx(struct bio *bio)
+{
+	return false;
+}
+
+static inline void bio_crypt_clone(struct bio *dst, struct bio *src,
+				   gfp_t gfp_mask) { }
+
+static inline void bio_crypt_free_ctx(struct bio *bio) { }
+
+static inline void bio_crypt_advance(struct bio *bio, unsigned int bytes) { }
+
+static inline bool bio_crypt_rq_ctx_compatible(struct request *rq,
+					       struct bio *bio)
+{
+	return true;
+}
+
+static inline bool bio_crypt_ctx_front_mergeable(struct request *req,
+						 struct bio *bio)
+{
+	return true;
+}
+
+static inline bool bio_crypt_ctx_back_mergeable(struct request *req,
+						struct bio *bio)
+{
+	return true;
+}
+
+static inline bool bio_crypt_ctx_merge_rq(struct request *req,
+					  struct request *next)
+{
+	return true;
+}
+
+static inline void blk_crypto_bio_back_merge(struct request *req,
+					     struct bio *bio) { }
+
+static inline void blk_crypto_bio_front_merge(struct request *req,
+					      struct bio *bio) { }
+
+static inline void blk_crypto_free_request(struct request *rq) { }
+
+static inline int blk_crypto_init_request(struct request *rq,
+					  struct request_queue *q,
+					  struct bio *bio)
+{
+	return 0;
+}
+
+static inline int blk_crypto_bio_prep(struct bio **bio_ptr)
+{
+	return 0;
+}
+
+static inline void blk_crypto_rq_bio_prep(struct request *rq, struct bio *bio)
+{ }
+
+static inline void blk_crypto_rq_prep_clone(struct request *dst,
+					    struct request *src) { }
+
+static inline int blk_crypto_insert_cloned_request(struct request_queue *q,
+						   struct request *rq)
+{
+	return 0;
+}
+
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
+
+#endif /* CONFIG_BLOCK */
+
 #endif /* __LINUX_BLK_CRYPTO_H */
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 70254ae11769..1996689c51d3 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -18,6 +18,7 @@ struct block_device;
 struct io_context;
 struct cgroup_subsys_state;
 typedef void (bio_end_io_t) (struct bio *);
+struct bio_crypt_ctx;
 
 /*
  * Block error status values.  See block/blk-core:blk_errors for the details.
@@ -173,6 +174,11 @@ struct bio {
 	u64			bi_iocost_cost;
 #endif
 #endif
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	struct bio_crypt_ctx	*bi_crypt_context;
+#endif
+
 	union {
 #if defined(CONFIG_BLK_DEV_INTEGRITY)
 		struct bio_integrity_payload *bi_integrity; /* data integrity */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 8881b25ef58b..a90332299f29 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -27,6 +27,7 @@
 #include <linux/percpu-refcount.h>
 #include <linux/scatterlist.h>
 #include <linux/blkzoned.h>
+#include <linux/blk-crypto.h>
 
 struct module;
 struct scsi_ioctl_command;
@@ -124,6 +125,11 @@ enum mq_rq_state {
 	MQ_RQ_COMPLETE		= 2,
 };
 
+struct rq_crypt_ctx {
+	struct bio_crypt_ctx *bc;
+	int keyslot;
+};
+
 /*
  * Try to put the fields that are referenced together in the same cacheline.
  *
@@ -224,6 +230,10 @@ struct request {
 	unsigned short nr_integrity_segments;
 #endif
 
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	struct rq_crypt_ctx rq_crypt_ctx;
+#endif
+
 	unsigned short write_hint;
 	unsigned short ioprio;
 
-- 
2.25.0.265.gbab2e86ba0-goog


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v7 3/9] block: blk-crypto-fallback for Inline Encryption
  2020-02-21 11:50 [PATCH v7 0/9] Inline Encryption Support Satya Tangirala
  2020-02-21 11:50 ` [PATCH v7 1/9] block: Keyslot Manager for Inline Encryption Satya Tangirala
  2020-02-21 11:50 ` [PATCH v7 2/9] block: Inline encryption support for blk-mq Satya Tangirala
@ 2020-02-21 11:50 ` Satya Tangirala
  2020-02-21 16:51   ` Randy Dunlap
                     ` (3 more replies)
  2020-02-21 11:50 ` [PATCH v7 4/9] scsi: ufs: UFS driver v2.1 spec crypto additions Satya Tangirala
                   ` (6 subsequent siblings)
  9 siblings, 4 replies; 40+ messages in thread
From: Satya Tangirala @ 2020-02-21 11:50 UTC (permalink / raw)
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4
  Cc: Barani Muthukumaran, Kuohong Wang, Kim Boojin, Satya Tangirala

Blk-crypto delegates crypto operations to inline encryption hardware when
available. The separately configurable blk-crypto-fallback contains a
software fallback to the kernel crypto API - when enabled, blk-crypto
will use this fallback for en/decryption when inline encryption hardware is
not available. This lets upper layers not have to worry about whether or
not the underlying device has support for inline encryption before
deciding to specify an encryption context for a bio, and also allows for
testing without actual inline encryption hardware. For more details, refer
to Documentation/block/inline-encryption.rst.

Signed-off-by: Satya Tangirala <satyat@google.com>
---
 Documentation/block/index.rst             |   1 +
 Documentation/block/inline-encryption.rst | 162 ++++++
 block/Kconfig                             |  10 +
 block/Makefile                            |   1 +
 block/bio-integrity.c                     |   2 +-
 block/blk-crypto-fallback.c               | 673 ++++++++++++++++++++++
 block/blk-crypto-internal.h               |  32 +
 block/blk-crypto.c                        |  43 +-
 include/linux/blk-crypto.h                |  17 +-
 include/linux/blk_types.h                 |   6 +
 10 files changed, 938 insertions(+), 9 deletions(-)
 create mode 100644 Documentation/block/inline-encryption.rst
 create mode 100644 block/blk-crypto-fallback.c

diff --git a/Documentation/block/index.rst b/Documentation/block/index.rst
index 3fa7a52fafa4..026addfc69bc 100644
--- a/Documentation/block/index.rst
+++ b/Documentation/block/index.rst
@@ -14,6 +14,7 @@ Block
    cmdline-partition
    data-integrity
    deadline-iosched
+   inline-encryption
    ioprio
    kyber-iosched
    null_blk
diff --git a/Documentation/block/inline-encryption.rst b/Documentation/block/inline-encryption.rst
new file mode 100644
index 000000000000..02abea993975
--- /dev/null
+++ b/Documentation/block/inline-encryption.rst
@@ -0,0 +1,162 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=================
+Inline Encryption
+=================
+
+Background
+==========
+
+Inline encryption hardware sits logically between memory and the disk, and can
+en/decrypt data as it goes in/out of the disk. Inline encryption hardware has a
+fixed number of "keyslots" - slots into which encryption contexts (i.e. the
+encryption key, encryption algorithm, data unit size) can be programmed by the
+kernel at any time. Each request sent to the disk can be tagged with the index
+of a keyslot (and also a data unit number to act as an encryption tweak), and
+the inline encryption hardware will en/decrypt the data in the request with the
+encryption context programmed into that keyslot. This is very different from
+full disk encryption solutions like self-encrypting drives/TCG OPAL/ATA
+Security standards, since with inline encryption, any block on disk could be
+encrypted with any encryption context the kernel chooses.
+
+
+Objective
+=========
+
+We want to support inline encryption (IE) in the kernel.
+To allow for testing, we also want a crypto API fallback when actual
+IE hardware is absent. We also want IE to work with layered devices
+like dm and loopback (i.e. we want to be able to use the IE hardware
+of the underlying devices if present, or else fall back to crypto API
+en/decryption).
+
+
+Constraints and notes
+=====================
+
+- IE hardware has a limited number of "keyslots" that can be programmed
+  with an encryption context (key, algorithm, data unit size, etc.) at any time.
+  One can specify a keyslot in a data request made to the device, and the
+  device will en/decrypt the data using the encryption context programmed into
+  that specified keyslot. When possible, we want to make multiple requests with
+  the same encryption context share the same keyslot.
+
+- We need a way for upper layers like filesystems to specify an encryption
+  context to use for en/decrypting a struct bio, and a device driver (like UFS)
+  needs to be able to use that encryption context when it processes the bio.
+
+- We need a way for device drivers to expose their capabilities in a unified
+  way to the upper layers.
+
+
+Design
+======
+
+We add a pointer to a :c:type:`struct bio_crypt_ctx` to :c:type:`struct bio`
+that can represent an encryption context, because we need to be able to pass
+this encryption context from the FS layer to the device driver to act upon.
+
+While IE hardware works on the notion of keyslots, the FS layer has no
+knowledge of keyslots - it simply wants to specify an encryption context to
+use while en/decrypting a bio.
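+
+For illustration, an upper layer might attach an encryption context to a bio
+roughly as follows. This is a minimal sketch, not code from this series:
+``raw_key``, ``q`` and ``bio`` stand for the raw key bytes, the target device's
+``request_queue`` and an already-built bio, and the DUN width and data unit
+size are arbitrary example values::
+
+    struct blk_crypto_key key;
+    u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE] = { 0 };
+    int err;
+
+    /* Describe the key, algorithm, DUN width and data unit size once. */
+    err = blk_crypto_init_key(&key, raw_key, BLK_ENCRYPTION_MODE_AES_256_XTS,
+                              sizeof(u64), 4096);
+    if (err)
+        return err;
+
+    /* Let the crypto API fallback (described below) prepare for this key. */
+    err = blk_crypto_start_using_key(&key, q);
+    if (err)
+        return err;
+
+    /* Tag the bio with the encryption context and submit it. */
+    bio_crypt_set_ctx(bio, &key, dun, GFP_NOIO);
+    submit_bio(bio);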
+
+We introduce a keyslot manager (KSM) that handles the translation from
+encryption contexts specified by the FS to keyslots on the IE hardware.
+This KSM also serves as the way IE hardware can expose their capabilities to
+upper layers. The generic mode of operation is: each device driver that wants
+to support IE will construct a KSM and set it up in its struct request_queue.
+Upper layers that want to use IE on this device can then use this KSM in
+the device's struct request_queue to translate an encryption context into
+a keyslot. The presence of the KSM in the request queue shall be used to mean
+that the device supports IE.
+
+On the device driver end of the interface, the device driver needs to tell the
+KSM how to actually manipulate the IE hardware in the device to do things like
+programming the crypto key into the IE hardware into a particular keyslot. All
+this is achieved through the :c:type:`struct keyslot_mgmt_ll_ops` that the
+device driver passes to the KSM when creating it.
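+
+As a rough sketch of the driver side (the ``my_*`` names and
+``MY_NUM_KEYSLOTS`` are hypothetical placeholders, not part of this series;
+a real driver would also fill in ``crypto_modes_supported`` and the other
+capability fields of its KSM)::
+
+    static int my_keyslot_program(struct keyslot_manager *ksm,
+                                  const struct blk_crypto_key *key,
+                                  unsigned int slot)
+    {
+        /* Program key->raw into hardware keyslot 'slot'. */
+        return my_hw_program_key(slot, key);
+    }
+
+    static int my_keyslot_evict(struct keyslot_manager *ksm,
+                                const struct blk_crypto_key *key,
+                                unsigned int slot)
+    {
+        /* Clear hardware keyslot 'slot'. */
+        return my_hw_clear_key(slot);
+    }
+
+    static const struct keyslot_mgmt_ll_ops my_ksm_ops = {
+        .keyslot_program = my_keyslot_program,
+        .keyslot_evict   = my_keyslot_evict,
+    };
+
+    /* At probe time, after setting up the request_queue q: */
+    err = blk_ksm_init(&my_ksm, NULL, MY_NUM_KEYSLOTS);
+    if (err)
+        return err;
+    my_ksm.ksm_ll_ops = my_ksm_ops;
+    q->ksm = &my_ksm;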
+
+The KSM uses refcounts to track which keyslots are idle (either they have no
+encryption context programmed, or there are no in-flight struct bios
+referencing that keyslot). When a new encryption context needs a keyslot, it
+tries to find a keyslot that has already been programmed with the same
+encryption context, and if there is no such keyslot, it evicts the least
+recently used idle keyslot and programs the new encryption context into that
+one. If no idle keyslots are available, then the caller will sleep until there
+is at least one.
+
+
+blk-mq changes and blk-crypto-fallback
+======================================
+
+We add a :c:type:`struct rq_crypt_ctx` to :c:type:`struct request`. This field
+holds a pointer to the request's ``bio_crypt_ctx`` and the keyslot that this
+context has been programmed into in the keyslot manager of the
+``request_queue`` that this request is being sent to.
+
+We introduce ``block/blk-crypto-fallback.c``, which allows upper layers to remain
+blissfully unaware of whether or not real inline encryption hardware is present
+underneath.
+
+When a bio is submitted to blk-mq with a target ``request_queue``
+that doesn't support the encryption context specified with the bio, blk-mq
+will en/decrypt the bio with the blk-crypto-fallback.
+
+If the bio is a ``WRITE`` bio, a bounce bio is allocated, and the data in the bio
+is encrypted and stored in the bounce bio - blk-mq will then proceed to process
+bounce bio as if it were not encrypted at all (except when blk-integrity is
+concerned). ``blk-crypto-fallback`` sets the bounce bio's ``bi_end_io`` to an
+internal function that cleans up the bounce bio and ends the original bio.
+
+If the bio is a ``READ`` bio, the bio's ``bi_end_io`` (and also ``bi_private``)
+is saved and overwritten by ``blk-crypto-fallback`` to
+``bio_crypto_fallback_decrypt_bio``.  The bio's ``bi_crypt_context`` is also
+overwritten with ``NULL``, so that to the rest of the stack, the bio looks
+as if it was a regular bio that never had an encryption context specified.
+``bio_crypto_fallback_decrypt_bio`` will decrypt the bio, restore the original
+``bi_end_io`` (and also ``bi_private``) and end the bio again.
+
+If we reach a point when a :c:type:`struct request` needs to be allocated for a
+bio that still has an encryption context, that means that the bio was not
+handled by the ``blk-crypto-fallback``, which means that the underlying inline
+encryption hardware claimed to support the encryption. So in this situation,
+blk-mq tries to program the encryption context into the ``request_queue``'s
+keyslot_manager, and obtain a keyslot, which it stores in its ``rq_crypt_ctx``.
+This keyslot is released when the request is completed.
+
+When a bio is added to a request, the request takes over ownership of the
+``bi_crypt_context`` of the bio - in particular, the request keeps the
+``bi_crypt_context`` of the first bio in its bio-list, and frees the rest
+(blk-mq needs to be careful to maintain this invariant during bio and request
+merges).
+
+To make it possible for inline encryption to work with request queue based
+layered devices, when a request is cloned, its ``rq_crypt_ctx`` is cloned as
+well. When the cloned request is submitted, blk-mq programs the
+``bi_crypt_context`` of the request into the clone's request_queue's keyslot
+manager, and stores the returned keyslot in the clone's ``rq_crypt_ctx``.
+
+
+Layered Devices
+===============
+
+Request queue based layered devices like dm-rq that wish to support IE need to
+create their own keyslot manager for their request queue, and expose whatever
+functionality they choose. When a layered device wants to pass a clone of that
+request to another ``request_queue``, blk-crypto will initialize and prepare the
+clone as necessary - see ``blk_crypto_rq_prep_clone`` and
+``blk_crypto_insert_cloned_request`` in ``blk-crypto.c``.
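+
+Conceptually, preparing and submitting such a clone boils down to the calls
+below (a sketch only; ``clone``, ``src`` and ``child_q`` are placeholder names,
+and in practice these calls are made from the block layer's request cloning
+helpers rather than open-coded by the stacking driver)::
+
+    /* While building the clone of src: */
+    blk_crypto_rq_prep_clone(clone, src);
+
+    /* When handing the clone to the child device's request_queue: */
+    err = blk_crypto_insert_cloned_request(child_q, clone);
+    if (err) {
+        /* The key could not be programmed into child_q's keyslot manager. */
+        goto fail;
+    }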
+
+
+Future Optimizations for layered devices
+========================================
+
+Creating a keyslot manager for a layered device uses up memory for each
+keyslot, and in general, a layered device merely passes the request on to a
+"child" device, so the keyslots in the layered device itself are completely
+unused, and don't need any refcounting or keyslot programming. We can instead
+define a new type of KSM; the "passthrough KSM", that layered devices can use
+to advertise an unlimited number of keyslots, and support for any encryption
+algorithms they choose, while not actually using any memory for each keyslot.
+Another use case for the "passthrough KSM" is for IE devices that do not have a
+limited number of keyslots.
diff --git a/block/Kconfig b/block/Kconfig
index c04a1d500842..0af387623774 100644
--- a/block/Kconfig
+++ b/block/Kconfig
@@ -192,6 +192,16 @@ config BLK_INLINE_ENCRYPTION
 	  block layer handle encryption, so users can take
 	  advantage of inline encryption hardware if present.
 
+config BLK_INLINE_ENCRYPTION_FALLBACK
+	bool "Enable crypto API fallback for blk-crypto"
+	depends on BLK_INLINE_ENCRYPTION
+	select CRYPTO
+	select CRYPTO_SKCIPHER
+	help
+	  Enabling this lets the block layer handle inline encryption
+	  by falling back to the kernel crypto API when inline
+	  encryption hardware is not present.
+
 menu "Partition Types"
 
 source "block/partitions/Kconfig"
diff --git a/block/Makefile b/block/Makefile
index 82f42ca3f769..9464fb6ae423 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -38,3 +38,4 @@ obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o
 obj-$(CONFIG_BLK_SED_OPAL)	+= sed-opal.o
 obj-$(CONFIG_BLK_PM)		+= blk-pm.o
 obj-$(CONFIG_BLK_INLINE_ENCRYPTION)	+= keyslot-manager.o blk-crypto.o
+obj-$(CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK)	+= blk-crypto-fallback.o
diff --git a/block/bio-integrity.c b/block/bio-integrity.c
index bce563031e7c..6f6da02b10ef 100644
--- a/block/bio-integrity.c
+++ b/block/bio-integrity.c
@@ -42,7 +42,7 @@ struct bio_integrity_payload *bio_integrity_alloc(struct bio *bio,
 	struct bio_set *bs = bio->bi_pool;
 	unsigned inline_vecs;
 
-	if (bio_has_crypt_ctx(bio)) {
+	if (bio_has_crypt_ctx(bio) || (bio->bi_opf & REQ_NO_SPECIAL)) {
 		pr_warn("blk-integrity can't be used together with blk-crypto en/decryption.");
 		return ERR_PTR(-EOPNOTSUPP);
 	}
diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
new file mode 100644
index 000000000000..964855b3b29e
--- /dev/null
+++ b/block/blk-crypto-fallback.c
@@ -0,0 +1,673 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2019 Google LLC
+ */
+
+/*
+ * Refer to Documentation/block/inline-encryption.rst for detailed explanation.
+ */
+
+#define pr_fmt(fmt) "blk-crypto-fallback: " fmt
+
+#include <crypto/skcipher.h>
+#include <linux/blk-cgroup.h>
+#include <linux/blk-crypto.h>
+#include <linux/crypto.h>
+#include <linux/keyslot-manager.h>
+#include <linux/mempool.h>
+#include <linux/module.h>
+#include <linux/random.h>
+
+#include "blk-crypto-internal.h"
+
+static unsigned int num_prealloc_bounce_pg = 32;
+module_param(num_prealloc_bounce_pg, uint, 0);
+MODULE_PARM_DESC(num_prealloc_bounce_pg,
+		 "Number of preallocated bounce pages for the blk-crypto crypto API fallback");
+
+static unsigned int blk_crypto_num_keyslots = 100;
+module_param_named(num_keyslots, blk_crypto_num_keyslots, uint, 0);
+MODULE_PARM_DESC(num_keyslots,
+		 "Number of keyslots for the blk-crypto crypto API fallback");
+
+static unsigned int num_prealloc_fallback_crypt_ctxs = 128;
+module_param(num_prealloc_fallback_crypt_ctxs, uint, 0);
+MODULE_PARM_DESC(num_prealloc_fallback_crypt_ctxs,
+		 "Number of preallocated bio fallback crypto contexts for blk-crypto to use during crypto API fallback");
+
+struct bio_fallback_crypt_ctx {
+	struct bio_crypt_ctx crypt_ctx;
+	/*
+	 * Copy of the bvec_iter when this bio was submitted.
+	 * We only want to en/decrypt the part of the bio as described by the
+	 * bvec_iter upon submission because bio might be split before being
+	 * resubmitted
+	 */
+	struct bvec_iter crypt_iter;
+	u64 fallback_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+	union {
+		struct {
+			struct work_struct work;
+			struct bio *bio;
+		};
+		struct {
+			void *bi_private_orig;
+			bio_end_io_t *bi_end_io_orig;
+		};
+	};
+};
+
+static struct kmem_cache *bio_fallback_crypt_ctx_cache;
+static mempool_t *bio_fallback_crypt_ctx_pool;
+
+/*
+ * Allocating a crypto tfm during I/O can deadlock, so we have to preallocate
+ * all of a mode's tfms when that mode starts being used. Since each mode may
+ * need all the keyslots at some point, each mode needs its own tfm for each
+ * keyslot; thus, a keyslot may contain tfms for multiple modes.  However, to
+ * match the behavior of real inline encryption hardware (which only supports a
+ * single encryption context per keyslot), we only allow one tfm per keyslot to
+ * be used at a time - the rest of the unused tfms have their keys cleared.
+ */
+static DEFINE_MUTEX(tfms_init_lock);
+static bool tfms_inited[BLK_ENCRYPTION_MODE_MAX];
+
+static struct blk_crypto_keyslot {
+	enum blk_crypto_mode_num crypto_mode;
+	struct crypto_skcipher *tfms[BLK_ENCRYPTION_MODE_MAX];
+} *blk_crypto_keyslots;
+
+static struct keyslot_manager blk_crypto_ksm;
+static struct workqueue_struct *blk_crypto_wq;
+static mempool_t *blk_crypto_bounce_page_pool;
+
+/*
+ * This is the key we set when evicting a keyslot. This *should* be the all 0's
+ * key, but AES-XTS rejects that key, so we use some random bytes instead.
+ */
+static u8 blank_key[BLK_CRYPTO_MAX_KEY_SIZE];
+
+static void blk_crypto_evict_keyslot(unsigned int slot)
+{
+	struct blk_crypto_keyslot *slotp = &blk_crypto_keyslots[slot];
+	enum blk_crypto_mode_num crypto_mode = slotp->crypto_mode;
+	int err;
+
+	WARN_ON(slotp->crypto_mode == BLK_ENCRYPTION_MODE_INVALID);
+
+	/* Clear the key in the skcipher */
+	err = crypto_skcipher_setkey(slotp->tfms[crypto_mode], blank_key,
+				     blk_crypto_modes[crypto_mode].keysize);
+	WARN_ON(err);
+	slotp->crypto_mode = BLK_ENCRYPTION_MODE_INVALID;
+}
+
+static int blk_crypto_keyslot_program(struct keyslot_manager *ksm,
+				      const struct blk_crypto_key *key,
+				      unsigned int slot)
+{
+	struct blk_crypto_keyslot *slotp = &blk_crypto_keyslots[slot];
+	const enum blk_crypto_mode_num crypto_mode = key->crypto_mode;
+	int err;
+
+	if (crypto_mode != slotp->crypto_mode &&
+	    slotp->crypto_mode != BLK_ENCRYPTION_MODE_INVALID)
+		blk_crypto_evict_keyslot(slot);
+
+	slotp->crypto_mode = crypto_mode;
+	err = crypto_skcipher_setkey(slotp->tfms[crypto_mode], key->raw,
+				     key->size);
+	if (err) {
+		blk_crypto_evict_keyslot(slot);
+		return err;
+	}
+	return 0;
+}
+
+static int blk_crypto_keyslot_evict(struct keyslot_manager *ksm,
+				    const struct blk_crypto_key *key,
+				    unsigned int slot)
+{
+	blk_crypto_evict_keyslot(slot);
+	return 0;
+}
+
+/*
+ * The crypto API fallback KSM ops - only used for a bio when it specifies a
+ * blk_crypto_key that was not supported by the device's inline encryption
+ * hardware.
+ */
+static const struct keyslot_mgmt_ll_ops blk_crypto_ksm_ll_ops = {
+	.keyslot_program	= blk_crypto_keyslot_program,
+	.keyslot_evict		= blk_crypto_keyslot_evict,
+};
+
+static void blk_crypto_fallback_encrypt_endio(struct bio *enc_bio)
+{
+	struct bio *src_bio = enc_bio->bi_private;
+	int i;
+
+	for (i = 0; i < enc_bio->bi_vcnt; i++)
+		mempool_free(enc_bio->bi_io_vec[i].bv_page,
+			     blk_crypto_bounce_page_pool);
+
+	src_bio->bi_status = enc_bio->bi_status;
+
+	bio_put(enc_bio);
+	bio_endio(src_bio);
+}
+
+static struct bio *blk_crypto_clone_bio(struct bio *bio_src)
+{
+	struct bvec_iter iter;
+	struct bio_vec bv;
+	struct bio *bio;
+
+	BUG_ON(bio_integrity(bio_src));
+
+	bio = bio_alloc_bioset(GFP_NOIO, bio_segments(bio_src), NULL);
+	if (!bio)
+		return NULL;
+	bio->bi_disk		= bio_src->bi_disk;
+	bio->bi_opf		= bio_src->bi_opf;
+	bio->bi_ioprio		= bio_src->bi_ioprio;
+	bio->bi_write_hint	= bio_src->bi_write_hint;
+	bio->bi_iter.bi_sector	= bio_src->bi_iter.bi_sector;
+	bio->bi_iter.bi_size	= bio_src->bi_iter.bi_size;
+
+	bio_for_each_segment(bv, bio_src, iter)
+		bio->bi_io_vec[bio->bi_vcnt++] = bv;
+
+	bio->bi_opf |= REQ_NO_SPECIAL;
+
+	bio_clone_blkg_association(bio, bio_src);
+	blkcg_bio_issue_init(bio);
+
+	return bio;
+}
+
+static int blk_crypto_alloc_cipher_req(struct bio *src_bio,
+				       struct rq_crypt_ctx *rc,
+				       struct skcipher_request **ciph_req_ret,
+				       struct crypto_wait *wait)
+{
+	struct skcipher_request *ciph_req;
+	const struct blk_crypto_keyslot *slotp;
+
+	slotp = &blk_crypto_keyslots[rc->keyslot];
+	ciph_req = skcipher_request_alloc(slotp->tfms[slotp->crypto_mode],
+					  GFP_NOIO);
+	if (!ciph_req) {
+		src_bio->bi_status = BLK_STS_RESOURCE;
+		return -ENOMEM;
+	}
+
+	skcipher_request_set_callback(ciph_req,
+				      CRYPTO_TFM_REQ_MAY_BACKLOG |
+				      CRYPTO_TFM_REQ_MAY_SLEEP,
+				      crypto_req_done, wait);
+	*ciph_req_ret = ciph_req;
+	return 0;
+}
+
+static int blk_crypto_split_bio_if_needed(struct bio **bio_ptr)
+{
+	struct bio *bio = *bio_ptr;
+	unsigned int i = 0;
+	unsigned int num_sectors = 0;
+	struct bio_vec bv;
+	struct bvec_iter iter;
+
+	bio_for_each_segment(bv, bio, iter) {
+		num_sectors += bv.bv_len >> SECTOR_SHIFT;
+		if (++i == BIO_MAX_PAGES)
+			break;
+	}
+	if (num_sectors < bio_sectors(bio)) {
+		struct bio *split_bio;
+
+		split_bio = bio_split(bio, num_sectors, GFP_NOIO, NULL);
+		if (!split_bio) {
+			bio->bi_status = BLK_STS_RESOURCE;
+			return -ENOMEM;
+		}
+		bio_chain(split_bio, bio);
+		generic_make_request(bio);
+		*bio_ptr = split_bio;
+	}
+	return 0;
+}
+
+union blk_crypto_iv {
+	__le64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+	u8 bytes[BLK_CRYPTO_MAX_IV_SIZE];
+};
+
+static void blk_crypto_dun_to_iv(const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
+				 union blk_crypto_iv *iv)
+{
+	int i;
+
+	for (i = 0; i < BLK_CRYPTO_DUN_ARRAY_SIZE; i++)
+		iv->dun[i] = cpu_to_le64(dun[i]);
+}
+
+/*
+ * The crypto API fallback's encryption routine.
+ * Allocate a bounce bio for encryption, encrypt the input bio using crypto API,
+ * and replace *bio_ptr with the bounce bio. May split input bio if it's too
+ * large.
+ */
+static int blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr)
+{
+	struct bio *src_bio, *enc_bio;
+	struct bio_crypt_ctx *bc;
+	struct rq_crypt_ctx rc;
+	int data_unit_size;
+	struct skcipher_request *ciph_req = NULL;
+	DECLARE_CRYPTO_WAIT(wait);
+	u64 curr_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+	struct scatterlist src, dst;
+	union blk_crypto_iv iv;
+	unsigned int i, j;
+	int err = 0;
+
+	/* Split the bio if it's too big for single page bvec */
+	err = blk_crypto_split_bio_if_needed(bio_ptr);
+	if (err)
+		return err;
+
+	src_bio = *bio_ptr;
+	bc = src_bio->bi_crypt_context;
+	data_unit_size = bc->bc_key->data_unit_size;
+
+	/* Allocate bounce bio for encryption */
+	enc_bio = blk_crypto_clone_bio(src_bio);
+	if (!enc_bio) {
+		src_bio->bi_status = BLK_STS_RESOURCE;
+		return -ENOMEM;
+	}
+
+	/*
+	 * Use the crypto API fallback keyslot manager to get a crypto_skcipher
+	 * for the algorithm and key specified for this bio.
+	 */
+	err = rq_crypt_ctx_acquire_keyslot(bc, &blk_crypto_ksm, &rc);
+	if (err) {
+		src_bio->bi_status = BLK_STS_IOERR;
+		goto out_put_enc_bio;
+	}
+
+	/* and then allocate an skcipher_request for it */
+	err = blk_crypto_alloc_cipher_req(src_bio, &rc, &ciph_req, &wait);
+	if (err)
+		goto out_release_keyslot;
+
+	memcpy(curr_dun, bc->bc_dun, sizeof(curr_dun));
+	sg_init_table(&src, 1);
+	sg_init_table(&dst, 1);
+
+	skcipher_request_set_crypt(ciph_req, &src, &dst, data_unit_size,
+				   iv.bytes);
+
+	/* Encrypt each page in the bounce bio */
+	for (i = 0; i < enc_bio->bi_vcnt; i++) {
+		struct bio_vec *enc_bvec = &enc_bio->bi_io_vec[i];
+		struct page *plaintext_page = enc_bvec->bv_page;
+		struct page *ciphertext_page =
+			mempool_alloc(blk_crypto_bounce_page_pool, GFP_NOIO);
+
+		enc_bvec->bv_page = ciphertext_page;
+
+		if (!ciphertext_page) {
+			src_bio->bi_status = BLK_STS_RESOURCE;
+			err = -ENOMEM;
+			goto out_free_bounce_pages;
+		}
+
+		sg_set_page(&src, plaintext_page, data_unit_size,
+			    enc_bvec->bv_offset);
+		sg_set_page(&dst, ciphertext_page, data_unit_size,
+			    enc_bvec->bv_offset);
+
+		/* Encrypt each data unit in this page */
+		for (j = 0; j < enc_bvec->bv_len; j += data_unit_size) {
+			blk_crypto_dun_to_iv(curr_dun, &iv);
+			err = crypto_wait_req(crypto_skcipher_encrypt(ciph_req),
+					      &wait);
+			if (err) {
+				i++;
+				src_bio->bi_status = BLK_STS_IOERR;
+				goto out_free_bounce_pages;
+			}
+			bio_crypt_dun_increment(curr_dun, 1);
+			src.offset += data_unit_size;
+			dst.offset += data_unit_size;
+		}
+	}
+
+	enc_bio->bi_private = src_bio;
+	enc_bio->bi_end_io = blk_crypto_fallback_encrypt_endio;
+	*bio_ptr = enc_bio;
+
+	enc_bio = NULL;
+	err = 0;
+	goto out_free_ciph_req;
+
+out_free_bounce_pages:
+	while (i > 0)
+		mempool_free(enc_bio->bi_io_vec[--i].bv_page,
+			     blk_crypto_bounce_page_pool);
+out_free_ciph_req:
+	skcipher_request_free(ciph_req);
+out_release_keyslot:
+	rq_crypt_ctx_release_keyslot(&blk_crypto_ksm, &rc);
+out_put_enc_bio:
+	if (enc_bio)
+		bio_put(enc_bio);
+
+	return err;
+}
+
+/*
+ * The crypto API fallback's main decryption routine.
+ * Decrypts input bio in place, and calls bio_endio on the bio.
+ */
+static void blk_crypto_fallback_decrypt_bio(struct work_struct *work)
+{
+	struct bio_fallback_crypt_ctx *f_ctx =
+		container_of(work, struct bio_fallback_crypt_ctx, work);
+	struct bio *bio = f_ctx->bio;
+	struct bio_crypt_ctx *bc = &f_ctx->crypt_ctx;
+	struct rq_crypt_ctx rc;
+	struct skcipher_request *ciph_req = NULL;
+	DECLARE_CRYPTO_WAIT(wait);
+	u64 curr_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+	union blk_crypto_iv iv;
+	struct scatterlist sg;
+	struct bio_vec bv;
+	struct bvec_iter iter;
+	const int data_unit_size = bc->bc_key->data_unit_size;
+	unsigned int i;
+	int err;
+
+	/*
+	 * Use the crypto API fallback keyslot manager to get a crypto_skcipher
+	 * for the algorithm and key specified for this bio.
+	 */
+	if (rq_crypt_ctx_acquire_keyslot(bc, &blk_crypto_ksm, &rc)) {
+		bio->bi_status = BLK_STS_RESOURCE;
+		goto out_no_keyslot;
+	}
+
+	/* and then allocate an skcipher_request for it */
+	err = blk_crypto_alloc_cipher_req(bio, &rc, &ciph_req, &wait);
+	if (err)
+		goto out;
+
+	memcpy(curr_dun, f_ctx->fallback_dun, sizeof(curr_dun));
+	sg_init_table(&sg, 1);
+	skcipher_request_set_crypt(ciph_req, &sg, &sg, data_unit_size,
+				   iv.bytes);
+
+	/* Decrypt each segment in the bio */
+	__bio_for_each_segment(bv, bio, iter, f_ctx->crypt_iter) {
+		struct page *page = bv.bv_page;
+
+		sg_set_page(&sg, page, data_unit_size, bv.bv_offset);
+
+		/* Decrypt each data unit in the segment */
+		for (i = 0; i < bv.bv_len; i += data_unit_size) {
+			blk_crypto_dun_to_iv(curr_dun, &iv);
+			if (crypto_wait_req(crypto_skcipher_decrypt(ciph_req),
+					    &wait)) {
+				bio->bi_status = BLK_STS_IOERR;
+				goto out;
+			}
+			bio_crypt_dun_increment(curr_dun, 1);
+			sg.offset += data_unit_size;
+		}
+	}
+
+out:
+	skcipher_request_free(ciph_req);
+	rq_crypt_ctx_release_keyslot(&blk_crypto_ksm, &rc);
+out_no_keyslot:
+	mempool_free(f_ctx, bio_fallback_crypt_ctx_pool);
+	bio_endio(bio);
+}
+
+/**
+ * blk_crypto_fallback_decrypt_endio() - clean up bio w.r.t. fallback decryption
+ *
+ * @bio: the bio to clean up.
+ *
+ * Restore bi_private and bi_end_io, and queue the bio for decryption on a
+ * workqueue, since this function will be called from an atomic context.
+ */
+static void blk_crypto_fallback_decrypt_endio(struct bio *bio)
+{
+	struct bio_fallback_crypt_ctx *f_ctx = bio->bi_private;
+
+	bio->bi_private = f_ctx->bi_private_orig;
+	bio->bi_end_io = f_ctx->bi_end_io_orig;
+
+	/* If there was an IO error, don't queue for decrypt. */
+	if (bio->bi_status) {
+		mempool_free(f_ctx, bio_fallback_crypt_ctx_pool);
+		bio_endio(bio);
+		return;
+	}
+
+	INIT_WORK(&f_ctx->work, blk_crypto_fallback_decrypt_bio);
+	f_ctx->bio = bio;
+	queue_work(blk_crypto_wq, &f_ctx->work);
+}
+
+/**
+ * blk_crypto_fallback_bio_prep() - Prepare a bio to use fallback en/decryption
+ *
+ * @bio_ptr: pointer to the bio to prepare
+ *
+ * If the bio is doing a WRITE operation, we may split the bio into two parts
+ * and resubmit the second part. A bounce bio is allocated for the first part,
+ * its contents are encrypted into it, and *bio_ptr is updated to point to the
+ * bounce bio.
+ *
+ * For a READ operation, we mark the bio for decryption by using bi_private and
+ * bi_end_io.
+ *
+ * In either case, this function will make the bio look like a regular bio (i.e.
+ * as if no encryption context was ever specified) for the purposes of the rest
+ * of the stack except for blk-integrity (blk-integrity and blk-crypto are not
+ * currently supported together).
+ */
+int blk_crypto_fallback_bio_prep(struct bio **bio_ptr)
+{
+	struct bio *bio = *bio_ptr;
+	struct bio_crypt_ctx *bc = bio->bi_crypt_context;
+	struct bio_fallback_crypt_ctx *f_ctx;
+
+	if (!tfms_inited[bc->bc_key->crypto_mode]) {
+		bio->bi_status = BLK_STS_IOERR;
+		return -EIO;
+	}
+
+	if (!blk_ksm_crypto_mode_supported(&blk_crypto_ksm, bc->bc_key)) {
+		bio->bi_status = BLK_STS_NOTSUPP;
+		return -EOPNOTSUPP;
+	}
+
+	if (bio_data_dir(bio) == WRITE)
+		return blk_crypto_fallback_encrypt_bio(bio_ptr);
+
+	/*
+	 * bio READ case: Set up a f_ctx in the bio's bi_private and set the
+	 * bi_end_io appropriately to trigger decryption when the bio is ended.
+	 * Also set REQ_NO_SPECIAL in bi_opf to trigger error if blk-integrity
+	 * tries to process this bio.
+	 */
+	f_ctx = mempool_alloc(bio_fallback_crypt_ctx_pool, GFP_NOIO);
+	f_ctx->crypt_ctx = *bc;
+	memcpy(f_ctx->fallback_dun, bc->bc_dun, sizeof(f_ctx->fallback_dun));
+	f_ctx->crypt_iter = bio->bi_iter;
+	f_ctx->bi_private_orig = bio->bi_private;
+	f_ctx->bi_end_io_orig = bio->bi_end_io;
+	bio->bi_private = (void *)f_ctx;
+	bio->bi_end_io = blk_crypto_fallback_decrypt_endio;
+	bio_crypt_free_ctx(bio);
+
+	bio->bi_opf |= REQ_NO_SPECIAL;
+
+	return 0;
+}
+
+int blk_crypto_fallback_evict_key(const struct blk_crypto_key *key)
+{
+	return blk_ksm_evict_key(&blk_crypto_ksm, key);
+}
+
+static bool blk_crypto_fallback_inited;
+static int blk_crypto_fallback_init(void)
+{
+	int i;
+	int err = -ENOMEM;
+
+	if (blk_crypto_fallback_inited)
+		return 0;
+
+	prandom_bytes(blank_key, BLK_CRYPTO_MAX_KEY_SIZE);
+
+	err = blk_ksm_init(&blk_crypto_ksm, NULL, blk_crypto_num_keyslots);
+	if (err)
+		goto out;
+	err = -ENOMEM;
+
+	blk_crypto_ksm.ksm_ll_ops = blk_crypto_ksm_ll_ops;
+
+	/* All blk-crypto modes have a crypto API fallback. */
+	for (i = 0; i < BLK_ENCRYPTION_MODE_MAX; i++) {
+		blk_crypto_ksm.crypto_modes_supported[i] = 0xFFFFFFFF;
+		blk_crypto_ksm.max_dun_bytes_supported[i] =
+						blk_crypto_modes[i].ivsize;
+	}
+	blk_crypto_ksm.crypto_modes_supported[BLK_ENCRYPTION_MODE_INVALID] = 0;
+	blk_crypto_ksm.max_dun_bytes_supported[BLK_ENCRYPTION_MODE_INVALID] = 0;
+
+	blk_crypto_wq = alloc_workqueue("blk_crypto_wq",
+					WQ_UNBOUND | WQ_HIGHPRI |
+					WQ_MEM_RECLAIM, num_online_cpus());
+	if (!blk_crypto_wq)
+		goto fail_free_ksm;
+
+	blk_crypto_keyslots = kcalloc(blk_crypto_num_keyslots,
+				      sizeof(blk_crypto_keyslots[0]),
+				      GFP_KERNEL);
+	if (!blk_crypto_keyslots)
+		goto fail_free_wq;
+
+	blk_crypto_bounce_page_pool =
+		mempool_create_page_pool(num_prealloc_bounce_pg, 0);
+	if (!blk_crypto_bounce_page_pool)
+		goto fail_free_keyslots;
+
+	bio_fallback_crypt_ctx_cache = KMEM_CACHE(bio_fallback_crypt_ctx, 0);
+	if (!bio_fallback_crypt_ctx_cache)
+		goto fail_free_bounce_page_pool;
+
+	bio_fallback_crypt_ctx_pool =
+		mempool_create_slab_pool(num_prealloc_fallback_crypt_ctxs,
+					 bio_fallback_crypt_ctx_cache);
+	if (!bio_fallback_crypt_ctx_pool)
+		goto fail_free_crypt_ctx_cache;
+
+	blk_crypto_fallback_inited = true;
+
+	return 0;
+fail_free_crypt_ctx_cache:
+	kmem_cache_destroy(bio_fallback_crypt_ctx_cache);
+fail_free_bounce_page_pool:
+	mempool_destroy(blk_crypto_bounce_page_pool);
+fail_free_keyslots:
+	kfree(blk_crypto_keyslots);
+fail_free_wq:
+	destroy_workqueue(blk_crypto_wq);
+fail_free_ksm:
+	blk_ksm_destroy(&blk_crypto_ksm);
+out:
+	return err;
+}
+
+/**
+ * blk_crypto_start_using_key() - Start using a blk_crypto_key on a device
+ * @key: A key to use on the device
+ * @q: the request queue for the device
+ *
+ * Upper layers must call this function to ensure that the crypto API fallback
+ * has transforms for the algorithm/data_unit_size/dun_bytes combo specified by
+ * the key, if it becomes necessary.
+ *
+ * Return: 0 on success and -err on error.
+ */
+int blk_crypto_start_using_key(struct blk_crypto_key *key,
+			       struct request_queue *q)
+{
+	enum blk_crypto_mode_num mode_num = key->crypto_mode;
+	struct blk_crypto_keyslot *slotp;
+	unsigned int i;
+	int err = 0;
+
+	/*
+	 * Fast path
+	 * Ensure that updates to blk_crypto_keyslots[i].tfms[mode_num]
+	 * for each i are visible before we try to access them.
+	 */
+	if (likely(smp_load_acquire(&tfms_inited[mode_num])))
+		return 0;
+
+	/*
+	 * If the keyslot manager of the request queue supports this
+	 * crypto mode, then we don't need to allocate this mode.
+	 */
+	if (blk_ksm_crypto_mode_supported(q->ksm, key))
+		return 0;
+
+	mutex_lock(&tfms_init_lock);
+	err = blk_crypto_fallback_init();
+	if (err)
+		goto out;
+
+	if (tfms_inited[mode_num])
+		goto out;
+
+	for (i = 0; i < blk_crypto_num_keyslots; i++) {
+		slotp = &blk_crypto_keyslots[i];
+		slotp->tfms[mode_num] = crypto_alloc_skcipher(
+					blk_crypto_modes[mode_num].cipher_str,
+					0, 0);
+		if (IS_ERR(slotp->tfms[mode_num])) {
+			err = PTR_ERR(slotp->tfms[mode_num]);
+			slotp->tfms[mode_num] = NULL;
+			goto out_free_tfms;
+		}
+
+		crypto_skcipher_set_flags(slotp->tfms[mode_num],
+					  CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
+	}
+
+	/*
+	 * Ensure that updates to blk_crypto_keyslots[i].tfms[mode_num]
+	 * for each i are visible before we set tfms_inited[mode_num].
+	 */
+	smp_store_release(&tfms_inited[mode_num], true);
+	goto out;
+
+out_free_tfms:
+	for (i = 0; i < blk_crypto_num_keyslots; i++) {
+		slotp = &blk_crypto_keyslots[i];
+		crypto_free_skcipher(slotp->tfms[mode_num]);
+		slotp->tfms[mode_num] = NULL;
+	}
+out:
+	mutex_unlock(&tfms_init_lock);
+	return err;
+}
diff --git a/block/blk-crypto-internal.h b/block/blk-crypto-internal.h
index 2452b5dee140..4cccaa9b81ef 100644
--- a/block/blk-crypto-internal.h
+++ b/block/blk-crypto-internal.h
@@ -16,4 +16,36 @@ struct blk_crypto_mode {
 	unsigned int ivsize; /* iv size in bytes */
 };
 
+extern const struct blk_crypto_mode blk_crypto_modes[];
+
+void rq_crypt_ctx_release_keyslot(struct keyslot_manager *ksm,
+				  struct rq_crypt_ctx *rc);
+
+int rq_crypt_ctx_acquire_keyslot(struct bio_crypt_ctx *bc,
+				 struct keyslot_manager *ksm,
+				 struct rq_crypt_ctx *rc);
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK
+
+int blk_crypto_fallback_bio_prep(struct bio **bio_ptr);
+
+int blk_crypto_fallback_evict_key(const struct blk_crypto_key *key);
+
+#else /* CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK */
+
+static inline int blk_crypto_fallback_bio_prep(struct bio **bio_ptr)
+{
+	pr_warn_once("crypto API fallback disabled; failing request.\n");
+	(*bio_ptr)->bi_status = BLK_STS_NOTSUPP;
+	return -EOPNOTSUPP;
+}
+
+static inline int
+blk_crypto_fallback_evict_key(const struct blk_crypto_key *key)
+{
+	return 0;
+}
+
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK */
+
 #endif /* __LINUX_BLK_CRYPTO_INTERNAL_H */
diff --git a/block/blk-crypto.c b/block/blk-crypto.c
index b10b01c83907..d1280fc78239 100644
--- a/block/blk-crypto.c
+++ b/block/blk-crypto.c
@@ -3,6 +3,10 @@
  * Copyright 2019 Google LLC
  */
 
+/*
+ * Refer to Documentation/block/inline-encryption.rst for detailed explanation.
+ */
+
 #define pr_fmt(fmt) "blk-crypto: " fmt
 
 #include <linux/bio.h>
@@ -178,7 +182,8 @@ void rq_crypt_ctx_release_keyslot(struct keyslot_manager *ksm,
 /**
  * blk_crypto_init_request - Initializes the request's rq_crypt_ctx based on the
  *			     bio to be added to the request, and prepares it for
- *			     hardware inline encryption.
+ *			     hardware inline encryption (as opposed to using the
+ *			     crypto API fallback).
  *
  * @rq: The request to init
  * @q: The request queue this request will be submitted to
@@ -208,6 +213,10 @@ int blk_crypto_init_request(struct request *rq, struct request_queue *q,
 	if (!bc)
 		return 0;
 
+	/*
+	 * If we have a bio crypt context here, that means we didn't fall back
+	 * to the crypto API, so try to program a keyslot now.
+	 */
 	err = rq_crypt_ctx_acquire_keyslot(bc, q->ksm, rc);
 	if (err)
 		pr_warn_once("Failed to acquire keyslot for %s (err=%d).\n",
@@ -238,9 +247,17 @@ void blk_crypto_free_request(struct request *rq)
  *
  * @bio_ptr: pointer to original bio pointer
  *
- * Succeeds if the bio doesn't have inline encryption enabled or if the bio
- * crypt context provided for the bio is supported by the underlying device's
- * inline encryption hardware. Ends the bio with error otherwise.
+ * If the bio doesn't have inline encryption enabled, or if the bio crypt
+ * context provided for the bio is supported by the underlying device's inline
+ * encryption hardware, do nothing.
+ *
+ * Otherwise, try to perform en/decryption for this bio by falling back to the
+ * kernel crypto API. When the crypto API fallback is used for encryption,
+ * blk-crypto may choose to split the bio into two - the first one that will
+ * continue to be processed and the second one that will be resubmitted via
+ * generic_make_request. A bounce bio will be allocated to encrypt the contents
+ * of the aforementioned "first one", and *bio_ptr will be updated to this
+ * bounce bio.
  *
  * Return: 0 on success; nonzero on error (and bio_endio() will have been called
  *	   so bio submission should abort).
@@ -267,10 +284,17 @@ int blk_crypto_bio_prep(struct bio **bio_ptr)
 	if (err)
 		goto fail;
 
-	/* Success if device supports the encryption context */
+	/*
+	 * Success if device supports the encryption context, or we succeeded
+	 * in falling back to the crypto API.
+	 */
 	if (blk_ksm_crypto_mode_supported(bio->bi_disk->queue->ksm, bc->bc_key))
 		return 0;
 
+	err = blk_crypto_fallback_bio_prep(bio_ptr);
+	if (!err)
+		return 0;
+
 fail:
 	bio->bi_status = BLK_STS_IOERR;
 	bio_endio(*bio_ptr);
@@ -299,7 +323,11 @@ void blk_crypto_rq_prep_clone(struct request *dst, struct request *src)
 	dst->rq_crypt_ctx.keyslot = -EINVAL;
 
 	if (src->rq_crypt_ctx.keyslot < 0) {
-		/* src doesn't have any crypto info, so nothing to do */
+		/*
+		 * Either src doesn't have any crypto info, or it was marked
+		 * for fallback en/decryption. In either case, the clone should
+		 * not have any crypto info.
+		 */
 		return;
 	}
 
@@ -399,6 +427,7 @@ int blk_crypto_init_key(struct blk_crypto_key *blk_key, const u8 *raw_key,
  * is evicted from hardware that it might have been programmed into. This
  * will call blk_ksm_evict_key on the queue's keyslot manager, if one
  * exists, and supports the crypto algorithm with the specified data unit size.
+ * Otherwise, it will evict the key from the blk-crypto-fallback's ksm.
  *
  * Return: 0 on success or if key is not present in the q's ksm, -err on error.
  */
@@ -408,5 +437,5 @@ int blk_crypto_evict_key(struct request_queue *q,
 	if (q->ksm && blk_ksm_crypto_mode_supported(q->ksm, key))
 		return blk_ksm_evict_key(q->ksm, key);
 
-	return 0;
+	return blk_crypto_fallback_evict_key(key);
 }
diff --git a/include/linux/blk-crypto.h b/include/linux/blk-crypto.h
index f855895770f6..bde2e08879b6 100644
--- a/include/linux/blk-crypto.h
+++ b/include/linux/blk-crypto.h
@@ -58,7 +58,7 @@ struct blk_crypto_key {
  *
  * A bio_crypt_ctx specifies that the contents of the bio will be encrypted (for
  * write requests) or decrypted (for read requests) inline by the storage device
- * or controller.
+ * or controller, or by the crypto API fallback.
  */
 struct bio_crypt_ctx {
 	const struct blk_crypto_key	*bc_key;
@@ -251,6 +251,21 @@ static inline int blk_crypto_insert_cloned_request(struct request_queue *q,
 
 #endif /* CONFIG_BLK_INLINE_ENCRYPTION */
 
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK
+
+int blk_crypto_start_using_key(struct blk_crypto_key *key,
+			       struct request_queue *q);
+
+#else /* CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK */
+
+static inline int blk_crypto_start_using_key(struct blk_crypto_key *key,
+					     struct request_queue *q)
+{
+	return 0;
+}
+
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK */
+
 #endif /* CONFIG_BLOCK */
 
 #endif /* __LINUX_BLK_CRYPTO_H */
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 1996689c51d3..520254f92a81 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -324,6 +324,11 @@ enum req_flag_bits {
 	__REQ_NOMERGE,		/* don't touch this for merging */
 	__REQ_IDLE,		/* anticipate more IO after this one */
 	__REQ_INTEGRITY,	/* I/O includes block integrity payload */
+	/*
+	 * I/O cannot include any special tags like block integrity or
+	 * bi_crypt_context
+	 */
+	__REQ_NO_SPECIAL,
 	__REQ_FUA,		/* forced unit access */
 	__REQ_PREFLUSH,		/* request for cache flush */
 	__REQ_RAHEAD,		/* read ahead, can fail anytime */
@@ -359,6 +364,7 @@ enum req_flag_bits {
 #define REQ_NOMERGE		(1ULL << __REQ_NOMERGE)
 #define REQ_IDLE		(1ULL << __REQ_IDLE)
 #define REQ_INTEGRITY		(1ULL << __REQ_INTEGRITY)
+#define REQ_NO_SPECIAL		(1ULL << __REQ_NO_SPECIAL)
 #define REQ_FUA			(1ULL << __REQ_FUA)
 #define REQ_PREFLUSH		(1ULL << __REQ_PREFLUSH)
 #define REQ_RAHEAD		(1ULL << __REQ_RAHEAD)
-- 
2.25.0.265.gbab2e86ba0-goog


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v7 4/9] scsi: ufs: UFS driver v2.1 spec crypto additions
  2020-02-21 11:50 [PATCH v7 0/9] Inline Encryption Support Satya Tangirala
                   ` (2 preceding siblings ...)
  2020-02-21 11:50 ` [PATCH v7 3/9] block: blk-crypto-fallback for Inline Encryption Satya Tangirala
@ 2020-02-21 11:50 ` Satya Tangirala
  2020-02-21 11:50 ` [PATCH v7 5/9] scsi: ufs: UFS crypto API Satya Tangirala
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 40+ messages in thread
From: Satya Tangirala @ 2020-02-21 11:50 UTC (permalink / raw)
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4
  Cc: Barani Muthukumaran, Kuohong Wang, Kim Boojin, Satya Tangirala

Add the crypto registers and structs defined in v2.1 of the JEDEC UFSHCI
specification in preparation to add support for inline encryption to
UFS.

Signed-off-by: Satya Tangirala <satyat@google.com>
---
 drivers/scsi/ufs/ufshcd.c |  2 ++
 drivers/scsi/ufs/ufshcd.h |  5 +++
 drivers/scsi/ufs/ufshci.h | 67 +++++++++++++++++++++++++++++++++++++--
 3 files changed, 72 insertions(+), 2 deletions(-)

diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index abd0e6b05f79..825d9eb34f10 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -4759,6 +4759,8 @@ ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
 	case OCS_MISMATCH_RESP_UPIU_SIZE:
 	case OCS_PEER_COMM_FAILURE:
 	case OCS_FATAL_ERROR:
+	case OCS_INVALID_CRYPTO_CONFIG:
+	case OCS_GENERAL_CRYPTO_ERROR:
 	default:
 		result |= DID_ERROR << 16;
 		dev_err(hba->dev,
diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
index 2ae6c7c8528c..978781c538c4 100644
--- a/drivers/scsi/ufs/ufshcd.h
+++ b/drivers/scsi/ufs/ufshcd.h
@@ -716,6 +716,11 @@ struct ufs_hba {
 	 * for userspace to control the power management.
 	 */
 #define UFSHCD_CAP_RPM_AUTOSUSPEND (1 << 6)
+	/*
+	 * This capability allows the host controller driver to use the
+	 * inline crypto engine, if it is present
+	 */
+#define UFSHCD_CAP_CRYPTO (1 << 7)
 
 	struct devfreq *devfreq;
 	struct ufs_clk_scaling clk_scaling;
diff --git a/drivers/scsi/ufs/ufshci.h b/drivers/scsi/ufs/ufshci.h
index c2961d37cc1c..c0651fe6dbbc 100644
--- a/drivers/scsi/ufs/ufshci.h
+++ b/drivers/scsi/ufs/ufshci.h
@@ -90,6 +90,7 @@ enum {
 	MASK_64_ADDRESSING_SUPPORT		= 0x01000000,
 	MASK_OUT_OF_ORDER_DATA_DELIVERY_SUPPORT	= 0x02000000,
 	MASK_UIC_DME_TEST_MODE_SUPPORT		= 0x04000000,
+	MASK_CRYPTO_SUPPORT			= 0x10000000,
 };
 
 #define UFS_MASK(mask, offset)		((mask) << (offset))
@@ -143,6 +144,7 @@ enum {
 #define DEVICE_FATAL_ERROR			0x800
 #define CONTROLLER_FATAL_ERROR			0x10000
 #define SYSTEM_BUS_FATAL_ERROR			0x20000
+#define CRYPTO_ENGINE_FATAL_ERROR		0x40000
 
 #define UFSHCD_UIC_HIBERN8_MASK	(UIC_HIBERNATE_ENTER |\
 				UIC_HIBERNATE_EXIT)
@@ -155,11 +157,13 @@ enum {
 #define UFSHCD_ERROR_MASK	(UIC_ERROR |\
 				DEVICE_FATAL_ERROR |\
 				CONTROLLER_FATAL_ERROR |\
-				SYSTEM_BUS_FATAL_ERROR)
+				SYSTEM_BUS_FATAL_ERROR |\
+				CRYPTO_ENGINE_FATAL_ERROR)
 
 #define INT_FATAL_ERRORS	(DEVICE_FATAL_ERROR |\
 				CONTROLLER_FATAL_ERROR |\
-				SYSTEM_BUS_FATAL_ERROR)
+				SYSTEM_BUS_FATAL_ERROR |\
+				CRYPTO_ENGINE_FATAL_ERROR)
 
 /* HCS - Host Controller Status 30h */
 #define DEVICE_PRESENT				0x1
@@ -318,6 +322,61 @@ enum {
 	INTERRUPT_MASK_ALL_VER_21	= 0x71FFF,
 };
 
+/* CCAP - Crypto Capability 100h */
+union ufs_crypto_capabilities {
+	__le32 reg_val;
+	struct {
+		u8 num_crypto_cap;
+		u8 config_count;
+		u8 reserved;
+		u8 config_array_ptr;
+	};
+};
+
+enum ufs_crypto_key_size {
+	UFS_CRYPTO_KEY_SIZE_INVALID	= 0x0,
+	UFS_CRYPTO_KEY_SIZE_128		= 0x1,
+	UFS_CRYPTO_KEY_SIZE_192		= 0x2,
+	UFS_CRYPTO_KEY_SIZE_256		= 0x3,
+	UFS_CRYPTO_KEY_SIZE_512		= 0x4,
+};
+
+enum ufs_crypto_alg {
+	UFS_CRYPTO_ALG_AES_XTS			= 0x0,
+	UFS_CRYPTO_ALG_BITLOCKER_AES_CBC	= 0x1,
+	UFS_CRYPTO_ALG_AES_ECB			= 0x2,
+	UFS_CRYPTO_ALG_ESSIV_AES_CBC		= 0x3,
+};
+
+/* x-CRYPTOCAP - Crypto Capability X */
+union ufs_crypto_cap_entry {
+	__le32 reg_val;
+	struct {
+		u8 algorithm_id;
+		u8 sdus_mask; /* Supported data unit size mask */
+		u8 key_size;
+		u8 reserved;
+	};
+};
+
+#define UFS_CRYPTO_CONFIGURATION_ENABLE (1 << 7)
+#define UFS_CRYPTO_KEY_MAX_SIZE 64
+/* x-CRYPTOCFG - Crypto Configuration X */
+union ufs_crypto_cfg_entry {
+	__le32 reg_val[32];
+	struct {
+		u8 crypto_key[UFS_CRYPTO_KEY_MAX_SIZE];
+		u8 data_unit_size;
+		u8 crypto_cap_idx;
+		u8 reserved_1;
+		u8 config_enable;
+		u8 reserved_multi_host;
+		u8 reserved_2;
+		u8 vsb[2];
+		u8 reserved_3[56];
+	};
+};
+
 /*
  * Request Descriptor Definitions
  */
@@ -339,6 +398,7 @@ enum {
 	UTP_NATIVE_UFS_COMMAND		= 0x10000000,
 	UTP_DEVICE_MANAGEMENT_FUNCTION	= 0x20000000,
 	UTP_REQ_DESC_INT_CMD		= 0x01000000,
+	UTP_REQ_DESC_CRYPTO_ENABLE_CMD	= 0x00800000,
 };
 
 /* UTP Transfer Request Data Direction (DD) */
@@ -358,6 +418,9 @@ enum {
 	OCS_PEER_COMM_FAILURE		= 0x5,
 	OCS_ABORTED			= 0x6,
 	OCS_FATAL_ERROR			= 0x7,
+	OCS_DEVICE_FATAL_ERROR		= 0x8,
+	OCS_INVALID_CRYPTO_CONFIG	= 0x9,
+	OCS_GENERAL_CRYPTO_ERROR	= 0xA,
 	OCS_INVALID_COMMAND_STATUS	= 0x0F,
 	MASK_OCS			= 0x0F,
 };
-- 
2.25.0.265.gbab2e86ba0-goog


* [PATCH v7 5/9] scsi: ufs: UFS crypto API
  2020-02-21 11:50 [PATCH v7 0/9] Inline Encryption Support Satya Tangirala
                   ` (3 preceding siblings ...)
  2020-02-21 11:50 ` [PATCH v7 4/9] scsi: ufs: UFS driver v2.1 spec crypto additions Satya Tangirala
@ 2020-02-21 11:50 ` Satya Tangirala
  2020-02-22  4:59   ` Eric Biggers
  2020-02-21 11:50 ` [PATCH v7 6/9] scsi: ufs: Add inline encryption support to UFS Satya Tangirala
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 40+ messages in thread
From: Satya Tangirala @ 2020-02-21 11:50 UTC (permalink / raw)
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4
  Cc: Barani Muthukumaran, Kuohong Wang, Kim Boojin, Satya Tangirala

Introduce functions to manipulate UFS inline encryption hardware
in line with the JEDEC UFSHCI v2.1 specification and to work with the
block keyslot manager.
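
For reference, a minimal user-space-style sketch (not part of the patch) of the
AES-256-XTS key layout that ufshcd_crypto_cfg_entry_write_key() below programs
into a CRYPTOCFG entry: the first key half goes at the start of the 64-byte key
field, the second half at its midpoint. The helper name is made up for
illustration only; the layout itself follows the patch.

#include <stdint.h>
#include <string.h>

#define UFS_CRYPTO_KEY_MAX_SIZE 64

/* Mirror of the AES-XTS case in ufshcd_crypto_cfg_entry_write_key(). */
static void xts_key_to_cryptocfg(uint8_t cfg_key[UFS_CRYPTO_KEY_MAX_SIZE],
				 const uint8_t *raw_key, size_t raw_key_len)
{
	size_t half = raw_key_len / 2;	/* 32 bytes for AES-256-XTS */

	memset(cfg_key, 0, UFS_CRYPTO_KEY_MAX_SIZE);
	/* First half of the XTS key at the start of the field... */
	memcpy(cfg_key, raw_key, half);
	/* ...second half at the midpoint of the 64-byte field. */
	memcpy(cfg_key + UFS_CRYPTO_KEY_MAX_SIZE / 2, raw_key + half, half);
}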

Signed-off-by: Satya Tangirala <satyat@google.com>
---
 drivers/scsi/ufs/Kconfig         |   9 +
 drivers/scsi/ufs/Makefile        |   1 +
 drivers/scsi/ufs/ufshcd-crypto.c | 367 +++++++++++++++++++++++++++++++
 drivers/scsi/ufs/ufshcd-crypto.h |  54 +++++
 drivers/scsi/ufs/ufshcd.h        |  20 ++
 5 files changed, 451 insertions(+)
 create mode 100644 drivers/scsi/ufs/ufshcd-crypto.c
 create mode 100644 drivers/scsi/ufs/ufshcd-crypto.h

diff --git a/drivers/scsi/ufs/Kconfig b/drivers/scsi/ufs/Kconfig
index d14c2243e02a..c69f1b49167b 100644
--- a/drivers/scsi/ufs/Kconfig
+++ b/drivers/scsi/ufs/Kconfig
@@ -160,3 +160,12 @@ config SCSI_UFS_BSG
 
 	  Select this if you need a bsg device node for your UFS controller.
 	  If unsure, say N.
+
+config SCSI_UFS_CRYPTO
+	bool "UFS Crypto Engine Support"
+	depends on SCSI_UFSHCD && BLK_INLINE_ENCRYPTION
+	help
+	  Enable Crypto Engine Support in UFS.
+	  Enabling this makes it possible for the kernel to use the crypto
+	  capabilities of the UFS device (if present) to perform crypto
+	  operations on data being transferred to/from the device.
diff --git a/drivers/scsi/ufs/Makefile b/drivers/scsi/ufs/Makefile
index 94c6c5d7334b..197e178f44bc 100644
--- a/drivers/scsi/ufs/Makefile
+++ b/drivers/scsi/ufs/Makefile
@@ -7,6 +7,7 @@ obj-$(CONFIG_SCSI_UFS_QCOM) += ufs-qcom.o
 obj-$(CONFIG_SCSI_UFSHCD) += ufshcd-core.o
 ufshcd-core-y				+= ufshcd.o ufs-sysfs.o
 ufshcd-core-$(CONFIG_SCSI_UFS_BSG)	+= ufs_bsg.o
+ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO) += ufshcd-crypto.o
 obj-$(CONFIG_SCSI_UFSHCD_PCI) += ufshcd-pci.o
 obj-$(CONFIG_SCSI_UFSHCD_PLATFORM) += ufshcd-pltfrm.o
 obj-$(CONFIG_SCSI_UFS_HISI) += ufs-hisi.o
diff --git a/drivers/scsi/ufs/ufshcd-crypto.c b/drivers/scsi/ufs/ufshcd-crypto.c
new file mode 100644
index 000000000000..1b8e14d30c04
--- /dev/null
+++ b/drivers/scsi/ufs/ufshcd-crypto.c
@@ -0,0 +1,367 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#include <linux/keyslot-manager.h>
+#include "ufshcd.h"
+#include "ufshcd-crypto.h"
+
+static inline int ufshcd_num_keyslots(struct ufs_hba *hba)
+{
+	return hba->crypto_capabilities.config_count + 1;
+}
+
+static inline bool ufshcd_keyslot_valid(struct ufs_hba *hba, unsigned int slot)
+{
+	/*
+	 * The actual number of configurations supported is (CFGC+1), so slot
+	 * numbers range from 0 to config_count inclusive.
+	 */
+	return slot < ufshcd_num_keyslots(hba);
+}
+
+static bool ufshcd_cap_idx_valid(struct ufs_hba *hba, unsigned int cap_idx)
+{
+	return cap_idx < hba->crypto_capabilities.num_crypto_cap;
+}
+
+static u8 ufshcd_get_data_unit_size_mask(unsigned int data_unit_size)
+{
+	if (data_unit_size < 512 || data_unit_size > 65536 ||
+	    !is_power_of_2(data_unit_size))
+		return 0;
+
+	return data_unit_size / 512;
+}
+
+static size_t ufshcd_get_keysize_bytes(enum ufs_crypto_key_size size)
+{
+	switch (size) {
+	case UFS_CRYPTO_KEY_SIZE_128:
+		return 16;
+	case UFS_CRYPTO_KEY_SIZE_192:
+		return 24;
+	case UFS_CRYPTO_KEY_SIZE_256:
+		return 32;
+	case UFS_CRYPTO_KEY_SIZE_512:
+		return 64;
+	default:
+		return 0;
+	}
+}
+
+static int ufshcd_crypto_cap_find(struct ufs_hba *hba,
+				  enum blk_crypto_mode_num crypto_mode,
+				  unsigned int data_unit_size)
+{
+	enum ufs_crypto_alg ufs_alg;
+	u8 data_unit_mask;
+	int cap_idx;
+	enum ufs_crypto_key_size ufs_key_size;
+	union ufs_crypto_cap_entry *ccap_array = hba->crypto_cap_array;
+
+	if (!ufshcd_hba_is_crypto_supported(hba))
+		return -EINVAL;
+
+	switch (crypto_mode) {
+	case BLK_ENCRYPTION_MODE_AES_256_XTS:
+		ufs_alg = UFS_CRYPTO_ALG_AES_XTS;
+		ufs_key_size = UFS_CRYPTO_KEY_SIZE_256;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	data_unit_mask = ufshcd_get_data_unit_size_mask(data_unit_size);
+
+	for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap;
+	     cap_idx++) {
+		if (ccap_array[cap_idx].algorithm_id == ufs_alg &&
+		    (ccap_array[cap_idx].sdus_mask & data_unit_mask) &&
+		    ccap_array[cap_idx].key_size == ufs_key_size)
+			return cap_idx;
+	}
+
+	return -EINVAL;
+}
+
+/**
+ * ufshcd_crypto_cfg_entry_write_key - Write a key into a crypto_cfg_entry
+ *
+ *	Writes the key with the appropriate format - for AES_XTS,
+ *	the first half of the key is copied as is, the second half is
+ *	copied with an offset halfway into the cfg->crypto_key array.
+ *	For the other supported crypto algs, the key is just copied.
+ *
+ * @cfg: The crypto config to write to
+ * @key: The key to write
+ * @cap: The crypto capability (which specifies the crypto alg and key size)
+ *
+ * Returns 0 on success, or -EINVAL
+ */
+static int ufshcd_crypto_cfg_entry_write_key(union ufs_crypto_cfg_entry *cfg,
+					     const u8 *key,
+					     union ufs_crypto_cap_entry cap)
+{
+	size_t key_size_bytes = ufshcd_get_keysize_bytes(cap.key_size);
+
+	if (key_size_bytes == 0)
+		return -EINVAL;
+
+	switch (cap.algorithm_id) {
+	case UFS_CRYPTO_ALG_AES_XTS:
+		key_size_bytes *= 2;
+		if (key_size_bytes > UFS_CRYPTO_KEY_MAX_SIZE)
+			return -EINVAL;
+
+		memcpy(cfg->crypto_key, key, key_size_bytes/2);
+		memcpy(cfg->crypto_key + UFS_CRYPTO_KEY_MAX_SIZE/2,
+		       key + key_size_bytes/2, key_size_bytes/2);
+		return 0;
+	case UFS_CRYPTO_ALG_BITLOCKER_AES_CBC:
+		/* fall through */
+	case UFS_CRYPTO_ALG_AES_ECB:
+		/* fall through */
+	case UFS_CRYPTO_ALG_ESSIV_AES_CBC:
+		memcpy(cfg->crypto_key, key, key_size_bytes);
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static void ufshcd_program_key(struct ufs_hba *hba,
+			       const union ufs_crypto_cfg_entry *cfg,
+			       int slot)
+{
+	int i;
+	u32 slot_offset = hba->crypto_cfg_register + slot * sizeof(*cfg);
+
+	ufshcd_hold(hba, false);
+	/* Ensure that CFGE is cleared before programming the key */
+	ufshcd_writel(hba, 0, slot_offset + 16 * sizeof(cfg->reg_val[0]));
+	for (i = 0; i < 16; i++) {
+		ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[i]),
+			      slot_offset + i * sizeof(cfg->reg_val[0]));
+	}
+	/* Write dword 17 */
+	ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[17]),
+		      slot_offset + 17 * sizeof(cfg->reg_val[0]));
+	/* Dword 16 must be written last */
+	ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[16]),
+		      slot_offset + 16 * sizeof(cfg->reg_val[0]));
+	ufshcd_release(hba);
+}
+
+static void ufshcd_clear_keyslot(struct ufs_hba *hba, int slot)
+{
+	union ufs_crypto_cfg_entry cfg = { 0 };
+
+	ufshcd_program_key(hba, &cfg, slot);
+}
+
+/* Clear all keyslots at driver init time */
+static void ufshcd_clear_all_keyslots(struct ufs_hba *hba)
+{
+	int slot;
+
+	for (slot = 0; slot < ufshcd_num_keyslots(hba); slot++)
+		ufshcd_clear_keyslot(hba, slot);
+}
+
+static int ufshcd_crypto_keyslot_program(struct keyslot_manager *ksm,
+					 const struct blk_crypto_key *key,
+					 unsigned int slot)
+{
+	struct ufs_hba *hba = container_of(ksm, struct ufs_hba, ksm);
+	int err = 0;
+	u8 data_unit_mask;
+	union ufs_crypto_cfg_entry cfg;
+	int cap_idx;
+
+	cap_idx = ufshcd_crypto_cap_find(hba, key->crypto_mode,
+					 key->data_unit_size);
+
+	if (!(hba->caps & UFSHCD_CAP_CRYPTO) ||
+	    !ufshcd_keyslot_valid(hba, slot) ||
+	    !ufshcd_cap_idx_valid(hba, cap_idx))
+		return -EINVAL;
+
+	data_unit_mask = ufshcd_get_data_unit_size_mask(key->data_unit_size);
+
+	if (!(data_unit_mask & hba->crypto_cap_array[cap_idx].sdus_mask))
+		return -EINVAL;
+
+	memset(&cfg, 0, sizeof(cfg));
+	cfg.data_unit_size = data_unit_mask;
+	cfg.crypto_cap_idx = cap_idx;
+	cfg.config_enable |= UFS_CRYPTO_CONFIGURATION_ENABLE;
+
+	err = ufshcd_crypto_cfg_entry_write_key(&cfg, key->raw,
+						hba->crypto_cap_array[cap_idx]);
+	if (err)
+		return err;
+
+	ufshcd_program_key(hba, &cfg, slot);
+
+	memzero_explicit(&cfg, sizeof(cfg));
+	return 0;
+}
+
+static int ufshcd_crypto_keyslot_evict(struct keyslot_manager *ksm,
+				       const struct blk_crypto_key *key,
+				       unsigned int slot)
+{
+	struct ufs_hba *hba = container_of(ksm, struct ufs_hba, ksm);
+
+	if (!(hba->caps & UFSHCD_CAP_CRYPTO) ||
+	    !ufshcd_keyslot_valid(hba, slot))
+		return -EINVAL;
+
+	/*
+	 * Clear the crypto cfg on the device. Clearing CFGE
+	 * might not be sufficient, so just clear the entire cfg.
+	 */
+	ufshcd_clear_keyslot(hba, slot);
+
+	return 0;
+}
+
+void ufshcd_crypto_enable(struct ufs_hba *hba)
+{
+	if (!ufshcd_hba_is_crypto_supported(hba))
+		return;
+
+	hba->caps |= UFSHCD_CAP_CRYPTO;
+
+	/* Reset might clear all keys, so reprogram all the keys. */
+	blk_ksm_reprogram_all_keys(&hba->ksm);
+}
+
+void ufshcd_crypto_disable(struct ufs_hba *hba)
+{
+	hba->caps &= ~UFSHCD_CAP_CRYPTO;
+}
+
+static const struct keyslot_mgmt_ll_ops ufshcd_ksm_ops = {
+	.keyslot_program	= ufshcd_crypto_keyslot_program,
+	.keyslot_evict		= ufshcd_crypto_keyslot_evict,
+};
+
+bool ufshcd_blk_crypto_mode_num_for_alg_dusize(
+					enum ufs_crypto_alg ufs_crypto_alg,
+					enum ufs_crypto_key_size key_size,
+					enum blk_crypto_mode_num *blk_mode_num,
+					unsigned int *max_dun_bytes_supported)
+{
+	/*
+	 * This is currently the only mode that UFS and blk-crypto both support.
+	 */
+	if (ufs_crypto_alg == UFS_CRYPTO_ALG_AES_XTS &&
+	    key_size == UFS_CRYPTO_KEY_SIZE_256) {
+		*blk_mode_num = BLK_ENCRYPTION_MODE_AES_256_XTS;
+		*max_dun_bytes_supported = 8;
+		return true;
+	}
+
+	return false;
+}
+
+/**
+ * ufshcd_hba_init_crypto - Read crypto capabilities, init crypto fields in hba
+ * @hba: Per adapter instance
+ *
+ * Return: 0 if crypto was initialized or is not supported, else a -errno value.
+ */
+int ufshcd_hba_init_crypto(struct ufs_hba *hba)
+{
+	int cap_idx = 0;
+	int err = 0;
+	enum blk_crypto_mode_num blk_mode_num;
+	unsigned int max_dun_bytes;
+
+	/* Default to disabling crypto */
+	hba->caps &= ~UFSHCD_CAP_CRYPTO;
+
+	/* Return 0 if crypto support isn't present */
+	if (!(hba->capabilities & MASK_CRYPTO_SUPPORT) ||
+	    (hba->quirks & UFSHCD_QUIRK_BROKEN_CRYPTO))
+		goto out;
+
+	/*
+	 * Crypto Capabilities should never be 0, because the
+	 * config_array_ptr > 04h. So we use a 0 value to indicate that
+	 * crypto init failed, and can't be enabled.
+	 */
+	hba->crypto_capabilities.reg_val =
+			cpu_to_le32(ufshcd_readl(hba, REG_UFS_CCAP));
+	hba->crypto_cfg_register =
+		(u32)hba->crypto_capabilities.config_array_ptr * 0x100;
+	hba->crypto_cap_array =
+		devm_kcalloc(hba->dev,
+			     hba->crypto_capabilities.num_crypto_cap,
+			     sizeof(hba->crypto_cap_array[0]),
+			     GFP_KERNEL);
+	if (!hba->crypto_cap_array) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	err = blk_ksm_init(&hba->ksm, hba->dev, ufshcd_num_keyslots(hba));
+	if (err)
+		goto out_free_caps;
+
+	hba->ksm.ksm_ll_ops = ufshcd_ksm_ops;
+	hba->ksm.ll_priv_data = hba;
+
+	memset(hba->ksm.crypto_modes_supported, 0,
+	       sizeof(hba->ksm.crypto_modes_supported));
+	memset(hba->ksm.max_dun_bytes_supported, 0,
+	       sizeof(hba->ksm.max_dun_bytes_supported));
+	/*
+	 * Store all the capabilities now so that we don't need to repeatedly
+	 * access the device each time we want to know its capabilities
+	 */
+	for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap;
+	     cap_idx++) {
+		hba->crypto_cap_array[cap_idx].reg_val =
+			cpu_to_le32(ufshcd_readl(hba,
+						 REG_UFS_CRYPTOCAP +
+						 cap_idx * sizeof(__le32)));
+		if (!ufshcd_blk_crypto_mode_num_for_alg_dusize(
+				hba->crypto_cap_array[cap_idx].algorithm_id,
+				hba->crypto_cap_array[cap_idx].key_size,
+				&blk_mode_num,
+				&max_dun_bytes))
+			continue;
+		hba->ksm.crypto_modes_supported[blk_mode_num] |=
+			hba->crypto_cap_array[cap_idx].sdus_mask * 512;
+		hba->ksm.max_dun_bytes_supported[blk_mode_num] = max_dun_bytes;
+	}
+
+	ufshcd_clear_all_keyslots(hba);
+
+	return 0;
+
+out_free_caps:
+	devm_kfree(hba->dev, hba->crypto_cap_array);
+out:
+	/* Indicate that init failed by setting crypto_capabilities to 0 */
+	hba->crypto_capabilities.reg_val = 0;
+	return err;
+}
+
+void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
+					    struct request_queue *q)
+{
+	if (!ufshcd_hba_is_crypto_supported(hba) || !q)
+		return;
+
+	q->ksm = &hba->ksm;
+}
+
+void ufshcd_crypto_destroy_keyslot_manager(struct ufs_hba *hba)
+{
+	blk_ksm_destroy(&hba->ksm);
+}
diff --git a/drivers/scsi/ufs/ufshcd-crypto.h b/drivers/scsi/ufs/ufshcd-crypto.h
new file mode 100644
index 000000000000..8270c0c5081a
--- /dev/null
+++ b/drivers/scsi/ufs/ufshcd-crypto.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#ifndef _UFSHCD_CRYPTO_H
+#define _UFSHCD_CRYPTO_H
+
+#ifdef CONFIG_SCSI_UFS_CRYPTO
+#include <linux/keyslot-manager.h>
+#include "ufshcd.h"
+#include "ufshci.h"
+
+static inline bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba)
+{
+	return hba->crypto_capabilities.reg_val != 0;
+}
+
+void ufshcd_crypto_enable(struct ufs_hba *hba);
+
+void ufshcd_crypto_disable(struct ufs_hba *hba);
+
+int ufshcd_hba_init_crypto(struct ufs_hba *hba);
+
+void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
+					    struct request_queue *q);
+
+void ufshcd_crypto_destroy_keyslot_manager(struct ufs_hba *hba);
+
+#else /* CONFIG_SCSI_UFS_CRYPTO */
+
+static inline bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba)
+{
+	return false;
+}
+
+static inline void ufshcd_crypto_enable(struct ufs_hba *hba) { }
+
+static inline void ufshcd_crypto_disable(struct ufs_hba *hba) { }
+
+static inline int ufshcd_hba_init_crypto(struct ufs_hba *hba)
+{
+	return 0;
+}
+
+static inline void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
+						struct request_queue *q) { }
+
+static inline void ufshcd_crypto_destroy_keyslot_manager(struct ufs_hba *hba)
+{ }
+
+#endif /* CONFIG_SCSI_UFS_CRYPTO */
+
+#endif /* _UFSHCD_CRYPTO_H */
diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
index 978781c538c4..7b8a87418f0c 100644
--- a/drivers/scsi/ufs/ufshcd.h
+++ b/drivers/scsi/ufs/ufshcd.h
@@ -55,6 +55,7 @@
 #include <linux/clk.h>
 #include <linux/completion.h>
 #include <linux/regulator/consumer.h>
+#include <linux/keyslot-manager.h>
 #include "unipro.h"
 
 #include <asm/irq.h>
@@ -521,6 +522,10 @@ struct ufs_stats {
  * @is_urgent_bkops_lvl_checked: keeps track if the urgent bkops level for
  *  device is known or not.
  * @scsi_block_reqs_cnt: reference counting for scsi block requests
+ * @crypto_capabilities: Content of crypto capabilities register (0x100)
+ * @crypto_cap_array: Array of crypto capabilities
+ * @crypto_cfg_register: Start of the crypto cfg array
+ * @ksm: the keyslot manager tied to this hba
  */
 struct ufs_hba {
 	void __iomem *mmio_base;
@@ -634,6 +639,13 @@ struct ufs_hba {
 	 * enabled via HCE register.
 	 */
 	#define UFSHCI_QUIRK_BROKEN_HCE				0x400
+
+	/*
+	 * This quirk needs to be enabled if the host controller advertises
+	 * inline encryption support but it doesn't work correctly.
+	 */
+	#define UFSHCD_QUIRK_BROKEN_CRYPTO			0x800
+
 	unsigned int quirks;	/* Deviations from standard UFSHCI spec. */
 
 	/* Device deviations from standard UFS device spec. */
@@ -735,6 +747,14 @@ struct ufs_hba {
 
 	struct device		bsg_dev;
 	struct request_queue	*bsg_queue;
+
+#ifdef CONFIG_SCSI_UFS_CRYPTO
+	/* crypto */
+	union ufs_crypto_capabilities crypto_capabilities;
+	union ufs_crypto_cap_entry *crypto_cap_array;
+	u32 crypto_cfg_register;
+	struct keyslot_manager ksm;
+#endif /* CONFIG_SCSI_UFS_CRYPTO */
 };
 
 /* Returns true if clocks can be gated. Otherwise false */
-- 
2.25.0.265.gbab2e86ba0-goog


* [PATCH v7 6/9] scsi: ufs: Add inline encryption support to UFS
  2020-02-21 11:50 [PATCH v7 0/9] Inline Encryption Support Satya Tangirala
                   ` (4 preceding siblings ...)
  2020-02-21 11:50 ` [PATCH v7 5/9] scsi: ufs: UFS crypto API Satya Tangirala
@ 2020-02-21 11:50 ` Satya Tangirala
  2020-02-21 17:22   ` Christoph Hellwig
  2020-02-21 11:50 ` [PATCH v7 7/9] fscrypt: add inline encryption support Satya Tangirala
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 40+ messages in thread
From: Satya Tangirala @ 2020-02-21 11:50 UTC (permalink / raw)
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4
  Cc: Barani Muthukumaran, Kuohong Wang, Kim Boojin, Satya Tangirala

Wire up ufshcd.c with the UFS Crypto API, the block layer inline
encryption additions and the keyslot manager.

Also, introduce UFSHCD_QUIRK_BROKEN_CRYPTO, which UFS drivers that
don't yet support inline encryption need to set - taken from patches
by John Stultz <john.stultz@linaro.org>
(https://android-review.googlesource.com/c/kernel/common/+/1162224/5)
(https://android-review.googlesource.com/c/kernel/common/+/1162225/5)
(https://android-review.googlesource.com/c/kernel/common/+/1164506/1)
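
As a minimal sketch (not part of this patch) of how a vendor host
controller driver that hasn't validated inline crypto opts out: set the
quirk in its init path, exactly as the ufs-hisi and ufs-qcom hunks below
do. ufs_example_init() is a hypothetical vops init callback used only
for illustration.

#include "ufshcd.h"

static int ufs_example_init(struct ufs_hba *hba)
{
	/*
	 * With this quirk set, ufshcd_hba_init_crypto() skips crypto init
	 * even if the controller advertises MASK_CRYPTO_SUPPORT, so
	 * UFSHCD_CAP_CRYPTO stays clear and no keyslot manager is exposed.
	 */
	hba->quirks |= UFSHCD_QUIRK_BROKEN_CRYPTO;
	return 0;
}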

Signed-off-by: Satya Tangirala <satyat@google.com>
---
 drivers/scsi/ufs/ufs-hisi.c      |  8 +++++
 drivers/scsi/ufs/ufs-qcom.c      |  7 ++++
 drivers/scsi/ufs/ufshcd-crypto.c | 26 ++++++++++++++
 drivers/scsi/ufs/ufshcd-crypto.h | 14 ++++++++
 drivers/scsi/ufs/ufshcd.c        | 59 +++++++++++++++++++++++++++++---
 drivers/scsi/ufs/ufshcd.h        |  8 +++++
 6 files changed, 117 insertions(+), 5 deletions(-)

diff --git a/drivers/scsi/ufs/ufs-hisi.c b/drivers/scsi/ufs/ufs-hisi.c
index 5d6487350a6c..fa9d4d1c43c9 100644
--- a/drivers/scsi/ufs/ufs-hisi.c
+++ b/drivers/scsi/ufs/ufs-hisi.c
@@ -475,6 +475,14 @@ static int ufs_hisi_init_common(struct ufs_hba *hba)
 	if (!host)
 		return -ENOMEM;
 
+	/*
+	 * Inline crypto is currently broken with ufs-hisi because the keyslots
+	 * overlap with the vendor-specific SYS CTRL registers -- and even if
+	 * software uses only non-overlapping keyslots, the kernel crashes when
+	 * programming a key or a UFS error occurs on the first encrypted I/O.
+	 */
+	hba->quirks |= UFSHCD_QUIRK_BROKEN_CRYPTO;
+
 	host->hba = hba;
 	ufshcd_set_variant(hba, host);
 
diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c
index c69c29a1ceb9..4b2ec3745a16 100644
--- a/drivers/scsi/ufs/ufs-qcom.c
+++ b/drivers/scsi/ufs/ufs-qcom.c
@@ -1002,6 +1002,13 @@ static void ufs_qcom_advertise_quirks(struct ufs_hba *hba)
 				| UFSHCD_QUIRK_DME_PEER_ACCESS_AUTO_MODE
 				| UFSHCD_QUIRK_BROKEN_PA_RXHSUNTERMCAP);
 	}
+
+	/*
+	 * Inline crypto is currently broken with ufs-qcom at least because the
+	 * device tree doesn't include the crypto registers.  There are likely
+	 * to be other issues that will need to be addressed too.
+	 */
+	hba->quirks |= UFSHCD_QUIRK_BROKEN_CRYPTO;
 }
 
 static void ufs_qcom_set_caps(struct ufs_hba *hba)
diff --git a/drivers/scsi/ufs/ufshcd-crypto.c b/drivers/scsi/ufs/ufshcd-crypto.c
index 1b8e14d30c04..cd7ca50a1dd9 100644
--- a/drivers/scsi/ufs/ufshcd-crypto.c
+++ b/drivers/scsi/ufs/ufshcd-crypto.c
@@ -365,3 +365,29 @@ void ufshcd_crypto_destroy_keyslot_manager(struct ufs_hba *hba)
 {
 	blk_ksm_destroy(&hba->ksm);
 }
+
+int ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba,
+			       struct scsi_cmnd *cmd,
+			       struct ufshcd_lrb *lrbp)
+{
+	struct rq_crypt_ctx *rc = &cmd->request->rq_crypt_ctx;
+	struct bio_crypt_ctx *bc = rc->bc;
+
+	lrbp->crypto_enable = false;
+
+	if (WARN_ON(!(hba->caps & UFSHCD_CAP_CRYPTO))) {
+		/*
+		 * Upper layer asked us to do inline encryption
+		 * but that isn't enabled, so we fail this request.
+		 */
+		return -EINVAL;
+	}
+	if (!ufshcd_keyslot_valid(hba, rc->keyslot))
+		return -EINVAL;
+
+	lrbp->crypto_enable = true;
+	lrbp->crypto_key_slot = rc->keyslot;
+	lrbp->data_unit_num = bc->bc_dun[0];
+
+	return 0;
+}
diff --git a/drivers/scsi/ufs/ufshcd-crypto.h b/drivers/scsi/ufs/ufshcd-crypto.h
index 8270c0c5081a..c76f93ede51c 100644
--- a/drivers/scsi/ufs/ufshcd-crypto.h
+++ b/drivers/scsi/ufs/ufshcd-crypto.h
@@ -16,6 +16,15 @@ static inline bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba)
 	return hba->crypto_capabilities.reg_val != 0;
 }
 
+int ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba,
+			       struct scsi_cmnd *cmd,
+			       struct ufshcd_lrb *lrbp);
+
+static inline bool ufshcd_lrbp_crypto_enabled(struct ufshcd_lrb *lrbp)
+{
+	return lrbp->crypto_enable;
+}
+
 void ufshcd_crypto_enable(struct ufs_hba *hba);
 
 void ufshcd_crypto_disable(struct ufs_hba *hba);
@@ -49,6 +58,11 @@ static inline void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
 static inline void ufshcd_crypto_destroy_keyslot_manager(struct ufs_hba *hba)
 { }
 
+static inline bool ufshcd_lrbp_crypto_enabled(struct ufshcd_lrb *lrbp)
+{
+	return false;
+}
+
 #endif /* CONFIG_SCSI_UFS_CRYPTO */
 
 #endif /* _UFSHCD_CRYPTO_H */
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 825d9eb34f10..9ecfc10feafb 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -47,6 +47,7 @@
 #include "unipro.h"
 #include "ufs-sysfs.h"
 #include "ufs_bsg.h"
+#include "ufshcd-crypto.h"
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/ufs.h>
@@ -816,7 +817,14 @@ static void ufshcd_enable_run_stop_reg(struct ufs_hba *hba)
  */
 static inline void ufshcd_hba_start(struct ufs_hba *hba)
 {
-	ufshcd_writel(hba, CONTROLLER_ENABLE, REG_CONTROLLER_ENABLE);
+	u32 val = CONTROLLER_ENABLE;
+
+	if (ufshcd_hba_is_crypto_supported(hba)) {
+		ufshcd_crypto_enable(hba);
+		val |= CRYPTO_GENERAL_ENABLE;
+	}
+
+	ufshcd_writel(hba, val, REG_CONTROLLER_ENABLE);
 }
 
 /**
@@ -2192,9 +2200,23 @@ static void ufshcd_prepare_req_desc_hdr(struct ufshcd_lrb *lrbp,
 		dword_0 |= UTP_REQ_DESC_INT_CMD;
 
 	/* Transfer request descriptor header fields */
+	if (ufshcd_lrbp_crypto_enabled(lrbp)) {
+#ifdef CONFIG_SCSI_UFS_CRYPTO
+		dword_0 |= UTP_REQ_DESC_CRYPTO_ENABLE_CMD;
+		dword_0 |= lrbp->crypto_key_slot;
+		req_desc->header.dword_1 =
+			cpu_to_le32(lower_32_bits(lrbp->data_unit_num));
+		req_desc->header.dword_3 =
+			cpu_to_le32(upper_32_bits(lrbp->data_unit_num));
+#endif /* CONFIG_SCSI_UFS_CRYPTO */
+	} else {
+		/* dword_1 and dword_3 are reserved, hence they are set to 0 */
+		req_desc->header.dword_1 = 0;
+		req_desc->header.dword_3 = 0;
+	}
+
 	req_desc->header.dword_0 = cpu_to_le32(dword_0);
-	/* dword_1 is reserved, hence it is set to 0 */
-	req_desc->header.dword_1 = 0;
+
 	/*
 	 * assigning invalid value for command status. Controller
 	 * updates OCS on command completion, with the command
@@ -2202,8 +2224,6 @@ static void ufshcd_prepare_req_desc_hdr(struct ufshcd_lrb *lrbp,
 	 */
 	req_desc->header.dword_2 =
 		cpu_to_le32(OCS_INVALID_COMMAND_STATUS);
-	/* dword_3 is reserved, hence it is set to 0 */
-	req_desc->header.dword_3 = 0;
 
 	req_desc->prd_table_length = 0;
 }
@@ -2437,6 +2457,20 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
 	lrbp->task_tag = tag;
 	lrbp->lun = ufshcd_scsi_to_upiu_lun(cmd->device->lun);
 	lrbp->intr_cmd = !ufshcd_is_intr_aggr_allowed(hba) ? true : false;
+
+#ifdef CONFIG_SCSI_UFS_CRYPTO
+	if (cmd->request->rq_crypt_ctx.keyslot >= 0) {
+		err = ufshcd_prepare_lrbp_crypto(hba, cmd, lrbp);
+		if (err) {
+			lrbp->cmd = NULL;
+			ufshcd_release(hba);
+			goto out;
+		}
+	} else {
+		lrbp->crypto_enable = false;
+	}
+#endif
+
 	lrbp->req_abort_skip = false;
 
 	ufshcd_comp_scsi_upiu(hba, lrbp);
@@ -2470,6 +2504,9 @@ static int ufshcd_compose_dev_cmd(struct ufs_hba *hba,
 	lrbp->task_tag = tag;
 	lrbp->lun = 0; /* device management cmd is not specific to any LUN */
 	lrbp->intr_cmd = true; /* No interrupt aggregation */
+#ifdef CONFIG_SCSI_UFS_CRYPTO
+	lrbp->crypto_enable = false; /* No crypto operations */
+#endif
 	hba->dev_cmd.type = cmd_type;
 
 	return ufshcd_comp_devman_upiu(hba, lrbp);
@@ -4208,6 +4245,8 @@ static inline void ufshcd_hba_stop(struct ufs_hba *hba, bool can_sleep)
 {
 	int err;
 
+	ufshcd_crypto_disable(hba);
+
 	ufshcd_writel(hba, CONTROLLER_DISABLE,  REG_CONTROLLER_ENABLE);
 	err = ufshcd_wait_for_register(hba, REG_CONTROLLER_ENABLE,
 					CONTROLLER_ENABLE, CONTROLLER_DISABLE,
@@ -4624,6 +4663,8 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
 	if (ufshcd_is_rpm_autosuspend_allowed(hba))
 		sdev->rpm_autosuspend = 1;
 
+	ufshcd_crypto_setup_rq_keyslot_manager(hba, q);
+
 	return 0;
 }
 
@@ -8304,6 +8345,7 @@ EXPORT_SYMBOL_GPL(ufshcd_remove);
  */
 void ufshcd_dealloc_host(struct ufs_hba *hba)
 {
+	ufshcd_crypto_destroy_keyslot_manager(hba);
 	scsi_host_put(hba->host);
 }
 EXPORT_SYMBOL_GPL(ufshcd_dealloc_host);
@@ -8513,6 +8555,13 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
 	/* Reset the attached device */
 	ufshcd_vops_device_reset(hba);
 
+	/* Init crypto */
+	err = ufshcd_hba_init_crypto(hba);
+	if (err) {
+		dev_err(hba->dev, "crypto setup failed\n");
+		goto out_remove_scsi_host;
+	}
+
 	/* Host controller enable */
 	err = ufshcd_hba_enable(hba);
 	if (err) {
diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
index 7b8a87418f0c..c8f948aa5e3d 100644
--- a/drivers/scsi/ufs/ufshcd.h
+++ b/drivers/scsi/ufs/ufshcd.h
@@ -168,6 +168,9 @@ struct ufs_pm_lvl_states {
  * @intr_cmd: Interrupt command (doesn't participate in interrupt aggregation)
  * @issue_time_stamp: time stamp for debug purposes
  * @compl_time_stamp: time stamp for statistics
+ * @crypto_enable: whether or not the request needs inline crypto operations
+ * @crypto_key_slot: the key slot to use for inline crypto
+ * @data_unit_num: the data unit number for the first block for inline crypto
  * @req_abort_skip: skip request abort task flag
  */
 struct ufshcd_lrb {
@@ -192,6 +195,11 @@ struct ufshcd_lrb {
 	bool intr_cmd;
 	ktime_t issue_time_stamp;
 	ktime_t compl_time_stamp;
+#if IS_ENABLED(CONFIG_SCSI_UFS_CRYPTO)
+	bool crypto_enable;
+	u8 crypto_key_slot;
+	u64 data_unit_num;
+#endif /* CONFIG_SCSI_UFS_CRYPTO */
 
 	bool req_abort_skip;
 };
-- 
2.25.0.265.gbab2e86ba0-goog


* [PATCH v7 7/9] fscrypt: add inline encryption support
  2020-02-21 11:50 [PATCH v7 0/9] Inline Encryption Support Satya Tangirala
                   ` (5 preceding siblings ...)
  2020-02-21 11:50 ` [PATCH v7 6/9] scsi: ufs: Add inline encryption support to UFS Satya Tangirala
@ 2020-02-21 11:50 ` Satya Tangirala
  2020-02-21 18:40   ` Eric Biggers
                     ` (2 more replies)
  2020-02-21 11:50 ` [PATCH v7 8/9] f2fs: " Satya Tangirala
                   ` (2 subsequent siblings)
  9 siblings, 3 replies; 40+ messages in thread
From: Satya Tangirala @ 2020-02-21 11:50 UTC (permalink / raw)
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4
  Cc: Barani Muthukumaran, Kuohong Wang, Kim Boojin, Satya Tangirala,
	Eric Biggers

Add support for inline encryption to fs/crypto/.  With "inline
encryption", the block layer handles the decryption/encryption as part
of the bio, instead of the filesystem doing the crypto itself via
Linux's crypto API.  This model is needed in order to take advantage of
the inline encryption hardware present on most modern mobile SoCs.

To use inline encryption, the filesystem needs to be mounted with
'-o inlinecrypt'.  The contents of any encrypted files will then be
encrypted using blk-crypto, instead of using the traditional
filesystem-layer crypto. Fscrypt still provides the key and IV to use,
and the actual ciphertext on-disk is still the same; therefore it's
testable using the existing fscrypt ciphertext verification tests.

Note that since blk-crypto has a fallback to Linux's crypto API, and
also supports all the encryption modes currently supported by fscrypt,
this feature is usable and testable even without actual inline
encryption hardware.

Per-filesystem changes will be needed to set encryption contexts when
submitting bios and to implement the 'inlinecrypt' mount option.  This
patch just adds the common code.
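
As a rough, hedged sketch of what those per-filesystem changes look like
when building a contents write bio (example_submit_block() and its
arguments are hypothetical; the fscrypt_* helpers are the ones added by
this patch; error handling is elided for brevity):

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/fs.h>
#include <linux/fscrypt.h>

static void example_submit_block(struct bio **bio, struct inode *inode,
				 struct page *page, u64 lblk, sector_t pblk)
{
	/* Start a new bio if this block can't share the crypto context. */
	if (*bio && !fscrypt_mergeable_bio(*bio, inode, lblk)) {
		submit_bio(*bio);
		*bio = NULL;
	}
	if (!*bio) {
		*bio = bio_alloc(GFP_NOFS, BIO_MAX_PAGES);
		/* Must be called while the bio is still empty. */
		fscrypt_set_bio_crypt_ctx(*bio, inode, lblk, GFP_NOFS);
		bio_set_dev(*bio, inode->i_sb->s_bdev);
		(*bio)->bi_iter.bi_sector =
			pblk << (inode->i_blkbits - SECTOR_SHIFT);
		bio_set_op_attrs(*bio, REQ_OP_WRITE, 0);
	}
	bio_add_page(*bio, page, i_blocksize(inode), 0);
}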

Co-developed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Satya Tangirala <satyat@google.com>
---
 fs/crypto/Kconfig           |   6 +
 fs/crypto/Makefile          |   1 +
 fs/crypto/bio.c             |  55 ++++++
 fs/crypto/crypto.c          |   2 +-
 fs/crypto/fname.c           |   4 +-
 fs/crypto/fscrypt_private.h | 121 ++++++++++++--
 fs/crypto/inline_crypt.c    | 324 ++++++++++++++++++++++++++++++++++++
 fs/crypto/keyring.c         |   4 +-
 fs/crypto/keysetup.c        |  95 +++++++----
 fs/crypto/keysetup_v1.c     |  16 +-
 include/linux/fs.h          |   1 +
 include/linux/fscrypt.h     |  57 +++++++
 12 files changed, 629 insertions(+), 57 deletions(-)
 create mode 100644 fs/crypto/inline_crypt.c

diff --git a/fs/crypto/Kconfig b/fs/crypto/Kconfig
index 8046d7c7a3e9..f1f11a6228eb 100644
--- a/fs/crypto/Kconfig
+++ b/fs/crypto/Kconfig
@@ -24,3 +24,9 @@ config FS_ENCRYPTION_ALGS
 	select CRYPTO_SHA256
 	select CRYPTO_SHA512
 	select CRYPTO_XTS
+
+config FS_ENCRYPTION_INLINE_CRYPT
+	bool "Enable fscrypt to use inline crypto"
+	depends on FS_ENCRYPTION && BLK_INLINE_ENCRYPTION
+	help
+	  Enable fscrypt to use inline encryption hardware if available.
diff --git a/fs/crypto/Makefile b/fs/crypto/Makefile
index 232e2bb5a337..652c7180ec6d 100644
--- a/fs/crypto/Makefile
+++ b/fs/crypto/Makefile
@@ -11,3 +11,4 @@ fscrypto-y := crypto.o \
 	      policy.o
 
 fscrypto-$(CONFIG_BLOCK) += bio.o
+fscrypto-$(CONFIG_FS_ENCRYPTION_INLINE_CRYPT) += inline_crypt.o
diff --git a/fs/crypto/bio.c b/fs/crypto/bio.c
index 4fa18fff9c4e..82d06cf4b94a 100644
--- a/fs/crypto/bio.c
+++ b/fs/crypto/bio.c
@@ -24,6 +24,8 @@
 #include <linux/module.h>
 #include <linux/bio.h>
 #include <linux/namei.h>
+#include <linux/fscrypt.h>
+#include <linux/blkdev.h>
 #include "fscrypt_private.h"
 
 void fscrypt_decrypt_bio(struct bio *bio)
@@ -41,6 +43,54 @@ void fscrypt_decrypt_bio(struct bio *bio)
 }
 EXPORT_SYMBOL(fscrypt_decrypt_bio);
 
+static int fscrypt_zeroout_range_inline_crypt(const struct inode *inode,
+					      pgoff_t lblk, sector_t pblk,
+					      unsigned int len)
+{
+	const unsigned int blockbits = inode->i_blkbits;
+	const unsigned int blocks_per_page = 1 << (PAGE_SHIFT - blockbits);
+	struct bio *bio;
+	int ret, err = 0;
+	int num_pages = 0;
+
+	/* This always succeeds since __GFP_DIRECT_RECLAIM is set. */
+	bio = bio_alloc(GFP_NOFS, BIO_MAX_PAGES);
+
+	while (len) {
+		unsigned int blocks_this_page = min(len, blocks_per_page);
+		unsigned int bytes_this_page = blocks_this_page << blockbits;
+
+		if (num_pages == 0) {
+			fscrypt_set_bio_crypt_ctx(bio, inode, lblk, GFP_NOIO);
+			bio_set_dev(bio, inode->i_sb->s_bdev);
+			bio->bi_iter.bi_sector =
+					pblk << (blockbits - SECTOR_SHIFT);
+			bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
+		}
+		ret = bio_add_page(bio, ZERO_PAGE(0), bytes_this_page, 0);
+		if (WARN_ON(ret != bytes_this_page)) {
+			err = -EIO;
+			goto out;
+		}
+		num_pages++;
+		len -= blocks_this_page;
+		lblk += blocks_this_page;
+		pblk += blocks_this_page;
+		if (num_pages == BIO_MAX_PAGES || !len) {
+			err = submit_bio_wait(bio);
+			if (!err && bio->bi_status)
+				err = -EIO;
+			if (err)
+				goto out;
+			bio_reset(bio);
+			num_pages = 0;
+		}
+	}
+out:
+	bio_put(bio);
+	return err;
+}
+
 /**
  * fscrypt_zeroout_range() - zero out a range of blocks in an encrypted file
  * @inode: the file's inode
@@ -69,12 +119,17 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
 	unsigned int nr_pages;
 	unsigned int i;
 	unsigned int offset;
+	const bool inlinecrypt = fscrypt_inode_uses_inline_crypto(inode);
 	struct bio *bio;
 	int ret, err;
 
 	if (len == 0)
 		return 0;
 
+	if (inlinecrypt)
+		return fscrypt_zeroout_range_inline_crypt(inode, lblk, pblk,
+							  len);
+
 	BUILD_BUG_ON(ARRAY_SIZE(pages) > BIO_MAX_PAGES);
 	nr_pages = min_t(unsigned int, ARRAY_SIZE(pages),
 			 (len + blocks_per_page - 1) >> blocks_per_page_bits);
diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c
index 1ecaac7ee3cb..263bc676c73d 100644
--- a/fs/crypto/crypto.c
+++ b/fs/crypto/crypto.c
@@ -95,7 +95,7 @@ int fscrypt_crypt_block(const struct inode *inode, fscrypt_direction_t rw,
 	DECLARE_CRYPTO_WAIT(wait);
 	struct scatterlist dst, src;
 	struct fscrypt_info *ci = inode->i_crypt_info;
-	struct crypto_skcipher *tfm = ci->ci_ctfm;
+	struct crypto_skcipher *tfm = ci->ci_key.tfm;
 	int res = 0;
 
 	if (WARN_ON_ONCE(len <= 0))
diff --git a/fs/crypto/fname.c b/fs/crypto/fname.c
index 4c212442a8f7..0fca2d7a5645 100644
--- a/fs/crypto/fname.c
+++ b/fs/crypto/fname.c
@@ -117,7 +117,7 @@ int fscrypt_fname_encrypt(const struct inode *inode, const struct qstr *iname,
 	struct skcipher_request *req = NULL;
 	DECLARE_CRYPTO_WAIT(wait);
 	const struct fscrypt_info *ci = inode->i_crypt_info;
-	struct crypto_skcipher *tfm = ci->ci_ctfm;
+	struct crypto_skcipher *tfm = ci->ci_key.tfm;
 	union fscrypt_iv iv;
 	struct scatterlist sg;
 	int res;
@@ -170,7 +170,7 @@ static int fname_decrypt(const struct inode *inode,
 	DECLARE_CRYPTO_WAIT(wait);
 	struct scatterlist src_sg, dst_sg;
 	const struct fscrypt_info *ci = inode->i_crypt_info;
-	struct crypto_skcipher *tfm = ci->ci_ctfm;
+	struct crypto_skcipher *tfm = ci->ci_key.tfm;
 	union fscrypt_iv iv;
 	int res;
 
diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h
index 9aae851409e5..46e1db6c6220 100644
--- a/fs/crypto/fscrypt_private.h
+++ b/fs/crypto/fscrypt_private.h
@@ -14,6 +14,7 @@
 #include <linux/fscrypt.h>
 #include <linux/siphash.h>
 #include <crypto/hash.h>
+#include <linux/blk-crypto.h>
 
 #define CONST_STRLEN(str)	(sizeof(str) - 1)
 
@@ -146,6 +147,20 @@ struct fscrypt_symlink_data {
 	char encrypted_path[1];
 } __packed;
 
+/**
+ * struct fscrypt_prepared_key - a key prepared for actual encryption/decryption
+ * @tfm: crypto API transform object
+ * @blk_key: key for blk-crypto
+ *
+ * Normally only one of the fields will be non-NULL.
+ */
+struct fscrypt_prepared_key {
+	struct crypto_skcipher *tfm;
+#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
+	struct fscrypt_blk_crypto_key *blk_key;
+#endif
+};
+
 /*
  * fscrypt_info - the "encryption key" for an inode
  *
@@ -155,12 +170,20 @@ struct fscrypt_symlink_data {
  */
 struct fscrypt_info {
 
-	/* The actual crypto transform used for encryption and decryption */
-	struct crypto_skcipher *ci_ctfm;
+	/* The key in a form prepared for actual encryption/decryption */
+	struct fscrypt_prepared_key	ci_key;
 
 	/* True if the key should be freed when this fscrypt_info is freed */
 	bool ci_owns_key;
 
+#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
+	/*
+	 * True if this inode will use inline encryption (blk-crypto) instead of
+	 * the traditional filesystem-layer encryption.
+	 */
+	bool ci_inlinecrypt;
+#endif
+
 	/*
 	 * Encryption mode used for this inode.  It corresponds to either the
 	 * contents or filenames encryption mode, depending on the inode type.
@@ -185,7 +208,7 @@ struct fscrypt_info {
 
 	/*
 	 * If non-NULL, then encryption is done using the master key directly
-	 * and ci_ctfm will equal ci_direct_key->dk_ctfm.
+	 * and ci_key will equal ci_direct_key->dk_key.
 	 */
 	struct fscrypt_direct_key *ci_direct_key;
 
@@ -238,6 +261,7 @@ union fscrypt_iv {
 		u8 nonce[FS_KEY_DERIVATION_NONCE_SIZE];
 	};
 	u8 raw[FSCRYPT_MAX_IV_SIZE];
+	__le64 dun[FSCRYPT_MAX_IV_SIZE / sizeof(__le64)];
 };
 
 void fscrypt_generate_iv(union fscrypt_iv *iv, u64 lblk_num,
@@ -280,6 +304,76 @@ extern int fscrypt_hkdf_expand(const struct fscrypt_hkdf *hkdf, u8 context,
 
 extern void fscrypt_destroy_hkdf(struct fscrypt_hkdf *hkdf);
 
+/* inline_crypt.c */
+#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
+extern void fscrypt_select_encryption_impl(struct fscrypt_info *ci);
+
+static inline bool
+fscrypt_using_inline_encryption(const struct fscrypt_info *ci)
+{
+	return ci->ci_inlinecrypt;
+}
+
+extern int fscrypt_prepare_inline_crypt_key(
+					struct fscrypt_prepared_key *prep_key,
+					const u8 *raw_key,
+					const struct fscrypt_info *ci);
+
+extern void fscrypt_destroy_inline_crypt_key(
+					struct fscrypt_prepared_key *prep_key);
+
+/*
+ * Check whether the crypto transform or blk-crypto key has been allocated in
+ * @prep_key, depending on which encryption implementation the file will use.
+ */
+static inline bool
+fscrypt_is_key_prepared(struct fscrypt_prepared_key *prep_key,
+			const struct fscrypt_info *ci)
+{
+	/*
+	 * The READ_ONCE() here pairs with the smp_store_release() in
+	 * fscrypt_prepare_key().  (This only matters for the per-mode keys,
+	 * which are shared by multiple inodes.)
+	 */
+	if (fscrypt_using_inline_encryption(ci))
+		return READ_ONCE(prep_key->blk_key) != NULL;
+	return READ_ONCE(prep_key->tfm) != NULL;
+}
+
+#else /* CONFIG_FS_ENCRYPTION_INLINE_CRYPT */
+
+static inline void fscrypt_select_encryption_impl(struct fscrypt_info *ci)
+{
+}
+
+static inline bool fscrypt_using_inline_encryption(
+					const struct fscrypt_info *ci)
+{
+	return false;
+}
+
+static inline int
+fscrypt_prepare_inline_crypt_key(struct fscrypt_prepared_key *prep_key,
+				 const u8 *raw_key,
+				 const struct fscrypt_info *ci)
+{
+	WARN_ON(1);
+	return -EOPNOTSUPP;
+}
+
+static inline void
+fscrypt_destroy_inline_crypt_key(struct fscrypt_prepared_key *prep_key)
+{
+}
+
+static inline bool
+fscrypt_is_key_prepared(struct fscrypt_prepared_key *prep_key,
+			const struct fscrypt_info *ci)
+{
+	return READ_ONCE(prep_key->tfm) != NULL;
+}
+#endif /* !CONFIG_FS_ENCRYPTION_INLINE_CRYPT */
+
 /* keyring.c */
 
 /*
@@ -369,14 +463,11 @@ struct fscrypt_master_key {
 	struct list_head	mk_decrypted_inodes;
 	spinlock_t		mk_decrypted_inodes_lock;
 
-	/* Crypto API transforms for DIRECT_KEY policies, allocated on-demand */
-	struct crypto_skcipher	*mk_direct_tfms[__FSCRYPT_MODE_MAX + 1];
+	/* Per-mode keys for DIRECT_KEY policies, allocated on-demand */
+	struct fscrypt_prepared_key mk_direct_keys[__FSCRYPT_MODE_MAX + 1];
 
-	/*
-	 * Crypto API transforms for filesystem-layer implementation of
-	 * IV_INO_LBLK_64 policies, allocated on-demand.
-	 */
-	struct crypto_skcipher	*mk_iv_ino_lblk_64_tfms[__FSCRYPT_MODE_MAX + 1];
+	/* Per-mode keys for IV_INO_LBLK_64 policies, allocated on-demand */
+	struct fscrypt_prepared_key mk_iv_ino_lblk_64_keys[__FSCRYPT_MODE_MAX + 1];
 
 } __randomize_layout;
 
@@ -433,13 +524,17 @@ struct fscrypt_mode {
 	int keysize;
 	int ivsize;
 	int logged_impl_name;
+	enum blk_crypto_mode_num blk_crypto_mode;
+	unsigned int blk_crypto_dun_bytes_required;
 };
 
 extern struct fscrypt_mode fscrypt_modes[];
 
-extern struct crypto_skcipher *
-fscrypt_allocate_skcipher(struct fscrypt_mode *mode, const u8 *raw_key,
-			  const struct inode *inode);
+extern int fscrypt_prepare_key(struct fscrypt_prepared_key *prep_key,
+			       const u8 *raw_key,
+			       const struct fscrypt_info *ci);
+
+extern void fscrypt_destroy_prepared_key(struct fscrypt_prepared_key *prep_key);
 
 extern int fscrypt_set_per_file_enc_key(struct fscrypt_info *ci,
 					const u8 *raw_key);
diff --git a/fs/crypto/inline_crypt.c b/fs/crypto/inline_crypt.c
new file mode 100644
index 000000000000..72692366795a
--- /dev/null
+++ b/fs/crypto/inline_crypt.c
@@ -0,0 +1,324 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Inline encryption support for fscrypt
+ *
+ * Copyright 2019 Google LLC
+ */
+
+/*
+ * With "inline encryption", the block layer handles the decryption/encryption
+ * as part of the bio, instead of the filesystem doing the crypto itself via
+ * crypto API.  See Documentation/block/inline-encryption.rst.  fscrypt still
+ * provides the key and IV to use.
+ */
+
+#include <linux/blk-crypto.h>
+#include <linux/blkdev.h>
+#include <linux/buffer_head.h>
+#include <linux/sched/mm.h>
+
+#include "fscrypt_private.h"
+
+struct fscrypt_blk_crypto_key {
+	struct blk_crypto_key base;
+	int num_devs;
+	struct request_queue *devs[];
+};
+
+/* Enable inline encryption for this file if supported. */
+void fscrypt_select_encryption_impl(struct fscrypt_info *ci)
+{
+	const struct inode *inode = ci->ci_inode;
+	struct super_block *sb = inode->i_sb;
+
+	/* The file must need contents encryption, not filenames encryption */
+	if (!S_ISREG(inode->i_mode))
+		return;
+
+	/* blk-crypto must implement the needed encryption algorithm */
+	if (ci->ci_mode->blk_crypto_mode == BLK_ENCRYPTION_MODE_INVALID)
+		return;
+
+	/* The filesystem must be mounted with -o inlinecrypt */
+	if (!(sb->s_flags & SB_INLINE_CRYPT))
+		return;
+
+	ci->ci_inlinecrypt = true;
+}
+
+int fscrypt_prepare_inline_crypt_key(struct fscrypt_prepared_key *prep_key,
+				     const u8 *raw_key,
+				     const struct fscrypt_info *ci)
+{
+	const struct inode *inode = ci->ci_inode;
+	struct super_block *sb = inode->i_sb;
+	enum blk_crypto_mode_num crypto_mode = ci->ci_mode->blk_crypto_mode;
+	unsigned int blk_crypto_dun_bytes =
+				ci->ci_mode->blk_crypto_dun_bytes_required;
+	int num_devs = 1;
+	int queue_refs = 0;
+	struct fscrypt_blk_crypto_key *blk_key;
+	int err;
+	int i;
+	unsigned int flags;
+
+	if (sb->s_cop->get_num_devices)
+		num_devs = sb->s_cop->get_num_devices(sb);
+	if (WARN_ON(num_devs < 1))
+		return -EINVAL;
+
+	blk_key = kzalloc(struct_size(blk_key, devs, num_devs), GFP_NOFS);
+	if (!blk_key)
+		return -ENOMEM;
+
+	blk_key->num_devs = num_devs;
+	if (num_devs == 1)
+		blk_key->devs[0] = bdev_get_queue(sb->s_bdev);
+	else
+		sb->s_cop->get_devices(sb, blk_key->devs);
+
+	err = blk_crypto_init_key(&blk_key->base, raw_key, crypto_mode,
+				  blk_crypto_dun_bytes, sb->s_blocksize);
+	if (err) {
+		fscrypt_err(inode, "error %d initializing blk-crypto key", err);
+		goto fail;
+	}
+
+	/*
+	 * We have to start using blk-crypto on all the filesystem's devices.
+	 * We also have to save all the request_queue's for later so that the
+	 * key can be evicted from them.  This is needed because some keys
+	 * aren't destroyed until after the filesystem was already unmounted
+	 * (namely, the per-mode keys in struct fscrypt_master_key).
+	 */
+	for (i = 0; i < num_devs; i++) {
+		if (!blk_get_queue(blk_key->devs[i])) {
+			fscrypt_err(inode, "couldn't get request_queue");
+			err = -EAGAIN;
+			goto fail;
+		}
+		queue_refs++;
+
+		flags = memalloc_nofs_save();
+		err = blk_crypto_start_using_key(&blk_key->base,
+						 blk_key->devs[i]);
+		memalloc_nofs_restore(flags);
+		if (err) {
+			fscrypt_err(inode,
+				    "error %d starting to use blk-crypto", err);
+			goto fail;
+		}
+	}
+	/*
+	 * Pairs with READ_ONCE() in fscrypt_is_key_prepared().  (Only matters
+	 * for the per-mode keys, which are shared by multiple inodes.)
+	 */
+	smp_store_release(&prep_key->blk_key, blk_key);
+	return 0;
+
+fail:
+	for (i = 0; i < queue_refs; i++)
+		blk_put_queue(blk_key->devs[i]);
+	kzfree(blk_key);
+	return err;
+}
+
+void fscrypt_destroy_inline_crypt_key(struct fscrypt_prepared_key *prep_key)
+{
+	struct fscrypt_blk_crypto_key *blk_key = prep_key->blk_key;
+	int i;
+
+	if (blk_key) {
+		for (i = 0; i < blk_key->num_devs; i++) {
+			blk_crypto_evict_key(blk_key->devs[i], &blk_key->base);
+			blk_put_queue(blk_key->devs[i]);
+		}
+		kzfree(blk_key);
+	}
+}
+
+/**
+ * fscrypt_inode_uses_inline_crypto - test whether an inode uses inline
+ *				      encryption
+ * @inode: an inode
+ *
+ * Return: true if the inode requires file contents encryption and if the
+ *	   encryption should be done in the block layer via blk-crypto rather
+ *	   than in the filesystem layer.
+ */
+bool fscrypt_inode_uses_inline_crypto(const struct inode *inode)
+{
+	return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) &&
+		inode->i_crypt_info->ci_inlinecrypt;
+}
+EXPORT_SYMBOL_GPL(fscrypt_inode_uses_inline_crypto);
+
+/**
+ * fscrypt_inode_uses_fs_layer_crypto - test whether an inode uses fs-layer
+ *					encryption
+ * @inode: an inode
+ *
+ * Return: true if the inode requires file contents encryption and if the
+ *	   encryption should be done in the filesystem layer rather than in the
+ *	   block layer via blk-crypto.
+ */
+bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode)
+{
+	return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) &&
+		!inode->i_crypt_info->ci_inlinecrypt;
+}
+EXPORT_SYMBOL_GPL(fscrypt_inode_uses_fs_layer_crypto);
+
+static void fscrypt_generate_dun(const struct fscrypt_info *ci, u64 lblk_num,
+				 u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE])
+{
+	union fscrypt_iv iv;
+	int i;
+
+	fscrypt_generate_iv(&iv, lblk_num, ci);
+
+	BUILD_BUG_ON(FSCRYPT_MAX_IV_SIZE > BLK_CRYPTO_MAX_IV_SIZE);
+	memset(dun, 0, BLK_CRYPTO_MAX_IV_SIZE);
+	for (i = 0; i < ci->ci_mode->ivsize/sizeof(dun[0]); i++)
+		dun[i] = le64_to_cpu(iv.dun[i]);
+}
+
+/**
+ * fscrypt_set_bio_crypt_ctx - prepare a file contents bio for inline encryption
+ * @bio: a bio which will eventually be submitted to the file
+ * @inode: the file's inode
+ * @first_lblk: the first file logical block number in the I/O
+ * @gfp_mask: memory allocation flags - these must be a waiting mask so that
+ *					bio_crypt_set_ctx can't fail.
+ *
+ * If the contents of the file should be encrypted (or decrypted) with inline
+ * encryption, then assign the appropriate encryption context to the bio.
+ *
+ * Normally the bio should be newly allocated (i.e. no pages added yet), as
+ * otherwise fscrypt_mergeable_bio() won't work as intended.
+ *
+ * The encryption context will be freed automatically when the bio is freed.
+ */
+void fscrypt_set_bio_crypt_ctx(struct bio *bio, const struct inode *inode,
+			       u64 first_lblk, gfp_t gfp_mask)
+{
+	const struct fscrypt_info *ci = inode->i_crypt_info;
+	u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+
+	if (!fscrypt_inode_uses_inline_crypto(inode))
+		return;
+
+	fscrypt_generate_dun(ci, first_lblk, dun);
+	bio_crypt_set_ctx(bio, &ci->ci_key.blk_key->base, dun, gfp_mask);
+}
+EXPORT_SYMBOL_GPL(fscrypt_set_bio_crypt_ctx);
+
+/* Extract the inode and logical block number from a buffer_head. */
+static bool bh_get_inode_and_lblk_num(const struct buffer_head *bh,
+				      const struct inode **inode_ret,
+				      u64 *lblk_num_ret)
+{
+	struct page *page = bh->b_page;
+	const struct address_space *mapping;
+	const struct inode *inode;
+
+	/*
+	 * The ext4 journal (jbd2) can submit a buffer_head it directly created
+	 * for a non-pagecache page.  fscrypt doesn't care about these.
+	 */
+	mapping = page_mapping(page);
+	if (!mapping)
+		return false;
+	inode = mapping->host;
+
+	*inode_ret = inode;
+	*lblk_num_ret = ((u64)page->index << (PAGE_SHIFT - inode->i_blkbits)) +
+			(bh_offset(bh) >> inode->i_blkbits);
+	return true;
+}
+
+/**
+ * fscrypt_set_bio_crypt_ctx_bh - prepare a file contents bio for inline
+ *				  encryption
+ * @bio: a bio which will eventually be submitted to the file
+ * @first_bh: the first buffer_head for which I/O will be submitted
+ * @gfp_mask: memory allocation flags
+ *
+ * Same as fscrypt_set_bio_crypt_ctx(), except this takes a buffer_head instead
+ * of an inode and block number directly.
+ */
+void fscrypt_set_bio_crypt_ctx_bh(struct bio *bio,
+				 const struct buffer_head *first_bh,
+				 gfp_t gfp_mask)
+{
+	const struct inode *inode;
+	u64 first_lblk;
+
+	if (bh_get_inode_and_lblk_num(first_bh, &inode, &first_lblk))
+		fscrypt_set_bio_crypt_ctx(bio, inode, first_lblk, gfp_mask);
+}
+EXPORT_SYMBOL_GPL(fscrypt_set_bio_crypt_ctx_bh);
+
+/**
+ * fscrypt_mergeable_bio - test whether data can be added to a bio
+ * @bio: the bio being built up
+ * @inode: the inode for the next part of the I/O
+ * @next_lblk: the next file logical block number in the I/O
+ *
+ * When building a bio which may contain data which should undergo inline
+ * encryption (or decryption) via fscrypt, filesystems should call this function
+ * to ensure that the resulting bio contains only logically contiguous data.
+ * This will return false if the next part of the I/O cannot be merged with the
+ * bio because either the encryption key would be different or the encryption
+ * data unit numbers would be discontiguous.
+ *
+ * fscrypt_set_bio_crypt_ctx() must have already been called on the bio.
+ *
+ * Return: true iff the I/O is mergeable
+ */
+bool fscrypt_mergeable_bio(struct bio *bio, const struct inode *inode,
+			   u64 next_lblk)
+{
+	const struct bio_crypt_ctx *bc = bio->bi_crypt_context;
+	u64 next_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+
+	if (!!bc != fscrypt_inode_uses_inline_crypto(inode))
+		return false;
+	if (!bc)
+		return true;
+
+	/*
+	 * Comparing the key pointers is good enough, as all I/O for each key
+	 * uses the same pointer.  I.e., there's currently no need to support
+	 * merging requests where the keys are the same but the pointers differ.
+	 */
+	if (bc->bc_key != &inode->i_crypt_info->ci_key.blk_key->base)
+		return false;
+
+	fscrypt_generate_dun(inode->i_crypt_info, next_lblk, next_dun);
+	return bio_crypt_dun_is_contiguous(bc, bio->bi_iter.bi_size, next_dun);
+}
+EXPORT_SYMBOL_GPL(fscrypt_mergeable_bio);
+
+/**
+ * fscrypt_mergeable_bio_bh - test whether data can be added to a bio
+ * @bio: the bio being built up
+ * @next_bh: the next buffer_head for which I/O will be submitted
+ *
+ * Same as fscrypt_mergeable_bio(), except this takes a buffer_head instead of
+ * an inode and block number directly.
+ *
+ * Return: true iff the I/O is mergeable
+ */
+bool fscrypt_mergeable_bio_bh(struct bio *bio,
+			      const struct buffer_head *next_bh)
+{
+	const struct inode *inode;
+	u64 next_lblk;
+
+	if (!bh_get_inode_and_lblk_num(next_bh, &inode, &next_lblk))
+		return !bio->bi_crypt_context;
+
+	return fscrypt_mergeable_bio(bio, inode, next_lblk);
+}
+EXPORT_SYMBOL_GPL(fscrypt_mergeable_bio_bh);
diff --git a/fs/crypto/keyring.c b/fs/crypto/keyring.c
index ab41b25d4fa1..d8ab33f631ba 100644
--- a/fs/crypto/keyring.c
+++ b/fs/crypto/keyring.c
@@ -44,8 +44,8 @@ static void free_master_key(struct fscrypt_master_key *mk)
 	wipe_master_key_secret(&mk->mk_secret);
 
 	for (i = 0; i <= __FSCRYPT_MODE_MAX; i++) {
-		crypto_free_skcipher(mk->mk_direct_tfms[i]);
-		crypto_free_skcipher(mk->mk_iv_ino_lblk_64_tfms[i]);
+		fscrypt_destroy_prepared_key(&mk->mk_direct_keys[i]);
+		fscrypt_destroy_prepared_key(&mk->mk_iv_ino_lblk_64_keys[i]);
 	}
 
 	key_put(mk->mk_users);
diff --git a/fs/crypto/keysetup.c b/fs/crypto/keysetup.c
index 65cb09fa6ead..7c157130c16a 100644
--- a/fs/crypto/keysetup.c
+++ b/fs/crypto/keysetup.c
@@ -19,6 +19,8 @@ struct fscrypt_mode fscrypt_modes[] = {
 		.cipher_str = "xts(aes)",
 		.keysize = 64,
 		.ivsize = 16,
+		.blk_crypto_mode = BLK_ENCRYPTION_MODE_AES_256_XTS,
+		.blk_crypto_dun_bytes_required = 8,
 	},
 	[FSCRYPT_MODE_AES_256_CTS] = {
 		.friendly_name = "AES-256-CTS-CBC",
@@ -31,6 +33,8 @@ struct fscrypt_mode fscrypt_modes[] = {
 		.cipher_str = "essiv(cbc(aes),sha256)",
 		.keysize = 16,
 		.ivsize = 16,
+		.blk_crypto_mode = BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV,
+		.blk_crypto_dun_bytes_required = 8,
 	},
 	[FSCRYPT_MODE_AES_128_CTS] = {
 		.friendly_name = "AES-128-CTS-CBC",
@@ -43,6 +47,8 @@ struct fscrypt_mode fscrypt_modes[] = {
 		.cipher_str = "adiantum(xchacha12,aes)",
 		.keysize = 32,
 		.ivsize = 32,
+		.blk_crypto_mode = BLK_ENCRYPTION_MODE_ADIANTUM,
+		.blk_crypto_dun_bytes_required = 24,
 	},
 };
 
@@ -62,9 +68,9 @@ select_encryption_mode(const union fscrypt_policy *policy,
 }
 
 /* Create a symmetric cipher object for the given encryption mode and key */
-struct crypto_skcipher *fscrypt_allocate_skcipher(struct fscrypt_mode *mode,
-						  const u8 *raw_key,
-						  const struct inode *inode)
+static struct crypto_skcipher *
+fscrypt_allocate_skcipher(struct fscrypt_mode *mode, const u8 *raw_key,
+			  const struct inode *inode)
 {
 	struct crypto_skcipher *tfm;
 	int err;
@@ -107,30 +113,55 @@ struct crypto_skcipher *fscrypt_allocate_skcipher(struct fscrypt_mode *mode,
 	return ERR_PTR(err);
 }
 
-/* Given a per-file encryption key, set up the file's crypto transform object */
-int fscrypt_set_per_file_enc_key(struct fscrypt_info *ci, const u8 *raw_key)
+/*
+ * Prepare the crypto transform object or blk-crypto key in @prep_key, given the
+ * raw key, encryption mode, and flag indicating which encryption implementation
+ * (fs-layer or blk-crypto) will be used.
+ */
+int fscrypt_prepare_key(struct fscrypt_prepared_key *prep_key,
+			const u8 *raw_key, const struct fscrypt_info *ci)
 {
 	struct crypto_skcipher *tfm;
 
+	if (fscrypt_using_inline_encryption(ci))
+		return fscrypt_prepare_inline_crypt_key(prep_key, raw_key, ci);
+
 	tfm = fscrypt_allocate_skcipher(ci->ci_mode, raw_key, ci->ci_inode);
 	if (IS_ERR(tfm))
 		return PTR_ERR(tfm);
+	/*
+	 * Pairs with READ_ONCE() in fscrypt_is_key_prepared().  (Only matters
+	 * for the per-mode keys, which are shared by multiple inodes.)
+	 */
+	smp_store_release(&prep_key->tfm, tfm);
+	return 0;
+}
+
+/* Destroy a crypto transform object and/or blk-crypto key. */
+void fscrypt_destroy_prepared_key(struct fscrypt_prepared_key *prep_key)
+{
+	crypto_free_skcipher(prep_key->tfm);
+	fscrypt_destroy_inline_crypt_key(prep_key);
+}
 
-	ci->ci_ctfm = tfm;
+/* Given a per-file encryption key, set up the file's crypto transform object */
+int fscrypt_set_per_file_enc_key(struct fscrypt_info *ci, const u8 *raw_key)
+{
 	ci->ci_owns_key = true;
-	return 0;
+	return fscrypt_prepare_key(&ci->ci_key, raw_key, ci);
 }
 
 static int setup_per_mode_enc_key(struct fscrypt_info *ci,
 				  struct fscrypt_master_key *mk,
-				  struct crypto_skcipher **tfms,
+				  struct fscrypt_prepared_key *keys,
 				  u8 hkdf_context, bool include_fs_uuid)
 {
+	static DEFINE_MUTEX(mode_key_setup_mutex);
 	const struct inode *inode = ci->ci_inode;
 	const struct super_block *sb = inode->i_sb;
 	struct fscrypt_mode *mode = ci->ci_mode;
 	const u8 mode_num = mode - fscrypt_modes;
-	struct crypto_skcipher *tfm, *prev_tfm;
+	struct fscrypt_prepared_key *prep_key;
 	u8 mode_key[FSCRYPT_MAX_KEY_SIZE];
 	u8 hkdf_info[sizeof(mode_num) + sizeof(sb->s_uuid)];
 	unsigned int hkdf_infolen = 0;
@@ -139,10 +170,16 @@ static int setup_per_mode_enc_key(struct fscrypt_info *ci,
 	if (WARN_ON(mode_num > __FSCRYPT_MODE_MAX))
 		return -EINVAL;
 
-	/* pairs with cmpxchg() below */
-	tfm = READ_ONCE(tfms[mode_num]);
-	if (likely(tfm != NULL))
-		goto done;
+	prep_key = &keys[mode_num];
+	if (fscrypt_is_key_prepared(prep_key, ci)) {
+		ci->ci_key = *prep_key;
+		return 0;
+	}
+
+	mutex_lock(&mode_key_setup_mutex);
+
+	if (fscrypt_is_key_prepared(prep_key, ci))
+		goto done_unlock;
 
 	BUILD_BUG_ON(sizeof(mode_num) != 1);
 	BUILD_BUG_ON(sizeof(sb->s_uuid) != 16);
@@ -157,21 +194,17 @@ static int setup_per_mode_enc_key(struct fscrypt_info *ci,
 				  hkdf_context, hkdf_info, hkdf_infolen,
 				  mode_key, mode->keysize);
 	if (err)
-		return err;
-	tfm = fscrypt_allocate_skcipher(mode, mode_key, inode);
+		goto out_unlock;
+	err = fscrypt_prepare_key(prep_key, mode_key, ci);
 	memzero_explicit(mode_key, mode->keysize);
-	if (IS_ERR(tfm))
-		return PTR_ERR(tfm);
-
-	/* pairs with READ_ONCE() above */
-	prev_tfm = cmpxchg(&tfms[mode_num], NULL, tfm);
-	if (prev_tfm != NULL) {
-		crypto_free_skcipher(tfm);
-		tfm = prev_tfm;
-	}
-done:
-	ci->ci_ctfm = tfm;
-	return 0;
+	if (err)
+		goto out_unlock;
+done_unlock:
+	ci->ci_key = *prep_key;
+	err = 0;
+out_unlock:
+	mutex_unlock(&mode_key_setup_mutex);
+	return err;
 }
 
 int fscrypt_derive_dirhash_key(struct fscrypt_info *ci,
@@ -203,7 +236,7 @@ static int fscrypt_setup_v2_file_key(struct fscrypt_info *ci,
 		 * encryption key.  This ensures that the master key is
 		 * consistently used only for HKDF, avoiding key reuse issues.
 		 */
-		err = setup_per_mode_enc_key(ci, mk, mk->mk_direct_tfms,
+		err = setup_per_mode_enc_key(ci, mk, mk->mk_direct_keys,
 					     HKDF_CONTEXT_DIRECT_KEY, false);
 	} else if (ci->ci_policy.v2.flags &
 		   FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64) {
@@ -213,7 +246,7 @@ static int fscrypt_setup_v2_file_key(struct fscrypt_info *ci,
 		 * the IVs.  This format is optimized for use with inline
 		 * encryption hardware compliant with the UFS or eMMC standards.
 		 */
-		err = setup_per_mode_enc_key(ci, mk, mk->mk_iv_ino_lblk_64_tfms,
+		err = setup_per_mode_enc_key(ci, mk, mk->mk_iv_ino_lblk_64_keys,
 					     HKDF_CONTEXT_IV_INO_LBLK_64_KEY,
 					     true);
 	} else {
@@ -261,6 +294,8 @@ static int setup_file_encryption_key(struct fscrypt_info *ci,
 	struct fscrypt_key_specifier mk_spec;
 	int err;
 
+	fscrypt_select_encryption_impl(ci);
+
 	switch (ci->ci_policy.version) {
 	case FSCRYPT_POLICY_V1:
 		mk_spec.type = FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR;
@@ -353,7 +388,7 @@ static void put_crypt_info(struct fscrypt_info *ci)
 	if (ci->ci_direct_key)
 		fscrypt_put_direct_key(ci->ci_direct_key);
 	else if (ci->ci_owns_key)
-		crypto_free_skcipher(ci->ci_ctfm);
+		fscrypt_destroy_prepared_key(&ci->ci_key);
 
 	key = ci->ci_master_key;
 	if (key) {
diff --git a/fs/crypto/keysetup_v1.c b/fs/crypto/keysetup_v1.c
index 801b48c0cd7f..59c520b200cb 100644
--- a/fs/crypto/keysetup_v1.c
+++ b/fs/crypto/keysetup_v1.c
@@ -146,7 +146,7 @@ struct fscrypt_direct_key {
 	struct hlist_node		dk_node;
 	refcount_t			dk_refcount;
 	const struct fscrypt_mode	*dk_mode;
-	struct crypto_skcipher		*dk_ctfm;
+	struct fscrypt_prepared_key	dk_key;
 	u8				dk_descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
 	u8				dk_raw[FSCRYPT_MAX_KEY_SIZE];
 };
@@ -154,7 +154,7 @@ struct fscrypt_direct_key {
 static void free_direct_key(struct fscrypt_direct_key *dk)
 {
 	if (dk) {
-		crypto_free_skcipher(dk->dk_ctfm);
+		fscrypt_destroy_prepared_key(&dk->dk_key);
 		kzfree(dk);
 	}
 }
@@ -199,6 +199,8 @@ find_or_insert_direct_key(struct fscrypt_direct_key *to_insert,
 			continue;
 		if (ci->ci_mode != dk->dk_mode)
 			continue;
+		if (!fscrypt_is_key_prepared(&dk->dk_key, ci))
+			continue;
 		if (crypto_memneq(raw_key, dk->dk_raw, ci->ci_mode->keysize))
 			continue;
 		/* using existing tfm with same (descriptor, mode, raw_key) */
@@ -231,13 +233,9 @@ fscrypt_get_direct_key(const struct fscrypt_info *ci, const u8 *raw_key)
 		return ERR_PTR(-ENOMEM);
 	refcount_set(&dk->dk_refcount, 1);
 	dk->dk_mode = ci->ci_mode;
-	dk->dk_ctfm = fscrypt_allocate_skcipher(ci->ci_mode, raw_key,
-						ci->ci_inode);
-	if (IS_ERR(dk->dk_ctfm)) {
-		err = PTR_ERR(dk->dk_ctfm);
-		dk->dk_ctfm = NULL;
+	err = fscrypt_prepare_key(&dk->dk_key, raw_key, ci);
+	if (err)
 		goto err_free_dk;
-	}
 	memcpy(dk->dk_descriptor, ci->ci_policy.v1.master_key_descriptor,
 	       FSCRYPT_KEY_DESCRIPTOR_SIZE);
 	memcpy(dk->dk_raw, raw_key, ci->ci_mode->keysize);
@@ -259,7 +257,7 @@ static int setup_v1_file_key_direct(struct fscrypt_info *ci,
 	if (IS_ERR(dk))
 		return PTR_ERR(dk);
 	ci->ci_direct_key = dk;
-	ci->ci_ctfm = dk->dk_ctfm;
+	ci->ci_key = dk->dk_key;
 	return 0;
 }
 
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 3cd4fe6b845e..2331ff0464b2 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1370,6 +1370,7 @@ extern int send_sigurg(struct fown_struct *fown);
 #define SB_NODIRATIME	2048	/* Do not update directory access times */
 #define SB_SILENT	32768
 #define SB_POSIXACL	(1<<16)	/* VFS does not apply the umask */
+#define SB_INLINE_CRYPT	(1<<17)	/* inodes in SB use blk-crypto */
 #define SB_KERNMOUNT	(1<<22) /* this is a kern_mount call */
 #define SB_I_VERSION	(1<<23) /* Update inode I_version field */
 #define SB_LAZYTIME	(1<<25) /* Update the on-disk [acm]times lazily */
diff --git a/include/linux/fscrypt.h b/include/linux/fscrypt.h
index 556f4adf5dc5..2a84131ab270 100644
--- a/include/linux/fscrypt.h
+++ b/include/linux/fscrypt.h
@@ -64,6 +64,9 @@ struct fscrypt_operations {
 	bool (*has_stable_inodes)(struct super_block *sb);
 	void (*get_ino_and_lblk_bits)(struct super_block *sb,
 				      int *ino_bits_ret, int *lblk_bits_ret);
+	int (*get_num_devices)(struct super_block *sb);
+	void (*get_devices)(struct super_block *sb,
+			    struct request_queue **devs);
 };
 
 static inline bool fscrypt_has_encryption_key(const struct inode *inode)
@@ -497,6 +500,60 @@ static inline void fscrypt_set_ops(struct super_block *sb,
 
 #endif	/* !CONFIG_FS_ENCRYPTION */
 
+/* inline_crypt.c */
+#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
+extern bool fscrypt_inode_uses_inline_crypto(const struct inode *inode);
+
+extern bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode);
+
+extern void fscrypt_set_bio_crypt_ctx(struct bio *bio,
+				      const struct inode *inode,
+				      u64 first_lblk, gfp_t gfp_mask);
+
+extern void fscrypt_set_bio_crypt_ctx_bh(struct bio *bio,
+					 const struct buffer_head *first_bh,
+					 gfp_t gfp_mask);
+
+extern bool fscrypt_mergeable_bio(struct bio *bio, const struct inode *inode,
+				  u64 next_lblk);
+
+extern bool fscrypt_mergeable_bio_bh(struct bio *bio,
+				     const struct buffer_head *next_bh);
+
+#else /* CONFIG_FS_ENCRYPTION_INLINE_CRYPT */
+static inline bool fscrypt_inode_uses_inline_crypto(const struct inode *inode)
+{
+	return false;
+}
+
+static inline bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode)
+{
+	return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode);
+}
+
+static inline void fscrypt_set_bio_crypt_ctx(struct bio *bio,
+					     const struct inode *inode,
+					     u64 first_lblk, gfp_t gfp_mask) { }
+
+static inline void fscrypt_set_bio_crypt_ctx_bh(
+					 struct bio *bio,
+					 const struct buffer_head *first_bh,
+					 gfp_t gfp_mask) { }
+
+static inline bool fscrypt_mergeable_bio(struct bio *bio,
+					 const struct inode *inode,
+					 u64 next_lblk)
+{
+	return true;
+}
+
+static inline bool fscrypt_mergeable_bio_bh(struct bio *bio,
+					    const struct buffer_head *next_bh)
+{
+	return true;
+}
+#endif /* !CONFIG_FS_ENCRYPTION_INLINE_CRYPT */
+
 /**
  * fscrypt_require_key - require an inode's encryption key
  * @inode: the inode we need the key for
-- 
2.25.0.265.gbab2e86ba0-goog


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v7 8/9] f2fs: add inline encryption support
  2020-02-21 11:50 [PATCH v7 0/9] Inline Encryption Support Satya Tangirala
                   ` (6 preceding siblings ...)
  2020-02-21 11:50 ` [PATCH v7 7/9] fscrypt: add inline encryption support Satya Tangirala
@ 2020-02-21 11:50 ` Satya Tangirala
  2020-02-21 11:50 ` [PATCH v7 9/9] ext4: " Satya Tangirala
  2020-02-21 17:16 ` [PATCH v7 0/9] Inline Encryption Support Eric Biggers
  9 siblings, 0 replies; 40+ messages in thread
From: Satya Tangirala @ 2020-02-21 11:50 UTC (permalink / raw)
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4
  Cc: Barani Muthukumaran, Kuohong Wang, Kim Boojin, Satya Tangirala,
	Eric Biggers

Wire up f2fs to support inline encryption via the helper functions which
fs/crypto/ now provides.  This includes:

- Adding a mount option 'inlinecrypt' which enables inline encryption
  on encrypted files where it can be used.

- Setting the bio_crypt_ctx on bios that will be submitted to an
  inline-encrypted file.

- Not adding logically discontiguous data to bios that will be submitted
  to an inline-encrypted file.

- Not doing filesystem-layer crypto on inline-encrypted files.

Co-developed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Satya Tangirala <satyat@google.com>
---
 Documentation/filesystems/f2fs.txt |  4 ++
 fs/f2fs/compress.c                 |  2 +-
 fs/f2fs/data.c                     | 67 +++++++++++++++++++++++++-----
 fs/f2fs/f2fs.h                     |  3 ++
 fs/f2fs/super.c                    | 35 ++++++++++++++++
 5 files changed, 100 insertions(+), 11 deletions(-)

diff --git a/Documentation/filesystems/f2fs.txt b/Documentation/filesystems/f2fs.txt
index 4eb3e2ddd00e..e8d6a9240f55 100644
--- a/Documentation/filesystems/f2fs.txt
+++ b/Documentation/filesystems/f2fs.txt
@@ -246,6 +246,10 @@ compress_extension=%s  Support adding specified extension, so that f2fs can enab
                        on compression extension list and enable compression on
                        these file by default rather than to enable it via ioctl.
                        For other files, we can still enable compression via ioctl.
+inlinecrypt
+                       Use blk-crypto, rather than fscrypt, for file content
+                       en/decryption. See
+                       Documentation/block/inline-encryption.rst
 
 ================================================================================
 DEBUGFS ENTRIES
diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index d8a64be90a50..6ae313ea5432 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -775,7 +775,7 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
 		.need_lock = LOCK_RETRY,
 		.io_type = io_type,
 		.io_wbc = wbc,
-		.encrypted = f2fs_encrypted_file(cc->inode),
+		.encrypted = fscrypt_inode_uses_fs_layer_crypto(cc->inode),
 	};
 	struct dnode_of_data dn;
 	struct node_info ni;
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index b27b72107911..6d42ecceb08b 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -436,6 +436,33 @@ static struct bio *__bio_alloc(struct f2fs_io_info *fio, int npages)
 	return bio;
 }
 
+static void f2fs_set_bio_crypt_ctx(struct bio *bio, const struct inode *inode,
+				  pgoff_t first_idx,
+				  const struct f2fs_io_info *fio,
+				  gfp_t gfp_mask)
+{
+	/*
+	 * The f2fs garbage collector sets ->encrypted_page when it wants to
+	 * read/write raw data without encryption.
+	 */
+	if (!fio || !fio->encrypted_page)
+		fscrypt_set_bio_crypt_ctx(bio, inode, first_idx, gfp_mask);
+}
+
+static bool f2fs_crypt_mergeable_bio(struct bio *bio, const struct inode *inode,
+				     pgoff_t next_idx,
+				     const struct f2fs_io_info *fio)
+{
+	/*
+	 * The f2fs garbage collector sets ->encrypted_page when it wants to
+	 * read/write raw data without encryption.
+	 */
+	if (fio && fio->encrypted_page)
+		return !bio_has_crypt_ctx(bio);
+
+	return fscrypt_mergeable_bio(bio, inode, next_idx);
+}
+
 static inline void __submit_bio(struct f2fs_sb_info *sbi,
 				struct bio *bio, enum page_type type)
 {
@@ -632,6 +659,9 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
 	/* Allocate a new bio */
 	bio = __bio_alloc(fio, 1);
 
+	f2fs_set_bio_crypt_ctx(bio, fio->page->mapping->host,
+			       fio->page->index, fio, GFP_NOIO);
+
 	if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) {
 		bio_put(bio);
 		return -EFAULT;
@@ -820,12 +850,16 @@ int f2fs_merge_page_bio(struct f2fs_io_info *fio)
 	trace_f2fs_submit_page_bio(page, fio);
 	f2fs_trace_ios(fio, 0);
 
-	if (bio && !page_is_mergeable(fio->sbi, bio, *fio->last_block,
-						fio->new_blkaddr))
+	if (bio && (!page_is_mergeable(fio->sbi, bio, *fio->last_block,
+				       fio->new_blkaddr) ||
+		    !f2fs_crypt_mergeable_bio(bio, fio->page->mapping->host,
+					      fio->page->index, fio)))
 		f2fs_submit_merged_ipu_write(fio->sbi, &bio, NULL);
 alloc_new:
 	if (!bio) {
 		bio = __bio_alloc(fio, BIO_MAX_PAGES);
+		f2fs_set_bio_crypt_ctx(bio, fio->page->mapping->host,
+				       fio->page->index, fio, GFP_NOIO);
 		bio_set_op_attrs(bio, fio->op, fio->op_flags);
 
 		add_bio_entry(fio->sbi, bio, page, fio->temp);
@@ -882,8 +916,11 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
 
 	inc_page_count(sbi, WB_DATA_TYPE(bio_page));
 
-	if (io->bio && !io_is_mergeable(sbi, io->bio, io, fio,
-			io->last_block_in_bio, fio->new_blkaddr))
+	if (io->bio &&
+	    (!io_is_mergeable(sbi, io->bio, io, fio, io->last_block_in_bio,
+			      fio->new_blkaddr) ||
+	     !f2fs_crypt_mergeable_bio(io->bio, fio->page->mapping->host,
+				       fio->page->index, fio)))
 		__submit_merged_bio(io);
 alloc_new:
 	if (io->bio == NULL) {
@@ -895,6 +932,8 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
 			goto skip;
 		}
 		io->bio = __bio_alloc(fio, BIO_MAX_PAGES);
+		f2fs_set_bio_crypt_ctx(io->bio, fio->page->mapping->host,
+				       fio->page->index, fio, GFP_NOIO);
 		io->fio = *fio;
 	}
 
@@ -938,11 +977,14 @@ static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr,
 	bio = f2fs_bio_alloc(sbi, min_t(int, nr_pages, BIO_MAX_PAGES), false);
 	if (!bio)
 		return ERR_PTR(-ENOMEM);
+
+	f2fs_set_bio_crypt_ctx(bio, inode, first_idx, NULL, GFP_NOFS);
+
 	f2fs_target_device(sbi, blkaddr, bio);
 	bio->bi_end_io = f2fs_read_end_io;
 	bio_set_op_attrs(bio, REQ_OP_READ, op_flag);
 
-	if (f2fs_encrypted_file(inode))
+	if (fscrypt_inode_uses_fs_layer_crypto(inode))
 		post_read_steps |= 1 << STEP_DECRYPT;
 	if (f2fs_compressed_file(inode))
 		post_read_steps |= 1 << STEP_DECOMPRESS;
@@ -1972,8 +2014,9 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page,
 	 * This page will go to BIO.  Do we need to send this
 	 * BIO off first?
 	 */
-	if (bio && !page_is_mergeable(F2FS_I_SB(inode), bio,
-				*last_block_in_bio, block_nr)) {
+	if (bio && (!page_is_mergeable(F2FS_I_SB(inode), bio,
+				       *last_block_in_bio, block_nr) ||
+		    !f2fs_crypt_mergeable_bio(bio, inode, page->index, NULL))) {
 submit_and_realloc:
 		__submit_bio(F2FS_I_SB(inode), bio, DATA);
 		bio = NULL;
@@ -2099,8 +2142,9 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
 		blkaddr = datablock_addr(dn.inode, dn.node_page,
 						dn.ofs_in_node + i + 1);
 
-		if (bio && !page_is_mergeable(sbi, bio,
-					*last_block_in_bio, blkaddr)) {
+		if (bio && (!page_is_mergeable(sbi, bio,
+					*last_block_in_bio, blkaddr) ||
+		    !f2fs_crypt_mergeable_bio(bio, inode, page->index, NULL))) {
 submit_and_realloc:
 			__submit_bio(sbi, bio, DATA);
 			bio = NULL;
@@ -2319,6 +2363,9 @@ int f2fs_encrypt_one_page(struct f2fs_io_info *fio)
 	/* wait for GCed page writeback via META_MAPPING */
 	f2fs_wait_on_block_writeback(inode, fio->old_blkaddr);
 
+	if (fscrypt_inode_uses_inline_crypto(inode))
+		return 0;
+
 retry_encrypt:
 	fio->encrypted_page = fscrypt_encrypt_pagecache_blocks(page,
 					PAGE_SIZE, 0, gfp_flags);
@@ -2492,7 +2539,7 @@ int f2fs_do_write_data_page(struct f2fs_io_info *fio)
 			f2fs_unlock_op(fio->sbi);
 		err = f2fs_inplace_write_data(fio);
 		if (err) {
-			if (f2fs_encrypted_file(inode))
+			if (fscrypt_inode_uses_fs_layer_crypto(inode))
 				fscrypt_finalize_bounce_page(&fio->encrypted_page);
 			if (PageWriteback(page))
 				end_page_writeback(page);
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 5355be6b6755..75817f0dc6f8 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -139,6 +139,9 @@ struct f2fs_mount_info {
 	int alloc_mode;			/* segment allocation policy */
 	int fsync_mode;			/* fsync policy */
 	bool test_dummy_encryption;	/* test dummy encryption */
+#ifdef CONFIG_FS_ENCRYPTION
+	bool inlinecrypt;		/* inline encryption enabled */
+#endif
 	block_t unusable_cap;		/* Amount of space allowed to be
 					 * unusable when disabling checkpoint
 					 */
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index 65a7a432dfee..fb94ea555c30 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -137,6 +137,7 @@ enum {
 	Opt_alloc,
 	Opt_fsync,
 	Opt_test_dummy_encryption,
+	Opt_inlinecrypt,
 	Opt_checkpoint_disable,
 	Opt_checkpoint_disable_cap,
 	Opt_checkpoint_disable_cap_perc,
@@ -202,6 +203,7 @@ static match_table_t f2fs_tokens = {
 	{Opt_alloc, "alloc_mode=%s"},
 	{Opt_fsync, "fsync_mode=%s"},
 	{Opt_test_dummy_encryption, "test_dummy_encryption"},
+	{Opt_inlinecrypt, "inlinecrypt"},
 	{Opt_checkpoint_disable, "checkpoint=disable"},
 	{Opt_checkpoint_disable_cap, "checkpoint=disable:%u"},
 	{Opt_checkpoint_disable_cap_perc, "checkpoint=disable:%u%%"},
@@ -790,6 +792,13 @@ static int parse_options(struct super_block *sb, char *options)
 			f2fs_info(sbi, "Test dummy encryption mode enabled");
 #else
 			f2fs_info(sbi, "Test dummy encryption mount option ignored");
+#endif
+			break;
+		case Opt_inlinecrypt:
+#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
+			sb->s_flags |= SB_INLINE_CRYPT;
+#else
+			f2fs_info(sbi, "inline encryption not supported");
 #endif
 			break;
 		case Opt_checkpoint_disable_cap_perc:
@@ -1538,6 +1547,8 @@ static int f2fs_show_options(struct seq_file *seq, struct dentry *root)
 #ifdef CONFIG_FS_ENCRYPTION
 	if (F2FS_OPTION(sbi).test_dummy_encryption)
 		seq_puts(seq, ",test_dummy_encryption");
+	if (sbi->sb->s_flags & SB_INLINE_CRYPT)
+		seq_puts(seq, ",inlinecrypt");
 #endif
 
 	if (F2FS_OPTION(sbi).alloc_mode == ALLOC_MODE_DEFAULT)
@@ -1568,6 +1579,9 @@ static void default_options(struct f2fs_sb_info *sbi)
 	F2FS_OPTION(sbi).alloc_mode = ALLOC_MODE_DEFAULT;
 	F2FS_OPTION(sbi).fsync_mode = FSYNC_MODE_POSIX;
 	F2FS_OPTION(sbi).test_dummy_encryption = false;
+#ifdef CONFIG_FS_ENCRYPTION
+	sbi->sb->s_flags &= ~SB_INLINE_CRYPT;
+#endif
 	F2FS_OPTION(sbi).s_resuid = make_kuid(&init_user_ns, F2FS_DEF_RESUID);
 	F2FS_OPTION(sbi).s_resgid = make_kgid(&init_user_ns, F2FS_DEF_RESGID);
 	F2FS_OPTION(sbi).compress_algorithm = COMPRESS_LZO;
@@ -2415,6 +2429,25 @@ static void f2fs_get_ino_and_lblk_bits(struct super_block *sb,
 	*lblk_bits_ret = 8 * sizeof(block_t);
 }
 
+static int f2fs_get_num_devices(struct super_block *sb)
+{
+	struct f2fs_sb_info *sbi = F2FS_SB(sb);
+
+	if (f2fs_is_multi_device(sbi))
+		return sbi->s_ndevs;
+	return 1;
+}
+
+static void f2fs_get_devices(struct super_block *sb,
+			     struct request_queue **devs)
+{
+	struct f2fs_sb_info *sbi = F2FS_SB(sb);
+	int i;
+
+	for (i = 0; i < sbi->s_ndevs; i++)
+		devs[i] = bdev_get_queue(FDEV(i).bdev);
+}
+
 static const struct fscrypt_operations f2fs_cryptops = {
 	.key_prefix		= "f2fs:",
 	.get_context		= f2fs_get_context,
@@ -2424,6 +2457,8 @@ static const struct fscrypt_operations f2fs_cryptops = {
 	.max_namelen		= F2FS_NAME_LEN,
 	.has_stable_inodes	= f2fs_has_stable_inodes,
 	.get_ino_and_lblk_bits	= f2fs_get_ino_and_lblk_bits,
+	.get_num_devices	= f2fs_get_num_devices,
+	.get_devices		= f2fs_get_devices,
 };
 #endif
 
-- 
2.25.0.265.gbab2e86ba0-goog


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v7 9/9] ext4: add inline encryption support
  2020-02-21 11:50 [PATCH v7 0/9] Inline Encryption Support Satya Tangirala
                   ` (7 preceding siblings ...)
  2020-02-21 11:50 ` [PATCH v7 8/9] f2fs: " Satya Tangirala
@ 2020-02-21 11:50 ` Satya Tangirala
  2020-02-22  5:21   ` Eric Biggers
  2020-02-21 17:16 ` [PATCH v7 0/9] Inline Encryption Support Eric Biggers
  9 siblings, 1 reply; 40+ messages in thread
From: Satya Tangirala @ 2020-02-21 11:50 UTC (permalink / raw)
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4
  Cc: Barani Muthukumaran, Kuohong Wang, Kim Boojin, Eric Biggers,
	Satya Tangirala

From: Eric Biggers <ebiggers@google.com>

Wire up ext4 to support inline encryption via the helper functions which
fs/crypto/ now provides.  This includes:

- Adding a mount option 'inlinecrypt' which enables inline encryption
  on encrypted files where it can be used.

- Setting the bio_crypt_ctx on bios that will be submitted to an
  inline-encrypted file.

  Note: submit_bh_wbc() in fs/buffer.c also needed to be patched for
  this part, since ext4 sometimes uses ll_rw_block() on file data.

- Not adding logically discontiguous data to bios that will be submitted
  to an inline-encrypted file.

- Not doing filesystem-layer crypto on inline-encrypted files.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Satya Tangirala <satyat@google.com>
---
 Documentation/admin-guide/ext4.rst |  4 ++++
 fs/buffer.c                        |  7 ++++---
 fs/ext4/ext4.h                     |  1 +
 fs/ext4/inode.c                    |  4 ++--
 fs/ext4/page-io.c                  |  6 ++++--
 fs/ext4/readpage.c                 | 11 ++++++++---
 fs/ext4/super.c                    | 19 +++++++++++++++++++
 7 files changed, 42 insertions(+), 10 deletions(-)

diff --git a/Documentation/admin-guide/ext4.rst b/Documentation/admin-guide/ext4.rst
index 9443fcef1876..b7cd3bdf827e 100644
--- a/Documentation/admin-guide/ext4.rst
+++ b/Documentation/admin-guide/ext4.rst
@@ -395,6 +395,10 @@ When mounting an ext4 filesystem, the following option are accepted:
         Documentation/filesystems/dax.txt.  Note that this option is
         incompatible with data=journal.
 
+  inlinecrypt
+        Use blk-crypto, rather than fscrypt, for file content en/decryption.
+        See Documentation/block/inline-encryption.rst.
+
 Data Mode
 =========
 There are 3 different data modes:
diff --git a/fs/buffer.c b/fs/buffer.c
index b8d28370cfd7..226f1784eda7 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -331,9 +331,8 @@ static void decrypt_bh(struct work_struct *work)
 static void end_buffer_async_read_io(struct buffer_head *bh, int uptodate)
 {
 	/* Decrypt if needed */
-	if (uptodate && IS_ENABLED(CONFIG_FS_ENCRYPTION) &&
-	    IS_ENCRYPTED(bh->b_page->mapping->host) &&
-	    S_ISREG(bh->b_page->mapping->host->i_mode)) {
+	if (uptodate &&
+	    fscrypt_inode_uses_fs_layer_crypto(bh->b_page->mapping->host)) {
 		struct decrypt_bh_ctx *ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
 
 		if (ctx) {
@@ -3085,6 +3084,8 @@ static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh,
 	 */
 	bio = bio_alloc(GFP_NOIO, 1);
 
+	fscrypt_set_bio_crypt_ctx_bh(bio, bh, GFP_NOIO);
+
 	bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
 	bio_set_dev(bio, bh->b_bdev);
 	bio->bi_write_hint = write_hint;
diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 4441331d06cc..3917f4b3ec30 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -1151,6 +1151,7 @@ struct ext4_inode_info {
 #define EXT4_MOUNT_JOURNAL_CHECKSUM	0x800000 /* Journal checksums */
 #define EXT4_MOUNT_JOURNAL_ASYNC_COMMIT	0x1000000 /* Journal Async Commit */
 #define EXT4_MOUNT_WARN_ON_ERROR	0x2000000 /* Trigger WARN_ON on error */
+#define EXT4_MOUNT_INLINECRYPT		0x4000000 /* Inline encryption support */
 #define EXT4_MOUNT_DELALLOC		0x8000000 /* Delalloc support */
 #define EXT4_MOUNT_DATA_ERR_ABORT	0x10000000 /* Abort on file data write */
 #define EXT4_MOUNT_BLOCK_VALIDITY	0x20000000 /* Block validity checking */
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index e60aca791d3f..9ac1dd131586 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1089,7 +1089,7 @@ static int ext4_block_write_begin(struct page *page, loff_t pos, unsigned len,
 	}
 	if (unlikely(err)) {
 		page_zero_new_buffers(page, from, to);
-	} else if (IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode)) {
+	} else if (fscrypt_inode_uses_fs_layer_crypto(inode)) {
 		for (i = 0; i < nr_wait; i++) {
 			int err2;
 
@@ -3720,7 +3720,7 @@ static int __ext4_block_zero_page_range(handle_t *handle,
 		/* Uhhuh. Read error. Complain and punt. */
 		if (!buffer_uptodate(bh))
 			goto unlock;
-		if (S_ISREG(inode->i_mode) && IS_ENCRYPTED(inode)) {
+		if (fscrypt_inode_uses_fs_layer_crypto(inode)) {
 			/* We expect the key to be set. */
 			BUG_ON(!fscrypt_has_encryption_key(inode));
 			err = fscrypt_decrypt_pagecache_blocks(page, blocksize,
diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index 68b39e75446a..0757145a31b2 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -404,6 +404,7 @@ static void io_submit_init_bio(struct ext4_io_submit *io,
 	 * __GFP_DIRECT_RECLAIM is set, see comments for bio_alloc_bioset().
 	 */
 	bio = bio_alloc(GFP_NOIO, BIO_MAX_PAGES);
+	fscrypt_set_bio_crypt_ctx_bh(bio, bh, GFP_NOIO);
 	bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
 	bio_set_dev(bio, bh->b_bdev);
 	bio->bi_end_io = ext4_end_bio;
@@ -420,7 +421,8 @@ static void io_submit_add_bh(struct ext4_io_submit *io,
 {
 	int ret;
 
-	if (io->io_bio && bh->b_blocknr != io->io_next_block) {
+	if (io->io_bio && (bh->b_blocknr != io->io_next_block ||
+			   !fscrypt_mergeable_bio_bh(io->io_bio, bh))) {
 submit_and_retry:
 		ext4_io_submit(io);
 	}
@@ -508,7 +510,7 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
 	 * (e.g. holes) to be unnecessarily encrypted, but this is rare and
 	 * can't happen in the common case of blocksize == PAGE_SIZE.
 	 */
-	if (IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) && nr_to_submit) {
+	if (fscrypt_inode_uses_fs_layer_crypto(inode) && nr_to_submit) {
 		gfp_t gfp_flags = GFP_NOFS;
 		unsigned int enc_bytes = round_up(len, i_blocksize(inode));
 
diff --git a/fs/ext4/readpage.c b/fs/ext4/readpage.c
index c1769afbf799..68eac0aeffad 100644
--- a/fs/ext4/readpage.c
+++ b/fs/ext4/readpage.c
@@ -195,7 +195,7 @@ static void ext4_set_bio_post_read_ctx(struct bio *bio,
 {
 	unsigned int post_read_steps = 0;
 
-	if (IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode))
+	if (fscrypt_inode_uses_fs_layer_crypto(inode))
 		post_read_steps |= 1 << STEP_DECRYPT;
 
 	if (ext4_need_verity(inode, first_idx))
@@ -232,6 +232,7 @@ int ext4_mpage_readpages(struct address_space *mapping,
 	const unsigned blkbits = inode->i_blkbits;
 	const unsigned blocks_per_page = PAGE_SIZE >> blkbits;
 	const unsigned blocksize = 1 << blkbits;
+	sector_t next_block;
 	sector_t block_in_file;
 	sector_t last_block;
 	sector_t last_block_in_file;
@@ -264,7 +265,8 @@ int ext4_mpage_readpages(struct address_space *mapping,
 		if (page_has_buffers(page))
 			goto confused;
 
-		block_in_file = (sector_t)page->index << (PAGE_SHIFT - blkbits);
+		block_in_file = next_block =
+			(sector_t)page->index << (PAGE_SHIFT - blkbits);
 		last_block = block_in_file + nr_pages * blocks_per_page;
 		last_block_in_file = (ext4_readpage_limit(inode) +
 				      blocksize - 1) >> blkbits;
@@ -364,7 +366,8 @@ int ext4_mpage_readpages(struct address_space *mapping,
 		 * This page will go to BIO.  Do we need to send this
 		 * BIO off first?
 		 */
-		if (bio && (last_block_in_bio != blocks[0] - 1)) {
+		if (bio && (last_block_in_bio != blocks[0] - 1 ||
+			    !fscrypt_mergeable_bio(bio, inode, next_block))) {
 		submit_and_realloc:
 			submit_bio(bio);
 			bio = NULL;
@@ -376,6 +379,8 @@ int ext4_mpage_readpages(struct address_space *mapping,
 			 */
 			bio = bio_alloc(GFP_KERNEL,
 				min_t(int, nr_pages, BIO_MAX_PAGES));
+			fscrypt_set_bio_crypt_ctx(bio, inode, next_block,
+						  GFP_KERNEL);
 			ext4_set_bio_post_read_ctx(bio, inode, page->index);
 			bio_set_dev(bio, bdev);
 			bio->bi_iter.bi_sector = blocks[0] << (blkbits - 9);
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index f464dff09774..490e748b048c 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -1504,6 +1504,7 @@ enum {
 	Opt_journal_path, Opt_journal_checksum, Opt_journal_async_commit,
 	Opt_abort, Opt_data_journal, Opt_data_ordered, Opt_data_writeback,
 	Opt_data_err_abort, Opt_data_err_ignore, Opt_test_dummy_encryption,
+	Opt_inlinecrypt,
 	Opt_usrjquota, Opt_grpjquota, Opt_offusrjquota, Opt_offgrpjquota,
 	Opt_jqfmt_vfsold, Opt_jqfmt_vfsv0, Opt_jqfmt_vfsv1, Opt_quota,
 	Opt_noquota, Opt_barrier, Opt_nobarrier, Opt_err,
@@ -1601,6 +1602,7 @@ static const match_table_t tokens = {
 	{Opt_noinit_itable, "noinit_itable"},
 	{Opt_max_dir_size_kb, "max_dir_size_kb=%u"},
 	{Opt_test_dummy_encryption, "test_dummy_encryption"},
+	{Opt_inlinecrypt, "inlinecrypt"},
 	{Opt_nombcache, "nombcache"},
 	{Opt_nombcache, "no_mbcache"},	/* for backward compatibility */
 	{Opt_removed, "check=none"},	/* mount option from ext2/3 */
@@ -1812,6 +1814,11 @@ static const struct mount_opts {
 	{Opt_jqfmt_vfsv1, QFMT_VFS_V1, MOPT_QFMT},
 	{Opt_max_dir_size_kb, 0, MOPT_GTE0},
 	{Opt_test_dummy_encryption, 0, MOPT_GTE0},
+#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
+	{Opt_inlinecrypt, EXT4_MOUNT_INLINECRYPT, MOPT_SET},
+#else
+	{Opt_inlinecrypt, EXT4_MOUNT_INLINECRYPT, MOPT_NOSUPPORT},
+#endif
 	{Opt_nombcache, EXT4_MOUNT_NO_MBCACHE, MOPT_SET},
 	{Opt_err, 0, 0}
 };
@@ -1888,6 +1895,13 @@ static int handle_mount_opt(struct super_block *sb, char *opt, int token,
 	case Opt_nolazytime:
 		sb->s_flags &= ~SB_LAZYTIME;
 		return 1;
+	case Opt_inlinecrypt:
+#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
+		sb->s_flags |= SB_INLINE_CRYPT;
+#else
+		ext4_msg(sb, KERN_ERR, "inline encryption not supported");
+#endif
+		return 1;
 	}
 
 	for (m = ext4_mount_opts; m->token != Opt_err; m++)
@@ -2300,6 +2314,8 @@ static int _ext4_show_options(struct seq_file *seq, struct super_block *sb,
 		SEQ_OPTS_PUTS("data_err=abort");
 	if (DUMMY_ENCRYPTION_ENABLED(sbi))
 		SEQ_OPTS_PUTS("test_dummy_encryption");
+	if (sb->s_flags & SB_INLINE_CRYPT)
+		SEQ_OPTS_PUTS("inlinecrypt");
 
 	ext4_show_quota_options(seq, sb);
 	return 0;
@@ -3808,6 +3824,9 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
 	 */
 	sbi->s_li_wait_mult = EXT4_DEF_LI_WAIT_MULT;
 
+	/* disable blk-crypto file content encryption by default */
+	sb->s_flags &= ~SB_INLINE_CRYPT;
+
 	blocksize = BLOCK_SIZE << le32_to_cpu(es->s_log_block_size);
 	if (blocksize < EXT4_MIN_BLOCK_SIZE ||
 	    blocksize > EXT4_MAX_BLOCK_SIZE) {
-- 
2.25.0.265.gbab2e86ba0-goog


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 3/9] block: blk-crypto-fallback for Inline Encryption
  2020-02-21 11:50 ` [PATCH v7 3/9] block: blk-crypto-fallback for Inline Encryption Satya Tangirala
@ 2020-02-21 16:51   ` Randy Dunlap
  2020-02-21 17:25   ` Christoph Hellwig
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 40+ messages in thread
From: Randy Dunlap @ 2020-02-21 16:51 UTC (permalink / raw)
  To: Satya Tangirala, linux-block, linux-scsi, linux-fscrypt,
	linux-fsdevel, linux-f2fs-devel, linux-ext4
  Cc: Barani Muthukumaran, Kuohong Wang, Kim Boojin

On 2/21/20 3:50 AM, Satya Tangirala wrote:
> Blk-crypto delegates crypto operations to inline encryption hardware when
> available. The separately configurable blk-crypto-fallback contains a
> software fallback to the kernel crypto API - when enabled, blk-crypto
> will use this fallback for en/decryption when inline encryption hardware is
> not available. This lets upper layers not have to worry about whether or
> not the underlying device has support for inline encryption before
> deciding to specify an encryption context for a bio, and also allows for
> testing without actual inline encryption hardware. For more details, refer
> to Documentation/block/inline-encryption.rst.
> 
> Signed-off-by: Satya Tangirala <satyat@google.com>
> ---
>  Documentation/block/index.rst             |   1 +
>  Documentation/block/inline-encryption.rst | 162 ++++++
>  block/Kconfig                             |  10 +
>  block/Makefile                            |   1 +
>  block/bio-integrity.c                     |   2 +-
>  block/blk-crypto-fallback.c               | 673 ++++++++++++++++++++++
>  block/blk-crypto-internal.h               |  32 +
>  block/blk-crypto.c                        |  43 +-
>  include/linux/blk-crypto.h                |  17 +-
>  include/linux/blk_types.h                 |   6 +
>  10 files changed, 938 insertions(+), 9 deletions(-)
>  create mode 100644 Documentation/block/inline-encryption.rst
>  create mode 100644 block/blk-crypto-fallback.c


> diff --git a/Documentation/block/inline-encryption.rst b/Documentation/block/inline-encryption.rst
> new file mode 100644
> index 000000000000..02abea993975
> --- /dev/null
> +++ b/Documentation/block/inline-encryption.rst
> @@ -0,0 +1,162 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +=================
> +Inline Encryption
> +=================
> +
> +Background
> +==========
> +
> +Inline encryption hardware sit logically between memory and the disk, and can

                              sits

> +en/decrypt data as it goes in/out of the disk. Inline encryption hardware have a

                                                                             has

> +fixed number of "keyslots" - slots into which encryption contexts (i.e. the
> +encryption key, encryption algorithm, data unit size) can be programmed by the
> +kernel at any time. Each request sent to the disk can be tagged with the index
> +of a keyslot (and also a data unit number to act as an encryption tweak), and
> +the inline encryption hardware will en/decrypt the data in the request with the
> +encryption context programmed into that keyslot. This is very different from
> +full disk encryption solutions like self encrypting drives/TCG OPAL/ATA
> +Security standards, since with inline encryption, any block on disk could be
> +encrypted with any encryption context the kernel chooses.
> +
> +
> +Objective
> +=========
> +
...
> +
> +
> +Constraints and notes
> +=====================
> +
> +- IE hardware have a limited number of "keyslots" that can be programmed

                 has

> +  with an encryption context (key, algorithm, data unit size, etc.) at any time.
> +  One can specify a keyslot in a data request made to the device, and the
> +  device will en/decrypt the data using the encryption context programmed into
> +  that specified keyslot. When possible, we want to make multiple requests with
> +  the same encryption context share the same keyslot.
> +
...
> +
> +
> +Design
> +======
> +
> +We add a :c:type:`struct bio_crypt_ctx` to :c:type:`struct bio` that can
> +represent an encryption context, because we need to be able to pass this
> +encryption context from the FS layer to the device driver to act upon.
> +
> +While IE hardware works on the notion of keyslots, the FS layer has no
> +knowledge of keyslots - it simply wants to specify an encryption context to
> +use while en/decrypting a bio.
> +
> +We introduce a keyslot manager (KSM) that handles the translation from
> +encryption contexts specified by the FS to keyslots on the IE hardware.
> +This KSM also serves as the way IE hardware can expose their capabilities to

                                                          its

> +upper layers. The generic mode of operation is: each device driver that wants
> +to support IE will construct a KSM and set it up in its struct request_queue.
> +Upper layers that want to use IE on this device can then use this KSM in
> +the device's struct request_queue to translate an encryption context into
> +a keyslot. The presence of the KSM in the request queue shall be used to mean
> +that the device supports IE.
> +
...

Hi Satya,
ISTM that we disagree on whether "hardware" is singular or plural.  ;)

My internet search foo (FWIW) says that "hardware" has no plural version.
Anyway, my best evidence is in your intro/commit message, where it says:
"when inline encryption hardware is not available",
so it must be singular.  :)

cheers.
-- 
~Randy


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 1/9] block: Keyslot Manager for Inline Encryption
  2020-02-21 11:50 ` [PATCH v7 1/9] block: Keyslot Manager for Inline Encryption Satya Tangirala
@ 2020-02-21 17:04   ` Christoph Hellwig
  2020-02-21 17:31     ` Christoph Hellwig
  2020-02-27 18:48   ` Eric Biggers
  1 sibling, 1 reply; 40+ messages in thread
From: Christoph Hellwig @ 2020-02-21 17:04 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4, Barani Muthukumaran, Kuohong Wang,
	Kim Boojin

> +#ifdef CONFIG_PM
> +static inline void blk_ksm_set_dev(struct keyslot_manager *ksm,
> +				   struct device *dev)
> +{
> +	ksm->dev = dev;
> +}
> +
> +/* If there's an underlying device and it's suspended, resume it. */
> +static inline void blk_ksm_pm_get(struct keyslot_manager *ksm)
> +{
> +	if (ksm->dev)
> +		pm_runtime_get_sync(ksm->dev);
> +}
> +
> +static inline void blk_ksm_pm_put(struct keyslot_manager *ksm)
> +{
> +	if (ksm->dev)
> +		pm_runtime_put_sync(ksm->dev);
> +}
> +#else /* CONFIG_PM */
> +static inline void blk_ksm_set_dev(struct keyslot_manager *ksm,
> +				   struct device *dev)
> +{
> +}
> +
> +static inline void blk_ksm_pm_get(struct keyslot_manager *ksm)
> +{
> +}
> +
> +static inline void blk_ksm_pm_put(struct keyslot_manager *ksm)
> +{
> +}
> +#endif /* !CONFIG_PM */

I think no one is hurt by an unused dev field for the non-pm case.
I'd suggest defining the field unconditionally and replacing all of
the above with direct calls below.
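
Something like this at each call site (untested sketch; ->dev simply stays
NULL for drivers that never call blk_ksm_set_dev()):

	if (ksm->dev)
		pm_runtime_get_sync(ksm->dev);
	/* ... program or evict the key ... */
	if (ksm->dev)
		pm_runtime_put_sync(ksm->dev);

pm_runtime_get_sync()/pm_runtime_put_sync() are still defined for
!CONFIG_PM, so this shouldn't need any ifdefs.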

> +/**
> + * blk_ksm_get_slot() - Increment the refcount on the specified slot.
> + * @ksm: The keyslot manager that we want to modify.
> + * @slot: The slot to increment the refcount of.
> + *
> + * This function assumes that there is already an active reference to that slot
> + * and simply increments the refcount. This is useful when cloning a bio that
> + * already has a reference to a keyslot, and we want the cloned bio to also have
> + * its own reference.
> + *
> + * Context: Any context.
> + */
> +void blk_ksm_get_slot(struct keyslot_manager *ksm, unsigned int slot)

This function doesn't appear to be used at all in the whole series.

> +/**
> + * blk_ksm_put_slot() - Release a reference to a slot
> + * @ksm: The keyslot manager to release the reference from.
> + * @slot: The slot to release the reference from.
> + *
> + * Context: Any context.
> + */
> +void blk_ksm_put_slot(struct keyslot_manager *ksm, unsigned int slot)
> +{
> +	unsigned long flags;
> +
> +	if (WARN_ON(slot >= ksm->num_slots))
> +		return;
> +
> +	if (atomic_dec_and_lock_irqsave(&ksm->slots[slot].slot_refs,
> +					&ksm->idle_slots_lock, flags)) {
> +		list_add_tail(&ksm->slots[slot].idle_slot_node,
> +			      &ksm->idle_slots);
> +		spin_unlock_irqrestore(&ksm->idle_slots_lock, flags);
> +		wake_up(&ksm->idle_slots_wait_queue);
> +	}

Given that blk_ksm_get_slot_for_key returns a signed keyslot that
can return errors, and the only caller stores it in a signed variable,
I think this function should take a signed slot as well, and the check
for a non-negative slot should be moved here from the only caller.
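
I.e. something like this (sketch only, adapted from the code quoted above):

	void blk_ksm_put_slot(struct keyslot_manager *ksm, int slot)
	{
		unsigned long flags;

		if (slot < 0)
			return;

		if (WARN_ON(slot >= ksm->num_slots))
			return;

		if (atomic_dec_and_lock_irqsave(&ksm->slots[slot].slot_refs,
						&ksm->idle_slots_lock, flags)) {
			list_add_tail(&ksm->slots[slot].idle_slot_node,
				      &ksm->idle_slots);
			spin_unlock_irqrestore(&ksm->idle_slots_lock, flags);
			wake_up(&ksm->idle_slots_wait_queue);
		}
	}

and the caller then just passes in whatever it got back from
blk_ksm_get_slot_for_key(), negative or not.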

> diff --git a/include/linux/bio.h b/include/linux/bio.h
> index 853d92ceee64..b2103e207ed5 100644
> --- a/include/linux/bio.h
> +++ b/include/linux/bio.h
> @@ -8,6 +8,7 @@
>  #include <linux/highmem.h>
>  #include <linux/mempool.h>
>  #include <linux/ioprio.h>
> +#include <linux/blk-crypto.h>
>  
>  #ifdef CONFIG_BLOCK
>  /* struct bio, bio_vec and BIO_* flags are defined in blk_types.h */

This doesn't belong here; it belongs in the patch that actually requires
the crypto definitions in bio.h.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 0/9] Inline Encryption Support
  2020-02-21 11:50 [PATCH v7 0/9] Inline Encryption Support Satya Tangirala
                   ` (8 preceding siblings ...)
  2020-02-21 11:50 ` [PATCH v7 9/9] ext4: " Satya Tangirala
@ 2020-02-21 17:16 ` Eric Biggers
  9 siblings, 0 replies; 40+ messages in thread
From: Eric Biggers @ 2020-02-21 17:16 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4, Barani Muthukumaran, Kuohong Wang,
	Kim Boojin

On Fri, Feb 21, 2020 at 03:50:41AM -0800, Satya Tangirala wrote:
> This patch series adds support for Inline Encryption to the block layer,
> UFS, fscrypt, f2fs and ext4.
[...]
> Changes v6 => v7:
>  - Keyslot management is now done on a per-request basis rather than a
>    per-bio basis.
>  - Storage drivers can now specify the maximum number of bytes they
>    can accept for the data unit number (DUN) for each crypto algorithm,
>    and upper layers can specify the minimum number of bytes of DUN they
>    want with the blk_crypto_key they send with the bio - a driver is
>    only considered to support a blk_crypto_key if the driver supports at
>    least as many DUN bytes as the upper layer wants. This is necessary
>    because storage drivers may not support as many bytes as the
>    algorithm specification dictates (for e.g. UFS only supports 8 byte
>    DUNs for AES-256-XTS, even though the algorithm specification
>    says DUNs are 16 bytes long).
>  - Introduce SB_INLINECRYPT to keep track of whether inline encryption
>    is enabled for a filesystem (instead of using an fscrypt_operation).
>  - Expose keyslot manager declaration and embed it within ufs_hba to
>    clean up code.
>  - Make blk-crypto preclude blk-integrity.
>  - Some bug fixes
>  - Introduce UFSHCD_QUIRK_BROKEN_CRYPTO for UFS drivers that don't
>    support inline encryption (yet)

This patchset can also be retrieved from

	Repo: https://git.kernel.org/pub/scm/fs/fscrypt/fscrypt.git
	Tag: inline-encryption-v7

For review purposes I also created a tag
inline-encryption-v6-rebased-onto-v7-base which is the v6 patchset rebased onto
the same base commit (v5.6-rc2).  So it's possible to see what changed by

	git diff inline-encryption-v6-rebased-onto-v7-base..inline-encryption-v7

(Although, I had to resolve conflicts in fs/crypto/ to do the rebase, so it's
not *exactly* v6.)

- Eric

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 2/9] block: Inline encryption support for blk-mq
  2020-02-21 11:50 ` [PATCH v7 2/9] block: Inline encryption support for blk-mq Satya Tangirala
@ 2020-02-21 17:22   ` Christoph Hellwig
  2020-02-22  0:52     ` Satya Tangirala
  2020-02-27 18:25     ` Eric Biggers
  0 siblings, 2 replies; 40+ messages in thread
From: Christoph Hellwig @ 2020-02-21 17:22 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4, Barani Muthukumaran, Kuohong Wang,
	Kim Boojin

> index bf62c25cde8f..bce563031e7c 100644
> --- a/block/bio-integrity.c
> +++ b/block/bio-integrity.c
> @@ -42,6 +42,11 @@ struct bio_integrity_payload *bio_integrity_alloc(struct bio *bio,
>  	struct bio_set *bs = bio->bi_pool;
>  	unsigned inline_vecs;
>  
> +	if (bio_has_crypt_ctx(bio)) {
> +		pr_warn("blk-integrity can't be used together with blk-crypto en/decryption.");
> +		return ERR_PTR(-EOPNOTSUPP);
> +	}

What is the rationale for this limitation?  Restricting unrelated
features from being used together is a pretty bad design pattern and
should be avoided where possible.  If it can't be avoided, it needs to be documented
very clearly.

> +#ifdef CONFIG_BLK_INLINE_ENCRYPTION
> +	rq->rq_crypt_ctx.keyslot = -EINVAL;
> +#endif

All the other core block calls to the crypto code are in helpers that
are stubbed out.  It might make sense to follow that style here.
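
E.g. a stub pair along these lines (sketch; the helper name is just
illustrative):

	#ifdef CONFIG_BLK_INLINE_ENCRYPTION
	static inline void blk_crypto_rq_set_defaults(struct request *rq)
	{
		rq->rq_crypt_ctx.bc = NULL;
		rq->rq_crypt_ctx.keyslot = -EINVAL;
	}
	#else
	static inline void blk_crypto_rq_set_defaults(struct request *rq) { }
	#endif

so the request initialization code can call it unconditionally.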

>  
>  free_and_out:
> @@ -1813,5 +1826,8 @@ int __init blk_dev_init(void)
>  	blk_debugfs_root = debugfs_create_dir("block", NULL);
>  #endif
>  
> +	if (bio_crypt_ctx_init() < 0)
> +		panic("Failed to allocate mem for bio crypt ctxs\n");

Maybe move that panic into bio_crypt_ctx_init itself?

> +static int num_prealloc_crypt_ctxs = 128;
> +
> +module_param(num_prealloc_crypt_ctxs, int, 0444);
> +MODULE_PARM_DESC(num_prealloc_crypt_ctxs,
> +		"Number of bio crypto contexts to preallocate");

Please write a comment explaining why this is a tunable, how the default
was chosen, and why someone might want to change it.

> +struct bio_crypt_ctx *bio_crypt_alloc_ctx(gfp_t gfp_mask)
> +{
> +	return mempool_alloc(bio_crypt_ctx_pool, gfp_mask);
> +}

I'd rather move bio_crypt_set_ctx out of line, at which point we don't
really need this helper.
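
E.g. (sketch; assumes the current signature, and that gfp_mask always
includes __GFP_DIRECT_RECLAIM so the mempool allocation cannot fail):

	void bio_crypt_set_ctx(struct bio *bio, const struct blk_crypto_key *key,
			       const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
			       gfp_t gfp_mask)
	{
		struct bio_crypt_ctx *bc =
			mempool_alloc(bio_crypt_ctx_pool, gfp_mask);

		bc->bc_key = key;
		memcpy(bc->bc_dun, dun, sizeof(bc->bc_dun));
		bio->bi_crypt_context = bc;
	}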

> +/* Return: 0 on success, negative on error */
> +int rq_crypt_ctx_acquire_keyslot(struct bio_crypt_ctx *bc,
> +				  struct keyslot_manager *ksm,
> +				  struct rq_crypt_ctx *rc)
> +{
> +	rc->keyslot = blk_ksm_get_slot_for_key(ksm, bc->bc_key);
> +	return rc->keyslot >= 0 ? 0 : rc->keyslot;
> +}
> +
> +void rq_crypt_ctx_release_keyslot(struct keyslot_manager *ksm,
> +				  struct rq_crypt_ctx *rc)
> +{
> +	if (rc->keyslot >= 0)
> +		blk_ksm_put_slot(ksm, rc->keyslot);
> +	rc->keyslot = -EINVAL;
> +}

Is there really much of a need for these helpers?  I think the
callers would generally be simpler without them.  Especially the
fallback code could then avoid having to declare rq_crypt_ctx variables
on the stack.

> +int blk_crypto_init_request(struct request *rq, struct request_queue *q,
> +			    struct bio *bio)

We can always derive the request_queue from rq->q, so there is no need
to pass it explicitly (a lot of legacy block code still does, but that
is slowly getting cleaned up).

> +{
> +	struct rq_crypt_ctx *rc = &rq->rq_crypt_ctx;
> +	struct bio_crypt_ctx *bc;
> +	int err;
> +
> +	rc->bc = NULL;
> +	rc->keyslot = -EINVAL;
> +
> +	if (!bio)
> +		return 0;
> +
> +	bc = bio->bi_crypt_context;
> +	if (!bc)
> +		return 0;

Shouldn't the check of whether the bio actually requires crypto handling
be done by the caller, based on a new helper ala:

static inline bool bio_is_encrypted(struct bio *bio)
{
	return bio && bio->bi_crypt_context;
}

and maybe some inline helpers to reduce the clutter?

That way a kernel with blk-crypto support, but doing non-crypto I/O,
saves all the function calls into blk-crypto.
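
E.g. in blk_mq_make_request() (sketch, with the request_queue argument
dropped as suggested above):

	if (bio_is_encrypted(bio) && blk_crypto_init_request(rq, bio)) {
		/* error handling */
	}

with bio_is_encrypted() being a static inline, so the common non-crypto
case never makes an out-of-line call at all.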

> +	err = bio_crypt_check_alignment(bio);
> +	if (err)
> +		goto fail;

This seems pretty late to check the alignment; it would be more
useful in bio_add_page.  Then again, Jens didn't like alignment checks
in the block layer at all, even for the normal non-crypto alignment,
so I don't see why we'd have them here but not for the general case
(I'd actually like to see them for the general case, btw).

> +int blk_crypto_evict_key(struct request_queue *q,
> +			 const struct blk_crypto_key *key)
> +{
> +	if (q->ksm && blk_ksm_crypto_mode_supported(q->ksm, key))
> +		return blk_ksm_evict_key(q->ksm, key);
> +
> +	return 0;
> +}

Is there any point in this wrapper that just has a single caller?
Also, why doesn't blk_ksm_evict_key have the blk_ksm_crypto_mode_supported
sanity check itself?
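
E.g. (sketch; the exact error value is just a placeholder):

	/* at the top of blk_ksm_evict_key(): */
	if (WARN_ON_ONCE(!blk_ksm_crypto_mode_supported(ksm, key)))
		return -EINVAL;

and then blk_crypto_evict_key() shrinks to a plain
"if (q->ksm) return blk_ksm_evict_key(q->ksm, key);" at its single caller.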

> @@ -1998,6 +2007,13 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
>  
>  	cookie = request_to_qc_t(data.hctx, rq);
>  
> +	if (blk_crypto_init_request(rq, q, bio)) {
> +		bio->bi_status = BLK_STS_RESOURCE;
> +		bio_endio(bio);
> +		blk_mq_end_request(rq, BLK_STS_RESOURCE);
> +		return BLK_QC_T_NONE;
> +	}

This looks fundamentally wrong given that layers above blk-mq
can't handle BLK_STS_RESOURCE.  It will just show up as an error
in the caller instead of being requeued.

That being said, I think the only error return from
blk_crypto_init_request is an actual hardware error.  So failing this
might be ok, but it should be BLK_STS_IOERR, or even better an error
directly propagated from the driver. 
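
E.g. (sketch, assuming blk_crypto_init_request() were changed to return a
blk_status_t that comes straight from the driver/keyslot code):

	blk_status_t status = blk_crypto_init_request(rq, bio);

	if (status != BLK_STS_OK) {
		bio->bi_status = status;
		bio_endio(bio);
		blk_mq_end_request(rq, status);
		return BLK_QC_T_NONE;
	}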

> +int bio_crypt_ctx_init(void);
> +
> +struct bio_crypt_ctx *bio_crypt_alloc_ctx(gfp_t gfp_mask);
> +
> +void bio_crypt_free_ctx(struct bio *bio);

These can go into block layer internal headers.

> +static inline bool bio_crypt_dun_is_contiguous(const struct bio_crypt_ctx *bc,
> +					       unsigned int bytes,
> +					u64 next_dun[BLK_CRYPTO_DUN_ARRAY_SIZE])
> +{
> +	int i = 0;
> +	unsigned int inc = bytes >> bc->bc_key->data_unit_size_bits;
> +
> +	while (i < BLK_CRYPTO_DUN_ARRAY_SIZE) {
> +		if (bc->bc_dun[i] + inc != next_dun[i])
> +			return false;
> +		inc = ((bc->bc_dun[i] + inc)  < inc);
> +		i++;
> +	}
> +
> +	return true;
> +}
> +
> +static inline void bio_crypt_dun_increment(u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
> +					   unsigned int inc)
> +{
> +	int i = 0;
> +
> +	while (inc && i < BLK_CRYPTO_DUN_ARRAY_SIZE) {
> +		dun[i] += inc;
> +		inc = (dun[i] < inc);
> +		i++;
> +	}
> +}

Should these really be inline?

> +bool bio_crypt_rq_ctx_compatible(struct request *rq, struct bio *bio);
> +
> +bool bio_crypt_ctx_front_mergeable(struct request *req, struct bio *bio);
> +
> +bool bio_crypt_ctx_back_mergeable(struct request *req, struct bio *bio);
> +
> +bool bio_crypt_ctx_merge_rq(struct request *req, struct request *next);
> +
> +void blk_crypto_bio_back_merge(struct request *req, struct bio *bio);
> +
> +void blk_crypto_bio_front_merge(struct request *req, struct bio *bio);
> +
> +void blk_crypto_free_request(struct request *rq);
> +
> +int blk_crypto_init_request(struct request *rq, struct request_queue *q,
> +			    struct bio *bio);
> +
> +int blk_crypto_bio_prep(struct bio **bio_ptr);
> +
> +void blk_crypto_rq_bio_prep(struct request *rq, struct bio *bio);
> +
> +void blk_crypto_rq_prep_clone(struct request *dst, struct request *src);
> +
> +int blk_crypto_insert_cloned_request(struct request_queue *q,
> +				     struct request *rq);
> +
> +int blk_crypto_init_key(struct blk_crypto_key *blk_key, const u8 *raw_key,
> +			enum blk_crypto_mode_num crypto_mode,
> +			unsigned int blk_crypto_dun_bytes,
> +			unsigned int data_unit_size);
> +
> +int blk_crypto_evict_key(struct request_queue *q,
> +			 const struct blk_crypto_key *key);

Most of this should be block layer private.

> +struct rq_crypt_ctx {
> +	struct bio_crypt_ctx *bc;
> +	int keyslot;
> +};
> +
>  /*
>   * Try to put the fields that are referenced together in the same cacheline.
>   *
> @@ -224,6 +230,10 @@ struct request {
>  	unsigned short nr_integrity_segments;
>  #endif
>  
> +#ifdef CONFIG_BLK_INLINE_ENCRYPTION
> +	struct rq_crypt_ctx rq_crypt_ctx;
> +#endif

I'd be tempted to just add

	struct bio_crypt_ctx *crypt_ctx;
	int crypt_keyslot;

directly to struct request.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 6/9] scsi: ufs: Add inline encryption support to UFS
  2020-02-21 11:50 ` [PATCH v7 6/9] scsi: ufs: Add inline encryption support to UFS Satya Tangirala
@ 2020-02-21 17:22   ` Christoph Hellwig
  2020-02-21 18:11     ` Eric Biggers
  0 siblings, 1 reply; 40+ messages in thread
From: Christoph Hellwig @ 2020-02-21 17:22 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4, Barani Muthukumaran, Kuohong Wang,
	Kim Boojin

On Fri, Feb 21, 2020 at 03:50:47AM -0800, Satya Tangirala wrote:
> Wire up ufshcd.c with the UFS Crypto API, the block layer inline
> encryption additions and the keyslot manager.
> 
> Also, introduce UFSHCD_QUIRK_BROKEN_CRYPTO that certain UFS drivers
> that don't yet support inline encryption need to use - taken from
> patches by John Stultz <john.stultz@linaro.org>
> (https://android-review.googlesource.com/c/kernel/common/+/1162224/5)
> (https://android-review.googlesource.com/c/kernel/common/+/1162225/5)
> (https://android-review.googlesource.com/c/kernel/common/+/1164506/1)

Between all these quirks, with what upstream SOC does this feature
actually work?

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 3/9] block: blk-crypto-fallback for Inline Encryption
  2020-02-21 11:50 ` [PATCH v7 3/9] block: blk-crypto-fallback for Inline Encryption Satya Tangirala
  2020-02-21 16:51   ` Randy Dunlap
@ 2020-02-21 17:25   ` Christoph Hellwig
  2020-02-21 17:35   ` Christoph Hellwig
  2020-02-27 19:25   ` Eric Biggers
  3 siblings, 0 replies; 40+ messages in thread
From: Christoph Hellwig @ 2020-02-21 17:25 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4, Barani Muthukumaran, Kuohong Wang,
	Kim Boojin

That REQ_NO_SPECIAL thing really needs a separate patch and needs to be
explained.  In fact, I hope we can get away without it, as it is horrible,
so maybe the explanation will lead to a better outcome.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 1/9] block: Keyslot Manager for Inline Encryption
  2020-02-21 17:04   ` Christoph Hellwig
@ 2020-02-21 17:31     ` Christoph Hellwig
  2020-02-27 18:14       ` Eric Biggers
  0 siblings, 1 reply; 40+ messages in thread
From: Christoph Hellwig @ 2020-02-21 17:31 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4, Barani Muthukumaran, Kuohong Wang,
	Kim Boojin

On Fri, Feb 21, 2020 at 09:04:34AM -0800, Christoph Hellwig wrote:
> Given that blk_ksm_get_slot_for_key returns a signed keyslot that
> can return errors, and the only caller stores it in a signed variable,
> I think this function should take a signed slot as well, and the check
> for a non-negative slot should be moved here from the only caller.

Actually looking over the code again I think it might be better to
return only the error code (and that might actually be a blk_status_t),
and then use an argument to return a pointer to the actual struct
keyslot.  That gives us much easier to understand code and better
type safety.
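
I.e. something along the lines of (sketch; "blk_ksm_keyslot" is just a
placeholder name for the keyslot struct):

	blk_status_t blk_ksm_get_slot_for_key(struct keyslot_manager *ksm,
					      const struct blk_crypto_key *key,
					      struct blk_ksm_keyslot **slot_ptr);

where *slot_ptr is only valid when BLK_STS_OK is returned, and
blk_ksm_put_slot() then takes the keyslot pointer (accepting NULL as a
no-op).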

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 3/9] block: blk-crypto-fallback for Inline Encryption
  2020-02-21 11:50 ` [PATCH v7 3/9] block: blk-crypto-fallback for Inline Encryption Satya Tangirala
  2020-02-21 16:51   ` Randy Dunlap
  2020-02-21 17:25   ` Christoph Hellwig
@ 2020-02-21 17:35   ` Christoph Hellwig
  2020-02-21 18:34     ` Eric Biggers
  2020-02-27 19:25   ` Eric Biggers
  3 siblings, 1 reply; 40+ messages in thread
From: Christoph Hellwig @ 2020-02-21 17:35 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4, Barani Muthukumaran, Kuohong Wang,
	Kim Boojin

High-level question:  Does the whole keyslot manager concept even make
sense for the fallback?  With the work-queue we have one item that executes
at a time per cpu.  So just allocate a per-cpu crypto_skcipher for
each encryption mode and there should never be a slot limitation.  Or
am I missing something?

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 6/9] scsi: ufs: Add inline encryption support to UFS
  2020-02-21 17:22   ` Christoph Hellwig
@ 2020-02-21 18:11     ` Eric Biggers
  2020-02-23 13:47       ` Stanley Chu
  0 siblings, 1 reply; 40+ messages in thread
From: Eric Biggers @ 2020-02-21 18:11 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Satya Tangirala, linux-block, linux-scsi, linux-fscrypt,
	linux-fsdevel, linux-f2fs-devel, linux-ext4, Barani Muthukumaran,
	Kuohong Wang, Stanley Chu, Kim Boojin, Ladvine D Almeida,
	Parshuram Raju Thombare

On Fri, Feb 21, 2020 at 09:22:44AM -0800, Christoph Hellwig wrote:
> On Fri, Feb 21, 2020 at 03:50:47AM -0800, Satya Tangirala wrote:
> > Wire up ufshcd.c with the UFS Crypto API, the block layer inline
> > encryption additions and the keyslot manager.
> > 
> > Also, introduce UFSHCD_QUIRK_BROKEN_CRYPTO that certain UFS drivers
> > that don't yet support inline encryption need to use - taken from
> > patches by John Stultz <john.stultz@linaro.org>
> > (https://android-review.googlesource.com/c/kernel/common/+/1162224/5)
> > (https://android-review.googlesource.com/c/kernel/common/+/1162225/5)
> > (https://android-review.googlesource.com/c/kernel/common/+/1164506/1)
> 
> Between all these quirks, with what upstream SOC does this feature
> actually work?

It will work on DragonBoard 845c, i.e. Qualcomm's Snapdragon 845 SoC, if we
apply my patchset
https://lkml.kernel.org/linux-block/20200110061634.46742-1-ebiggers@kernel.org/.
It's currently based on Satya's v6 patchset, but I'll be rebasing it onto v7 and
resending.  It uses all the UFS standard crypto code that Satya is adding except
for ufshcd_program_key(), which has to be replaced with a vendor-specific
operation.  It does also add vendor-specific code to ufs-qcom to initialize the
crypto hardware, but that's in addition to the standard code, not replacing it.

DragonBoard 845c is a commercially available development board that boots the
mainline kernel (modulo two arm-smmu IOMMU patches that Linaro is working on),
so I think it counts as an "upstream SoC".

That's all that we currently have the hardware to verify ourselves, though
Mediatek says that Satya's patches are working on their hardware too.  And the
UFS controller on Mediatek SoCs is supported by the upstream kernel via
ufs-mediatek.  But I don't know whether it just works exactly as-is or whether
they needed to patch ufs-mediatek too.  Stanley or Kuohong, can you confirm?

We're also hoping that the patches are usable with the UFS controllers from
Cadence Design Systems and Synopsys, which have upstream kernel support in
drivers/scsi/ufs/cdns-pltfrm.c and drivers/scsi/ufs/ufshcd-dwc.c.  But we don't
currently have a way to verify this.  But in 2018, both companies had tried to
get the UFS v2.1 standard crypto support upstream, so presumably they must have
implemented it in their hardware.  +Cc the people who were working on that.

- Eric

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 3/9] block: blk-crypto-fallback for Inline Encryption
  2020-02-21 17:35   ` Christoph Hellwig
@ 2020-02-21 18:34     ` Eric Biggers
  2020-02-24 23:36       ` Christoph Hellwig
  0 siblings, 1 reply; 40+ messages in thread
From: Eric Biggers @ 2020-02-21 18:34 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Satya Tangirala, linux-block, linux-scsi, linux-fscrypt,
	linux-fsdevel, linux-f2fs-devel, linux-ext4, Barani Muthukumaran,
	Kuohong Wang, Kim Boojin

On Fri, Feb 21, 2020 at 09:35:39AM -0800, Christoph Hellwig wrote:
> High-level question:  Does the whole keyslot manager concept even make
> sense for the fallback?  With the work-queue we have item that exectutes
> at a time per cpu.  So just allocatea per-cpu crypto_skcipher for
> each encryption mode and there should never be a slot limitation.  Or
> do I miss something?

It does make sense because if blk-crypto-fallback didn't use a keyslot manager,
it would have to call crypto_skcipher_setkey() on the I/O path for every bio to
ensure that the CPU's crypto_skcipher has the correct key.  That's undesirable,
because setting a new key can be expensive with some encryption algorithms, and
also it can require a memory allocation which can fail.  For example, with the
Adiantum algorithm, setting a key requires encrypting ~1100 bytes of data in
order to generate subkeys.  It's better to set a key once and use it many times.
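
To make the difference concrete (rough sketch, not the actual patch code; the
key field names are illustrative):  without the keyslot manager, every bio
would hit something like

	/* per-bio rekey: possibly slow, and it can fail with -ENOMEM */
	err = crypto_skcipher_setkey(tfm, bc->bc_key->raw, bc->bc_key->size);

whereas with the keyslot manager the expensive setkey happens only when a key
is first programmed into a slot, and the per-bio path is just

	slot = blk_ksm_get_slot_for_key(ksm, bc->bc_key);
	/* use the tfm already keyed for 'slot', then blk_ksm_put_slot() */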

Making blk-crypto-fallback use the keyslot manager also allows the keyslot
manager to be tested by routine filesystem regression testing, e.g.
'gce-xfstests -c ext4/encrypt -g auto -m inlinecrypt'.

- Eric

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 7/9] fscrypt: add inline encryption support
  2020-02-21 11:50 ` [PATCH v7 7/9] fscrypt: add inline encryption support Satya Tangirala
@ 2020-02-21 18:40   ` Eric Biggers
  2020-02-22  5:39   ` Eric Biggers
  2020-02-26  0:30   ` Eric Biggers
  2 siblings, 0 replies; 40+ messages in thread
From: Eric Biggers @ 2020-02-21 18:40 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4, Barani Muthukumaran, Kuohong Wang,
	Kim Boojin

On Fri, Feb 21, 2020 at 03:50:48AM -0800, Satya Tangirala wrote:
> diff --git a/fs/crypto/bio.c b/fs/crypto/bio.c
> index 4fa18fff9c4e..82d06cf4b94a 100644
> --- a/fs/crypto/bio.c
> +++ b/fs/crypto/bio.c
> @@ -24,6 +24,8 @@
>  #include <linux/module.h>
>  #include <linux/bio.h>
>  #include <linux/namei.h>
> +#include <linux/fscrypt.h>

No need to include <linux/fscrypt.h> explicitly here, since everything in
fs/crypto/ already gets it via "fscrypt_private.h".

> +static int fscrypt_zeroout_range_inline_crypt(const struct inode *inode,
> +					      pgoff_t lblk, sector_t pblk,
> +					      unsigned int len)
> +{
> +	const unsigned int blockbits = inode->i_blkbits;
> +	const unsigned int blocks_per_page = 1 << (PAGE_SHIFT - blockbits);
> +	struct bio *bio;
> +	int ret, err = 0;
> +	int num_pages = 0;
> +
> +	/* This always succeeds since __GFP_DIRECT_RECLAIM is set. */
> +	bio = bio_alloc(GFP_NOFS, BIO_MAX_PAGES);
> +
> +	while (len) {
> +		unsigned int blocks_this_page = min(len, blocks_per_page);
> +		unsigned int bytes_this_page = blocks_this_page << blockbits;
> +
> +		if (num_pages == 0) {
> +			fscrypt_set_bio_crypt_ctx(bio, inode, lblk, GFP_NOIO);

This should use GFP_NOFS rather than the stricter GFP_NOIO.

> +			bio_set_dev(bio, inode->i_sb->s_bdev);
> +			bio->bi_iter.bi_sector =
> +					pblk << (blockbits - SECTOR_SHIFT);
> +			bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
> +		}
> +		ret = bio_add_page(bio, ZERO_PAGE(0), bytes_this_page, 0);
> +		if (WARN_ON(ret != bytes_this_page)) {
> +			err = -EIO;
> +			goto out;
> +		}
> +		num_pages++;
> +		len -= blocks_this_page;
> +		lblk += blocks_this_page;
> +		pblk += blocks_this_page;
> +		if (num_pages == BIO_MAX_PAGES || !len) {
> +			err = submit_bio_wait(bio);
> +			if (!err && bio->bi_status)
> +				err = -EIO;

submit_bio_wait() already checks bi_status and reflects it in the returned
error, so checking it again here is redundant.

> @@ -69,12 +119,17 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
>  	unsigned int nr_pages;
>  	unsigned int i;
>  	unsigned int offset;
> +	const bool inlinecrypt = fscrypt_inode_uses_inline_crypto(inode);
>  	struct bio *bio;
>  	int ret, err;
>  
>  	if (len == 0)
>  		return 0;
>  
> +	if (inlinecrypt)
> +		return fscrypt_zeroout_range_inline_crypt(inode, lblk, pblk,
> +							  len);
> +

No need for the 'inlinecrypt' bool variable.  Just do:

	if (fscrypt_inode_uses_inline_crypto(inode))

FYI, I had suggested a merge resolution to use here which didn't have the above
problems.  Looks like you missed it?
https://lkml.kernel.org/linux-block/20200114211243.GC41220@gmail.com/

- Eric

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 2/9] block: Inline encryption support for blk-mq
  2020-02-21 17:22   ` Christoph Hellwig
@ 2020-02-22  0:52     ` Satya Tangirala
  2020-02-24 23:34       ` Christoph Hellwig
  2020-02-27 18:25     ` Eric Biggers
  1 sibling, 1 reply; 40+ messages in thread
From: Satya Tangirala @ 2020-02-22  0:52 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4, Barani Muthukumaran, Kuohong Wang,
	Kim Boojin

On Fri, Feb 21, 2020 at 09:22:05AM -0800, Christoph Hellwig wrote:
> > index bf62c25cde8f..bce563031e7c 100644
> > --- a/block/bio-integrity.c
> > +++ b/block/bio-integrity.c
> > @@ -42,6 +42,11 @@ struct bio_integrity_payload *bio_integrity_alloc(struct bio *bio,
> >  	struct bio_set *bs = bio->bi_pool;
> >  	unsigned inline_vecs;
> >  
> > +	if (bio_has_crypt_ctx(bio)) {
> > +		pr_warn("blk-integrity can't be used together with blk-crypto en/decryption.");
> > +		return ERR_PTR(-EOPNOTSUPP);
> > +	}
> 
> What is the rationale for this limitation?  Restricting unrelated
> features from being used together is a pretty bad design pattern and
> should be avoided where possible.  If it can't it needs to be documented
> very clearly.
> 
My understanding of blk-integrity is that for writes, blk-integrity
generates some integrity info for a bio and sends it along with the bio,
and the device on the other end verifies that the data it received to
write matches up with the integrity info provided with the bio, and
saves the integrity info along with the data. As for reads, the device
sends the data along with the saved integrity info and blk-integrity
verifies that the data received matches up with the integrity info.

So for encryption and integrity to work together, in systems without
hardware inline encryption support, encryption of data in a bio must
happen in the kernel before bio_integrity_prep, since we shouldn't
modify data in the bio after integrity calculation, and similarly,
decryption of data in the bio must happen in the kernel after integrity
verification. The integrity info saved on disk is the integrity info of
the ciphertext.

But in systems with hardware inline encryption support, during write, the
device will receive integrity info of the plaintext data, and during
read, it will send integrity info of the plaintext data to the kernel. I'm
worried that the device may not do anything special and may simply save the
integrity info of the plaintext on disk. That would cause the on-disk format
to differ depending on whether hardware inline encryption support is present,
which would cause issues like never being able to switch between using and
not using the "inlinecrypt" mount option (and in particular, it means people
with existing encrypted filesystems on disks with integrity can't just turn
on the "inlinecrypt" mount option).

As far as I can tell, I think all the issues go away if the hardware
explicitly computes integrity info for the ciphertext and stores that on
disk instead of what the kernel passes it, and on the read path, it
decrypts the data on disk and generates integrity info for the plain
text, and passes it to the kernel. Eric also points out that a device
that just stores integrity info of the plaintext on disk would be broken
since the integrity info would leak information about the plaintext, and
that the correct thing for the device to do is store integrity info
computed using only the ciphertext.

So if we're alright with assuming for now that hardware vendors will at
least plan to do the "right" thing, I'll just remove the exclusivity
check in bio-integrity and also the REQ_NO_SPECIAL code :) (and figure
out how to handle dm-integrity), because the rest of the code will
then work as intended w.r.t. encryption with integrity. As for
handling cases like devices that actually don't do the "right" thing, or
devices that support both features but just not at the same time, I think
I'll leave that to when we actually have hardware that has both these
features.

> > +#ifdef CONFIG_BLK_INLINE_ENCRYPTION
> > +	rq->rq_crypt_ctx.keyslot = -EINVAL;
> > +#endif
> 
> All the other core block calls to the crypto code are in helpers that
> are stubbed out.  It might make sense to follow that style here.
> 
> >  
> >  free_and_out:
> > @@ -1813,5 +1826,8 @@ int __init blk_dev_init(void)
> >  	blk_debugfs_root = debugfs_create_dir("block", NULL);
> >  #endif
> >  
> > +	if (bio_crypt_ctx_init() < 0)
> > +		panic("Failed to allocate mem for bio crypt ctxs\n");
> 
> Maybe move that panic into bio_crypt_ctx_init itself?
> 
> > +static int num_prealloc_crypt_ctxs = 128;
> > +
> > +module_param(num_prealloc_crypt_ctxs, int, 0444);
> > +MODULE_PARM_DESC(num_prealloc_crypt_ctxs,
> > +		"Number of bio crypto contexts to preallocate");
> 
> Please write a comment explaining why this is a tunable, how the default is
> chosen, and why someone might want to change it.
> 
> > +struct bio_crypt_ctx *bio_crypt_alloc_ctx(gfp_t gfp_mask)
> > +{
> > +	return mempool_alloc(bio_crypt_ctx_pool, gfp_mask);
> > +}
> 
> I'd rather move bio_crypt_set_ctx out of line, at which point we don't
> really need this helper.
> 
> > +/* Return: 0 on success, negative on error */
> > +int rq_crypt_ctx_acquire_keyslot(struct bio_crypt_ctx *bc,
> > +				  struct keyslot_manager *ksm,
> > +				  struct rq_crypt_ctx *rc)
> > +{
> > +	rc->keyslot = blk_ksm_get_slot_for_key(ksm, bc->bc_key);
> > +	return rc->keyslot >= 0 ? 0 : rc->keyslot;
> > +}
> > +
> > +void rq_crypt_ctx_release_keyslot(struct keyslot_manager *ksm,
> > +				  struct rq_crypt_ctx *rc)
> > +{
> > +	if (rc->keyslot >= 0)
> > +		blk_ksm_put_slot(ksm, rc->keyslot);
> > +	rc->keyslot = -EINVAL;
> > +}
> 
> Is there really much of a need for these helpers?  I think the
> callers would generally be simpler without them.  Especially the
> fallback code can avoid having to declare rq_crypt_ctx variables
> on stack without the helpers.
> 
> > +int blk_crypto_init_request(struct request *rq, struct request_queue *q,
> > +			    struct bio *bio)
> 
> We can always derive the request_queue from rq->q, so there is no need
> to pass it explicitly (even if a lot of legacy block code does, but
> it is slowly getting cleaned up).
> 
> > +{
> > +	struct rq_crypt_ctx *rc = &rq->rq_crypt_ctx;
> > +	struct bio_crypt_ctx *bc;
> > +	int err;
> > +
> > +	rc->bc = NULL;
> > +	rc->keyslot = -EINVAL;
> > +
> > +	if (!bio)
> > +		return 0;
> > +
> > +	bc = bio->bi_crypt_context;
> > +	if (!bc)
> > +		return 0;
> 
> Shouldn't the checks if the bio actually requires crypto handling be
> done by the caller based on a new handler ala:
> 
> static inline bool bio_is_encrypted(struct bio *bio)
> {
> 	return bio && bio->bi_crypt_context;
> }
> 
> and maybe some inline helpers to reduce the clutter?
> 
> That way a kernel with blk crypto support, but using non-crypto I/O
> saves all the function calls to blk-crypto.
> 
> > +	err = bio_crypt_check_alignment(bio);
> > +	if (err)
> > +		goto fail;
> 
> This seems pretty late to check the alignment, it would be more
> useful in bio_add_page.  Then again Jens didn't like alignment checks
> in the block layer at all even for the normal non-crypto alignment,
> so I don't see why we'd have them here but not for the general case
> (I'd actually like to see them for the general case, btw).
> 
> > +int blk_crypto_evict_key(struct request_queue *q,
> > +			 const struct blk_crypto_key *key)
> > +{
> > +	if (q->ksm && blk_ksm_crypto_mode_supported(q->ksm, key))
> > +		return blk_ksm_evict_key(q->ksm, key);
> > +
> > +	return 0;
> > +}
> 
> Is there any point in this wrapper that just has a single caller?
> Also, why doesn't blk_ksm_evict_key have the blk_ksm_crypto_mode_supported
> sanity check itself?
> 
> > @@ -1998,6 +2007,13 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
> >  
> >  	cookie = request_to_qc_t(data.hctx, rq);
> >  
> > +	if (blk_crypto_init_request(rq, q, bio)) {
> > +		bio->bi_status = BLK_STS_RESOURCE;
> > +		bio_endio(bio);
> > +		blk_mq_end_request(rq, BLK_STS_RESOURCE);
> > +		return BLK_QC_T_NONE;
> > +	}
> 
> This looks fundamentally wrong given that layers above blk-mq
> can't handle BLK_STS_RESOURCE.  It will just show up as an error
> in the caller instead of being requeued.
> 
> That being said I think the only error return from
> blk_crypto_init_request is an actual hardware error.  So failing this
> might be ok, but it should be BLK_STS_IOERR, or even better an error
> directly propagated from the driver. 
> 
> > +int bio_crypt_ctx_init(void);
> > +
> > +struct bio_crypt_ctx *bio_crypt_alloc_ctx(gfp_t gfp_mask);
> > +
> > +void bio_crypt_free_ctx(struct bio *bio);
> 
> These can go into block layer internal headers.
> 
> > +static inline bool bio_crypt_dun_is_contiguous(const struct bio_crypt_ctx *bc,
> > +					       unsigned int bytes,
> > +					u64 next_dun[BLK_CRYPTO_DUN_ARRAY_SIZE])
> > +{
> > +	int i = 0;
> > +	unsigned int inc = bytes >> bc->bc_key->data_unit_size_bits;
> > +
> > +	while (i < BLK_CRYPTO_DUN_ARRAY_SIZE) {
> > +		if (bc->bc_dun[i] + inc != next_dun[i])
> > +			return false;
> > +		inc = ((bc->bc_dun[i] + inc)  < inc);
> > +		i++;
> > +	}
> > +
> > +	return true;
> > +}
> > +
> > +static inline void bio_crypt_dun_increment(u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
> > +					   unsigned int inc)
> > +{
> > +	int i = 0;
> > +
> > +	while (inc && i < BLK_CRYPTO_DUN_ARRAY_SIZE) {
> > +		dun[i] += inc;
> > +		inc = (dun[i] < inc);
> > +		i++;
> > +	}
> > +}
> 
> Should these really be inline?
> 
> > +bool bio_crypt_rq_ctx_compatible(struct request *rq, struct bio *bio);
> > +
> > +bool bio_crypt_ctx_front_mergeable(struct request *req, struct bio *bio);
> > +
> > +bool bio_crypt_ctx_back_mergeable(struct request *req, struct bio *bio);
> > +
> > +bool bio_crypt_ctx_merge_rq(struct request *req, struct request *next);
> > +
> > +void blk_crypto_bio_back_merge(struct request *req, struct bio *bio);
> > +
> > +void blk_crypto_bio_front_merge(struct request *req, struct bio *bio);
> > +
> > +void blk_crypto_free_request(struct request *rq);
> > +
> > +int blk_crypto_init_request(struct request *rq, struct request_queue *q,
> > +			    struct bio *bio);
> > +
> > +int blk_crypto_bio_prep(struct bio **bio_ptr);
> > +
> > +void blk_crypto_rq_bio_prep(struct request *rq, struct bio *bio);
> > +
> > +void blk_crypto_rq_prep_clone(struct request *dst, struct request *src);
> > +
> > +int blk_crypto_insert_cloned_request(struct request_queue *q,
> > +				     struct request *rq);
> > +
> > +int blk_crypto_init_key(struct blk_crypto_key *blk_key, const u8 *raw_key,
> > +			enum blk_crypto_mode_num crypto_mode,
> > +			unsigned int blk_crypto_dun_bytes,
> > +			unsigned int data_unit_size);
> > +
> > +int blk_crypto_evict_key(struct request_queue *q,
> > +			 const struct blk_crypto_key *key);
> 
> Most of this should be block layer private.
> 
> > +struct rq_crypt_ctx {
> > +	struct bio_crypt_ctx *bc;
> > +	int keyslot;
> > +};
> > +
> >  /*
> >   * Try to put the fields that are referenced together in the same cacheline.
> >   *
> > @@ -224,6 +230,10 @@ struct request {
> >  	unsigned short nr_integrity_segments;
> >  #endif
> >  
> > +#ifdef CONFIG_BLK_INLINE_ENCRYPTION
> > +	struct rq_crypt_ctx rq_crypt_ctx;
> > +#endif
> 
> I'd be tempted to just add
> 
> 	struct bio_crypt_ctx *crypt_ctx;
> 	int crypt_keyslot;
> 
> directly to struct request.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 5/9] scsi: ufs: UFS crypto API
  2020-02-21 11:50 ` [PATCH v7 5/9] scsi: ufs: UFS crypto API Satya Tangirala
@ 2020-02-22  4:59   ` Eric Biggers
  0 siblings, 0 replies; 40+ messages in thread
From: Eric Biggers @ 2020-02-22  4:59 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4, Barani Muthukumaran, Kuohong Wang,
	Kim Boojin

On Fri, Feb 21, 2020 at 03:50:46AM -0800, Satya Tangirala wrote:
> +static int ufshcd_crypto_cap_find(struct ufs_hba *hba,
> +				  enum blk_crypto_mode_num crypto_mode,
> +				  unsigned int data_unit_size)
> +{
> +	enum ufs_crypto_alg ufs_alg;
> +	u8 data_unit_mask;
> +	int cap_idx;
> +	enum ufs_crypto_key_size ufs_key_size;
> +	union ufs_crypto_cap_entry *ccap_array = hba->crypto_cap_array;
> +
> +	if (!ufshcd_hba_is_crypto_supported(hba))
> +		return -EINVAL;
> +
> +	switch (crypto_mode) {
> +	case BLK_ENCRYPTION_MODE_AES_256_XTS:
> +		ufs_alg = UFS_CRYPTO_ALG_AES_XTS;
> +		ufs_key_size = UFS_CRYPTO_KEY_SIZE_256;
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
[...]
> +bool ufshcd_blk_crypto_mode_num_for_alg_dusize(
> +					enum ufs_crypto_alg ufs_crypto_alg,
> +					enum ufs_crypto_key_size key_size,
> +					enum blk_crypto_mode_num *blk_mode_num,
> +					unsigned int *max_dun_bytes_supported)
> +{
> +	/*
> +	 * This is currently the only mode that UFS and blk-crypto both support.
> +	 */
> +	if (ufs_crypto_alg == UFS_CRYPTO_ALG_AES_XTS &&
> +	    key_size == UFS_CRYPTO_KEY_SIZE_256) {
> +		*blk_mode_num = BLK_ENCRYPTION_MODE_AES_256_XTS;
> +		*max_dun_bytes_supported = 8;
> +		return true;
> +	}
> +
> +	return false;
> +}

In UFS, max_dun_bytes_supported is always 8 because it's a property of how the
DUN is conveyed in the UFS standard, not specific to the crypto algorithm.  So,
ufshcd_hba_init_crypto() should just set it to 8, and there's no need for this
code that pretends it could be a per-algorithm thing.

Also, perhaps ufshcd_crypto_cap_find() and this would be better served by a
table that maps between the different conventions for representing the
algorithms?  For now it would just have one entry:

	static const struct {
		enum ufs_crypto_alg ufs_alg;
		enum ufs_crypto_key_size ufs_key_size;
		enum blk_crypto_mode_num blk_mode;
	} ufs_crypto_algs[] = {
		{
			.ufs_alg = UFS_CRYPTO_ALG_AES_XTS,
			.ufs_key_size = UFS_CRYPTO_KEY_SIZE_256, 
			.blk_mode = BLK_ENCRYPTION_MODE_AES_256_XTS,
		},
	};      

But then it would be super easy to add another entry later.
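
ufshcd_crypto_cap_find() could then just loop over that table instead of using
a switch, roughly like (untested sketch):

	for (i = 0; i < ARRAY_SIZE(ufs_crypto_algs); i++) {
		if (ufs_crypto_algs[i].blk_mode == crypto_mode)
			break;
	}
	if (i >= ARRAY_SIZE(ufs_crypto_algs))
		return -EINVAL;
	ufs_alg = ufs_crypto_algs[i].ufs_alg;
	ufs_key_size = ufs_crypto_algs[i].ufs_key_size;

and ufshcd_blk_crypto_mode_num_for_alg_dusize() would do the reverse lookup on
the same table.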

I think the only reason not to do that is if we didn't expect any more
algorithms to be added later.  But in that case it would be simpler to remove
ufshcd_blk_crypto_mode_num_for_alg_dusize() and just hard-code AES-256-XTS, and
likewise make ufshcd_crypto_cap_find() use 'if' instead of 'switch'.

(Note that ufshcd_blk_crypto_mode_num_for_alg_dusize() is also misnamed, as it
doesn't have anything to do with the data unit size. And it should be 'static'.)

> +
> +/**
> + * ufshcd_hba_init_crypto - Read crypto capabilities, init crypto fields in hba
> + * @hba: Per adapter instance
> + *
> + * Return: 0 if crypto was initialized or is not supported, else a -errno value.
> + */
> +int ufshcd_hba_init_crypto(struct ufs_hba *hba)
> +{
> +	int cap_idx = 0;
> +	int err = 0;
> +	enum blk_crypto_mode_num blk_mode_num;
> +	unsigned int max_dun_bytes;
> +
> +	/* Default to disabling crypto */
> +	hba->caps &= ~UFSHCD_CAP_CRYPTO;
> +
> +	/* Return 0 if crypto support isn't present */
> +	if (!(hba->capabilities & MASK_CRYPTO_SUPPORT) ||
> +	    (hba->quirks & UFSHCD_QUIRK_BROKEN_CRYPTO))
> +		goto out;
> +
> +	/*
> +	 * Crypto Capabilities should never be 0, because the
> +	 * config_array_ptr > 04h. So we use a 0 value to indicate that
> +	 * crypto init failed, and can't be enabled.
> +	 */
> +	hba->crypto_capabilities.reg_val =
> +			cpu_to_le32(ufshcd_readl(hba, REG_UFS_CCAP));
> +	hba->crypto_cfg_register =
> +		(u32)hba->crypto_capabilities.config_array_ptr * 0x100;
> +	hba->crypto_cap_array =
> +		devm_kcalloc(hba->dev,
> +			     hba->crypto_capabilities.num_crypto_cap,
> +			     sizeof(hba->crypto_cap_array[0]),
> +			     GFP_KERNEL);
> +	if (!hba->crypto_cap_array) {
> +		err = -ENOMEM;
> +		goto out;
> +	}
> +
> +	err = blk_ksm_init(&hba->ksm, hba->dev, ufshcd_num_keyslots(hba));
> +	if (err)
> +		goto out_free_caps;
> +
> +	hba->ksm.ksm_ll_ops = ufshcd_ksm_ops;
> +	hba->ksm.ll_priv_data = hba;

ll_priv_data isn't used anymore, so it should be removed.

> +
> +	memset(hba->ksm.crypto_modes_supported, 0,
> +	       sizeof(hba->ksm.crypto_modes_supported));
> +	memset(hba->ksm.max_dun_bytes_supported, 0,
> +	       sizeof(hba->ksm.max_dun_bytes_supported));

No need to zero these arrays here, since it's already done by blk_ksm_init().

> +	/*
> +	 * Store all the capabilities now so that we don't need to repeatedly
> +	 * access the device each time we want to know its capabilities
> +	 */

This comment is a bit misleading now, since this loop also initializes the
crypto_modes_supported array, which is *required* and not just a performance
optimization.

- Eric

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 9/9] ext4: add inline encryption support
  2020-02-21 11:50 ` [PATCH v7 9/9] ext4: " Satya Tangirala
@ 2020-02-22  5:21   ` Eric Biggers
  0 siblings, 0 replies; 40+ messages in thread
From: Eric Biggers @ 2020-02-22  5:21 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4, Barani Muthukumaran, Kuohong Wang,
	Kim Boojin

On Fri, Feb 21, 2020 at 03:50:50AM -0800, Satya Tangirala wrote:
> From: Eric Biggers <ebiggers@google.com>
> 
> Wire up ext4 to support inline encryption via the helper functions which
> fs/crypto/ now provides.  This includes:
> 
> - Adding a mount option 'inlinecrypt' which enables inline encryption
>   on encrypted files where it can be used.
> 
> - Setting the bio_crypt_ctx on bios that will be submitted to an
>   inline-encrypted file.
> 
>   Note: submit_bh_wbc() in fs/buffer.c also needed to be patched for
>   this part, since ext4 sometimes uses ll_rw_block() on file data.
> 
> - Not adding logically discontiguous data to bios that will be submitted
>   to an inline-encrypted file.
> 
> - Not doing filesystem-layer crypto on inline-encrypted files.
> 
> Signed-off-by: Eric Biggers <ebiggers@google.com>
> Signed-off-by: Satya Tangirala <satyat@google.com>

You've changed enough in this patch that you should probably add your
Co-developed-by, or at least summarize your changes in between the Signed-off-by
(e.g. "[satyat: use SB_INLINECRYPT and update docs]")

> ---
>  Documentation/admin-guide/ext4.rst |  4 ++++
>  fs/buffer.c                        |  7 ++++---
>  fs/ext4/ext4.h                     |  1 +
>  fs/ext4/inode.c                    |  4 ++--
>  fs/ext4/page-io.c                  |  6 ++++--
>  fs/ext4/readpage.c                 | 11 ++++++++---
>  fs/ext4/super.c                    | 19 +++++++++++++++++++
>  7 files changed, 42 insertions(+), 10 deletions(-)
> 
> diff --git a/Documentation/admin-guide/ext4.rst b/Documentation/admin-guide/ext4.rst
> index 9443fcef1876..b7cd3bdf827e 100644
> --- a/Documentation/admin-guide/ext4.rst
> +++ b/Documentation/admin-guide/ext4.rst
> @@ -395,6 +395,10 @@ When mounting an ext4 filesystem, the following option are accepted:
>          Documentation/filesystems/dax.txt.  Note that this option is
>          incompatible with data=journal.
>  
> +  inlinecrypt
> +        Use blk-crypto, rather than fscrypt, for file content en/decryption.
> +        See Documentation/block/inline-encryption.rst.

"fscrypt" refers to the ext4 encryption feature itself.  I.e., when inline
encryption is enabled, fscrypt is still being used.  Instead you probably want
to write something like:

	Encrypt/decrypt the contents of encrypted files using the blk-crypto
	framework rather than filesystem-layer encryption.  This allows the use
	of inline encryption hardware.  The on-disk format is unaffected.  For
	more details, see Documentation/block/inline-encryption.rst.

> diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
> index 4441331d06cc..3917f4b3ec30 100644
> --- a/fs/ext4/ext4.h
> +++ b/fs/ext4/ext4.h
> @@ -1151,6 +1151,7 @@ struct ext4_inode_info {
>  #define EXT4_MOUNT_JOURNAL_CHECKSUM	0x800000 /* Journal checksums */
>  #define EXT4_MOUNT_JOURNAL_ASYNC_COMMIT	0x1000000 /* Journal Async Commit */
>  #define EXT4_MOUNT_WARN_ON_ERROR	0x2000000 /* Trigger WARN_ON on error */
> +#define EXT4_MOUNT_INLINECRYPT		0x4000000 /* Inline encryption support */
>  #define EXT4_MOUNT_DELALLOC		0x8000000 /* Delalloc support */
>  #define EXT4_MOUNT_DATA_ERR_ABORT	0x10000000 /* Abort on file data write */
>  #define EXT4_MOUNT_BLOCK_VALIDITY	0x20000000 /* Block validity checking */

Isn't this ext4-specific flag unused now that you switched to SB_INLINECRYPT
instead?

> diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
> index e60aca791d3f..9ac1dd131586 100644
> --- a/fs/ext4/inode.c
> +++ b/fs/ext4/inode.c
> @@ -1089,7 +1089,7 @@ static int ext4_block_write_begin(struct page *page, loff_t pos, unsigned len,
>  	}
>  	if (unlikely(err)) {
>  		page_zero_new_buffers(page, from, to);
> -	} else if (IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode)) {
> +	} else if (fscrypt_inode_uses_fs_layer_crypto(inode)) {
>  		for (i = 0; i < nr_wait; i++) {
>  			int err2;
>  
> @@ -3720,7 +3720,7 @@ static int __ext4_block_zero_page_range(handle_t *handle,
>  		/* Uhhuh. Read error. Complain and punt. */
>  		if (!buffer_uptodate(bh))
>  			goto unlock;
> -		if (S_ISREG(inode->i_mode) && IS_ENCRYPTED(inode)) {
> +		if (fscrypt_inode_uses_fs_layer_crypto(inode)) {
>  			/* We expect the key to be set. */
>  			BUG_ON(!fscrypt_has_encryption_key(inode));
>  			err = fscrypt_decrypt_pagecache_blocks(page, blocksize,
> diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
> index 68b39e75446a..0757145a31b2 100644
> --- a/fs/ext4/page-io.c
> +++ b/fs/ext4/page-io.c
> @@ -404,6 +404,7 @@ static void io_submit_init_bio(struct ext4_io_submit *io,
>  	 * __GFP_DIRECT_RECLAIM is set, see comments for bio_alloc_bioset().
>  	 */
>  	bio = bio_alloc(GFP_NOIO, BIO_MAX_PAGES);
> +	fscrypt_set_bio_crypt_ctx_bh(bio, bh, GFP_NOIO);
>  	bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
>  	bio_set_dev(bio, bh->b_bdev);
>  	bio->bi_end_io = ext4_end_bio;
> @@ -420,7 +421,8 @@ static void io_submit_add_bh(struct ext4_io_submit *io,
>  {
>  	int ret;
>  
> -	if (io->io_bio && bh->b_blocknr != io->io_next_block) {
> +	if (io->io_bio && (bh->b_blocknr != io->io_next_block ||
> +			   !fscrypt_mergeable_bio_bh(io->io_bio, bh))) {
>  submit_and_retry:
>  		ext4_io_submit(io);
>  	}
> @@ -508,7 +510,7 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
>  	 * (e.g. holes) to be unnecessarily encrypted, but this is rare and
>  	 * can't happen in the common case of blocksize == PAGE_SIZE.
>  	 */
> -	if (IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) && nr_to_submit) {
> +	if (fscrypt_inode_uses_fs_layer_crypto(inode) && nr_to_submit) {
>  		gfp_t gfp_flags = GFP_NOFS;
>  		unsigned int enc_bytes = round_up(len, i_blocksize(inode));
>  
> diff --git a/fs/ext4/readpage.c b/fs/ext4/readpage.c
> index c1769afbf799..68eac0aeffad 100644
> --- a/fs/ext4/readpage.c
> +++ b/fs/ext4/readpage.c
> @@ -195,7 +195,7 @@ static void ext4_set_bio_post_read_ctx(struct bio *bio,
>  {
>  	unsigned int post_read_steps = 0;
>  
> -	if (IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode))
> +	if (fscrypt_inode_uses_fs_layer_crypto(inode))
>  		post_read_steps |= 1 << STEP_DECRYPT;
>  
>  	if (ext4_need_verity(inode, first_idx))
> @@ -232,6 +232,7 @@ int ext4_mpage_readpages(struct address_space *mapping,
>  	const unsigned blkbits = inode->i_blkbits;
>  	const unsigned blocks_per_page = PAGE_SIZE >> blkbits;
>  	const unsigned blocksize = 1 << blkbits;
> +	sector_t next_block;
>  	sector_t block_in_file;
>  	sector_t last_block;
>  	sector_t last_block_in_file;
> @@ -264,7 +265,8 @@ int ext4_mpage_readpages(struct address_space *mapping,
>  		if (page_has_buffers(page))
>  			goto confused;
>  
> -		block_in_file = (sector_t)page->index << (PAGE_SHIFT - blkbits);
> +		block_in_file = next_block =
> +			(sector_t)page->index << (PAGE_SHIFT - blkbits);
>  		last_block = block_in_file + nr_pages * blocks_per_page;
>  		last_block_in_file = (ext4_readpage_limit(inode) +
>  				      blocksize - 1) >> blkbits;
> @@ -364,7 +366,8 @@ int ext4_mpage_readpages(struct address_space *mapping,
>  		 * This page will go to BIO.  Do we need to send this
>  		 * BIO off first?
>  		 */
> -		if (bio && (last_block_in_bio != blocks[0] - 1)) {
> +		if (bio && (last_block_in_bio != blocks[0] - 1 ||
> +			    !fscrypt_mergeable_bio(bio, inode, next_block))) {
>  		submit_and_realloc:
>  			submit_bio(bio);
>  			bio = NULL;
> @@ -376,6 +379,8 @@ int ext4_mpage_readpages(struct address_space *mapping,
>  			 */
>  			bio = bio_alloc(GFP_KERNEL,
>  				min_t(int, nr_pages, BIO_MAX_PAGES));
> +			fscrypt_set_bio_crypt_ctx(bio, inode, next_block,
> +						  GFP_KERNEL);
>  			ext4_set_bio_post_read_ctx(bio, inode, page->index);
>  			bio_set_dev(bio, bdev);
>  			bio->bi_iter.bi_sector = blocks[0] << (blkbits - 9);
> diff --git a/fs/ext4/super.c b/fs/ext4/super.c
> index f464dff09774..490e748b048c 100644
> --- a/fs/ext4/super.c
> +++ b/fs/ext4/super.c
> @@ -1504,6 +1504,7 @@ enum {
>  	Opt_journal_path, Opt_journal_checksum, Opt_journal_async_commit,
>  	Opt_abort, Opt_data_journal, Opt_data_ordered, Opt_data_writeback,
>  	Opt_data_err_abort, Opt_data_err_ignore, Opt_test_dummy_encryption,
> +	Opt_inlinecrypt,
>  	Opt_usrjquota, Opt_grpjquota, Opt_offusrjquota, Opt_offgrpjquota,
>  	Opt_jqfmt_vfsold, Opt_jqfmt_vfsv0, Opt_jqfmt_vfsv1, Opt_quota,
>  	Opt_noquota, Opt_barrier, Opt_nobarrier, Opt_err,
> @@ -1601,6 +1602,7 @@ static const match_table_t tokens = {
>  	{Opt_noinit_itable, "noinit_itable"},
>  	{Opt_max_dir_size_kb, "max_dir_size_kb=%u"},
>  	{Opt_test_dummy_encryption, "test_dummy_encryption"},
> +	{Opt_inlinecrypt, "inlinecrypt"},
>  	{Opt_nombcache, "nombcache"},
>  	{Opt_nombcache, "no_mbcache"},	/* for backward compatibility */
>  	{Opt_removed, "check=none"},	/* mount option from ext2/3 */
> @@ -1812,6 +1814,11 @@ static const struct mount_opts {
>  	{Opt_jqfmt_vfsv1, QFMT_VFS_V1, MOPT_QFMT},
>  	{Opt_max_dir_size_kb, 0, MOPT_GTE0},
>  	{Opt_test_dummy_encryption, 0, MOPT_GTE0},
> +#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
> +	{Opt_inlinecrypt, EXT4_MOUNT_INLINECRYPT, MOPT_SET},
> +#else
> +	{Opt_inlinecrypt, EXT4_MOUNT_INLINECRYPT, MOPT_NOSUPPORT},
> +#endif
>  	{Opt_nombcache, EXT4_MOUNT_NO_MBCACHE, MOPT_SET},
>  	{Opt_err, 0, 0}
>  };
> @@ -1888,6 +1895,13 @@ static int handle_mount_opt(struct super_block *sb, char *opt, int token,
>  	case Opt_nolazytime:
>  		sb->s_flags &= ~SB_LAZYTIME;
>  		return 1;
> +	case Opt_inlinecrypt:
> +#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
> +		sb->s_flags |= SB_INLINE_CRYPT;
> +#else
> +		ext4_msg(sb, KERN_ERR, "inline encryption not supported");
> +#endif
> +		return 1;
>  	}
>  
>  	for (m = ext4_mount_opts; m->token != Opt_err; m++)
> @@ -2300,6 +2314,8 @@ static int _ext4_show_options(struct seq_file *seq, struct super_block *sb,
>  		SEQ_OPTS_PUTS("data_err=abort");
>  	if (DUMMY_ENCRYPTION_ENABLED(sbi))
>  		SEQ_OPTS_PUTS("test_dummy_encryption");
> +	if (sb->s_flags & SB_INLINE_CRYPT)
> +		SEQ_OPTS_PUTS("inlinecrypt");

If we're going with the VFS-level mount option flag, it looks like the code to
show it should go in show_sb_opts() in fs/proc_namespace.c instead of here.
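
I.e., roughly (sketch), an entry in the fs_opts[] table in show_sb_opts():

	{ SB_INLINE_CRYPT, ",inlinecrypt" },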

> @@ -3808,6 +3824,9 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
>  	 */
>  	sbi->s_li_wait_mult = EXT4_DEF_LI_WAIT_MULT;
>  
> +	/* disable blk-crypto file content encryption by default */
> +	sb->s_flags &= ~SB_INLINE_CRYPT;
> +

Don't the flags default to 0?  If they do, this part is unnecessary.

- Eric

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 7/9] fscrypt: add inline encryption support
  2020-02-21 11:50 ` [PATCH v7 7/9] fscrypt: add inline encryption support Satya Tangirala
  2020-02-21 18:40   ` Eric Biggers
@ 2020-02-22  5:39   ` Eric Biggers
  2020-02-26  0:30   ` Eric Biggers
  2 siblings, 0 replies; 40+ messages in thread
From: Eric Biggers @ 2020-02-22  5:39 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4, Barani Muthukumaran, Kuohong Wang,
	Kim Boojin

On Fri, Feb 21, 2020 at 03:50:48AM -0800, Satya Tangirala wrote:
> diff --git a/fs/crypto/keysetup.c b/fs/crypto/keysetup.c
> index 65cb09fa6ead..7c157130c16a 100644
> --- a/fs/crypto/keysetup.c
> +++ b/fs/crypto/keysetup.c
> @@ -19,6 +19,8 @@ struct fscrypt_mode fscrypt_modes[] = {
>  		.cipher_str = "xts(aes)",
>  		.keysize = 64,
>  		.ivsize = 16,
> +		.blk_crypto_mode = BLK_ENCRYPTION_MODE_AES_256_XTS,
> +		.blk_crypto_dun_bytes_required = 8,
>  	},
>  	[FSCRYPT_MODE_AES_256_CTS] = {
>  		.friendly_name = "AES-256-CTS-CBC",
> @@ -31,6 +33,8 @@ struct fscrypt_mode fscrypt_modes[] = {
>  		.cipher_str = "essiv(cbc(aes),sha256)",
>  		.keysize = 16,
>  		.ivsize = 16,
> +		.blk_crypto_mode = BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV,
> +		.blk_crypto_dun_bytes_required = 8,
>  	},
>  	[FSCRYPT_MODE_AES_128_CTS] = {
>  		.friendly_name = "AES-128-CTS-CBC",
> @@ -43,6 +47,8 @@ struct fscrypt_mode fscrypt_modes[] = {
>  		.cipher_str = "adiantum(xchacha12,aes)",
>  		.keysize = 32,
>  		.ivsize = 32,
> +		.blk_crypto_mode = BLK_ENCRYPTION_MODE_ADIANTUM,
> +		.blk_crypto_dun_bytes_required = 24,
>  	},
>  };

The number of DUN bytes required is actually determined by the IV generation
method too.
Currently fscrypt has the following combinations:

	AES-256-XTS: 8 bytes
	AES-128-CBC-ESSIV: 8 bytes
	Adiantum without DIRECT_KEY: 8 bytes
	Adiantum with DIRECT_KEY: 24 bytes

I.e., DIRECT_KEY is only allowed with Adiantum, but not required for it.

So it's technically incorrect to always pass dun_bytes_required=24 for Adiantum.

And it's conceivable that in the future we could add an fscrypt setting that
uses AES-256-XTS with 16 IV bytes.  Such a setting wouldn't be usable with UFS
inline encryption, yet the existing AES-256-XTS settings still would.

So, how about instead of putting .blk_crypto_dun_bytes_required in the
crypto_mode table, using logic like:

	dun_bytes_required = 8;
	if (flags & FSCRYPT_POLICY_FLAG_DIRECT_KEY)
		dun_bytes_required += 16;

> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index 3cd4fe6b845e..2331ff0464b2 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -1370,6 +1370,7 @@ extern int send_sigurg(struct fown_struct *fown);
>  #define SB_NODIRATIME	2048	/* Do not update directory access times */
>  #define SB_SILENT	32768
>  #define SB_POSIXACL	(1<<16)	/* VFS does not apply the umask */
> +#define SB_INLINE_CRYPT	(1<<17)	/* inodes in SB use blk-crypto */
>  #define SB_KERNMOUNT	(1<<22) /* this is a kern_mount call */
>  #define SB_I_VERSION	(1<<23) /* Update inode I_version field */
>  #define SB_LAZYTIME	(1<<25) /* Update the on-disk [acm]times lazily */

This flag probably should be called "SB_INLINECRYPT" to match the mount option,
which is "inlinecrypt" not "inline_crypt".

Also, the addition of this flag, along with the update to show_sb_opts() in
fs/proc_namespace.c which I think is needed, maybe should go in a separate patch
whose subject is prefixed with "fs: " to make it clearer to reviewers that this
part is a VFS-level change.

- Eric

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 6/9] scsi: ufs: Add inline encryption support to UFS
  2020-02-21 18:11     ` Eric Biggers
@ 2020-02-23 13:47       ` Stanley Chu
  2020-02-24 23:37         ` Christoph Hellwig
  0 siblings, 1 reply; 40+ messages in thread
From: Stanley Chu @ 2020-02-23 13:47 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Christoph Hellwig, Satya Tangirala, linux-block, linux-scsi,
	linux-fscrypt, linux-fsdevel, linux-f2fs-devel, linux-ext4,
	Barani Muthukumaran, Kuohong Wang, Kim Boojin, Ladvine D Almeida,
	Parshuram Raju Thombare

Hi, 

On Fri, 2020-02-21 at 10:11 -0800, Eric Biggers wrote:
> On Fri, Feb 21, 2020 at 09:22:44AM -0800, Christoph Hellwig wrote:
> > On Fri, Feb 21, 2020 at 03:50:47AM -0800, Satya Tangirala wrote:
> > > Wire up ufshcd.c with the UFS Crypto API, the block layer inline
> > > encryption additions and the keyslot manager.
> > > 
> > > Also, introduce UFSHCD_QUIRK_BROKEN_CRYPTO that certain UFS drivers
> > > that don't yet support inline encryption need to use - taken from
> > > patches by John Stultz <john.stultz@linaro.org>
> > > (https://android-review.googlesource.com/c/kernel/common/+/1162224/5)
> > > (https://android-review.googlesource.com/c/kernel/common/+/1162225/5)
> > > (https://android-review.googlesource.com/c/kernel/common/+/1164506/1)
> > 
> > Between all these quirks, with what upstream SOC does this feature
> > actually work?
> 
> It will work on DragonBoard 845c, i.e. Qualcomm's Snapdragon 845 SoC, if we
> apply my patchset
> https://lkml.kernel.org/linux-block/20200110061634.46742-1-ebiggers@kernel.org/.
> It's currently based on Satya's v6 patchset, but I'll be rebasing it onto v7 and
> resending.  It uses all the UFS standard crypto code that Satya is adding except
> for ufshcd_program_key(), which has to be replaced with a vendor-specific
> operation.  It does also add vendor-specific code to ufs-qcom to initialize the
> crypto hardware, but that's in addition to the standard code, not replacing it.
> 
> DragonBoard 845c is a commercially available development board that boots the
> mainline kernel (modulo two arm-smmu IOMMU patches that Linaro is working on),
> so I think it counts as an "upstream SoC".
> 
> That's all that we currently have the hardware to verify ourselves, though
> Mediatek says that Satya's patches are working on their hardware too.  And the
> UFS controller on Mediatek SoCs is supported by the upstream kernel via
> ufs-mediatek.  But I don't know whether it just works exactly as-is or whether
> they needed to patch ufs-mediatek too.  Stanley or Kuohong, can you confirm?

Yes, MediaTek has been working closely with the inline encryption patch sets.
Currently the v6 version works well (without the
UFSHCD_QUIRK_BROKEN_CRYPTO quirk), at least on our MT6779 SoC platform,
whose basic SoC support and some other peripheral drivers are being
upstreamed at the link below,

https://patchwork.kernel.org/project/linux-mediatek/list/?state=%
2A&q=6779&series=&submitter=&delegate=&archive=both

Integrating with the inline encryption patch set requires patching
ufs-mediatek, and those patches are ready downstream. We plan to upstream
them soon after the inline encryption patch set gets merged.

> 
> We're also hoping that the patches are usable with the UFS controllers from
> Cadence Design Systems and Synopsys, which have upstream kernel support in
> drivers/scsi/ufs/cdns-pltfrm.c and drivers/scsi/ufs/ufshcd-dwc.c.  But we don't
> currently have a way to verify this.  But in 2018, both companies had tried to
> get the UFS v2.1 standard crypto support upstream, so presumably they must have
> implemented it in their hardware.  +Cc the people who were working on that.
> 
> - Eric

Thanks,
Stanley Chu

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 2/9] block: Inline encryption support for blk-mq
  2020-02-22  0:52     ` Satya Tangirala
@ 2020-02-24 23:34       ` Christoph Hellwig
  0 siblings, 0 replies; 40+ messages in thread
From: Christoph Hellwig @ 2020-02-24 23:34 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: Christoph Hellwig, linux-block, linux-scsi, linux-fscrypt,
	linux-fsdevel, linux-f2fs-devel, linux-ext4, Barani Muthukumaran,
	Kuohong Wang, Kim Boojin

On Fri, Feb 21, 2020 at 04:52:33PM -0800, Satya Tangirala wrote:
> > What is the rationale for this limitation?  Restricting unrelated
> > features from being used together is a pretty bad design pattern and
> > should be avoided where possible.  If it can't it needs to be documented
> > very clearly.
> > 
> My understanding of blk-integrity is that for writes, blk-integrity
> generates some integrity info for a bio and sends it along with the bio,
> and the device on the other end verifies that the data it received to
> write matches up with the integrity info provided with the bio, and
> saves the integrity info along with the data. As for reads, the device
> sends the data along with the saved integrity info and blk-integrity
> verifies that the data received matches up with the integrity info.

Yes, a device supporting inline encryption and integrity will have to
update the guard tag to match the encrypted data as well.  That alone
is a good enough reason to reject the combination for now until it
is fully supported.  It needs to be properly documented, and I think
we should also do it at probe time if possible, not when submitting
I/O.
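
E.g. (very rough sketch, hypothetical placement in blk_integrity_register(),
not actual code):

	/* don't register an integrity profile if the queue has inline crypto */
	if (disk->queue->ksm) {
		pr_warn("%s: not registering integrity profile; queue has inline encryption\n",
			disk->disk_name);
		return;
	}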

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 3/9] block: blk-crypto-fallback for Inline Encryption
  2020-02-21 18:34     ` Eric Biggers
@ 2020-02-24 23:36       ` Christoph Hellwig
  0 siblings, 0 replies; 40+ messages in thread
From: Christoph Hellwig @ 2020-02-24 23:36 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Christoph Hellwig, Satya Tangirala, linux-block, linux-scsi,
	linux-fscrypt, linux-fsdevel, linux-f2fs-devel, linux-ext4,
	Barani Muthukumaran, Kuohong Wang, Kim Boojin

On Fri, Feb 21, 2020 at 10:34:37AM -0800, Eric Biggers wrote:
> On Fri, Feb 21, 2020 at 09:35:39AM -0800, Christoph Hellwig wrote:
> > High-level question:  Does the whole keyslot manager concept even make
> > sense for the fallback?  With the work-queue we have one item that executes
> > at a time per cpu.  So just allocate a per-cpu crypto_skcipher for
> > each encryption mode and there should never be a slot limitation.  Or
> > do I miss something?
> 
> It does make sense because if blk-crypto-fallback didn't use a keyslot manager,
> it would have to call crypto_skcipher_setkey() on the I/O path for every bio to
> ensure that the CPU's crypto_skcipher has the correct key.  That's undesirable,
> because setting a new key can be expensive with some encryption algorithms, and
> also it can require a memory allocation which can fail.  For example, with the
> Adiantum algorithm, setting a key requires encrypting ~1100 bytes of data in
> order to generate subkeys.  It's better to set a key once and use it many times.

I didn't think of such expensive operations when setting the key.
Note that you would not have to do it on every I/O, as chances are high
you'll get I/O from the same submitter and thus the same key, and we
can optimize for that case pretty easily.
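
E.g. (rough sketch; 'last_key' is a hypothetical per-cpu field, not something
in the current patches):

	/* skip the expensive rekey when consecutive bios use the same key */
	if (percpu_state->last_key != bc->bc_key) {
		err = crypto_skcipher_setkey(percpu_state->tfm,
					     bc->bc_key->raw, bc->bc_key->size);
		if (err)
			return err;
		percpu_state->last_key = bc->bc_key;
	}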

But if you think the keyslot manager is better, I accept that; this was
just a thought when looking over the code.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 6/9] scsi: ufs: Add inline encryption support to UFS
  2020-02-23 13:47       ` Stanley Chu
@ 2020-02-24 23:37         ` Christoph Hellwig
  2020-02-25  7:21           ` Stanley Chu
  0 siblings, 1 reply; 40+ messages in thread
From: Christoph Hellwig @ 2020-02-24 23:37 UTC (permalink / raw)
  To: Stanley Chu
  Cc: Eric Biggers, Christoph Hellwig, Satya Tangirala, linux-block,
	linux-scsi, linux-fscrypt, linux-fsdevel, linux-f2fs-devel,
	linux-ext4, Barani Muthukumaran, Kuohong Wang, Kim Boojin,
	Ladvine D Almeida, Parshuram Raju Thombare

On Sun, Feb 23, 2020 at 09:47:36PM +0800, Stanley Chu wrote:
> Yes, MediaTek is keeping work closely with inline encryption patch sets.
> Currently the v6 version can work well (without
> UFSHCD_QUIRK_BROKEN_CRYPTO quirk) at least in our MT6779 SoC platform
> which basic SoC support and some other peripheral drivers are under
> upstreaming as below link,
> 
> https://patchwork.kernel.org/project/linux-mediatek/list/?state=%
> 2A&q=6779&series=&submitter=&delegate=&archive=both
> 
> The integration with inline encryption patch set needs to patch
> ufs-mediatek and patches are ready in downstream. We plan to upstream
> them soon after inline encryption patch sets get merged.

What amount of support do you need in ufs-mediatek?  It seems like
pretty much every ufs low-level driver needs some kind of specific
support now, right?  I wonder if we should instead opt into the support
instead of all the quirking here.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 6/9] scsi: ufs: Add inline encryption support to UFS
  2020-02-24 23:37         ` Christoph Hellwig
@ 2020-02-25  7:21           ` Stanley Chu
  2020-02-26  1:12             ` Eric Biggers
  0 siblings, 1 reply; 40+ messages in thread
From: Stanley Chu @ 2020-02-25  7:21 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Eric Biggers, Satya Tangirala, linux-block, linux-scsi,
	linux-fscrypt, linux-fsdevel, linux-f2fs-devel, linux-ext4,
	Barani Muthukumaran, Kuohong Wang, Kim Boojin, Ladvine D Almeida,
	Parshuram Raju Thombare

Hi Christoph,

On Mon, 2020-02-24 at 15:37 -0800, Christoph Hellwig wrote:
> On Sun, Feb 23, 2020 at 09:47:36PM +0800, Stanley Chu wrote:
> > Yes, MediaTek is keeping work closely with inline encryption patch sets.
> > Currently the v6 version can work well (without
> > UFSHCD_QUIRK_BROKEN_CRYPTO quirk) at least in our MT6779 SoC platform
> > which basic SoC support and some other peripheral drivers are under
> > upstreaming as below link,
> > 
> > https://patchwork.kernel.org/project/linux-mediatek/list/?state=%
> > 2A&q=6779&series=&submitter=&delegate=&archive=both
> > 
> > The integration with inline encryption patch set needs to patch
> > ufs-mediatek and patches are ready in downstream. We plan to upstream
> > them soon after inline encryption patch sets get merged.
> 
> What amount of support do you need in ufs-mediatek?  It seems like
> pretty much every ufs low-level driver needs some kind of specific
> support now, right?  I wonder if we should instead opt into the support
> instead of all the quirking here.

The patch in ufs-mediatek is aimed at issuing vendor-specific SMC calls
for host initialization and configuration. This is because the MediaTek UFS
host has some "secure-protected" registers/features which need to be
accessed/switched in the secure world.
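
Conceptually the vendor hook just traps into the secure world, e.g. (sketch;
SMC_CMD_UFS_CRYPTO_CTRL is a hypothetical SIP function ID, not real code):

	struct arm_smccc_res res;

	arm_smccc_smc(SMC_CMD_UFS_CRYPTO_CTRL, 1 /* enable */, 0, 0,
		      0, 0, 0, 0, &res);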

Such protection is not mentioned by the UFSHCI specification, so the inline
encryption patch set assumes that every register in UFSHCI can be
accessed normally in the kernel. This makes sense, and the patch set surely
works fine on any "standard" UFS host. However, if the host has a special
design, then it can work normally only if some vendor-specific treatment
is applied.

I think one of the reasons to apply the UFSHCD_QUIRK_BROKEN_CRYPTO quirk in
the ufs-qcom host is similar to the above case.

Thanks,
Stanley Chu


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 7/9] fscrypt: add inline encryption support
  2020-02-21 11:50 ` [PATCH v7 7/9] fscrypt: add inline encryption support Satya Tangirala
  2020-02-21 18:40   ` Eric Biggers
  2020-02-22  5:39   ` Eric Biggers
@ 2020-02-26  0:30   ` Eric Biggers
  2 siblings, 0 replies; 40+ messages in thread
From: Eric Biggers @ 2020-02-26  0:30 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4, Barani Muthukumaran, Kuohong Wang,
	Kim Boojin

On Fri, Feb 21, 2020 at 03:50:48AM -0800, Satya Tangirala wrote:
> +/**
> + * fscrypt_inode_uses_inline_crypto - test whether an inode uses inline
> + *				      encryption
> + * @inode: an inode
> + *
> + * Return: true if the inode requires file contents encryption and if the
> + *	   encryption should be done in the block layer via blk-crypto rather
> + *	   than in the filesystem layer.
> + */
> +bool fscrypt_inode_uses_inline_crypto(const struct inode *inode)
> +{
> +	return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) &&
> +		inode->i_crypt_info->ci_inlinecrypt;
> +}
> +EXPORT_SYMBOL_GPL(fscrypt_inode_uses_inline_crypto);
> +
> +/**
> + * fscrypt_inode_uses_fs_layer_crypto - test whether an inode uses fs-layer
> + *					encryption
> + * @inode: an inode
> + *
> + * Return: true if the inode requires file contents encryption and if the
> + *	   encryption should be done in the filesystem layer rather than in the
> + *	   block layer via blk-crypto.
> + */
> +bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode)
> +{
> +	return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) &&
> +		!inode->i_crypt_info->ci_inlinecrypt;
> +}
> +EXPORT_SYMBOL_GPL(fscrypt_inode_uses_fs_layer_crypto);

We should use the fscrypt_needs_contents_encryption() helper function which I
added in v5.6.  I.e.:

diff --git a/fs/crypto/inline_crypt.c b/fs/crypto/inline_crypt.c
index 72692366795aa9..36510802a3665a 100644
--- a/fs/crypto/inline_crypt.c
+++ b/fs/crypto/inline_crypt.c
@@ -32,7 +32,7 @@ void fscrypt_select_encryption_impl(struct fscrypt_info *ci)
 	struct super_block *sb = inode->i_sb;
 
 	/* The file must need contents encryption, not filenames encryption */
-	if (!S_ISREG(inode->i_mode))
+	if (!fscrypt_needs_contents_encryption(inode))
 		return;
 
 	/* blk-crypto must implement the needed encryption algorithm */
@@ -148,7 +148,7 @@ void fscrypt_destroy_inline_crypt_key(struct fscrypt_prepared_key *prep_key)
  */
 bool fscrypt_inode_uses_inline_crypto(const struct inode *inode)
 {
-	return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) &&
+	return fscrypt_needs_contents_encryption(inode) &&
 		inode->i_crypt_info->ci_inlinecrypt;
 }
 EXPORT_SYMBOL_GPL(fscrypt_inode_uses_inline_crypto);
@@ -164,7 +164,7 @@ EXPORT_SYMBOL_GPL(fscrypt_inode_uses_inline_crypto);
  */
 bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode)
 {
-	return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) &&
+	return fscrypt_needs_contents_encryption(inode) &&
 		!inode->i_crypt_info->ci_inlinecrypt;
 }
 EXPORT_SYMBOL_GPL(fscrypt_inode_uses_fs_layer_crypto);
diff --git a/include/linux/fscrypt.h b/include/linux/fscrypt.h
index 2a84131ab270fd..1d9810eb88b113 100644
--- a/include/linux/fscrypt.h
+++ b/include/linux/fscrypt.h
@@ -528,7 +528,7 @@ static inline bool fscrypt_inode_uses_inline_crypto(const struct inode *inode)
 
 static inline bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode)
 {
-	return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode);
+	return fscrypt_needs_contents_encryption(inode);
 }
 
 static inline void fscrypt_set_bio_crypt_ctx(struct bio *bio,

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 6/9] scsi: ufs: Add inline encryption support to UFS
  2020-02-25  7:21           ` Stanley Chu
@ 2020-02-26  1:12             ` Eric Biggers
  2020-02-26  6:43               ` Stanley Chu
  0 siblings, 1 reply; 40+ messages in thread
From: Eric Biggers @ 2020-02-26  1:12 UTC (permalink / raw)
  To: Stanley Chu
  Cc: Christoph Hellwig, Satya Tangirala, linux-block, linux-scsi,
	linux-fscrypt, linux-fsdevel, linux-f2fs-devel, linux-ext4,
	Barani Muthukumaran, Kuohong Wang, Kim Boojin, Ladvine D Almeida,
	Parshuram Raju Thombare

On Tue, Feb 25, 2020 at 03:21:25PM +0800, Stanley Chu wrote:
> Hi Christoph,
> 
> On Mon, 2020-02-24 at 15:37 -0800, Christoph Hellwig wrote:
> > On Sun, Feb 23, 2020 at 09:47:36PM +0800, Stanley Chu wrote:
> > > Yes, MediaTek is keeping work closely with inline encryption patch sets.
> > > Currently the v6 version can work well (without
> > > UFSHCD_QUIRK_BROKEN_CRYPTO quirk) at least in our MT6779 SoC platform
> > > which basic SoC support and some other peripheral drivers are under
> > > upstreaming as below link,
> > > 
> > > https://patchwork.kernel.org/project/linux-mediatek/list/?state=%
> > > 2A&q=6779&series=&submitter=&delegate=&archive=both
> > > 
> > > The integration with inline encryption patch set needs to patch
> > > ufs-mediatek and patches are ready in downstream. We plan to upstream
> > > them soon after inline encryption patch sets get merged.
> > 
> > What amount of support do you need in ufs-mediatek?  It seems like
> > pretty much every ufs low-level driver needs some kind of specific
> > support now, right?  I wonder if we should instead opt into the support
> > instead of all the quirking here.
> 
> The patch in ufs-mediatek is aimed to issue vendor-specific SMC calls
> for host initialization and configuration. This is because MediaTek UFS
> host has some "secure-protected" registers/features which need to be
> accessed/switched in secure world. 
> 
> Such protection is not mentioned by UFSHCI specifications thus inline
> encryption patch set assumes that every registers in UFSHCI can be
> accessed normally in kernel. This makes sense and surely the patchset
> can work fine in any "standard" UFS host. However if host has special
> design then it can work normally only if some vendor-specific treatment
> is applied.
> 
> I think one of the reason to apply UFSHCD_QUIRK_BROKEN_CRYPTO quirk in
> ufs-qcom host is similar to above case.

So, I had originally assumed that most kernel developers would prefer to make
the UFS crypto support opt-out rather than opt-in, since that matches the normal
Linux way of doing things.  I.e. normally the kernel's default assumption is
that devices implement the relevant standard, and only when a device is known to
deviate from the standard does the driver apply quirks.

But indeed, as we've investigated more vendors' UFS hardware, it seems that
everyone has some quirk that needs to be handled in the platform driver:

  - ufs-qcom (tested on DragonBoard 845c with upstream kernel) needs
    vendor-specific crypto initialization logic and SMC calls to set keys

  - ufs-mediatek needs the quirks that Stanley mentioned above

  - ufs-hisi (tested on Hikey960 with upstream kernel) needs to write a
    vendor-specific register to use high keyslots, but even then I still
    couldn't get the crypto support working correctly.

I'm not sure about the UFS controllers from Synopsys, Cadence, or Samsung, all
of which apparently have implemented some form of the crypto support too.  But I
wouldn't get my hopes up that everyone followed the UFS standard precisely.

So if there are no objections, IMO we should make the crypto support opt-in.
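
Concretely (rough sketch), that would mean ufshcd core keeps UFSHCD_CAP_CRYPTO
cleared by default, and each vendor driver with verified support sets it from
its own init path, e.g.:

	hba->caps |= UFSHCD_CAP_CRYPTO;

rather than every driver with non-working support having to set
UFSHCD_QUIRK_BROKEN_CRYPTO.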

That makes it even more important to upstream the crypto support for specific
hardware like ufs-qcom and ufs-mediatek, since otherwise the ufshcd-crypto code
would be unusable even theoretically.  I'm volunteering to handle ufs-qcom with
https://lkml.kernel.org/linux-block/20200110061634.46742-1-ebiggers@kernel.org/.
Stanley, could you send out ufs-mediatek support as an RFC so people can see
better what it involves?

- Eric

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v7 6/9] scsi: ufs: Add inline encryption support to UFS
  2020-02-26  1:12             ` Eric Biggers
@ 2020-02-26  6:43               ` Stanley Chu
  2020-03-02  9:17                 ` Stanley Chu
  0 siblings, 1 reply; 40+ messages in thread
From: Stanley Chu @ 2020-02-26  6:43 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Christoph Hellwig, Satya Tangirala, linux-block, linux-scsi,
	linux-fscrypt, linux-fsdevel, linux-f2fs-devel, linux-ext4,
	Barani Muthukumaran, Kuohong Wang, Kim Boojin, Ladvine D Almeida,
	Parshuram Raju Thombare

Hi Eric,

On Tue, 2020-02-25 at 17:12 -0800, Eric Biggers wrote:
> On Tue, Feb 25, 2020 at 03:21:25PM +0800, Stanley Chu wrote:
> > Hi Christoph,
> > 
> > On Mon, 2020-02-24 at 15:37 -0800, Christoph Hellwig wrote:
> > > On Sun, Feb 23, 2020 at 09:47:36PM +0800, Stanley Chu wrote:
> > > > Yes, MediaTek is keeping work closely with inline encryption patch sets.
> > > > Currently the v6 version can work well (without
> > > > UFSHCD_QUIRK_BROKEN_CRYPTO quirk) at least in our MT6779 SoC platform
> > > > which basic SoC support and some other peripheral drivers are under
> > > > upstreaming as below link,
> > > > 
> > > > https://patchwork.kernel.org/project/linux-mediatek/list/?state=%
> > > > 2A&q=6779&series=&submitter=&delegate=&archive=both
> > > > 
> > > > The integration with inline encryption patch set needs to patch
> > > > ufs-mediatek and patches are ready in downstream. We plan to upstream
> > > > them soon after inline encryption patch sets get merged.
> > > 
> > > What amount of support do you need in ufs-mediatek?  It seems like
> > > pretty much every ufs low-level driver needs some kind of specific
> > > support now, right?  I wonder if we should instead opt into the support
> > > instead of all the quirking here.
> > 
> > The patch in ufs-mediatek is aimed at issuing vendor-specific SMC calls
> > for host initialization and configuration. This is because the MediaTek
> > UFS host has some "secure-protected" registers/features which need to be
> > accessed/switched in the secure world.
> > 
> > Such protection is not mentioned by the UFSHCI specification, so the
> > inline encryption patch set assumes that every register in UFSHCI can be
> > accessed normally from the kernel. This makes sense, and the patchset
> > surely works fine on any "standard" UFS host. However, if the host has a
> > special design, it works normally only if some vendor-specific treatment
> > is applied.
> > 
> > I think one of the reasons to apply the UFSHCD_QUIRK_BROKEN_CRYPTO quirk
> > in the ufs-qcom host is similar to the case above.
> 
> So, I had originally assumed that most kernel developers would prefer to make
> the UFS crypto support opt-out rather than opt-in, since that matches the normal
> Linux way of doing things.  I.e. normally the kernel's default assumption is
> that devices implement the relevant standard, and only when a device is known to
> deviate from the standard does the driver apply quirks.
> 
> But indeed, as we've investigated more vendors' UFS hardware, it seems that
> everyone has some quirk that needs to be handled in the platform driver:
> 
>   - ufs-qcom (tested on DragonBoard 845c with upstream kernel) needs
>     vendor-specific crypto initialization logic and SMC calls to set keys
> 
>   - ufs-mediatek needs the quirks that Stanley mentioned above
> 
>   - ufs-hisi (tested on Hikey960 with upstream kernel) needs to write a
>     vendor-specific register to use high keyslots, but even then I still
>     couldn't get the crypto support working correctly.
> 
> I'm not sure about the UFS controllers from Synopsys, Cadence, or Samsung, all
> of which apparently have implemented some form of the crypto support too.  But I
> wouldn't get my hopes up that everyone followed the UFS standard precisely.
> 
> So if there are no objections, IMO we should make the crypto support opt-in.
> 
> That makes it even more important to upstream the crypto support for specific
> hardware like ufs-qcom and ufs-mediatek, since otherwise the ufshcd-crypto code
> would be unusable even theoretically.  I'm volunteering to handle ufs-qcom with
> https://lkml.kernel.org/linux-block/20200110061634.46742-1-ebiggers@kernel.org/.
> Stanley, could you send out ufs-mediatek support as an RFC so people can see
> better what it involves?

Sure, I will send out our RFC patches. Please allow me some time for
submission.

Thanks,
Stanley Chu
> - Eric



* Re: [PATCH v7 1/9] block: Keyslot Manager for Inline Encryption
  2020-02-21 17:31     ` Christoph Hellwig
@ 2020-02-27 18:14       ` Eric Biggers
  2020-02-27 21:25         ` Satya Tangirala
  0 siblings, 1 reply; 40+ messages in thread
From: Eric Biggers @ 2020-02-27 18:14 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Satya Tangirala, linux-block, linux-scsi, linux-fscrypt,
	linux-fsdevel, linux-f2fs-devel, linux-ext4, Barani Muthukumaran,
	Kuohong Wang, Kim Boojin

On Fri, Feb 21, 2020 at 09:31:18AM -0800, Christoph Hellwig wrote:
> On Fri, Feb 21, 2020 at 09:04:34AM -0800, Christoph Hellwig wrote:
> > Given that blk_ksm_get_slot_for_key returns a signed keyslot so that it
> > can return errors, and the only caller stores it in a signed variable,
> > I think this function should take a signed slot as well, and the check
> > for a non-negative slot should be moved here from the only caller.
> 
> Actually looking over the code again I think it might be better to
> return only the error code (and that might actually be a blk_status_t),
> and then use an argument to return a pointer to the actual struct
> keyslot.  That gives us much easier to understand code and better
> type safety.

That doesn't make sense because the caller only cares about the keyslot number,
not the 'struct keyslot'.  The 'struct keyslot' is internal to
keyslot-manager.c, as it only contains keyslot management information.

Your earlier suggestion of making blk_ksm_put_slot() be a no-op on a negative
keyslot number sounds fine though.
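
I.e., roughly the following, assuming the slot parameter is still an int in
this version of the series:

	void blk_ksm_put_slot(struct keyslot_manager *ksm, int slot)
	{
		if (slot < 0)
			return;
		/* ... existing release logic ... */
	}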

- Eric


* Re: [PATCH v7 2/9] block: Inline encryption support for blk-mq
  2020-02-21 17:22   ` Christoph Hellwig
  2020-02-22  0:52     ` Satya Tangirala
@ 2020-02-27 18:25     ` Eric Biggers
  1 sibling, 0 replies; 40+ messages in thread
From: Eric Biggers @ 2020-02-27 18:25 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Satya Tangirala, linux-block, linux-scsi, linux-fscrypt,
	linux-fsdevel, linux-f2fs-devel, linux-ext4, Barani Muthukumaran,
	Kuohong Wang, Kim Boojin

On Fri, Feb 21, 2020 at 09:22:05AM -0800, Christoph Hellwig wrote:
> > +int blk_crypto_evict_key(struct request_queue *q,
> > +			 const struct blk_crypto_key *key)
> > +{
> > +	if (q->ksm && blk_ksm_crypto_mode_supported(q->ksm, key))
> > +		return blk_ksm_evict_key(q->ksm, key);
> > +
> > +	return 0;
> > +}
> 
> Is there any point in this wrapper that just has a single caller?
> Also, why doesn't blk_ksm_evict_key have the blk_ksm_crypto_mode_supported
> sanity check itself?

Later in the series it's changed to:

int blk_crypto_evict_key(struct request_queue *q,
                         const struct blk_crypto_key *key)
{
        if (q->ksm && blk_ksm_crypto_mode_supported(q->ksm, key))
                return blk_ksm_evict_key(q->ksm, key);

        return blk_crypto_fallback_evict_key(key);
}

I.e. if the encryption mode is using hardware, then the key needs to be evicted
from q->ksm.  Otherwise the key needs to be evicted from the fallback.

Also keep in mind that our goal is to define a clean API for any user of the
block layer to use encryption, not just fs/crypto/.  That API includes:

	blk_crypto_init_key()
	blk_crypto_start_using_key()
	bio_crypt_set_ctx()
	blk_crypto_evict_key()

If anyone else decides to use inline encryption (e.g., if inline encryption
support were added to dm-crypt or another device-mapper target), they'll use
these same functions.
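
To make that concrete, here's a rough sketch of how such a user might drive
the API.  This is illustrative only -- the exact signatures and constants
(argument order, the gfp flags, BLK_CRYPTO_DUN_ARRAY_SIZE, etc.) are my
approximation of the current series, not necessarily what v7 has verbatim:

	/* Hypothetical blk-crypto user, e.g. a future dm target. */
	static int example_submit_encrypted_bio(struct bio *bio,
						struct request_queue *q,
						const u8 *raw_key)
	{
		struct blk_crypto_key key;
		u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE] = { 0 };
		int err;

		/* Describe the key: algorithm, DUN width, data unit size. */
		err = blk_crypto_init_key(&key, raw_key,
					  BLK_ENCRYPTION_MODE_AES_256_XTS,
					  sizeof(u64), 4096);
		if (err)
			return err;

		/* Make sure the queue can use it (keyslot or fallback). */
		err = blk_crypto_start_using_key(&key, q);
		if (err)
			return err;

		/* Attach the encryption context and submit as usual. */
		bio_crypt_set_ctx(bio, &key, dun, GFP_NOIO);
		submit_bio(bio);

		/* Much later, once nothing will use the key again: */
		return blk_crypto_evict_key(q, &key);
	}

(The key lifetime is compressed here just to show all four calls in one
place; a real user would keep the blk_crypto_key around for as long as the
key is in use.)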

So IMO it's important to define a clean API that won't need to be refactored as
soon as anyone else starts using it, and not e.g. micro-optimize for code length
based on there currently being only one user.

- Eric


* Re: [PATCH v7 1/9] block: Keyslot Manager for Inline Encryption
  2020-02-21 11:50 ` [PATCH v7 1/9] block: Keyslot Manager for Inline Encryption Satya Tangirala
  2020-02-21 17:04   ` Christoph Hellwig
@ 2020-02-27 18:48   ` Eric Biggers
  1 sibling, 0 replies; 40+ messages in thread
From: Eric Biggers @ 2020-02-27 18:48 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4, Barani Muthukumaran, Kuohong Wang,
	Kim Boojin

On Fri, Feb 21, 2020 at 03:50:42AM -0800, Satya Tangirala wrote:
> Inline Encryption hardware allows software to specify an encryption context
> (an encryption key, crypto algorithm, data unit num, data unit size) along
> with a data transfer request to a storage device, and the inline encryption
> hardware will use that context to en/decrypt the data. The inline
> encryption hardware is part of the storage device, and it conceptually sits
> on the data path between system memory and the storage device.
> 
> Inline Encryption hardware implementations often function around the
> concept of "keyslots". These implementations often have a limited number
> of "keyslots", each of which can hold an encryption context (we say that
> an encryption context can be "programmed" into a keyslot). Requests made
> to the storage device may have a keyslot associated with them, and the
> inline encryption hardware will en/decrypt the data in the requests using
> the encryption context programmed into that associated keyslot.

The second paragraph uses the phrase "encryption context" differently from the
first.  In the first it's (key, alg, dun, dusize) while in the second it's (key,
alg, dusize).  Maybe simply say "key" in the second?  "Key" is already used as
shorthand in lots of places without specifying that it really means also the
(alg, dusize), like in "keyslot" and "keyslot manager".

This inconsistency shows up in some other places in the patchset too.

> diff --git a/block/keyslot-manager.c b/block/keyslot-manager.c
> new file mode 100644
> index 000000000000..15dfc0c12c7f
> --- /dev/null
> +++ b/block/keyslot-manager.c
[...]
> +/**
> + * blk_ksm_init() - Initialize a keyslot manager
> + * @ksm: The keyslot_manager to initialize.
> + * @dev: Device for runtime power management (NULL if none)
> + * @num_slots: The number of key slots to manage.
> + *
> + * Allocate memory for keyslots and initialize a keyslot manager. Called by
> + * e.g. storage drivers to set up a keyslot manager in their request_queue.
> + *
> + * Return: 0 on success, or else a negative error code.
> + */
> +int blk_ksm_init(struct keyslot_manager *ksm, struct device *dev,
> +		 unsigned int num_slots)
> +{
> +	unsigned int slot;
> +	unsigned int i;
> +
> +	memset(ksm, 0, sizeof(*ksm));
> +
> +	if (num_slots == 0)
> +		return -EINVAL;
> +
> +	ksm->slots = kvzalloc(sizeof(ksm->slots[0]) * num_slots, GFP_KERNEL);
> +	if (!ksm->slots)
> +		return -ENOMEM;

This should use kvcalloc() so that an integer overflow check is included.
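
For example (the same allocation, just using the overflow-checked helper):

	ksm->slots = kvcalloc(num_slots, sizeof(ksm->slots[0]), GFP_KERNEL);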

> +
> +	ksm->num_slots = num_slots;
> +	blk_ksm_set_dev(ksm, dev);
> +
> +	init_rwsem(&ksm->lock);
> +
> +	init_waitqueue_head(&ksm->idle_slots_wait_queue);
> +	INIT_LIST_HEAD(&ksm->idle_slots);
> +
> +	for (slot = 0; slot < num_slots; slot++) {
> +		list_add_tail(&ksm->slots[slot].idle_slot_node,
> +			      &ksm->idle_slots);
> +	}
> +
> +	spin_lock_init(&ksm->idle_slots_lock);
> +
> +	ksm->slot_hashtable_size = roundup_pow_of_two(num_slots);
> +	ksm->slot_hashtable = kvmalloc_array(ksm->slot_hashtable_size,
> +					     sizeof(ksm->slot_hashtable[0]),
> +					     GFP_KERNEL);
> +	if (!ksm->slot_hashtable)
> +		goto err_free_ksm;
> +	for (i = 0; i < ksm->slot_hashtable_size; i++)
> +		INIT_HLIST_HEAD(&ksm->slot_hashtable[i]);
> +
> +	return 0;
> +
> +err_free_ksm:
> +	blk_ksm_destroy(ksm);
> +	return -ENOMEM;
> +}
> +EXPORT_SYMBOL_GPL(blk_ksm_init);

The label should be called 'err_destroy_ksm' now that it's not actually freeing
the memory of the 'struct keyslot_manager' itself.

> +/**
> + * blk_ksm_crypto_mode_supported() - Find out if a crypto_mode/data unit size
> + *				     combination is supported by a ksm.
> + * @ksm: The keyslot manager to check
> + * @crypto_mode: The crypto mode to check for.
> + * @blk_crypto_dun_bytes: The minimum number of bytes needed for specifying DUNs
> + * @data_unit_size: The data_unit_size for the mode.
> + *
> + * Checks for crypto_mode/data unit size support.
> + *
> + * Return: Whether or not this ksm supports the specified crypto_mode/
> + *	   data_unit_size combo.
> + */
> +bool blk_ksm_crypto_mode_supported(struct keyslot_manager *ksm,
> +				   const struct blk_crypto_key *key)
> +{
> +	if (!ksm)
> +		return false;
> +	if (WARN_ON((unsigned int)key->crypto_mode >= BLK_ENCRYPTION_MODE_MAX))
> +		return false;
> +	if (WARN_ON(!is_power_of_2(key->data_unit_size)))
> +		return false;
> +	return (ksm->crypto_modes_supported[key->crypto_mode] &
> +		key->data_unit_size) &&
> +	       (ksm->max_dun_bytes_supported[key->crypto_mode] >=
> +		key->dun_bytes);
> +}

The kerneldoc for this function is outdated now that it was changed to take in a
'struct blk_crypto_key'.

Also, since the crypto_mode and data_unit_size fields of 'struct blk_crypto_key'
are already validated by blk_crypto_init_key(), they don't really need to be
validated again here anymore.

> +void blk_ksm_destroy(struct keyslot_manager *ksm)
> +{
> +	if (!ksm)
> +		return;
> +	kvfree(ksm->slot_hashtable);
> +	kvfree(ksm->slots);
> +	memzero_explicit(ksm, sizeof(*ksm));
> +}
> +EXPORT_SYMBOL_GPL(blk_ksm_destroy);

In v6 => v7, zeroing the keys was lost.  We need:

	memzero_explicit(ksm->slots, sizeof(ksm->slots[0]) * ksm->num_slots);

> diff --git a/include/linux/blk-crypto.h b/include/linux/blk-crypto.h
> new file mode 100644
> index 000000000000..b8d54eca1c0d
> --- /dev/null
> +++ b/include/linux/blk-crypto.h
> @@ -0,0 +1,45 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright 2019 Google LLC
> + */
> +
> +#ifndef __LINUX_BLK_CRYPTO_H
> +#define __LINUX_BLK_CRYPTO_H
> +
> +enum blk_crypto_mode_num {
> +	BLK_ENCRYPTION_MODE_INVALID,
> +	BLK_ENCRYPTION_MODE_AES_256_XTS,
> +	BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV,
> +	BLK_ENCRYPTION_MODE_ADIANTUM,
> +	BLK_ENCRYPTION_MODE_MAX,
> +};
> +
> +#define BLK_CRYPTO_MAX_KEY_SIZE		64
> +
> +/**
> + * struct blk_crypto_key - an inline encryption key
> + * @crypto_mode: encryption algorithm this key is for
> + * @data_unit_size: the data unit size for all encryption/decryptions with this
> + *	key.  This is the size in bytes of each individual plaintext and
> + *	ciphertext.  This is always a power of 2.  It might be e.g. the
> + *	filesystem block size or the disk sector size.
> + * @data_unit_size_bits: log2 of data_unit_size
> + * @dun_bytes: the number of bytes of DUN used when using this key

Shouldn't @dun_bytes be:

	the maximum number of bytes of DUN needed when using this key

> +struct keyslot_manager {
> +	unsigned int num_slots;
> +
> +	/*
> +	 * The struct keyslot_mgmt_ll_ops that this keyslot manager will use
> +	 * to perform operations like programming and evicting keys on the
> +	 * device
> +	 */
> +	struct keyslot_mgmt_ll_ops ksm_ll_ops;
> +
> +	/*
> +	 * Array of size BLK_ENCRYPTION_MODE_MAX of bitmasks that represents
> +	 * whether a crypto mode and data unit size are supported. The i'th
> +	 * bit of crypto_mode_supported[crypto_mode] is set iff a data unit
> +	 * size of (1 << i) is supported. We only support data unit sizes
> +	 * that are powers of 2.
> +	 */
> +	unsigned int crypto_modes_supported[BLK_ENCRYPTION_MODE_MAX];
> +	/*
> +	 * Array of size BLK_ENCRYPTION_MODE_MAX. The i'th entry specifies the
> +	 * maximum number of dun bytes supported by the i'th crypto mode.
> +	 */
> +	unsigned int max_dun_bytes_supported[BLK_ENCRYPTION_MODE_MAX];
> +
> +	/* Private data unused by keyslot_manager. */
> +	void *ll_priv_data;
> +
> +#ifdef CONFIG_PM
> +	/* Device for runtime power management (NULL if none) */
> +	struct device *dev;
> +#endif
> +
> +	/* Protects programming and evicting keys from the device */
> +	struct rw_semaphore lock;
> +
> +	/* List of idle slots, with least recently used slot at front */
> +	wait_queue_head_t idle_slots_wait_queue;
> +	struct list_head idle_slots;
> +	spinlock_t idle_slots_lock;
> +
> +	/*
> +	 * Hash table which maps key hashes to keyslots, so that we can find a
> +	 * key's keyslot in O(1) time rather than O(num_slots).  Protected by
> +	 * 'lock'.  A cryptographic hash function is used so that timing attacks
> +	 * can't leak information about the raw keys.
> +	 */
> +	struct hlist_head *slot_hashtable;
> +	unsigned int slot_hashtable_size;
> +
> +	/* Per-keyslot data */
> +	struct keyslot *slots;
> +};

Now that struct keyslot_manager is exposed to everyone, it would be helpful if
the comments and field order clearly divided the fields into two parts: one
part that's public, and one part that's for keyslot-manager-internal use only.
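
Something along these lines (just a sketch; which fields end up in the public
half depends on how blk_ksm_init() and the drivers fill things in):

	struct keyslot_manager {
		/*
		 * Public: set up by the device driver and read by the
		 * keyslot manager / blk-crypto.
		 */
		struct keyslot_mgmt_ll_ops ksm_ll_ops;
		unsigned int crypto_modes_supported[BLK_ENCRYPTION_MODE_MAX];
		unsigned int max_dun_bytes_supported[BLK_ENCRYPTION_MODE_MAX];
		void *ll_priv_data;

		/* Internal: used only by keyslot-manager.c. */
		unsigned int num_slots;
		struct rw_semaphore lock;
		wait_queue_head_t idle_slots_wait_queue;
		struct list_head idle_slots;
		spinlock_t idle_slots_lock;
		struct hlist_head *slot_hashtable;
		unsigned int slot_hashtable_size;
		struct keyslot *slots;
		/* ... */
	};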

- Eric


* Re: [PATCH v7 3/9] block: blk-crypto-fallback for Inline Encryption
  2020-02-21 11:50 ` [PATCH v7 3/9] block: blk-crypto-fallback for Inline Encryption Satya Tangirala
                     ` (2 preceding siblings ...)
  2020-02-21 17:35   ` Christoph Hellwig
@ 2020-02-27 19:25   ` Eric Biggers
  3 siblings, 0 replies; 40+ messages in thread
From: Eric Biggers @ 2020-02-27 19:25 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel,
	linux-f2fs-devel, linux-ext4, Barani Muthukumaran, Kuohong Wang,
	Kim Boojin

On Fri, Feb 21, 2020 at 03:50:44AM -0800, Satya Tangirala wrote:
> Blk-crypto delegates crypto operations to inline encryption hardware when
> available. The separately configurable blk-crypto-fallback contains a
> software fallback to the kernel crypto API - when enabled, blk-crypto
> will use this fallback for en/decryption when inline encryption hardware is
> not available. This lets upper layers not have to worry about whether or
> not the underlying device has support for inline encryption before
> deciding to specify an encryption context for a bio, and also allows for
> testing without actual inline encryption hardware. For more details, refer
> to Documentation/block/inline-encryption.rst.
> 
> Signed-off-by: Satya Tangirala <satyat@google.com>

In v7, only blk_mq_make_request() actually calls blk_crypto_bio_prep().
That will make the crypto contexts be silently ignored (no fallback) if
q->make_request_fn != blk_mq_make_request.

In recent kernels that *hopefully* won't matter in practice since almost
everyone is using blk_mq_make_request.  But it still seems like a poor design.
First, it's super important that if someone requests encryption, then they
either get it or get an error; it should *never* be silently ignored.  Second,
part of the goal of blk-crypto-fallback is that it should always work, so that
in principle users don't have to implement the encryption twice, once via
blk-crypto and once via fs or dm-layer crypto.

So is there any reason not to keep the blk_crypto_bio_prep() call in
generic_make_request()?
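
For reference, that would just mean keeping something like the following in
generic_make_request() (a sketch -- the exact calling convention of
blk_crypto_bio_prep() here is my reading of the series, not a quote of it):

	blk_qc_t generic_make_request(struct bio *bio)
	{
		...
		if (!generic_make_request_checks(bio))
			goto out;

		/*
		 * Resolve the crypto context: use the hardware keyslot
		 * manager if the queue has one, else blk-crypto-fallback;
		 * on failure the bio is ended with an error.
		 */
		if (!blk_crypto_bio_prep(&bio))
			goto out;
		...
	}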

I think performance can't be much of a complaint, since if almost everyone is
using blk_mq_make_request() then they are making the function call anyway...

- Eric


* Re: [PATCH v7 1/9] block: Keyslot Manager for Inline Encryption
  2020-02-27 18:14       ` Eric Biggers
@ 2020-02-27 21:25         ` Satya Tangirala
  2020-03-05 16:11           ` Christoph Hellwig
  0 siblings, 1 reply; 40+ messages in thread
From: Satya Tangirala @ 2020-02-27 21:25 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Christoph Hellwig, linux-block, linux-scsi, linux-fscrypt,
	linux-fsdevel, linux-f2fs-devel, linux-ext4, Barani Muthukumaran,
	Kuohong Wang, Kim Boojin

On Thu, Feb 27, 2020 at 10:14:11AM -0800, Eric Biggers wrote:
> On Fri, Feb 21, 2020 at 09:31:18AM -0800, Christoph Hellwig wrote:
> > On Fri, Feb 21, 2020 at 09:04:34AM -0800, Christoph Hellwig wrote:
> > > Given that blk_ksm_get_slot_for_key returns a signed keyslot so that it
> > > can return errors, and the only caller stores it in a signed variable,
> > > I think this function should take a signed slot as well, and the check
> > > for a non-negative slot should be moved here from the only caller.
> > 
> > Actually looking over the code again I think it might be better to
> > return only the error code (and that might actually be a blk_status_t),
> > and then use an argument to return a pointer to the actual struct
> > keyslot.  That gives us much easier to understand code and better
> > type safety.
> 
> That doesn't make sense because the caller only cares about the keyslot number,
> not the 'struct keyslot'.  The 'struct keyslot' is internal to
> keyslot-manager.c, as it only contains keyslot management information.
> 
I think it does make some sense at least to make the keyslot type opaque
to most of the system other than the driver itself (the driver will now
have to call a function like blk_ksm_slot_idx_for_keyslot to actually get
a keyslot number at the end of the day). Also this way, the keyslot manager
can verify that the keyslot passed to blk_ksm_put_slot is actually part of
that keyslot manager (and that somebody isn't releasing a slot number that
was actually acquired from a different keyslot manager). I don't think
there's much benefit or loss either way, but I already switched to passing
pointers to struct keyslot around instead of ints, so I'll keep it that
way unless you strongly feel that using ints in this case is better
than struct keyslot *.
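
In other words, the interface would look roughly like this (a sketch;
blk_ksm_slot_idx_for_keyslot is just the name I used above, not necessarily
what it will end up being called):

	/* Opaque outside keyslot-manager.c: */
	struct keyslot;

	/* Acquire/release a slot for a key; callers only hold the pointer. */
	blk_status_t blk_ksm_get_slot_for_key(struct keyslot_manager *ksm,
					      const struct blk_crypto_key *key,
					      struct keyslot **slot_ptr);
	void blk_ksm_put_slot(struct keyslot *slot);

	/* Only the driver needs the raw index, when programming hardware: */
	unsigned int blk_ksm_slot_idx_for_keyslot(struct keyslot *slot);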
> Your earlier suggestion of making blk_ksm_put_slot() be a no-op on a negative
> keyslot number sounds fine though.
> 
> - Eric


* Re: [PATCH v7 6/9] scsi: ufs: Add inline encryption support to UFS
  2020-02-26  6:43               ` Stanley Chu
@ 2020-03-02  9:17                 ` Stanley Chu
  0 siblings, 0 replies; 40+ messages in thread
From: Stanley Chu @ 2020-03-02  9:17 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Christoph Hellwig, Satya Tangirala, linux-block, linux-scsi,
	linux-fscrypt, linux-fsdevel, linux-f2fs-devel, linux-ext4,
	Barani Muthukumaran, Kuohong Wang (王國鴻),
	Kim Boojin, Ladvine D Almeida, Parshuram Raju Thombare

Hi Eric and all,

On Wed, 2020-02-26 at 14:43 +0800, Stanley Chu wrote:
> Hi Eric,
> 
> On Tue, 2020-02-25 at 17:12 -0800, Eric Biggers wrote:

> > 
> > I'm not sure about the UFS controllers from Synopsys, Cadence, or Samsung, all
> > of which apparently have implemented some form of the crypto support too.  But I
> > wouldn't get my hopes up that everyone followed the UFS standard precisely.
> > 
> > So if there are no objections, IMO we should make the crypto support opt-in.
> > 
> > That makes it even more important to upstream the crypto support for specific
> > hardware like ufs-qcom and ufs-mediatek, since otherwise the ufshcd-crypto code
> > would be unusable even theoretically.  I'm volunteering to handle ufs-qcom with
> > https://lkml.kernel.org/linux-block/20200110061634.46742-1-ebiggers@kernel.org/.
> > Stanley, could you send out ufs-mediatek support as an RFC so people can see
> > better what it involves?
> 
> Sure, I will send out our RFC patches. Please allow me some time for
> submission.

The ufs-mediatek RFC patch has been uploaded at
https://patchwork.kernel.org/patch/11415051/

This patch is rebased onto the latest wip-inline-encryption branch in
Eric Biggers's git tree:
https://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git/

Thanks,
Stanley Chu


* Re: [PATCH v7 1/9] block: Keyslot Manager for Inline Encryption
  2020-02-27 21:25         ` Satya Tangirala
@ 2020-03-05 16:11           ` Christoph Hellwig
  0 siblings, 0 replies; 40+ messages in thread
From: Christoph Hellwig @ 2020-03-05 16:11 UTC (permalink / raw)
  To: Satya Tangirala
  Cc: Eric Biggers, Christoph Hellwig, linux-block, linux-scsi,
	linux-fscrypt, linux-fsdevel, linux-f2fs-devel, linux-ext4,
	Barani Muthukumaran, Kuohong Wang, Kim Boojin

On Thu, Feb 27, 2020 at 01:25:12PM -0800, Satya Tangirala wrote:
> I think it does make some sense at least to make the keyslot type opaque
> to most of the system other than the driver itself (the driver will now
> have to call a function like blk_ksm_slot_idx_for_keyslot to actually get
> a keyslot number at the end of the day). Also this way, the keyslot manager
> can verify that the keyslot passed to blk_ksm_put_slot is actually part of
> that keyslot manager (and that somebody isn't releasing a slot number that
> was actually acquired from a different keyslot manager). I don't think
> there's much benefit or loss either way, but I already switched to passing
> pointers to struct keyslot around instead of ints, so I'll keep it that
> way unless you strongly feel that using ints in this case is better
> than struct keyslot *.

Exactly.  This provides a little type safety.


Thread overview: 40+ messages
2020-02-21 11:50 [PATCH v7 0/9] Inline Encryption Support Satya Tangirala
2020-02-21 11:50 ` [PATCH v7 1/9] block: Keyslot Manager for Inline Encryption Satya Tangirala
2020-02-21 17:04   ` Christoph Hellwig
2020-02-21 17:31     ` Christoph Hellwig
2020-02-27 18:14       ` Eric Biggers
2020-02-27 21:25         ` Satya Tangirala
2020-03-05 16:11           ` Christoph Hellwig
2020-02-27 18:48   ` Eric Biggers
2020-02-21 11:50 ` [PATCH v7 2/9] block: Inline encryption support for blk-mq Satya Tangirala
2020-02-21 17:22   ` Christoph Hellwig
2020-02-22  0:52     ` Satya Tangirala
2020-02-24 23:34       ` Christoph Hellwig
2020-02-27 18:25     ` Eric Biggers
2020-02-21 11:50 ` [PATCH v7 3/9] block: blk-crypto-fallback for Inline Encryption Satya Tangirala
2020-02-21 16:51   ` Randy Dunlap
2020-02-21 17:25   ` Christoph Hellwig
2020-02-21 17:35   ` Christoph Hellwig
2020-02-21 18:34     ` Eric Biggers
2020-02-24 23:36       ` Christoph Hellwig
2020-02-27 19:25   ` Eric Biggers
2020-02-21 11:50 ` [PATCH v7 4/9] scsi: ufs: UFS driver v2.1 spec crypto additions Satya Tangirala
2020-02-21 11:50 ` [PATCH v7 5/9] scsi: ufs: UFS crypto API Satya Tangirala
2020-02-22  4:59   ` Eric Biggers
2020-02-21 11:50 ` [PATCH v7 6/9] scsi: ufs: Add inline encryption support to UFS Satya Tangirala
2020-02-21 17:22   ` Christoph Hellwig
2020-02-21 18:11     ` Eric Biggers
2020-02-23 13:47       ` Stanley Chu
2020-02-24 23:37         ` Christoph Hellwig
2020-02-25  7:21           ` Stanley Chu
2020-02-26  1:12             ` Eric Biggers
2020-02-26  6:43               ` Stanley Chu
2020-03-02  9:17                 ` Stanley Chu
2020-02-21 11:50 ` [PATCH v7 7/9] fscrypt: add inline encryption support Satya Tangirala
2020-02-21 18:40   ` Eric Biggers
2020-02-22  5:39   ` Eric Biggers
2020-02-26  0:30   ` Eric Biggers
2020-02-21 11:50 ` [PATCH v7 8/9] f2fs: " Satya Tangirala
2020-02-21 11:50 ` [PATCH v7 9/9] ext4: " Satya Tangirala
2020-02-22  5:21   ` Eric Biggers
2020-02-21 17:16 ` [PATCH v7 0/9] Inline Encryption Support Eric Biggers
