[RFC PATCH 0/4] Inline Encryption Support
From: Satya Tangirala @ 2019-05-06 22:35 UTC
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang, Satya Tangirala

This patch series adds support for Inline Encryption to the block layer,
fscrypt and f2fs.

Inline Encryption hardware allows software to specify an encryption context
(an encryption key, crypto algorithm, data unit num, data unit size, etc.)
along with a data transfer request to a storage device, and the inline
encryption hardware will use that context to en/decrypt the data. The
inline encryption hardware is part of the storage device, and it
conceptually sits on the data path between system memory and the storage
device. Inline Encryption hardware has become increasingly common, and we
want to support it in the kernel.

Inline Encryption hardware implementations often function around the
concept of a limited number of "keyslots", which can hold an encryption
context each. The storage device can be directed to en/decrypt any
particular request with the encryption context stored in any particular
keyslot.

Patch 1 introduces a Keyslot Manager to efficiently manage keyslots.
The keyslot manager also functions as the interface that upper layers will
use to program keys into inline encryption hardware. For more information
on the Keyslot Manager, refer to documentation found in
block/keyslot-manager.c and linux/keyslot-manager.h.
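
As a rough sketch (not code from this series; the my_hw_* callbacks,
num_keyslots and my_priv_data are placeholders), a driver wires up the
Keyslot Manager by filling in a struct keyslot_mgmt_ll_ops and creating
a KSM:

	static const struct keyslot_mgmt_ll_ops my_ksm_ops = {
		.keyslot_program	= my_hw_keyslot_program,
		.keyslot_evict		= my_hw_keyslot_evict,
		.keyslot_find		= my_hw_keyslot_find,
		.crypto_alg_find	= my_hw_crypto_alg_find,
	};

	ksm = keyslot_manager_create(num_keyslots, &my_ksm_ops, my_priv_data);
	if (!ksm)
		return -ENOMEM;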

We also want to be able to make use of inline encryption hardware with
layered devices like device mapper. To this end, Patch 1 also introduces
blk-crypto. Blk-crypto delegates crypto operations to inline encryption
hardware when available, and also contains a software fallback to the
kernel crypto API. Given that blk-crypto works as a software fallback,
we are considering removing file content en/decryption from fscrypt and
simply using blk-crypto in a future patch. For more details on blk-crypto,
refer to Documentation/block/blk-crypto.txt.

Patch 2 adds support for inline encryption into the UFS driver according
to the JEDEC UFS HCI v2.1 specification. Inline encryption support for
other drivers (like eMMC) may be added in the same way - the device driver
should set up a Keyslot Manager in the device's request_queue (refer to
the UFS crypto additions in ufshcd-crypto.c for an example).
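
For instance (a sketch, not the actual UFS code; program_hw_keyslot() is a
placeholder for the hardware-specific step, and "ksm" is the keyslot manager
created as in the sketch above), a driver attaches its KSM to the request
queue and then checks each bio while preparing the request:

	/* At initialization time: */
	q->ksm = ksm;

	/* When preparing a request that contains this bio: */
	if (bio_crypt_should_process(bio, q))
		program_hw_keyslot(bio_crypt_get_slot(bio));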

Patches 3 and 4 add support to fscrypt and f2fs, so that we have
a complete stack that can make use of inline encryption.

There have been a few patch sets addressing Inline Encryption Support in
the past. Briefly, this patch set differs from those as follows:

1) https://lkml.org/lkml/2018/10/17/1022
"crypto: qce: ice: Add support for Inline Crypto Engine"
is specific to certain hardware, while our patch set's Inline
Encryption support for UFS is implemented according to the JEDEC UFS
specification.

2) https://lkml.org/lkml/2018/5/28/1187
"scsi: ufs: UFS Host Controller crypto changes" registers inline
encryption support as a kernel crypto algorithm. Our patch set views
inline encryption as being fundamentally different from a generic crypto
provider (in that inline encryption is tied to a device), and so does
not use the kernel crypto API to represent inline encryption hardware.

3) https://lkml.org/lkml/2018/12/11/190
"scsi: ufs: add real time/inline crypto support to UFS HCD" requires
the device mapper to work - our patch does not.

Satya Tangirala (4):
  block: Block Layer changes for Inline Encryption Support
  scsi: ufs: UFS driver v2.1 crypto support
  fscrypt: wire up fscrypt to use blk-crypto
  f2fs: Wire up f2fs to use inline encryption via fscrypt

 Documentation/block/blk-crypto.txt | 185 ++++++++++
 block/Kconfig                      |  16 +
 block/Makefile                     |   3 +
 block/bio.c                        |  45 +++
 block/blk-core.c                   |  14 +-
 block/blk-crypto.c                 | 572 +++++++++++++++++++++++++++++
 block/blk-merge.c                  |  87 ++++-
 block/bounce.c                     |   1 +
 block/keyslot-manager.c            | 314 ++++++++++++++++
 drivers/scsi/ufs/Kconfig           |  10 +
 drivers/scsi/ufs/Makefile          |   1 +
 drivers/scsi/ufs/ufshcd-crypto.c   | 449 ++++++++++++++++++++++
 drivers/scsi/ufs/ufshcd-crypto.h   |  92 +++++
 drivers/scsi/ufs/ufshcd.c          |  85 ++++-
 drivers/scsi/ufs/ufshcd.h          |  23 ++
 drivers/scsi/ufs/ufshci.h          |  67 +++-
 fs/crypto/Kconfig                  |   7 +
 fs/crypto/bio.c                    | 156 ++++++--
 fs/crypto/crypto.c                 |   9 +
 fs/crypto/fscrypt_private.h        |  10 +
 fs/crypto/keyinfo.c                |  69 ++--
 fs/crypto/policy.c                 |  10 +
 fs/f2fs/data.c                     |  69 +++-
 fs/f2fs/super.c                    |   1 +
 include/linux/bio.h                | 166 +++++++++
 include/linux/blk-crypto.h         |  40 ++
 include/linux/blk_types.h          |  49 +++
 include/linux/blkdev.h             |   9 +
 include/linux/fscrypt.h            |  58 +++
 include/linux/keyslot-manager.h    | 131 +++++++
 include/uapi/linux/fs.h            |  12 +-
 31 files changed, 2701 insertions(+), 59 deletions(-)
 create mode 100644 Documentation/block/blk-crypto.txt
 create mode 100644 block/blk-crypto.c
 create mode 100644 block/keyslot-manager.c
 create mode 100644 drivers/scsi/ufs/ufshcd-crypto.c
 create mode 100644 drivers/scsi/ufs/ufshcd-crypto.h
 create mode 100644 include/linux/blk-crypto.h
 create mode 100644 include/linux/keyslot-manager.h

-- 
2.21.0.1020.gf2820cf01a-goog



[RFC PATCH 1/4] block: Block Layer changes for Inline Encryption Support
From: Satya Tangirala @ 2019-05-06 22:35 UTC
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang, Satya Tangirala

Inline Encryption hardware allows software to specify an encryption context
(an encryption key, crypto algorithm, data unit num, data unit size, etc.)
along with a data transfer request to a storage device, and the inline
encryption hardware will use that context to en/decrypt the data. The
inline encryption hardware is part of the storage device, and it
conceptually sits on the data path between system memory and the storage
device.

To do this, we must have some way of letting a storage device driver know
what encryption context it should use for en/decrypting a request.
However, it's the filesystem/fscrypt that knows about and manages
encryption contexts. As such, when the filesystem layer submits a bio to
the block layer, and this bio eventually reaches a device driver with
support for inline encryption, the device driver will need to know what
the encryption context for that bio is. We want to communicate the
encryption context from the filesystem layer to the storage device along
with the bio, when the bio is submitted to the block layer. To do this, we
add a struct bio_crypt_ctx to struct bio, which can represent an
encryption context (note that we can't use the bi_private field in struct
bio to do this because that field cannot be used to pass information across
the layers of the storage stack).
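
For example (a sketch only; raw_key and dun are placeholders, and a data
unit size of one page is assumed), a filesystem would attach the context
before submitting the bio:

	bio_crypt_set_ctx(bio, raw_key, BLK_ENCRYPTION_MODE_AES_256_XTS,
			  dun, PAGE_SHIFT);
	submit_bio(bio);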

Inline Encryption hardware implementations often function around the
concept of "keyslots". These implementations often have a limited number
of "keyslots", each of which can hold an encryption context (we say that
an encryption context can be "programmed" into a keyslot). Requests made
to the storage device may have a keyslot associated with them, and the
inline encryption hardware will en/decrypt the data in the requests using
the encryption context programmed into that associated keyslot. As
keyslots are limited, and programming keys may be expensive in many
implementations, and multiple requests may use exactly the same encryption
contexts, we introduce a Keyslot Manager to efficiently manage keyslots.
The keyslot manager also functions as the interface that upper layers will
use to program keys into inline encryption hardware. For more information
on the Keyslot Manager, refer to documentation found in
block/keyslot-manager.c and linux/keyslot-manager.h.

We also want to be able to make use of inline encryption hardware with
layered devices like device mapper. To this end, we introduce blk-crypto.
Blk-crypto delegates crypto operations to inline encryption hardware when
available, and also contains a software fallback to the kernel crypto API.
For more details, refer to Documentation/block/blk-crypto.txt.

Known issues:
1) We are adding a relatively large struct (bio_crypt_ctx) to struct bio
	- we should instead add just a pointer to that struct.
2) Keyslot Manager has a performance bug where the same encryption
   context may be programmed into multiple keyslots at the same time in
   certain situations when all keyslots are being used.

Signed-off-by: Satya Tangirala <satyat@google.com>
---
 Documentation/block/blk-crypto.txt | 185 ++++++++++
 block/Kconfig                      |  16 +
 block/Makefile                     |   3 +
 block/bio.c                        |  45 +++
 block/blk-core.c                   |  14 +-
 block/blk-crypto.c                 | 573 +++++++++++++++++++++++++++++
 block/blk-merge.c                  |  87 ++++-
 block/bounce.c                     |   1 +
 block/keyslot-manager.c            | 314 ++++++++++++++++
 include/linux/bio.h                | 166 +++++++++
 include/linux/blk-crypto.h         |  40 ++
 include/linux/blk_types.h          |  49 +++
 include/linux/blkdev.h             |   9 +
 include/linux/keyslot-manager.h    | 131 +++++++
 14 files changed, 1628 insertions(+), 5 deletions(-)
 create mode 100644 Documentation/block/blk-crypto.txt
 create mode 100644 block/blk-crypto.c
 create mode 100644 block/keyslot-manager.c
 create mode 100644 include/linux/blk-crypto.h
 create mode 100644 include/linux/keyslot-manager.h

diff --git a/Documentation/block/blk-crypto.txt b/Documentation/block/blk-crypto.txt
new file mode 100644
index 000000000000..a1b82361cb16
--- /dev/null
+++ b/Documentation/block/blk-crypto.txt
@@ -0,0 +1,185 @@
+BLK-CRYPTO and KEYSLOT MANAGER
+==============================
+
+CONTENTS
+1. Objective
+2. Constraints and notes
+3. Design
+4. Blk-crypto
+ 4-1 What blk-crypto does on bio submission
+5. Layered Devices
+6. Future optimizations for layered devices
+
+1. Objective
+============
+
+We want to support inline encryption (IE) in the kernel.
+To allow for testing, we also want a software fallback when actual
+IE hardware is absent. We also want IE to work with layered devices
+like dm and loopback (i.e. we want to be able to use the IE hardware
+of the underlying devices if present, or else fall back to software
+en/decryption).
+
+
+2. Constraints and notes
+========================
+
+1) IE hardware has a limited number of "keyslots" that can be programmed
+with an encryption context (key, algorithm, data unit size, etc.) at any time.
+One can specify a keyslot in a data request made to the device, and the
+device will en/decrypt the data using the encryption context programmed into
+that specified keyslot. Of course, when possible, we want multiple requests
+with the same encryption context to share the same keyslot.
+
+2) We need a way for filesystems to specify an encryption context to use for
+en/decrypting a struct bio, and a device driver (like UFS) needs to be able
+to use that encryption context when it processes the bio.
+
+3) We need a way for device drivers to expose their capabilities in a unified
+way to the upper layers.
+
+
+3. Design
+=========
+
+We add a struct bio_crypt_ctx to struct bio that can represent an
+encryption context, because we need to be able to pass this encryption context
+from the FS layer to the device driver to act upon.
+
+While IE hardware works on the notion of keyslots, the FS layer has no
+knowledge of keyslots - it simply wants to specify an encryption context to
+use while en/decrypting a bio.
+
+We introduce a keyslot manager (KSM) that handles the translation from
+encryption contexts specified by the FS to keyslots on the IE hardware.
+This KSM also serves as the way IE hardware can expose their capabilities to
+upper layers. The generic mode of operation is: each device driver that wants
+to support IE will construct a KSM and set it up in its struct request_queue.
+Upper layers that want to use IE on this device can then use this KSM in
+the device’s struct request_queue to translate an encryption context into
+a keyslot. The presence of the KSM in the request queue shall be used to mean
+that the device supports IE.
+
+On the device driver end of the interface, the device driver needs to tell the
+KSM how to actually manipulate the IE hardware in the device to do things like
+programming a crypto key into a particular keyslot in the IE hardware. All
+this is achieved through the struct keyslot_mgmt_ll_ops that the device driver
+passes to the KSM when creating it.
+
+The KSM uses refcounts to track which keyslots are idle (either they have no
+encryption context programmed, or there are no in flight struct bios
+referencing that keyslot). When a new encryption context needs a keyslot, it
+tries to find a keyslot that has already been programmed with the same
+encryption context, and if there is no such keyslot, it evicts the least
+recently used idle keyslot and programs the new encryption context into that
+one. If no idle keyslots are available, then the caller will sleep until there
+is at least one.
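+
+A rough sketch of how an upper layer uses these interfaces (error handling
+is omitted, and the key, crypt mode and data unit size are placeholders):
+
+    slot = keyslot_manager_get_slot_for_key(ksm, key,
+                                             BLK_ENCRYPTION_MODE_AES_256_XTS,
+                                             data_unit_size);
+
+    /* ... tag requests with "slot" and submit them to the device ... */
+
+    /* Once the I/O that used the slot has completed: */
+    keyslot_manager_put_slot(ksm, slot);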
+
+
+4. Blk-crypto
+=============
+
+The above is sufficient for simple cases, but does not work if there is a
+need for a software fallback, or if we want to use IE with layered devices.
+To these ends, we introduce blk-crypto. Blk-crypto allows us to present a
+unified view of encryption to the FS (so the FS only needs to specify an
+encryption context and not worry about keyslots at all), and blk-crypto can
+decide whether to delegate the en/decryption to IE hardware or to software
+(i.e. to the kernel crypto API). Blk-crypto maintains an internal KSM that
+serves as the software fallback to the kernel crypto API.
+
+Blk-crypto needs to ensure that the encryption context is programmed into the
+"correct" keyslot manager for IE. If a bio is submitted to a layered device
+that eventually passes the bio down to a device that really does support IE, we
+want the encryption context to be programmed into a keyslot for the KSM of the
+device with IE support. However, blk-crypto does not know a priori whether a
+particular device is the final device in the layering structure for a bio or
+not. So in the case that a particular device does not support IE, since it is
+possibly the final destination device for the bio, if the bio requires
+encryption (i.e. the bio is doing a write operation), blk-crypto must fall
+back to software *before* sending the bio to the device.
+
+Blk-crypto ensures that
+1) The bio’s encryption context is programmed into a keyslot in the KSM of the
+request queue that the bio is being submitted to (or the software fallback KSM
+if the request queue doesn’t have a KSM), and that the processing_ksm in the
+bi_crypt_context is set to this KSM
+
+2) The bio has its own individual reference to the keyslot in this KSM.
+Once the bio passes through blk-crypto, its encryption context is programmed
+into some KSM. The "its own individual reference to the keyslot" ensures that
+keyslots can be released by each bio independently of other bios while ensuring
+that the bio has a valid reference to the keyslot when, e.g., the software
+fallback KSM in blk-crypto performs crypto on the device's behalf. The
+individual references are ensured by increasing the refcount for the keyslot in
+the processing_ksm when a bio with a programmed encryption context is cloned.
+
+
+4-1. What blk-crypto does on bio submission
+-------------------------------------------
+
+Case 1: blk-crypto is given a bio with only an encryption context that hasn't
+been programmed into any keyslot in any KSM (e.g. a bio from the FS). In
+this case, blk-crypto will program the encryption context into the KSM of the
+request queue the bio is being submitted to (and if this KSM does not exist,
+then it will program it into blk-crypto’s internal KSM for software fallback).
+The KSM that this encryption context was programmed into is stored as the
+processing_ksm in the bio’s bi_crypt_context.
+
+Case 2: blk-crypto is given a bio whose encryption context has already been
+programmed into a keyslot in the *software fallback KSM*. In this case,
+blk-crypto does nothing; it treats the bio as not having specified an
+encryption context. Note that we cannot do what we will do in Case 3 here
+because we would have already encrypted the bio in software by this point.
+
+Case 3: blk-crypto is given a bio whose encryption context has already been
+programmed into a keyslot in some KSM (that is *not* the software fallback
+KSM). In this case, blk-crypto first releases that keyslot from that KSM and
+then treats the bio as in Case 1.
+
+This way, when a device driver is processing a bio, it can be sure that
+the bio’s encryption context has been programmed into some KSM (either the
+device driver’s request queue’s KSM, or blk-crypto’s software fallback KSM).
+It then simply needs to check if the bio’s processing_ksm is the device’s
+request queue’s KSM. If so, then it should proceed with IE. If not, it should
+simply do nothing with respect to crypto, because some other KSM (perhaps the
+blk-crypto software fallback KSM) is handling the en/decryption.
+
+Blk-crypto will release the keyslot that is being held by the bio (and also
+decrypt it if the bio is using the software fallback KSM) once
+bio_remaining_done returns true for the bio.
+
+
+5. Layered Devices
+==================
+
+Layered devices that wish to support IE need to create their own keyslot
+manager for their request queue, and expose whatever functionality they choose.
+When a layered device wants to pass a bio to another layer (either by
+resubmitting the same bio, or by submitting a clone), it doesn’t need to do
+anything special because the bio (or the clone) will once again pass through
+blk-crypto, which will work as described in Case 3. If a layered device wants,
+for some reason, to do the IO by itself instead of passing it on to a child
+device, but has also chosen to expose IE capabilities by setting up a KSM in
+its request queue, it is then responsible for en/decrypting the data itself. In
+such cases, the device can choose to call the blk-crypto function
+blk_crypto_fallback_to_software (TODO: Not yet implemented), which will
+cause the en/decryption to be done via software fallback.
+
+
+6. Future Optimizations for layered devices
+===========================================
+
+Creating a keyslot manager for the layered device uses up memory for each
+keyslot, and in general, a layered device (like dm-linear) merely passes the
+request on to a “child” device, so the keyslots in the layered device itself
+might be completely unused. We can instead define a new type of KSM: the
+"passthrough KSM", which layered devices can use to let blk-crypto know that
+this layered device *will* pass the bio to some child device (and hence
+through blk-crypto again, at which point blk-crypto can program the encryption
+context, instead of programming it into the layered device’s KSM). Again, if
+the device “lies” and decides to do the IO itself instead of passing it on to
+a child device, it is responsible for doing the en/decryption (and can choose
+to call blk_crypto_fallback_to_software). Another use case for the
+"passthrough KSM" is for IE devices that want to manage their own keyslots/do
+not have a limited number of keyslots.
diff --git a/block/Kconfig b/block/Kconfig
index 028bc085dac8..65213769d2a2 100644
--- a/block/Kconfig
+++ b/block/Kconfig
@@ -187,6 +187,22 @@ config BLK_SED_OPAL
 	Enabling this option enables users to setup/unlock/lock
 	Locking ranges for SED devices using the Opal protocol.
 
+config BLK_CRYPT_CTX
+	bool
+
+config BLK_KEYSLOT_MANAGER
+	bool
+
+config BLK_CRYPTO
+	bool "Enable encryption in block layer"
+	select BLK_CRYPT_CTX
+	select BLK_KEYSLOT_MANAGER
+	help
+	Build the blk-crypto subsystem.
+	Enabling this lets the block layer handle encryption,
+	so users can take advantage of inline encryption
+	hardware if present.
+
 menu "Partition Types"
 
 source "block/partitions/Kconfig"
diff --git a/block/Makefile b/block/Makefile
index eee1b4ceecf9..b265506cdf3a 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -35,3 +35,6 @@ obj-$(CONFIG_BLK_DEBUG_FS)	+= blk-mq-debugfs.o
 obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o
 obj-$(CONFIG_BLK_SED_OPAL)	+= sed-opal.o
 obj-$(CONFIG_BLK_PM)		+= blk-pm.o
+obj-$(CONFIG_BLK_CRYPTO)	+= blk-crypto.o
+
+obj-$(CONFIG_BLK_KEYSLOT_MANAGER) += keyslot-manager.o
diff --git a/block/bio.c b/block/bio.c
index 716510ecd7ff..ef975389ecce 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -29,6 +29,7 @@
 #include <linux/workqueue.h>
 #include <linux/cgroup.h>
 #include <linux/blk-cgroup.h>
+#include <linux/keyslot-manager.h>
 
 #include <trace/events/block.h>
 #include "blk.h"
@@ -271,6 +272,44 @@ static void bio_free(struct bio *bio)
 	}
 }
 
+#ifdef CONFIG_BLK_CRYPT_CTX
+static inline void bio_crypt_advance(struct bio *bio, unsigned int bytes)
+{
+	if (bio_is_encrypted(bio)) {
+		bio->bi_crypt_context.data_unit_num +=
+			bytes >> bio->bi_crypt_context.data_unit_size_bits;
+	}
+}
+
+void bio_clone_crypt_context(struct bio *dst, struct bio *src)
+{
+	if (bio_crypt_swhandled(src))
+		return;
+	dst->bi_crypt_context = src->bi_crypt_context;
+
+	if (!bio_crypt_has_keyslot(src))
+		return;
+
+	/**
+	 * This should always succeed because the src bio should already
+	 * have a reference to the keyslot.
+	 */
+	BUG_ON(!keyslot_manager_get_slot(src->bi_crypt_context.processing_ksm,
+					  src->bi_crypt_context.keyslot));
+}
+
+bool bio_crypt_should_process(struct bio *bio, struct request_queue *q)
+{
+	if (!bio_is_encrypted(bio))
+		return false;
+
+	WARN_ON(!bio_crypt_has_keyslot(bio));
+	return q->ksm == bio->bi_crypt_context.processing_ksm;
+}
+EXPORT_SYMBOL(bio_crypt_should_process);
+
+#endif /* CONFIG_BLK_CRYPT_CTX */
+
 /*
  * Users of this function have their own bio allocation. Subsequently,
  * they must remember to pair any call to bio_init() with bio_uninit()
@@ -608,6 +647,7 @@ void __bio_clone_fast(struct bio *bio, struct bio *bio_src)
 	bio->bi_write_hint = bio_src->bi_write_hint;
 	bio->bi_iter = bio_src->bi_iter;
 	bio->bi_io_vec = bio_src->bi_io_vec;
+	bio_clone_crypt_context(bio, bio_src);
 
 	bio_clone_blkg_association(bio, bio_src);
 	blkcg_bio_issue_init(bio);
@@ -1006,6 +1046,7 @@ void bio_advance(struct bio *bio, unsigned bytes)
 		bio_integrity_advance(bio, bytes);
 
 	bio_advance_iter(bio, &bio->bi_iter, bytes);
+	bio_crypt_advance(bio, bytes);
 }
 EXPORT_SYMBOL(bio_advance);
 
@@ -1832,6 +1873,10 @@ void bio_endio(struct bio *bio)
 again:
 	if (!bio_remaining_done(bio))
 		return;
+
+	if (blk_crypto_endio(bio) == -EAGAIN)
+		return;
+
 	if (!bio_integrity_endio(bio))
 		return;
 
diff --git a/block/blk-core.c b/block/blk-core.c
index a55389ba8779..3361acbbbe48 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -35,6 +35,7 @@
 #include <linux/blk-cgroup.h>
 #include <linux/debugfs.h>
 #include <linux/bpf.h>
+#include <linux/blk-crypto.h>
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/block.h>
@@ -524,6 +525,10 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
 
 	init_waitqueue_head(&q->mq_freeze_wq);
 
+#ifdef CONFIG_BLK_KEYSLOT_MANAGER
+	q->ksm = NULL;
+#endif
+
 	/*
 	 * Init percpu_ref in atomic mode so that it's faster to shutdown.
 	 * See blk_register_queue() for details.
@@ -1086,7 +1091,9 @@ blk_qc_t generic_make_request(struct bio *bio)
 			/* Create a fresh bio_list for all subordinate requests */
 			bio_list_on_stack[1] = bio_list_on_stack[0];
 			bio_list_init(&bio_list_on_stack[0]);
-			ret = q->make_request_fn(q, bio);
+
+			if (!blk_crypto_submit_bio(&bio))
+				ret = q->make_request_fn(q, bio);
 
 			/* sort new bios into those for a lower level
 			 * and those for the same level
@@ -1139,6 +1146,9 @@ blk_qc_t direct_make_request(struct bio *bio)
 	if (!generic_make_request_checks(bio))
 		return BLK_QC_T_NONE;
 
+	if (blk_crypto_submit_bio(&bio))
+		return BLK_QC_T_NONE;
+
 	if (unlikely(blk_queue_enter(q, nowait ? BLK_MQ_REQ_NOWAIT : 0))) {
 		if (nowait && !blk_queue_dying(q))
 			bio->bi_status = BLK_STS_AGAIN;
@@ -1815,5 +1825,7 @@ int __init blk_dev_init(void)
 	blk_debugfs_root = debugfs_create_dir("block", NULL);
 #endif
 
+	blk_crypto_init();
+
 	return 0;
 }
diff --git a/block/blk-crypto.c b/block/blk-crypto.c
new file mode 100644
index 000000000000..503f9e3a770b
--- /dev/null
+++ b/block/blk-crypto.c
@@ -0,0 +1,573 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2019 Google LLC
+ */
+#include <linux/blk-crypto.h>
+#include <linux/keyslot-manager.h>
+#include <linux/mempool.h>
+#include <linux/blk-cgroup.h>
+#include <crypto/skcipher.h>
+#include <crypto/aes.h>
+#include <crypto/algapi.h>
+
+struct blk_crypt_mode {
+	const char *friendly_name;
+	const char *cipher_str;
+	size_t keysize;
+	size_t ivsize;
+	bool needs_essiv;
+};
+
+static const struct blk_crypt_mode blk_crypt_modes[] = {
+	[BLK_ENCRYPTION_MODE_AES_256_XTS] = {
+		.friendly_name = "AES-256-XTS",
+		.cipher_str = "xts(aes)",
+		.keysize = 64,
+		.ivsize = 16,
+	},
+	/* TODO: the rest of the algs that fscrypt supports */
+};
+
+#define BLK_CRYPTO_MAX_KEY_SIZE 64
+/* TODO: Do we want to make this user configurable somehow? */
+#define BLK_CRYPTO_NUM_KEYSLOTS 100
+
+static struct blk_crypto_keyslot {
+	struct crypto_skcipher *tfm;
+	int crypto_alg_id;
+	union {
+		u8 key[BLK_CRYPTO_MAX_KEY_SIZE];
+		u32 key_words[BLK_CRYPTO_MAX_KEY_SIZE/4];
+	};
+} *slot_mem;
+
+struct work_mem {
+	struct work_struct crypto_work;
+	struct bio *bio;
+};
+
+static struct keyslot_manager *blk_crypto_ksm;
+static struct workqueue_struct *blk_crypto_wq;
+static mempool_t *blk_crypto_page_pool;
+static struct kmem_cache *blk_crypto_work_mem_cache;
+
+static unsigned int num_prealloc_bounce_pg = 32;
+
+/* TODO: handle modes that need essiv */
+static int blk_crypto_keyslot_program(void *priv, const u8 *key,
+			      unsigned int data_unit_size,
+			      unsigned int crypto_alg_id,
+			      unsigned int slot)
+{
+	struct crypto_skcipher *tfm = slot_mem[slot].tfm;
+	int err;
+	size_t keysize = blk_crypt_modes[crypto_alg_id].keysize;
+
+	if (crypto_alg_id != slot_mem[slot].crypto_alg_id || !tfm) {
+		crypto_free_skcipher(slot_mem[slot].tfm);
+		slot_mem[slot].tfm = NULL;
+		slot_mem[slot].crypto_alg_id = crypto_alg_id;
+		tfm = crypto_alloc_skcipher(
+			blk_crypt_modes[crypto_alg_id].cipher_str, 0, 0);
+		if (IS_ERR(tfm))
+			return PTR_ERR(tfm);
+
+		crypto_skcipher_set_flags(tfm,
+					  CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
+		slot_mem[slot].tfm = tfm;
+	}
+
+
+	err = crypto_skcipher_setkey(tfm, key, keysize);
+
+	if (err) {
+		crypto_free_skcipher(slot_mem[slot].tfm);
+		slot_mem[slot].tfm = NULL;
+		return err;
+	}
+
+	memcpy(slot_mem[slot].key, key, keysize);
+
+	return 0;
+}
+
+static int blk_crypto_keyslot_evict(void *priv, unsigned int slot,
+				    const u8 *key,
+				    unsigned int data_unit_size,
+				    unsigned int crypto_alg_id)
+{
+	crypto_free_skcipher(slot_mem[slot].tfm);
+	slot_mem[slot].tfm = NULL;
+	memset(slot_mem[slot].key, 0, BLK_CRYPTO_MAX_KEY_SIZE);
+
+	return 0;
+}
+
+static int blk_crypto_keyslot_find(void *priv,
+				   const u8 *key,
+				   unsigned int data_unit_size_bytes,
+				   unsigned int crypto_alg_id)
+{
+	int slot;
+
+	/* TODO: hashmap? */
+	for (slot = 0; slot < BLK_CRYPTO_NUM_KEYSLOTS; slot++) {
+		if (slot_mem[slot].crypto_alg_id == crypto_alg_id &&
+		    crypto_memneq(slot_mem[slot].key, key,
+			blk_crypt_modes[crypto_alg_id].keysize) == 0) {
+			return slot;
+		}
+	}
+
+	return -ENOKEY;
+}
+
+static int blk_crypto_alg_find(void *priv,
+			       enum blk_crypt_mode_index crypt_mode,
+			       unsigned int data_unit_size)
+{
+	/**
+	 * Blk-crypto supports all data unit sizes, so we can use
+	 * the crypt_mode directly as the internal crypto_alg_id.
+	 * Refer to comment in keyslot_manager.h for details
+	 * on this crypto_alg_id.
+	 */
+	return crypt_mode;
+}
+
+const struct keyslot_mgmt_ll_ops blk_crypto_ksm_ll_ops = {
+	.keyslot_program	= blk_crypto_keyslot_program,
+	.keyslot_evict		= blk_crypto_keyslot_evict,
+	.keyslot_find		= blk_crypto_keyslot_find,
+	.crypto_alg_find	= blk_crypto_alg_find,
+};
+
+static void blk_crypto_release_keyslot(struct bio *bio)
+{
+	struct bio_crypt_ctx *crypt_ctx = &bio->bi_crypt_context;
+
+	keyslot_manager_put_slot(crypt_ctx->processing_ksm,
+				 crypt_ctx->keyslot);
+	bio_crypt_unset_keyslot(bio);
+}
+
+static int blk_crypto_program_keyslot(struct bio *bio,
+				      struct keyslot_manager *ksm)
+{
+	int slot;
+	enum blk_crypt_mode_index crypt_mode = bio_crypt_mode(bio);
+
+	slot = keyslot_manager_get_slot_for_key(ksm,
+						bio_crypt_raw_key(bio),
+						crypt_mode, PAGE_SIZE);
+	if (slot >= 0) {
+		bio_crypt_set_keyslot(bio, slot, ksm);
+		return 0;
+	}
+
+	return slot;
+}
+
+static void blk_crypto_encrypt_endio(struct bio *enc_bio)
+{
+	struct bio *src_bio = enc_bio->bi_private;
+	struct bio_vec *enc_bvec, *enc_bvec_end;
+
+	enc_bvec = enc_bio->bi_io_vec;
+	enc_bvec_end = enc_bvec + enc_bio->bi_vcnt;
+	for (; enc_bvec != enc_bvec_end; enc_bvec++)
+		mempool_free(enc_bvec->bv_page, blk_crypto_page_pool);
+
+	bio_put(enc_bio);
+	bio_endio(src_bio);
+}
+
+static struct bio *blk_crypto_clone_bio(struct bio *bio_src)
+{
+	struct bvec_iter iter;
+	struct bio_vec bv;
+	struct bio *bio;
+
+	bio = bio_alloc_bioset(GFP_NOIO, bio_segments(bio_src), NULL);
+	if (!bio)
+		return NULL;
+	bio->bi_disk		= bio_src->bi_disk;
+	bio->bi_opf		= bio_src->bi_opf;
+	bio->bi_ioprio		= bio_src->bi_ioprio;
+	bio->bi_write_hint	= bio_src->bi_write_hint;
+	bio->bi_iter.bi_sector	= bio_src->bi_iter.bi_sector;
+	bio->bi_iter.bi_size	= bio_src->bi_iter.bi_size;
+
+	bio_for_each_segment(bv, bio_src, iter)
+		bio->bi_io_vec[bio->bi_vcnt++] = bv;
+
+	if (bio_integrity(bio_src)) {
+		int ret;
+
+		ret = bio_integrity_clone(bio, bio_src, GFP_NOIO);
+		if (ret < 0) {
+			bio_put(bio);
+			return NULL;
+		}
+	}
+
+	bio_clone_blkg_association(bio, bio_src);
+	blkcg_bio_issue_init(bio);
+
+	return bio;
+}
+
+static int blk_crypto_encrypt_bio(struct bio **bio_ptr)
+{
+	struct bio *src_bio = *bio_ptr;
+	int slot;
+	struct skcipher_request *ciph_req = NULL;
+	struct crypto_wait wait;
+	struct bio_vec bv;
+	struct bvec_iter iter;
+	int err = 0;
+	__le64 curr_dun;
+	union {
+		__le64 dun;
+		u8 bytes[16];
+	} iv;
+	struct scatterlist src, dst;
+	struct bio *enc_bio;
+	struct bio_vec *enc_bvec;
+	int i, j;
+	unsigned int num_sectors;
+
+	/* Split the bio if it's too big for single page bvec */
+	i = 0;
+	num_sectors = 0;
+	bio_for_each_segment(bv, src_bio, iter) {
+		num_sectors += bv.bv_len >> 9;
+		if (++i == BIO_MAX_PAGES)
+			break;
+	}
+	if (num_sectors < bio_sectors(src_bio)) {
+		struct bio *split_bio;
+
+		split_bio = bio_split(src_bio, num_sectors, GFP_NOIO, NULL);
+		if (!split_bio) {
+			src_bio->bi_status = BLK_STS_RESOURCE;
+			err = -ENOMEM;
+			goto out;
+		}
+		bio_chain(split_bio, src_bio);
+		generic_make_request(src_bio);
+		*bio_ptr = split_bio;
+	}
+
+	src_bio = *bio_ptr;
+
+	enc_bio = blk_crypto_clone_bio(src_bio);
+	if (!enc_bio) {
+		src_bio->bi_status = BLK_STS_RESOURCE;
+		err = -ENOMEM;
+		goto out;
+	}
+
+	err = blk_crypto_program_keyslot(src_bio, blk_crypto_ksm);
+	if (err) {
+		src_bio->bi_status = BLK_STS_IOERR;
+		bio_put(enc_bio);
+		goto out;
+	}
+	bio_crypt_set_swhandled(src_bio);
+	slot = bio_crypt_get_slot(src_bio);
+
+	ciph_req = skcipher_request_alloc(slot_mem[slot].tfm, GFP_NOFS);
+	if (!ciph_req) {
+		src_bio->bi_status = BLK_STS_RESOURCE;
+		err = -ENOMEM;
+		bio_put(enc_bio);
+		goto out_release_keyslot;
+	}
+
+	skcipher_request_set_callback(ciph_req,
+				      CRYPTO_TFM_REQ_MAY_BACKLOG |
+				      CRYPTO_TFM_REQ_MAY_SLEEP,
+				      crypto_req_done, &wait);
+
+	curr_dun = cpu_to_le64(bio_crypt_sw_data_unit_num(src_bio));
+	sg_init_table(&src, 1);
+	sg_init_table(&dst, 1);
+	for (i = 0, enc_bvec = enc_bio->bi_io_vec; i < enc_bio->bi_vcnt;
+	     enc_bvec++, i++) {
+		struct page *page = enc_bvec->bv_page;
+		struct page *ciphertext_page =
+			mempool_alloc(blk_crypto_page_pool, GFP_NOFS);
+
+		enc_bvec->bv_page = ciphertext_page;
+
+		if (!ciphertext_page)
+			goto no_mem_for_ciph_page;
+
+		memset(&iv, 0, sizeof(iv));
+		iv.dun = curr_dun;
+
+		sg_set_page(&src, page, enc_bvec->bv_len, enc_bvec->bv_offset);
+		sg_set_page(&dst, ciphertext_page, enc_bvec->bv_len,
+			    enc_bvec->bv_offset);
+
+		skcipher_request_set_crypt(ciph_req, &src, &dst,
+					   enc_bvec->bv_len, iv.bytes);
+		crypto_init_wait(&wait);
+		err = crypto_wait_req(crypto_skcipher_encrypt(ciph_req), &wait);
+		if (err)
+			goto no_mem_for_ciph_page;
+
+		le64_add_cpu(&curr_dun, 1);
+		continue;
+no_mem_for_ciph_page:
+		err = -ENOMEM;
+		for (j = i - 1; j >= 0; j--) {
+			mempool_free(enc_bio->bi_io_vec[j].bv_page,
+				     blk_crypto_page_pool);
+		}
+		bio_put(enc_bio);
+		goto out_release_cipher;
+	}
+
+	enc_bio->bi_private = src_bio;
+	enc_bio->bi_end_io = blk_crypto_encrypt_endio;
+
+	*bio_ptr = enc_bio;
+out_release_cipher:
+	skcipher_request_free(ciph_req);
+out_release_keyslot:
+	blk_crypto_release_keyslot(src_bio);
+out:
+	return err;
+}
+
+/* TODO: assumption right now is:
+ * each segment in bio has length == the data_unit_size
+ */
+static void blk_crypto_decrypt_bio(struct work_struct *w)
+{
+	struct work_mem *work_mem =
+		container_of(w, struct work_mem, crypto_work);
+	struct bio *bio = work_mem->bio;
+	int slot = bio_crypt_get_slot(bio);
+	struct skcipher_request *ciph_req;
+	struct crypto_wait wait;
+	struct bio_vec bv;
+	struct bvec_iter iter;
+	__le64 curr_dun;
+	union {
+		__le64 dun;
+		u8 bytes[16];
+	} iv;
+	struct scatterlist src;
+
+	curr_dun = cpu_to_le64(bio_crypt_sw_data_unit_num(bio));
+
+	kmem_cache_free(blk_crypto_work_mem_cache, work_mem);
+	ciph_req = skcipher_request_alloc(slot_mem[slot].tfm, GFP_NOFS);
+	if (!ciph_req) {
+		bio->bi_status = BLK_STS_RESOURCE;
+		goto out_ciph_req;
+	}
+
+	skcipher_request_set_callback(ciph_req,
+				      CRYPTO_TFM_REQ_MAY_BACKLOG |
+				      CRYPTO_TFM_REQ_MAY_SLEEP,
+				      crypto_req_done, &wait);
+
+	sg_init_table(&src, 1);
+	__bio_for_each_segment(bv, bio, iter,
+			       bio->bi_crypt_context.crypt_iter) {
+		struct page *page = bv.bv_page;
+		int err;
+
+		memset(&iv, 0, sizeof(iv));
+		iv.dun = curr_dun;
+
+		sg_set_page(&src, page, bv.bv_len, bv.bv_offset);
+		skcipher_request_set_crypt(ciph_req, &src, &src,
+					   bv.bv_len, iv.bytes);
+		crypto_init_wait(&wait);
+		err = crypto_wait_req(crypto_skcipher_decrypt(ciph_req), &wait);
+		if (err) {
+			bio->bi_status = BLK_STS_IOERR;
+			goto out;
+		}
+		le64_add_cpu(&curr_dun, 1);
+	}
+
+out:
+	skcipher_request_free(ciph_req);
+out_ciph_req:
+	blk_crypto_release_keyslot(bio);
+	bio_endio(bio);
+}
+
+static void blk_crypto_queue_decrypt_bio(struct bio *bio)
+{
+	struct work_mem *work_mem =
+		kmem_cache_zalloc(blk_crypto_work_mem_cache, GFP_ATOMIC);
+
+	if (!work_mem) {
+		bio->bi_status = BLK_STS_RESOURCE;
+		return bio_endio(bio);
+	}
+
+	INIT_WORK(&work_mem->crypto_work, blk_crypto_decrypt_bio);
+	work_mem->bio = bio;
+	queue_work(blk_crypto_wq, &work_mem->crypto_work);
+}
+
+/**
+ * Ensures that:
+ * 1) The bio’s encryption context is programmed into a keyslot in the
+ * keyslot manager (KSM) of the request queue that the bio is being submitted
+ * to (or the software fallback KSM if the request queue doesn’t have a KSM),
+ * and that the processing_ksm in the bi_crypt_context of this bio is set to
+ * this KSM.
+ *
+ * 2) That the bio has a reference to this keyslot in this KSM.
+ */
+int blk_crypto_submit_bio(struct bio **bio_ptr)
+{
+	struct bio *bio = *bio_ptr;
+	struct request_queue *q;
+	int err;
+	enum blk_crypt_mode_index crypt_mode;
+	struct bio_crypt_ctx *crypt_ctx;
+
+	if (!bio_has_data(bio))
+		return 0;
+
+	if (!bio_is_encrypted(bio) || bio_crypt_swhandled(bio))
+		return 0;
+
+	crypt_ctx = &bio->bi_crypt_context;
+	q = bio->bi_disk->queue;
+	crypt_mode = bio_crypt_mode(bio);
+
+	if (bio_crypt_has_keyslot(bio)) {
+		if (q->ksm) {
+			if (q->ksm == crypt_ctx->processing_ksm)
+				return 0;
+
+			blk_crypto_release_keyslot(bio);
+
+			err = blk_crypto_program_keyslot(bio, q->ksm);
+			if (!err)
+				return 0;
+			/* Fallback to software */
+		} else {
+			/**
+			 * We have been lied to. A device on upper layer
+			 * claimed to support ICE, but passed the crypt
+			 * ctx to a device below that doesn't claim to
+			 * support ICE, and the upper layer itself didn't
+			 * handle the crypt either. If this was the bio that
+			 * set up the keyslot, free it up. In either case,
+			 * fallback to software.
+			 */
+			blk_crypto_release_keyslot(bio);
+		}
+	} else if (q->ksm) {
+		/**
+		 * We haven't programmed the key anywhere,
+		 * and the device claims to have ICE.
+		 * Try using it.
+		 */
+		err = blk_crypto_program_keyslot(bio, q->ksm);
+		if (!err)
+			return 0;
+	}
+
+	/* Fallback to software crypto */
+	if (bio_data_dir(bio) == WRITE) {
+		/* Encrypt the data now */
+		err = blk_crypto_encrypt_bio(bio_ptr);
+		if (err)
+			goto out_encrypt_err;
+	} else {
+		err = blk_crypto_program_keyslot(bio, blk_crypto_ksm);
+		if (err)
+			goto out_err;
+		bio_crypt_set_swhandled(bio);
+	}
+	return 0;
+out_err:
+	bio->bi_status = BLK_STS_IOERR;
+out_encrypt_err:
+	bio_endio(bio);
+	return err;
+}
+
+/**
+ * If the bio is not en/decrypted in software, this function releases the
+ * reference to the keyslot that blk_crypto_submit_bio got.
+ * If blk_crypto_submit_bio decided to fallback to software crypto for this
+ * bio, then if the bio is doing a write, we free the allocated bounce pages,
+ * and if the bio is doing a read, we queue the bio for decryption into a
+ * workqueue and return -EAGAIN. After the bio has been decrypted, we release
+ * the keyslot before we call bio_endio(bio).
+ */
+int blk_crypto_endio(struct bio *bio)
+{
+	if (!bio_crypt_has_keyslot(bio))
+		return 0;
+
+	if (!bio_crypt_swhandled(bio)) {
+		blk_crypto_release_keyslot(bio);
+		return 0;
+	}
+
+	/* bio_data_dir(bio) == READ. So decrypt bio */
+	blk_crypto_queue_decrypt_bio(bio);
+	return -EAGAIN;
+}
+
+int __init blk_crypto_init(void)
+{
+	blk_crypto_ksm = keyslot_manager_create(BLK_CRYPTO_NUM_KEYSLOTS,
+				       &blk_crypto_ksm_ll_ops,
+				       NULL);
+	if (!blk_crypto_ksm)
+		goto out_ksm;
+
+	blk_crypto_wq = alloc_workqueue("blk_crypto_wq",
+			       WQ_UNBOUND | WQ_HIGHPRI,
+			       num_online_cpus());
+	if (!blk_crypto_wq)
+		goto out_wq;
+
+	slot_mem = kcalloc(BLK_CRYPTO_NUM_KEYSLOTS, sizeof(*slot_mem),
+			   GFP_KERNEL);
+	if (!slot_mem)
+		goto out_slot_mem;
+
+	blk_crypto_page_pool =
+		mempool_create_page_pool(num_prealloc_bounce_pg, 0);
+	if (!blk_crypto_page_pool)
+		goto out_bounce_pool;
+
+	blk_crypto_work_mem_cache = KMEM_CACHE(work_mem, SLAB_RECLAIM_ACCOUNT);
+	if (!blk_crypto_work_mem_cache)
+		goto out_work_mem_cache;
+
+	return 0;
+
+out_work_mem_cache:
+	mempool_destroy(blk_crypto_page_pool);
+	blk_crypto_page_pool = NULL;
+out_bounce_pool:
+	kzfree(slot_mem);
+	slot_mem = NULL;
+out_slot_mem:
+	destroy_workqueue(blk_crypto_wq);
+	blk_crypto_wq = NULL;
+out_wq:
+	keyslot_manager_destroy(blk_crypto_ksm);
+	blk_crypto_ksm = NULL;
+out_ksm:
+	pr_warn("No memory for block crypto software fallback.\n");
+	return -ENOMEM;
+}
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 1c9d4f0f96ea..55133c547bdf 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -614,6 +614,59 @@ int blk_rq_map_sg(struct request_queue *q, struct request *rq,
 }
 EXPORT_SYMBOL(blk_rq_map_sg);
 
+#ifdef CONFIG_BLK_CRYPT_CTX
+/*
+ * Checks that two bio crypt contexts are compatible - i.e. that
+ * they are mergeable except for data_unit_num continuity.
+ */
+static inline bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2)
+{
+	struct bio_crypt_ctx *bc1 = &b_1->bi_crypt_context;
+	struct bio_crypt_ctx *bc2 = &b_2->bi_crypt_context;
+
+	if (bio_is_encrypted(b_1) != bio_is_encrypted(b_2) ||
+	    bc1->keyslot != bc2->keyslot)
+		return false;
+
+	return !bio_is_encrypted(b_1) ||
+		bc1->data_unit_size_bits == bc2->data_unit_size_bits;
+}
+
+/*
+ * Checks that two bio crypt contexts are compatible, and also
+ * that their data_unit_nums are continuous (and can hence be merged)
+ */
+static inline bool bio_crypt_ctx_back_mergeable(struct bio *b_1,
+						unsigned int b1_sectors,
+						struct bio *b_2)
+{
+	struct bio_crypt_ctx *bc1 = &b_1->bi_crypt_context;
+	struct bio_crypt_ctx *bc2 = &b_2->bi_crypt_context;
+
+	if (!bio_crypt_ctx_compatible(b_1, b_2))
+		return false;
+
+	return !bio_is_encrypted(b_1) ||
+		(bc1->data_unit_num +
+		(b1_sectors >> (bc1->data_unit_size_bits - 9)) ==
+		bc2->data_unit_num);
+}
+
+#else /* CONFIG_BLK_CRYPT_CTX */
+static inline bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2)
+{
+	return true;
+}
+
+static inline bool bio_crypt_ctx_back_mergeable(struct bio *b_1,
+						unsigned int b1_sectors,
+						struct bio *b_2)
+{
+	return true;
+}
+
+#endif /* CONFIG_BLK_CRYPT_CTX */
+
 static inline int ll_new_hw_segment(struct request_queue *q,
 				    struct request *req,
 				    struct bio *bio)
@@ -626,6 +679,9 @@ static inline int ll_new_hw_segment(struct request_queue *q,
 	if (blk_integrity_merge_bio(q, req, bio) == false)
 		goto no_merge;
 
+	if (WARN_ON(!bio_crypt_ctx_compatible(bio, req->bio)))
+		goto no_merge;
+
 	/*
 	 * This will form the start of a new hw segment.  Bump both
 	 * counters.
@@ -801,8 +857,13 @@ static enum elv_merge blk_try_req_merge(struct request *req,
 {
 	if (blk_discard_mergable(req))
 		return ELEVATOR_DISCARD_MERGE;
-	else if (blk_rq_pos(req) + blk_rq_sectors(req) == blk_rq_pos(next))
+	else if (blk_rq_pos(req) + blk_rq_sectors(req) == blk_rq_pos(next)) {
+		if (!bio_crypt_ctx_back_mergeable(
+			req->bio, blk_rq_sectors(req), next->bio)) {
+			return ELEVATOR_NO_MERGE;
+		}
 		return ELEVATOR_BACK_MERGE;
+	}
 
 	return ELEVATOR_NO_MERGE;
 }
@@ -838,6 +899,9 @@ static struct request *attempt_merge(struct request_queue *q,
 	if (req->ioprio != next->ioprio)
 		return NULL;
 
+	if (!bio_crypt_ctx_compatible(req->bio, next->bio))
+		return NULL;
+
 	/*
 	 * If we are allowed to merge, then append bio list
 	 * from next to rq and release next. merge_requests_fn
@@ -970,16 +1034,31 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
 	if (rq->ioprio != bio_prio(bio))
 		return false;
 
+	/* Only merge if the crypt contexts are compatible */
+	if (!bio_crypt_ctx_compatible(bio, rq->bio))
+		return false;
+
 	return true;
 }
 
 enum elv_merge blk_try_merge(struct request *rq, struct bio *bio)
 {
-	if (blk_discard_mergable(rq))
+	if (blk_discard_mergable(rq)) {
 		return ELEVATOR_DISCARD_MERGE;
-	else if (blk_rq_pos(rq) + blk_rq_sectors(rq) == bio->bi_iter.bi_sector)
+	} else if (blk_rq_pos(rq) + blk_rq_sectors(rq) ==
+		   bio->bi_iter.bi_sector) {
+		if (!bio_crypt_ctx_back_mergeable(rq->bio,
+						  blk_rq_sectors(rq), bio)) {
+			return ELEVATOR_NO_MERGE;
+		}
 		return ELEVATOR_BACK_MERGE;
-	else if (blk_rq_pos(rq) - bio_sectors(bio) == bio->bi_iter.bi_sector)
+	} else if (blk_rq_pos(rq) - bio_sectors(bio) ==
+		   bio->bi_iter.bi_sector) {
+		if (!bio_crypt_ctx_back_mergeable(bio,
+						  bio_sectors(bio), rq->bio)) {
+			return ELEVATOR_NO_MERGE;
+		}
 		return ELEVATOR_FRONT_MERGE;
+	}
 	return ELEVATOR_NO_MERGE;
 }
diff --git a/block/bounce.c b/block/bounce.c
index 47eb7e936e22..6866e6a04beb 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -277,6 +277,7 @@ static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask,
 			return NULL;
 		}
 	}
+	bio_clone_crypt_context(bio, bio_src);
 
 	bio_clone_blkg_association(bio, bio_src);
 	blkcg_bio_issue_init(bio);
diff --git a/block/keyslot-manager.c b/block/keyslot-manager.c
new file mode 100644
index 000000000000..ed8d290831f3
--- /dev/null
+++ b/block/keyslot-manager.c
@@ -0,0 +1,314 @@
+// SPDX-License-Identifier: GPL-2.0
+/**
+ * DOC: The Keyslot Manager
+ *
+ * Many devices with inline encryption support have a limited number of "slots"
+ * into which encryption contexts may be programmed, and requests can be tagged
+ * with a slot number to specify the key to use for en/decryption.
+ *
+ * As the number of slots is limited, and programming keys is expensive on
+ * many inline encryption hardware implementations, we don't want to program
+ * the same key into multiple slots - if multiple requests are using the same
+ * key, we want to program just one slot with that key and use that slot for
+ * all requests.
+ *
+ * The keyslot manager manages these keyslots appropriately, and also acts as
+ * an abstraction between the inline encryption hardware and the upper layers.
+ *
+ * Lower layer devices will set up a keyslot manager in their request queue
+ * and tell it how to perform device specific operations like programming/
+ * evicting keys from keyslots.
+ *
+ * Upper layers will call keyslot_manager_get_slot_for_key() to program a
+ * key into some slot in the inline encryption hardware.
+ *
+ * Copyright 2019 Google LLC
+ */
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/keyslot-manager.h>
+#include <linux/atomic.h>
+
+/**
+ * keyslot_manager_create() - Create a keyslot manager
+ * @num_slots: The number of key slots to manage.
+ * @ksm_ll_ops: The struct keyslot_mgmt_ll_ops for the device that this keyslot
+ *		manager will use to perform operations like programming and
+ *		evicting keys.
+ * @ll_priv_data: Private data passed as is to the functions in ksm_ll_ops.
+ *
+ * Allocate memory for and initialize a keyslot manager. Called by e.g.
+ * storage drivers to set up a keyslot manager in their request_queue.
+ *
+ * Context: This function may sleep
+ * Return: Pointer to constructed keyslot manager or NULL on error.
+ */
+struct keyslot_manager *keyslot_manager_create(unsigned int num_slots,
+				const struct keyslot_mgmt_ll_ops *ksm_ll_ops,
+				void *ll_priv_data)
+{
+	struct keyslot_manager *ksm;
+
+	if (num_slots == 0)
+		return NULL;
+
+	/* Check that all ops are specified */
+	if (ksm_ll_ops->keyslot_program == NULL ||
+	    ksm_ll_ops->keyslot_evict == NULL ||
+	    ksm_ll_ops->crypto_alg_find == NULL ||
+	    ksm_ll_ops->keyslot_find == NULL) {
+		return NULL;
+	}
+
+	ksm = kzalloc(struct_size(ksm, slot_refs, num_slots), GFP_KERNEL);
+	if (!ksm)
+		return NULL;
+
+	ksm->num_slots = num_slots;
+	atomic_set(&ksm->num_idle_slots, num_slots);
+	ksm->ksm_ll_ops = *ksm_ll_ops;
+	ksm->ll_priv_data = ll_priv_data;
+
+	mutex_init(&ksm->lock);
+	init_waitqueue_head(&ksm->wait_queue);
+
+	ksm->last_used_seq_nums = kcalloc(num_slots, sizeof(u64), GFP_KERNEL);
+	if (!ksm->last_used_seq_nums) {
+		kzfree(ksm);
+		ksm = NULL;
+	}
+
+	return ksm;
+}
+EXPORT_SYMBOL(keyslot_manager_create);
+
+/**
+ * keyslot_manager_get_slot_for_key() - Program a key into a keyslot.
+ * @ksm: The keyslot manager to program the key into.
+ * @key: Pointer to the bytes of the key to program. Must be of the length
+ *	 specified according to blk_crypt_modes in blk-crypto.c.
+ * @crypt_mode: The index into blk_crypt_modes representing the crypto alg to
+ *		use.
+ * @data_unit_size: The data unit size to use for en/decryption.
+ *
+ * Program a key into a keyslot with the specified crypt_mode and
+ * data_unit_size as follows: If the specified key has already been programmed
+ * into a keyslot, then this function increments the refcount on that keyslot
+ * and returns that keyslot. Otherwise, it waits for a keyslot to become idle
+ * and programs the key into an idle keyslot, increments its refcount, and
+ * returns that keyslot.
+ *
+ * Context: Process context. Takes and releases ksm->lock.
+ * Return: The keyslot that the key was programmed into, or a negative error
+ *         code otherwise.
+ */
+int keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm,
+				     const u8 *key,
+				     enum blk_crypt_mode_index crypt_mode,
+				     unsigned int data_unit_size)
+{
+	int crypto_alg_id;
+	int slot;
+	int err;
+	int c;
+	int lru_idle_slot;
+	u64 min_seq_num;
+
+	crypto_alg_id = ksm->ksm_ll_ops.crypto_alg_find(ksm->ll_priv_data,
+							crypt_mode,
+							data_unit_size);
+	if (crypto_alg_id < 0)
+		return crypto_alg_id;
+
+	mutex_lock(&ksm->lock);
+	slot = ksm->ksm_ll_ops.keyslot_find(ksm->ll_priv_data, key,
+					    data_unit_size,
+					    crypto_alg_id);
+
+	if (slot < 0 && slot != -ENOKEY) {
+		mutex_unlock(&ksm->lock);
+		return slot;
+	}
+
+	if (WARN_ON(slot >= (int)ksm->num_slots)) {
+		mutex_unlock(&ksm->lock);
+		return -EINVAL;
+	}
+
+	/* Try to use the returned slot */
+	if (slot != -ENOKEY) {
+		/**
+		 * NOTE: We may fail to get a slot if the number of refs
+		 * overflows UINT_MAX. I don't think we care enough about
+		 * that possibility to make the refcounts u64, considering
+		 * the only way for that to happen is for at least UINT_MAX
+		 * requests to be in flight at the same time.
+		 */
+		if ((unsigned int)atomic_read(&ksm->slot_refs[slot]) ==
+		    UINT_MAX) {
+			mutex_unlock(&ksm->lock);
+			return -EBUSY;
+		}
+
+		if (atomic_fetch_inc(&ksm->slot_refs[slot]) == 0)
+			atomic_dec(&ksm->num_idle_slots);
+
+		ksm->last_used_seq_nums[slot] = ++ksm->seq_num;
+
+		mutex_unlock(&ksm->lock);
+		return slot;
+	}
+
+	/*
+	 * If we're here, that means there wasn't a slot that
+	 * was already programmed with the key
+	 */
+
+	/* Wait till there is a free slot available */
+	while (atomic_read(&ksm->num_idle_slots) == 0) {
+		mutex_unlock(&ksm->lock);
+		wait_event(ksm->wait_queue,
+			   (atomic_read(&ksm->num_idle_slots) > 0));
+		mutex_lock(&ksm->lock);
+	}
+
+	/* Todo: fix linear scan? */
+	/* Find least recently used idle slot (i.e. slot with minimum number) */
+	lru_idle_slot  = -1;
+	min_seq_num = 0;
+	for (c = 0; c < ksm->num_slots; c++) {
+		if (atomic_read(&ksm->slot_refs[c]) != 0)
+			continue;
+
+		if (lru_idle_slot == -1 ||
+		    ksm->last_used_seq_nums[c] < min_seq_num) {
+			lru_idle_slot = c;
+			min_seq_num = ksm->last_used_seq_nums[c];
+		}
+	}
+
+	if (WARN_ON(lru_idle_slot == -1)) {
+		mutex_unlock(&ksm->lock);
+		return -EBUSY;
+	}
+
+	atomic_dec(&ksm->num_idle_slots);
+	atomic_inc(&ksm->slot_refs[lru_idle_slot]);
+	err = ksm->ksm_ll_ops.keyslot_program(ksm->ll_priv_data, key,
+					      data_unit_size,
+					      crypto_alg_id,
+					      lru_idle_slot);
+
+	if (err) {
+		atomic_dec(&ksm->slot_refs[lru_idle_slot]);
+		atomic_inc(&ksm->num_idle_slots);
+		wake_up(&ksm->wait_queue);
+		mutex_unlock(&ksm->lock);
+		return err;
+	}
+
+	ksm->seq_num++;
+	ksm->last_used_seq_nums[lru_idle_slot] = ksm->seq_num;
+
+	mutex_unlock(&ksm->lock);
+	return lru_idle_slot;
+}
+EXPORT_SYMBOL(keyslot_manager_get_slot_for_key);
+
+/**
+ * keyslot_manager_get_slot() - Increment the refcount on the specified slot.
+ * @ksm: The keyslot manager that we want to modify.
+ * @slot: The slot to increment the refcount of.
+ *
+ * This function assumes that there is already an active reference to that slot
+ * and simply increments the refcount. This is useful when cloning a bio that
+ * already has a reference to a keyslot, and we want the cloned bio to also have
+ * its own reference.
+ *
+ * Context: Any context.
+ */
+bool keyslot_manager_get_slot(struct keyslot_manager *ksm, unsigned int slot)
+{
+	if (WARN_ON(slot >= ksm->num_slots))
+		return false;
+
+	return atomic_inc_not_zero(&ksm->slot_refs[slot]);
+}
+EXPORT_SYMBOL(keyslot_manager_get_slot);
+
+/**
+ * keyslot_manager_put_slot() - Release a reference to a slot
+ * @ksm: The keyslot manager to release the reference from.
+ * @slot: The slot to release the reference from.
+ *
+ * Context: Any context.
+ */
+void keyslot_manager_put_slot(struct keyslot_manager *ksm, unsigned int slot)
+{
+	if (WARN_ON(slot >= ksm->num_slots))
+		return;
+
+	if (atomic_dec_and_test(&ksm->slot_refs[slot])) {
+		atomic_inc(&ksm->num_idle_slots);
+		wake_up(&ksm->wait_queue);
+	}
+}
+EXPORT_SYMBOL(keyslot_manager_put_slot);
+
+/**
+ * keyslot_manager_evict_key() - Evict a key from the lower layer device.
+ * @ksm: The keyslot manager to evict from.
+ * @key: The key to evict.
+ * @crypt_mode: The crypto algorithm the key was programmed with.
+ * @data_unit_size: The data_unit_size the key was programmed with.
+ *
+ * Finds the slot that the specified key, crypt_mode, data_unit_size combo
+ * was programmed into, and evicts that slot from the lower layer device if
+ * the refcount on the slot is 0. Returns -EBUSY if the refcount is not 0, and
+ * a negative error code on error.
+ *
+ * Context: Process context. Takes and releases ksm->lock.
+ */
+int keyslot_manager_evict_key(struct keyslot_manager *ksm,
+			      const u8 *key,
+			      enum blk_crypt_mode_index crypt_mode,
+			      unsigned int data_unit_size)
+{
+	int slot;
+	int crypto_alg_id;
+	int err = 0;
+
+	crypto_alg_id = ksm->ksm_ll_ops.crypto_alg_find(ksm->ll_priv_data,
+							crypt_mode,
+							data_unit_size);
+	if (crypto_alg_id < 0)
+		return -EINVAL;
+
+	mutex_lock(&ksm->lock);
+	slot = ksm->ksm_ll_ops.keyslot_find(ksm->ll_priv_data, key,
+					    data_unit_size,
+					    crypto_alg_id);
+
+	if (slot < 0) {
+		mutex_unlock(&ksm->lock);
+		return slot;
+	}
+
+	if (atomic_read(&ksm->slot_refs[slot]) == 0) {
+		err = ksm->ksm_ll_ops.keyslot_evict(ksm->ll_priv_data, slot,
+						    key, data_unit_size,
+						    crypto_alg_id);
+	} else {
+		err = -EBUSY;
+	}
+
+	mutex_unlock(&ksm->lock);
+	return err;
+}
+EXPORT_SYMBOL(keyslot_manager_evict_key);
+
+void keyslot_manager_destroy(struct keyslot_manager *ksm)
+{
+	kzfree(ksm->last_used_seq_nums);
+	kzfree(ksm);
+}
+EXPORT_SYMBOL(keyslot_manager_destroy);
diff --git a/include/linux/bio.h b/include/linux/bio.h
index e584673c1881..4f2c54742b04 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -575,6 +575,172 @@ static inline void bvec_kunmap_irq(char *buffer, unsigned long *flags)
 }
 #endif
 
+#ifdef CONFIG_BLK_CRYPT_CTX
+extern void bio_clone_crypt_context(struct bio *dst, struct bio *src);
+
+static inline bool bio_crypt_test_flag(struct bio *bio, int flag)
+{
+	return !!(bio->bi_crypt_context.flags & flag);
+}
+
+static inline void bio_crypt_set_flag(struct bio *bio, int flag)
+{
+	bio->bi_crypt_context.flags |= flag;
+}
+
+static inline void bio_crypt_clear_flag(struct bio *bio, int flag)
+{
+	bio->bi_crypt_context.flags &= ~flag;
+}
+
+static inline bool bio_is_encrypted(struct bio *bio)
+{
+	return bio && bio_crypt_test_flag(bio, BIO_CRYPT_ENABLED);
+}
+
+static inline bool bio_crypt_swhandled(struct bio *bio)
+{
+	return bio_crypt_test_flag(bio, BIO_CRYPT_SWHANDLED);
+}
+
+static inline bool bio_crypt_has_keyslot(struct bio *bio)
+{
+	return bio_is_encrypted(bio) &&
+	       bio_crypt_test_flag(bio, BIO_CRYPT_KEYSLOT);
+}
+
+static inline void bio_crypt_set_swhandled(struct bio *bio)
+{
+	WARN_ON(!bio_crypt_has_keyslot(bio));
+	bio_crypt_set_flag(bio, BIO_CRYPT_SWHANDLED);
+
+	bio->bi_crypt_context.crypt_iter = bio->bi_iter;
+	bio->bi_crypt_context.sw_data_unit_num =
+		bio->bi_crypt_context.data_unit_num;
+}
+
+static inline void bio_crypt_set_ctx(struct bio *bio,
+				     u8 *raw_key,
+				     enum blk_crypt_mode_index crypt_mode,
+				     u64 dun,
+				     unsigned int dun_bits)
+{
+	bio_crypt_set_flag(bio, BIO_CRYPT_ENABLED);
+	bio_crypt_clear_flag(bio, BIO_CRYPT_KEYSLOT);
+	bio_crypt_clear_flag(bio, BIO_CRYPT_SWHANDLED);
+	bio->bi_crypt_context.raw_key = raw_key;
+	bio->bi_crypt_context.data_unit_num = dun;
+	bio->bi_crypt_context.data_unit_size_bits = dun_bits;
+	bio->bi_crypt_context.crypt_mode = crypt_mode;
+	bio->bi_crypt_context.processing_ksm = NULL;
+	bio->bi_crypt_context.keyslot = 0;
+}
+
+static inline int bio_crypt_get_slot(struct bio *bio)
+{
+	return bio->bi_crypt_context.keyslot;
+}
+
+static inline void bio_crypt_set_keyslot(struct bio *bio,
+					 unsigned int keyslot,
+					 struct keyslot_manager *ksm)
+{
+	bio->bi_crypt_context.keyslot = keyslot;
+	bio_crypt_set_flag(bio, BIO_CRYPT_KEYSLOT);
+	bio->bi_crypt_context.processing_ksm = ksm;
+}
+
+static inline void bio_crypt_unset_keyslot(struct bio *bio)
+{
+	bio_crypt_clear_flag(bio, BIO_CRYPT_KEYSLOT);
+	bio_crypt_clear_flag(bio, BIO_CRYPT_SWHANDLED);
+	bio->bi_crypt_context.processing_ksm = NULL;
+	bio->bi_crypt_context.keyslot = 0;
+}
+
+static inline u8 *bio_crypt_raw_key(struct bio *bio)
+{
+	return bio->bi_crypt_context.raw_key;
+}
+
+static inline enum blk_crypt_mode_index bio_crypt_mode(struct bio *bio)
+{
+	return bio->bi_crypt_context.crypt_mode;
+}
+
+static inline u64 bio_crypt_data_unit_num(struct bio *bio)
+{
+	WARN_ON(!bio_is_encrypted(bio));
+	return bio->bi_crypt_context.data_unit_num;
+}
+
+static inline u64 bio_crypt_sw_data_unit_num(struct bio *bio)
+{
+	WARN_ON(!bio_is_encrypted(bio));
+	return bio->bi_crypt_context.sw_data_unit_num;
+}
+
+extern bool bio_crypt_should_process(struct bio *bio, struct request_queue *q);
+
+#else
+static inline void bio_clone_crypt_context(struct bio *dst,
+					   struct bio *src) { }
+static inline void bio_crypt_advance(struct bio *bio,
+				     unsigned int bytes) { }
+
+static inline bool bio_is_encrypted(struct bio *bio)
+{
+	return false;
+}
+
+static inline void bio_crypt_set_ctx(struct bio *bio,
+				     u8 *raw_key,
+				     enum blk_crypt_mode_index crypt_mode,
+				     u64 dun,
+				     unsigned int dun_bits) { }
+
+static inline bool bio_crypt_swhandled(struct bio *bio)
+{
+	return false;
+}
+
+static inline void bio_crypt_set_swhandled(struct bio *bio) { }
+
+static inline bool bio_crypt_has_keyslot(struct bio *bio)
+{
+	return false;
+}
+
+static inline void bio_crypt_set_keyslot(struct bio *bio,
+					 unsigned int keyslot,
+					 struct keyslot_manager *ksm) { }
+
+static inline void bio_crypt_unset_keyslot(struct bio *bio) { }
+
+static inline int bio_crypt_get_slot(struct bio *bio)
+{
+	return -1;
+}
+
+static inline u8 *bio_crypt_raw_key(struct bio *bio)
+{
+	return NULL;
+}
+
+static inline u64 bio_crypt_data_unit_num(struct bio *bio)
+{
+	WARN_ON(1);
+	return 0;
+}
+
+static inline bool bio_crypt_should_process(struct bio *bio,
+					    struct request_queue *q)
+{
+	return false;
+}
+
+#endif /* CONFIG_BLK_CRYPT_CTX */
+
 /*
  * BIO list management for use by remapping drivers (e.g. DM or MD) and loop.
  *
diff --git a/include/linux/blk-crypto.h b/include/linux/blk-crypto.h
new file mode 100644
index 000000000000..da189aa26ac9
--- /dev/null
+++ b/include/linux/blk-crypto.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#ifndef __LINUX_BLK_CRYPTO_H
+#define __LINUX_BLK_CRYPTO_H
+
+#include <linux/bio.h>
+#include <linux/blk_types.h>
+#include <linux/blkdev.h>
+
+#ifdef CONFIG_BLK_CRYPTO
+
+int blk_crypto_init(void);
+
+int blk_crypto_submit_bio(struct bio **bio_ptr);
+
+int blk_crypto_endio(struct bio *bio);
+
+#else /* CONFIG_BLK_CRYPTO */
+
+static inline int blk_crypto_init(void)
+{
+	return 0;
+}
+
+static inline int blk_crypto_submit_bio(struct bio **bio_ptr)
+{
+	return 0;
+}
+
+static inline int blk_crypto_endio(struct bio *bio)
+{
+	return 0;
+}
+
+#endif /* CONFIG_BLK_CRYPTO */
+
+#endif /* __LINUX_BLK_CRYPTO_H */
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 791fee35df88..23a133c3e47c 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -137,6 +137,50 @@ static inline void bio_issue_init(struct bio_issue *issue,
 			((u64)size << BIO_ISSUE_SIZE_SHIFT));
 }
 
+/* Flags in bio_crypt_ctx */
+/* If this crypt_ctx is enabled */
+#define BIO_CRYPT_ENABLED BIT(0)
+/* If this crypt_ctx is going to be en/decrypted in software */
+#define BIO_CRYPT_SWHANDLED BIT(1)
+/*
+ * If this crypt_ctx has been programmed into some keyslot
+ * in some keyslot manager.
+ */
+#define BIO_CRYPT_KEYSLOT BIT(2)
+
+struct bio;
+enum blk_crypt_mode_index {
+	BLK_ENCRYPTION_MODE_AES_256_XTS	= 0,
+	/* TODO: Support these too
+	 * BLK_ENCRYPTION_MODE_AES_256_CTS	= 1,
+	 * BLK_ENCRYPTION_MODE_AES_128_CBC	= 2,
+	 * BLK_ENCRYPTION_MODE_AES_128_CTS	= 3,
+	 * BLK_ENCRYPTION_MODE_ADIANTUM		= 4,
+	 */
+};
+
+struct bio_crypt_ctx {
+	u8 flags;
+	int keyslot;
+	u8 *raw_key;
+	enum blk_crypt_mode_index crypt_mode;
+	u64 data_unit_num;
+	unsigned int data_unit_size_bits;
+
+	/*
+	 * The keyslot manager into which the key has been programmed,
+	 * at the slot number given by @keyslot.
+	 */
+	struct keyslot_manager *processing_ksm;
+
+	/*
+	 * Copy of the bvec_iter at the time this bio was submitted.
+	 * We only want to en/decrypt the part of the bio described by the
+	 * bvec_iter at submission time, because the bio might be split
+	 * before being resubmitted.
+	 */
+	struct bvec_iter crypt_iter;
+	u64 sw_data_unit_num;
+};
+
 /*
  * main unit of I/O for the block layer and lower layers (ie drivers and
  * stacking drivers)
@@ -182,6 +226,11 @@ struct bio {
 	struct blkcg_gq		*bi_blkg;
 	struct bio_issue	bi_issue;
 #endif
+
+#ifdef CONFIG_BLK_CRYPT_CTX
+	struct bio_crypt_ctx	bi_crypt_context;
+#endif
+
 	union {
 #if defined(CONFIG_BLK_DEV_INTEGRITY)
 		struct bio_integrity_payload *bi_integrity; /* data integrity */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 317ab30d2904..25e36e3f4f51 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -385,6 +385,10 @@ static inline int blkdev_reset_zones_ioctl(struct block_device *bdev,
 
 #endif /* CONFIG_BLK_DEV_ZONED */
 
+#ifdef CONFIG_BLK_KEYSLOT_MANAGER
+struct keyslot_manager;
+#endif
+
 struct request_queue {
 	/*
 	 * Together with queue_head for cacheline sharing
@@ -473,6 +477,11 @@ struct request_queue {
 	unsigned int		dma_pad_mask;
 	unsigned int		dma_alignment;
 
+#ifdef CONFIG_BLK_KEYSLOT_MANAGER
+	/* Inline crypto capabilities */
+	struct keyslot_manager *ksm;
+#endif
+
 	unsigned int		rq_timeout;
 	int			poll_nsec;
 
diff --git a/include/linux/keyslot-manager.h b/include/linux/keyslot-manager.h
new file mode 100644
index 000000000000..30ee9b87b580
--- /dev/null
+++ b/include/linux/keyslot-manager.h
@@ -0,0 +1,131 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#ifndef __LINUX_KEYSLOT_MANAGER_H
+#define __LINUX_KEYSLOT_MANAGER_H
+
+#include <linux/mutex.h>
+#include <linux/sched.h>
+#include <linux/wait.h>
+#include <linux/types.h>
+#include <linux/blk-crypto.h>
+
+struct keyslot_mgmt_ll_ops {
+	int (*keyslot_program)(void *ll_priv_data, const u8 *key,
+			       unsigned int data_unit_size,
+			       /* crypto_alg_id returned by crypto_alg_find */
+			       unsigned int crypto_alg_id,
+			       unsigned int slot);
+	/**
+	 * Evict the key from the given keyslot in the keyslot manager.
+	 * The key, data_unit_size and crypto_alg_id are also passed down so
+	 * that e.g. dm layers that have their own keyslot managers can evict
+	 * the key from the devices that they map over.
+	 * Returns 0 on success, -errno otherwise.
+	 */
+	int (*keyslot_evict)(void *ll_priv_data, unsigned int slot,
+			     const u8 *key, unsigned int data_unit_size,
+			     unsigned int crypto_alg_id);
+	/**
+	 * Get a crypto_alg_id (used internally by the lower layer driver) that
+	 * represents the given blk-crypto crypt_mode and data_unit_size. The
+	 * returned crypto_alg_id will be used in future calls to the lower
+	 * layer driver (in keyslot_program and keyslot_evict) to reference
+	 * this crypt_mode, data_unit_size combo. Returns negative error code
+	 * if a crypt_mode, data_unit_size combo is not supported.
+	 */
+	int (*crypto_alg_find)(void *ll_priv_data,
+			       enum blk_crypt_mode_index crypt_mode,
+			       unsigned int data_unit_size);
+	/**
+	 * Returns the slot number that matches the key,
+	 * or -ENOKEY if no match found, or negative on error
+	 */
+	int (*keyslot_find)(void *ll_priv_data, const u8 *key,
+			    unsigned int data_unit_size,
+			    unsigned int crypto_alg_id);
+};
+
+struct keyslot_manager {
+	unsigned int num_slots;
+	atomic_t num_idle_slots;
+	struct keyslot_mgmt_ll_ops ksm_ll_ops;
+	void *ll_priv_data;
+	struct mutex lock;
+	wait_queue_head_t wait_queue;
+	u64 seq_num;
+	u64 *last_used_seq_nums;
+	atomic_t slot_refs[];
+};
+
+#ifdef CONFIG_BLK_KEYSLOT_MANAGER
+extern struct keyslot_manager *keyslot_manager_create(unsigned int num_slots,
+				const struct keyslot_mgmt_ll_ops *ksm_ops,
+				void *ll_priv_data);
+
+extern int
+keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm,
+				 const u8 *key,
+				 enum blk_crypt_mode_index crypt_mode,
+				 unsigned int data_unit_size);
+
+extern bool keyslot_manager_get_slot(struct keyslot_manager *ksm,
+				     unsigned int slot);
+
+extern void keyslot_manager_put_slot(struct keyslot_manager *ksm,
+				     unsigned int slot);
+
+extern int keyslot_manager_evict_key(struct keyslot_manager *ksm,
+				     const u8 *key,
+				     enum blk_crypt_mode_index crypt_mode,
+				     unsigned int data_unit_size);
+
+extern void keyslot_manager_destroy(struct keyslot_manager *ksm);
+
+#else /* CONFIG_BLK_KEYSLOT_MANAGER */
+
+static inline struct keyslot_manager *
+keyslot_manager_create(unsigned int num_slots,
+		       const struct keyslot_mgmt_ll_ops *ksm_ops,
+		       void *ll_priv_data)
+{
+	return NULL;
+}
+
+static inline int
+keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm,
+				 const u8 *key,
+				 enum blk_crypt_mode_index crypt_mode,
+				 unsigned int data_unit_size)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline bool keyslot_manager_get_slot(struct keyslot_manager *ksm,
+					    unsigned int slot)
+{
+	return false;
+}
+
+static inline void keyslot_manager_put_slot(struct keyslot_manager *ksm,
+					    unsigned int slot)
+{ }
+
+static inline int keyslot_manager_evict_key(struct keyslot_manager *ksm,
+				     const u8 *key,
+				     enum blk_crypt_mode_index crypt_mode,
+				     unsigned int data_unit_size)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline void keyslot_manager_destroy(struct keyslot_manager *ksm)
+{ }
+
+#endif /* CONFIG_BLK_KEYSLOT_MANAGER */
+
+#endif /* __LINUX_KEYSLOT_MANAGER_H */
-- 
2.21.0.1020.gf2820cf01a-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH 2/4] scsi: ufs: UFS driver v2.1 crypto support
  2019-05-06 22:35 [RFC PATCH 0/4] Inline Encryption Support Satya Tangirala
  2019-05-06 22:35 ` [RFC PATCH 1/4] block: Block Layer changes for " Satya Tangirala
@ 2019-05-06 22:35 ` Satya Tangirala
  2019-05-06 23:51   ` Randy Dunlap
                     ` (2 more replies)
  2019-05-06 22:35 ` [RFC PATCH 3/4] fscrypt: wire up fscrypt to use blk-crypto Satya Tangirala
                   ` (3 subsequent siblings)
  5 siblings, 3 replies; 16+ messages in thread
From: Satya Tangirala @ 2019-05-06 22:35 UTC (permalink / raw)
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang, Satya Tangirala

Uses the UFSHCI v2.1 spec to manage keys in inline crypto engine
hardware, and exposes that functionality through the keyslot manager it
sets up in the device's request_queue. Uses the keyslot in the
bio_crypt_ctx of the bio, if specified, as the encryption context.
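
The keyslot manager hookup boils down to filling in a struct
keyslot_mgmt_ll_ops and calling keyslot_manager_create() from the queue
setup path, as in the heavily abbreviated sketch below. Everything named
foo_* (and the keyslot count) is hypothetical and invented purely for
illustration; the real implementation follows in ufshcd-crypto.c in this
patch.

  #include <linux/blkdev.h>
  #include <linux/keyslot-manager.h>

  static int foo_keyslot_program(void *priv, const u8 *key,
                                 unsigned int data_unit_size,
                                 unsigned int crypto_alg_id, unsigned int slot)
  {
          /* Program @key into keyslot @slot of the inline crypto engine. */
          return 0;
  }

  static int foo_keyslot_evict(void *priv, unsigned int slot, const u8 *key,
                               unsigned int data_unit_size,
                               unsigned int crypto_alg_id)
  {
          /* Clear keyslot @slot in the hardware. */
          return 0;
  }

  static int foo_keyslot_find(void *priv, const u8 *key,
                              unsigned int data_unit_size,
                              unsigned int crypto_alg_id)
  {
          /* Return the slot currently holding @key, or -ENOKEY. */
          return -ENOKEY;
  }

  static int foo_crypto_alg_find(void *priv,
                                 enum blk_crypt_mode_index crypt_mode,
                                 unsigned int data_unit_size)
  {
          /* Map the blk-crypto mode to a hardware capability index. */
          return crypt_mode == BLK_ENCRYPTION_MODE_AES_256_XTS ? 0 : -EINVAL;
  }

  static const struct keyslot_mgmt_ll_ops foo_ksm_ops = {
          .keyslot_program = foo_keyslot_program,
          .keyslot_evict   = foo_keyslot_evict,
          .keyslot_find    = foo_keyslot_find,
          .crypto_alg_find = foo_crypto_alg_find,
  };

  /* Called from the driver's request_queue setup path. */
  static int foo_setup_ksm(void *host_priv, struct request_queue *q)
  {
          q->ksm = keyslot_manager_create(8 /* keyslots */, &foo_ksm_ops,
                                          host_priv);
          return q->ksm ? 0 : -ENOMEM;
  }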

Known Issues: In the current implementation, multiple keyslot managers
may be allocated for a single UFS host. We should tie keyslot managers
to hosts to avoid this issue.

Signed-off-by: Satya Tangirala <satyat@google.com>
---
 drivers/scsi/ufs/Kconfig         |  10 +
 drivers/scsi/ufs/Makefile        |   1 +
 drivers/scsi/ufs/ufshcd-crypto.c | 449 +++++++++++++++++++++++++++++++
 drivers/scsi/ufs/ufshcd-crypto.h |  92 +++++++
 drivers/scsi/ufs/ufshcd.c        |  85 +++++-
 drivers/scsi/ufs/ufshcd.h        |  23 ++
 drivers/scsi/ufs/ufshci.h        |  67 ++++-
 7 files changed, 720 insertions(+), 7 deletions(-)
 create mode 100644 drivers/scsi/ufs/ufshcd-crypto.c
 create mode 100644 drivers/scsi/ufs/ufshcd-crypto.h

diff --git a/drivers/scsi/ufs/Kconfig b/drivers/scsi/ufs/Kconfig
index 6db37cf306b0..c14f445a2522 100644
--- a/drivers/scsi/ufs/Kconfig
+++ b/drivers/scsi/ufs/Kconfig
@@ -135,3 +135,13 @@ config SCSI_UFS_BSG
 
 	  Select this if you need a bsg device node for your UFS controller.
 	  If unsure, say N.
+
+config SCSI_UFS_CRYPTO
+	bool "UFS Crypto Engine Support"
+	depends on SCSI_UFSHCD && BLK_KEYSLOT_MANAGER
+	help
+	  Enable Crypto Engine Support in UFS.
+	  Enabling this makes it possible for the kernel to use the crypto
+	  capabilities of the UFS device (if present) to perform crypto
+	  operations on data being transferred into/out of the device.
+
diff --git a/drivers/scsi/ufs/Makefile b/drivers/scsi/ufs/Makefile
index a3bd70c3652c..5b52463e8abf 100644
--- a/drivers/scsi/ufs/Makefile
+++ b/drivers/scsi/ufs/Makefile
@@ -10,3 +10,4 @@ ufshcd-core-$(CONFIG_SCSI_UFS_BSG)	+= ufs_bsg.o
 obj-$(CONFIG_SCSI_UFSHCD_PCI) += ufshcd-pci.o
 obj-$(CONFIG_SCSI_UFSHCD_PLATFORM) += ufshcd-pltfrm.o
 obj-$(CONFIG_SCSI_UFS_HISI) += ufs-hisi.o
+ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO) += ufshcd-crypto.o
\ No newline at end of file
diff --git a/drivers/scsi/ufs/ufshcd-crypto.c b/drivers/scsi/ufs/ufshcd-crypto.c
new file mode 100644
index 000000000000..af1da161d53e
--- /dev/null
+++ b/drivers/scsi/ufs/ufshcd-crypto.c
@@ -0,0 +1,449 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#include <crypto/algapi.h>
+
+#include "ufshcd.h"
+#include "ufshcd-crypto.h"
+
+/*TODO: worry about endianness and cpu_to_le32 */
+
+bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba)
+{
+	return hba->crypto_capabilities.reg_val != 0;
+}
+
+bool ufshcd_is_crypto_enabled(struct ufs_hba *hba)
+{
+	return hba->caps & UFSHCD_CAP_CRYPTO;
+}
+
+static bool ufshcd_cap_idx_valid(struct ufs_hba *hba, unsigned int cap_idx)
+{
+	return cap_idx < hba->crypto_capabilities.num_crypto_cap;
+}
+
+bool ufshcd_keyslot_valid(struct ufs_hba *hba, unsigned int slot)
+{
+	/**
+	 * The actual number of configurations supported is (CFGC+1), so slot
+	 * numbers range from 0 to config_count inclusive.
+	 */
+	return slot <= hba->crypto_capabilities.config_count;
+}
+
+static u8 get_data_unit_size_mask(unsigned int data_unit_size)
+{
+	if (data_unit_size < 512 || data_unit_size > 65536 ||
+	    !is_power_of_2(data_unit_size)) {
+		return 0;
+	}
+
+	return data_unit_size / 512;
+}
+
+static size_t get_keysize_bytes(enum ufs_crypto_key_size size)
+{
+	switch (size) {
+	case UFS_CRYPTO_KEY_SIZE_128: return 16;
+	case UFS_CRYPTO_KEY_SIZE_192: return 24;
+	case UFS_CRYPTO_KEY_SIZE_256: return 32;
+	case UFS_CRYPTO_KEY_SIZE_512: return 64;
+	default: return 0;
+	}
+}
+
+/**
+ * ufshcd_crypto_cfg_entry_write_key - Write a key into a crypto_cfg_entry
+ *
+ *	Writes the key with the appropriate format - for AES_XTS,
+ *	the first half of the key is copied as is, the second half is
+ *	copied with an offset halfway into the cfg->crypto_key array.
+ *	For the other supported crypto algs, the key is just copied.
+ *
+ * @cfg: The crypto config to write to
+ * @key: The key to write
+ * @cap: The crypto capability (which specifies the crypto alg and key size)
+ *
+ * Returns 0 on success, or -errno
+ */
+static int ufshcd_crypto_cfg_entry_write_key(union ufs_crypto_cfg_entry *cfg,
+					     const u8 *key,
+					     union ufs_crypto_cap_entry cap)
+{
+	size_t key_size_bytes = get_keysize_bytes(cap.key_size);
+
+	if (key_size_bytes == 0)
+		return -EINVAL;
+
+	switch (cap.algorithm_id) {
+	case UFS_CRYPTO_ALG_AES_XTS:
+		key_size_bytes *= 2;
+		if (key_size_bytes > UFS_CRYPTO_KEY_MAX_SIZE)
+			return -EINVAL;
+
+		memcpy(cfg->crypto_key, key, key_size_bytes/2);
+		memcpy(cfg->crypto_key + UFS_CRYPTO_KEY_MAX_SIZE/2,
+		       key + key_size_bytes/2, key_size_bytes/2);
+		return 0;
+	case UFS_CRYPTO_ALG_BITLOCKER_AES_CBC: // fallthrough
+	case UFS_CRYPTO_ALG_AES_ECB: // fallthrough
+	case UFS_CRYPTO_ALG_ESSIV_AES_CBC:
+		memcpy(cfg->crypto_key, key, key_size_bytes);
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static void program_key(struct ufs_hba *hba,
+			const union ufs_crypto_cfg_entry *cfg,
+			int slot)
+{
+	int i;
+	u32 slot_offset = hba->crypto_cfg_register + slot * sizeof(*cfg);
+
+	/* Clear the dword 16 */
+	ufshcd_writel(hba, 0, slot_offset + 16 * sizeof(cfg->reg_val[0]));
+	/* Ensure that CFGE is cleared before programming the key */
+	wmb();
+	/* TODO: swab32 on the key? */
+	for (i = 0; i < 16; i++) {
+		ufshcd_writel(hba, cfg->reg_val[i],
+			      slot_offset + i * sizeof(cfg->reg_val[0]));
+		/* Spec says each dword in key must be written sequentially */
+		wmb();
+	}
+	/* Write dword 17 */
+	ufshcd_writel(hba, cfg->reg_val[17],
+		      slot_offset + 17 * sizeof(cfg->reg_val[0]));
+	/* Dword 16 must be written last */
+	wmb();
+	/* Write dword 16 */
+	ufshcd_writel(hba, cfg->reg_val[16],
+		      slot_offset + 16 * sizeof(cfg->reg_val[0]));
+	wmb();
+}
+
+static int ufshcd_crypto_keyslot_program(void *hba_p, const u8 *key,
+			      unsigned int data_unit_size,
+			      unsigned int crypto_alg_id,
+			      unsigned int slot)
+{
+	struct ufs_hba *hba = hba_p;
+	int err = 0;
+	u8 data_unit_mask;
+	union ufs_crypto_cfg_entry cfg;
+	union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs;
+
+	if (!ufshcd_is_crypto_enabled(hba) ||
+	    !ufshcd_keyslot_valid(hba, slot) ||
+	    !ufshcd_cap_idx_valid(hba, crypto_alg_id)) {
+		return -EINVAL;
+	}
+
+	data_unit_mask = get_data_unit_size_mask(data_unit_size);
+
+	if (!(data_unit_mask &
+	      hba->crypto_cap_array[crypto_alg_id].sdus_mask)) {
+		return -EINVAL;
+	}
+
+	memset(&cfg, 0, sizeof(cfg));
+	cfg.data_unit_size = data_unit_mask;
+	cfg.crypto_cap_idx = crypto_alg_id;
+	cfg.config_enable |= UFS_CRYPTO_CONFIGURATION_ENABLE;
+
+	err = ufshcd_crypto_cfg_entry_write_key(&cfg, key,
+					hba->crypto_cap_array[crypto_alg_id]);
+	if (err)
+		return err;
+
+	program_key(hba, &cfg, slot);
+
+	memcpy(&cfg_arr[slot], &cfg, sizeof(cfg));
+	memzero_explicit(&cfg, sizeof(cfg));
+
+	return 0;
+}
+
+static int ufshcd_crypto_keyslot_find(void *hba_p,
+				      const u8 *key,
+				      unsigned int data_unit_size,
+				      unsigned int crypto_alg_id)
+{
+	struct ufs_hba *hba = hba_p;
+	int err = 0;
+	int slot;
+	u8 data_unit_mask;
+	union ufs_crypto_cfg_entry cfg;
+	union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs;
+
+	if (!ufshcd_is_crypto_enabled(hba) ||
+	    crypto_alg_id >= hba->crypto_capabilities.num_crypto_cap) {
+		return -EINVAL;
+	}
+
+	data_unit_mask = get_data_unit_size_mask(data_unit_size);
+
+	if (!(data_unit_mask &
+	      hba->crypto_cap_array[crypto_alg_id].sdus_mask)) {
+		return -EINVAL;
+	}
+
+	memset(&cfg, 0, sizeof(cfg));
+	err = ufshcd_crypto_cfg_entry_write_key(&cfg, key,
+					hba->crypto_cap_array[crypto_alg_id]);
+
+	if (err)
+		return -EINVAL;
+
+	for (slot = 0; slot <= hba->crypto_capabilities.config_count; slot++) {
+		if ((cfg_arr[slot].config_enable &
+		     UFS_CRYPTO_CONFIGURATION_ENABLE) &&
+		    data_unit_mask == cfg_arr[slot].data_unit_size &&
+		    crypto_alg_id == cfg_arr[slot].crypto_cap_idx &&
+		    crypto_memneq(&cfg.crypto_key, cfg_arr[slot].crypto_key,
+				  UFS_CRYPTO_KEY_MAX_SIZE) == 0) {
+			memzero_explicit(&cfg, sizeof(cfg));
+			return slot;
+		}
+	}
+
+	memzero_explicit(&cfg, sizeof(cfg));
+	return -ENOKEY;
+}
+
+static int ufshcd_crypto_keyslot_evict(void *hba_p, unsigned int slot,
+				       const u8 *key,
+				       unsigned int data_unit_size,
+				       unsigned int crypto_alg_id)
+{
+	struct ufs_hba *hba = hba_p;
+	int i = 0;
+	u32 reg_base;
+	union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs;
+
+	if (!ufshcd_is_crypto_enabled(hba) ||
+	    !ufshcd_keyslot_valid(hba, slot)) {
+		return -EINVAL;
+	}
+
+	memset(&cfg_arr[slot], 0, sizeof(cfg_arr[slot]));
+	reg_base = hba->crypto_cfg_register +
+			slot * sizeof(cfg_arr[0]);
+
+	/**
+	 * Clear the crypto cfg on the device. Clearing CFGE
+	 * might not be sufficient, so just clear the entire cfg.
+	 */
+	for (i = 0; i < sizeof(cfg_arr[0]); i += sizeof(__le32))
+		ufshcd_writel(hba, 0, reg_base + i);
+	wmb();
+
+	return 0;
+}
+
+static int ufshcd_crypto_alg_find(void *hba_p,
+			   enum blk_crypt_mode_index crypt_mode,
+			   unsigned int data_unit_size)
+{
+	struct ufs_hba *hba = hba_p;
+	enum ufs_crypto_alg ufs_alg;
+	u8 data_unit_mask;
+	int cap_idx;
+	enum ufs_crypto_key_size ufs_key_size;
+	union ufs_crypto_cap_entry *ccap_array = hba->crypto_cap_array;
+
+	if (!ufshcd_hba_is_crypto_supported(hba))
+		return -EINVAL;
+
+	switch (crypt_mode) {
+	case BLK_ENCRYPTION_MODE_AES_256_XTS:
+		ufs_alg = UFS_CRYPTO_ALG_AES_XTS;
+		ufs_key_size = UFS_CRYPTO_KEY_SIZE_256;
+		break;
+	/**
+	 * case BLK_CRYPTO_ALG_BITLOCKER_AES_CBC:
+	 *	ufs_alg = UFS_CRYPTO_ALG_BITLOCKER_AES_CBC;
+	 *	break;
+	 * case INLINECRYPT_ALG_AES_ECB:
+	 *	ufs_alg = UFS_CRYPTO_ALG_AES_ECB;
+	 *	break;
+	 * case INLINECRYPT_ALG_ESSIV_AES_CBC:
+	 *	ufs_alg = UFS_CRYPTO_ALG_ESSIV_AES_CBC;
+	 *	break;
+	 */
+	default: return -EINVAL;
+	}
+
+	data_unit_mask = get_data_unit_size_mask(data_unit_size);
+
+	/**
+	 * TODO: We can replace this for loop entirely by constructing
+	 * a table on init that translates blk_crypt_mode_index to
+	 * ufs crypt alg numbers. (By assuming that each alg/keysize combo
+	 * appears only once in the ufs crypto caps array.)
+	 */
+	for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap;
+	     cap_idx++) {
+		if (ccap_array[cap_idx].algorithm_id == ufs_alg &&
+		    (ccap_array[cap_idx].sdus_mask & data_unit_mask) &&
+		    ccap_array[cap_idx].key_size == ufs_key_size) {
+			return cap_idx;
+		}
+	}
+
+	return -EINVAL;
+}
+
+int ufshcd_crypto_enable(struct ufs_hba *hba)
+{
+	union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs;
+	int slot;
+
+	if (!ufshcd_hba_is_crypto_supported(hba))
+		return -EINVAL;
+
+	hba->caps |= UFSHCD_CAP_CRYPTO;
+	/**
+	 * Reset might clear all keys, so reprogram all the keys.
+	 * Also serves to clear keys on driver init.
+	 */
+	for (slot = 0; slot <= hba->crypto_capabilities.config_count; slot++)
+		program_key(hba, &cfg_arr[slot], slot);
+
+	return 0;
+}
+
+int ufshcd_crypto_disable(struct ufs_hba *hba)
+{
+	if (!ufshcd_hba_is_crypto_supported(hba))
+		return -EINVAL;
+
+	hba->caps &= ~UFSHCD_CAP_CRYPTO;
+
+	return 0;
+}
+
+
+/**
+ * ufshcd_hba_init_crypto - Read crypto capabilities, init crypto fields in hba
+ * @hba: Per adapter instance
+ *
+ * Returns 0 on success. Returns -ENODEV if such capabilities don't exist, and
+ * -ENOMEM upon OOM.
+ */
+int ufshcd_hba_init_crypto(struct ufs_hba *hba)
+{
+	int cap_idx = 0;
+	int err = 0;
+	/* Default to disabling crypto */
+	hba->caps &= ~UFSHCD_CAP_CRYPTO;
+
+	if (!(hba->capabilities & MASK_CRYPTO_SUPPORT)) {
+		err = -ENODEV;
+		goto out;
+	}
+
+	/*
+	 * The crypto capabilities register should never read as 0, because
+	 * config_array_ptr is always > 04h. So a value of 0 is used to
+	 * indicate that crypto init failed and that crypto can't be enabled.
+	 */
+	hba->crypto_capabilities.reg_val = ufshcd_readl(hba, REG_UFS_CCAP);
+	hba->crypto_cfg_register =
+		(u32)hba->crypto_capabilities.config_array_ptr * 0x100;
+	hba->crypto_cap_array =
+		devm_kcalloc(hba->dev,
+			     hba->crypto_capabilities.num_crypto_cap,
+			     sizeof(hba->crypto_cap_array[0]),
+			     GFP_KERNEL);
+	if (!hba->crypto_cap_array) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	hba->crypto_cfgs =
+		devm_kcalloc(hba->dev,
+			     hba->crypto_capabilities.config_count + 1,
+			     sizeof(union ufs_crypto_cfg_entry),
+			     GFP_KERNEL);
+	if (!hba->crypto_cfgs) {
+		err = -ENOMEM;
+		goto out_cfg_mem;
+	}
+
+	/**
+	 * Store all the capabilities now so that we don't need to repeatedly
+	 * access the device each time we want to know its capabilities
+	 */
+	for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap;
+	     cap_idx++) {
+		hba->crypto_cap_array[cap_idx].reg_val =
+			ufshcd_readl(hba,
+				     REG_UFS_CRYPTOCAP +
+				     cap_idx * sizeof(__le32));
+	}
+
+	return 0;
+out_cfg_mem:
+	devm_kfree(hba->dev, hba->crypto_cap_array);
+out:
+	// TODO: print error?
+	/* Indicate that init failed by setting crypto_capabilities to 0 */
+	hba->crypto_capabilities.reg_val = 0;
+	return err;
+}
+
+const struct keyslot_mgmt_ll_ops ufshcd_ksm_ops = {
+	.keyslot_program	= ufshcd_crypto_keyslot_program,
+	.keyslot_evict		= ufshcd_crypto_keyslot_evict,
+	.keyslot_find		= ufshcd_crypto_keyslot_find,
+	.crypto_alg_find	= ufshcd_crypto_alg_find,
+};
+
+int ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
+					   struct request_queue *q)
+{
+	int err;
+
+	if (!ufshcd_hba_is_crypto_supported(hba))
+		return 0;
+
+	if (!q) {
+		err = -ENODEV;
+		goto out;
+	}
+
+	q->ksm = keyslot_manager_create(
+			hba->crypto_capabilities.config_count + 1,
+			&ufshcd_ksm_ops, hba);
+	if (q->ksm)
+		return 0;
+
+	err = -ENOMEM;
+out:
+	/*
+	 * On failure, make it look like crypto is not supported, which
+	 * avoids issues with reset.
+	 */
+	ufshcd_crypto_disable(hba);
+	hba->crypto_capabilities.reg_val = 0;
+	devm_kfree(hba->dev, hba->crypto_cap_array);
+	devm_kfree(hba->dev, hba->crypto_cfgs);
+	return err;
+}
+
+int ufshcd_crypto_destroy_rq_keyslot_manager(struct request_queue *q)
+{
+	if (q && q->ksm)
+		keyslot_manager_destroy(q->ksm);
+
+	return 0;
+}
+
diff --git a/drivers/scsi/ufs/ufshcd-crypto.h b/drivers/scsi/ufs/ufshcd-crypto.h
new file mode 100644
index 000000000000..16445efe3666
--- /dev/null
+++ b/drivers/scsi/ufs/ufshcd-crypto.h
@@ -0,0 +1,92 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#ifndef _UFSHCD_CRYPTO_H
+#define _UFSHCD_CRYPTO_H
+
+struct ufs_hba;
+
+#ifdef CONFIG_SCSI_UFS_CRYPTO
+#include <linux/keyslot-manager.h>
+
+#include "ufshci.h"
+
+bool ufshcd_keyslot_valid(struct ufs_hba *hba, unsigned int slot);
+
+bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba);
+
+bool ufshcd_is_crypto_enabled(struct ufs_hba *hba);
+
+int ufshcd_crypto_set_enable_slot(struct ufs_hba *hba,
+				  unsigned int slot,
+				  bool enable);
+
+int ufshcd_crypto_enable(struct ufs_hba *hba);
+
+int ufshcd_crypto_disable(struct ufs_hba *hba);
+
+int ufshcd_hba_init_crypto(struct ufs_hba *hba);
+
+int ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
+					   struct request_queue *q);
+
+int ufshcd_crypto_destroy_rq_keyslot_manager(struct request_queue *q);
+
+#else /* CONFIG_SCSI_UFS_CRYPTO */
+
+static inline bool ufshcd_keyslot_valid(struct ufs_hba *hba,
+					unsigned int slot)
+{
+	return false;
+}
+
+static inline bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba)
+{
+	return false;
+}
+
+static inline bool ufshcd_is_crypto_enabled(struct ufs_hba *hba)
+{
+	return false;
+}
+
+static inline int ufshcd_crypto_set_enable_slot(struct ufs_hba *hba,
+				  unsigned int slot,
+				  bool enable)
+{
+	return -1;
+}
+
+static inline int ufshcd_crypto_enable(struct ufs_hba *hba)
+{
+	return -1;
+}
+
+static inline int ufshcd_crypto_disable(struct ufs_hba *hba)
+{
+	return -1;
+}
+
+static inline int ufshcd_hba_init_crypto(struct ufs_hba *hba)
+{
+	return -1;
+}
+
+static inline int ufshcd_crypto_setup_rq_keyslot_manager(
+					struct ufs_hba *hba,
+					struct request_queue *q)
+{
+	return -1;
+}
+
+static inline int ufshcd_crypto_destroy_rq_keyslot_manager(
+				struct request_queue *q)
+{
+	return -1;
+}
+
+#endif /* CONFIG_SCSI_UFS_CRYPTO */
+
+#endif /* _UFSHCD_CRYPTO_H */
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index e040f9dd9ff3..65c51943e331 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -47,6 +47,7 @@
 #include "unipro.h"
 #include "ufs-sysfs.h"
 #include "ufs_bsg.h"
+#include "ufshcd-crypto.h"
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/ufs.h>
@@ -855,7 +856,14 @@ static void ufshcd_enable_run_stop_reg(struct ufs_hba *hba)
  */
 static inline void ufshcd_hba_start(struct ufs_hba *hba)
 {
-	ufshcd_writel(hba, CONTROLLER_ENABLE, REG_CONTROLLER_ENABLE);
+	u32 val = CONTROLLER_ENABLE;
+
+	if (ufshcd_hba_is_crypto_supported(hba)) {
+		ufshcd_crypto_enable(hba);
+		val |= CRYPTO_GENERAL_ENABLE;
+	}
+
+	ufshcd_writel(hba, val, REG_CONTROLLER_ENABLE);
 }
 
 /**
@@ -2208,9 +2216,21 @@ static void ufshcd_prepare_req_desc_hdr(struct ufshcd_lrb *lrbp,
 		dword_0 |= UTP_REQ_DESC_INT_CMD;
 
 	/* Transfer request descriptor header fields */
+	if (lrbp->crypto_enable) {
+		dword_0 |= UTP_REQ_DESC_CRYPTO_ENABLE_CMD;
+		dword_0 |= lrbp->crypto_key_slot;
+		req_desc->header.dword_1 =
+			cpu_to_le32((u32)lrbp->data_unit_num);
+		req_desc->header.dword_3 =
+			cpu_to_le32((u32)(lrbp->data_unit_num >> 32));
+	} else {
+		/* dword_1 and dword_3 are reserved, hence they are set to 0 */
+		req_desc->header.dword_1 = 0;
+		req_desc->header.dword_3 = 0;
+	}
+
 	req_desc->header.dword_0 = cpu_to_le32(dword_0);
-	/* dword_1 is reserved, hence it is set to 0 */
-	req_desc->header.dword_1 = 0;
+
 	/*
 	 * assigning invalid value for command status. Controller
 	 * updates OCS on command completion, with the command
@@ -2218,8 +2238,6 @@ static void ufshcd_prepare_req_desc_hdr(struct ufshcd_lrb *lrbp,
 	 */
 	req_desc->header.dword_2 =
 		cpu_to_le32(OCS_INVALID_COMMAND_STATUS);
-	/* dword_3 is reserved, hence it is set to 0 */
-	req_desc->header.dword_3 = 0;
 
 	req_desc->prd_table_length = 0;
 }
@@ -2379,6 +2397,38 @@ static inline u16 ufshcd_upiu_wlun_to_scsi_wlun(u8 upiu_wlun_id)
 	return (upiu_wlun_id & ~UFS_UPIU_WLUN_ID) | SCSI_W_LUN_BASE;
 }
 
+static inline int ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba,
+					     struct scsi_cmnd *cmd,
+					     struct ufshcd_lrb *lrbp)
+{
+	int key_slot;
+
+	if (!bio_crypt_should_process(cmd->request->bio,
+					cmd->request->q)) {
+		lrbp->crypto_enable = false;
+		return 0;
+	}
+
+	if (WARN_ON(!ufshcd_is_crypto_enabled(hba))) {
+		/**
+		 * Upper layer asked us to do inline encryption
+		 * but that isn't enabled, so we fail this request.
+		 */
+		return -EINVAL;
+	}
+	key_slot = bio_crypt_get_slot(cmd->request->bio);
+	if (!ufshcd_keyslot_valid(hba, key_slot))
+		return -EINVAL;
+
+	lrbp->crypto_enable = true;
+	lrbp->crypto_key_slot = key_slot;
+	lrbp->data_unit_num =
+		bio_crypt_data_unit_num(cmd->request->bio);
+
+	return 0;
+}
+
+
 /**
  * ufshcd_queuecommand - main entry point for SCSI requests
  * @host: SCSI host pointer
@@ -2466,6 +2516,13 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
 	lrbp->task_tag = tag;
 	lrbp->lun = ufshcd_scsi_to_upiu_lun(cmd->device->lun);
 	lrbp->intr_cmd = !ufshcd_is_intr_aggr_allowed(hba) ? true : false;
+
+	err = ufshcd_prepare_lrbp_crypto(hba, cmd, lrbp);
+	if (err) {
+		lrbp->cmd = NULL;
+		clear_bit_unlock(tag, &hba->lrb_in_use);
+		goto out;
+	}
 	lrbp->req_abort_skip = false;
 
 	ufshcd_comp_scsi_upiu(hba, lrbp);
@@ -2499,6 +2556,7 @@ static int ufshcd_compose_dev_cmd(struct ufs_hba *hba,
 	lrbp->task_tag = tag;
 	lrbp->lun = 0; /* device management cmd is not specific to any LUN */
 	lrbp->intr_cmd = true; /* No interrupt aggregation */
+	lrbp->crypto_enable = false; /* No crypto operations */
 	hba->dev_cmd.type = cmd_type;
 
 	return ufshcd_comp_devman_upiu(hba, lrbp);
@@ -4191,6 +4249,8 @@ static inline void ufshcd_hba_stop(struct ufs_hba *hba, bool can_sleep)
 {
 	int err;
 
+	ufshcd_crypto_disable(hba);
+
 	ufshcd_writel(hba, CONTROLLER_DISABLE,  REG_CONTROLLER_ENABLE);
 	err = ufshcd_wait_for_register(hba, REG_CONTROLLER_ENABLE,
 					CONTROLLER_ENABLE, CONTROLLER_DISABLE,
@@ -4584,10 +4644,13 @@ static int ufshcd_change_queue_depth(struct scsi_device *sdev, int depth)
 static int ufshcd_slave_configure(struct scsi_device *sdev)
 {
 	struct request_queue *q = sdev->request_queue;
+	struct ufs_hba *hba = shost_priv(sdev->host);
 
 	blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1);
 	blk_queue_max_segment_size(q, PRDT_DATA_BYTE_COUNT_MAX);
 
+	ufshcd_crypto_setup_rq_keyslot_manager(hba, q);
+
 	return 0;
 }
 
@@ -4598,6 +4661,7 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
 static void ufshcd_slave_destroy(struct scsi_device *sdev)
 {
 	struct ufs_hba *hba;
+	struct request_queue *q = sdev->request_queue;
 
 	hba = shost_priv(sdev->host);
 	/* Drop the reference as it won't be needed anymore */
@@ -4608,6 +4672,8 @@ static void ufshcd_slave_destroy(struct scsi_device *sdev)
 		hba->sdev_ufs_device = NULL;
 		spin_unlock_irqrestore(hba->host->host_lock, flags);
 	}
+
+	ufshcd_crypto_destroy_rq_keyslot_manager(q);
 }
 
 /**
@@ -4723,6 +4789,8 @@ ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
 	case OCS_MISMATCH_RESP_UPIU_SIZE:
 	case OCS_PEER_COMM_FAILURE:
 	case OCS_FATAL_ERROR:
+	case OCS_INVALID_CRYPTO_CONFIG:
+	case OCS_GENERAL_CRYPTO_ERROR:
 	default:
 		result |= DID_ERROR << 16;
 		dev_err(hba->dev,
@@ -8287,6 +8355,13 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
 		goto exit_gating;
 	}
 
+	/* Init crypto */
+	err = ufshcd_hba_init_crypto(hba);
+	if (err) {
+		dev_err(hba->dev, "crypto setup failed\n");
+		goto out_remove_scsi_host;
+	}
+
 	/* Host controller enable */
 	err = ufshcd_hba_enable(hba);
 	if (err) {
diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
index ecfa898b9ccc..283014e0924f 100644
--- a/drivers/scsi/ufs/ufshcd.h
+++ b/drivers/scsi/ufs/ufshcd.h
@@ -167,6 +167,9 @@ struct ufs_pm_lvl_states {
  * @intr_cmd: Interrupt command (doesn't participate in interrupt aggregation)
  * @issue_time_stamp: time stamp for debug purposes
  * @compl_time_stamp: time stamp for statistics
+ * @crypto_enable: whether or not the request needs inline crypto operations
+ * @crypto_key_slot: the key slot to use for inline crypto
+ * @data_unit_num: the data unit number for the first block for inline crypto
  * @req_abort_skip: skip request abort task flag
  */
 struct ufshcd_lrb {
@@ -191,6 +194,9 @@ struct ufshcd_lrb {
 	bool intr_cmd;
 	ktime_t issue_time_stamp;
 	ktime_t compl_time_stamp;
+	bool crypto_enable;
+	u8 crypto_key_slot;
+	u64 data_unit_num;
 
 	bool req_abort_skip;
 };
@@ -501,6 +507,10 @@ struct ufs_stats {
  * @is_urgent_bkops_lvl_checked: keeps track if the urgent bkops level for
  *  device is known or not.
  * @scsi_block_reqs_cnt: reference counting for scsi block requests
+ * @crypto_capabilities: Content of crypto capabilities register (0x100)
+ * @crypto_cap_array: Array of crypto capabilities
+ * @crypto_cfg_register: Start of the crypto cfg array
+ * @crypto_cfgs: Array of crypto configurations (i.e. config for each slot)
  */
 struct ufs_hba {
 	void __iomem *mmio_base;
@@ -692,6 +702,11 @@ struct ufs_hba {
 	 * the performance of ongoing read/write operations.
 	 */
 #define UFSHCD_CAP_KEEP_AUTO_BKOPS_ENABLED_EXCEPT_SUSPEND (1 << 5)
+	/*
+	 * This capability allows the host controller driver to use the
+	 * inline crypto engine, if it is present
+	 */
+#define UFSHCD_CAP_CRYPTO (1 << 6)
 
 	struct devfreq *devfreq;
 	struct ufs_clk_scaling clk_scaling;
@@ -706,6 +721,14 @@ struct ufs_hba {
 
 	struct device		bsg_dev;
 	struct request_queue	*bsg_queue;
+
+#ifdef CONFIG_SCSI_UFS_CRYPTO
+	/* crypto */
+	union ufs_crypto_capabilities crypto_capabilities;
+	union ufs_crypto_cap_entry *crypto_cap_array;
+	u32 crypto_cfg_register;
+	union ufs_crypto_cfg_entry *crypto_cfgs;
+#endif /* CONFIG_SCSI_UFS_CRYPTO */
 };
 
 /* Returns true if clocks can be gated. Otherwise false */
diff --git a/drivers/scsi/ufs/ufshci.h b/drivers/scsi/ufs/ufshci.h
index 6fa889de5ee5..a757eaf99a19 100644
--- a/drivers/scsi/ufs/ufshci.h
+++ b/drivers/scsi/ufs/ufshci.h
@@ -90,6 +90,7 @@ enum {
 	MASK_64_ADDRESSING_SUPPORT		= 0x01000000,
 	MASK_OUT_OF_ORDER_DATA_DELIVERY_SUPPORT	= 0x02000000,
 	MASK_UIC_DME_TEST_MODE_SUPPORT		= 0x04000000,
+	MASK_CRYPTO_SUPPORT			= 0x10000000,
 };
 
 #define UFS_MASK(mask, offset)		((mask) << (offset))
@@ -143,6 +144,7 @@ enum {
 #define DEVICE_FATAL_ERROR			0x800
 #define CONTROLLER_FATAL_ERROR			0x10000
 #define SYSTEM_BUS_FATAL_ERROR			0x20000
+#define CRYPTO_ENGINE_FATAL_ERROR		0x40000
 
 #define UFSHCD_UIC_PWR_MASK	(UIC_HIBERNATE_ENTER |\
 				UIC_HIBERNATE_EXIT |\
@@ -153,11 +155,13 @@ enum {
 #define UFSHCD_ERROR_MASK	(UIC_ERROR |\
 				DEVICE_FATAL_ERROR |\
 				CONTROLLER_FATAL_ERROR |\
-				SYSTEM_BUS_FATAL_ERROR)
+				SYSTEM_BUS_FATAL_ERROR |\
+				CRYPTO_ENGINE_FATAL_ERROR)
 
 #define INT_FATAL_ERRORS	(DEVICE_FATAL_ERROR |\
 				CONTROLLER_FATAL_ERROR |\
-				SYSTEM_BUS_FATAL_ERROR)
+				SYSTEM_BUS_FATAL_ERROR |\
+				CRYPTO_ENGINE_FATAL_ERROR)
 
 /* HCS - Host Controller Status 30h */
 #define DEVICE_PRESENT				0x1
@@ -316,6 +320,61 @@ enum {
 	INTERRUPT_MASK_ALL_VER_21	= 0x71FFF,
 };
 
+/* CCAP - Crypto Capability 100h */
+union ufs_crypto_capabilities {
+	__le32 reg_val;
+	struct {
+		u8 num_crypto_cap;
+		u8 config_count;
+		u8 reserved;
+		u8 config_array_ptr;
+	};
+};
+
+enum ufs_crypto_key_size {
+	UFS_CRYPTO_KEY_SIZE_INVALID	= 0x0,
+	UFS_CRYPTO_KEY_SIZE_128		= 0x1,
+	UFS_CRYPTO_KEY_SIZE_192		= 0x2,
+	UFS_CRYPTO_KEY_SIZE_256		= 0x3,
+	UFS_CRYPTO_KEY_SIZE_512		= 0x4,
+};
+
+enum ufs_crypto_alg {
+	UFS_CRYPTO_ALG_AES_XTS			= 0x0,
+	UFS_CRYPTO_ALG_BITLOCKER_AES_CBC	= 0x1,
+	UFS_CRYPTO_ALG_AES_ECB			= 0x2,
+	UFS_CRYPTO_ALG_ESSIV_AES_CBC		= 0x3,
+};
+
+/* x-CRYPTOCAP - Crypto Capability X */
+union ufs_crypto_cap_entry {
+	__le32 reg_val;
+	struct {
+		u8 algorithm_id;
+		u8 sdus_mask; /* Supported data unit size mask */
+		u8 key_size;
+		u8 reserved;
+	};
+};
+
+#define UFS_CRYPTO_CONFIGURATION_ENABLE (1 << 7)
+#define UFS_CRYPTO_KEY_MAX_SIZE 64
+/* x-CRYPTOCFG - Crypto Configuration X */
+union ufs_crypto_cfg_entry {
+	__le32 reg_val[32];
+	struct {
+		u8 crypto_key[UFS_CRYPTO_KEY_MAX_SIZE];
+		u8 data_unit_size;
+		u8 crypto_cap_idx;
+		u8 reserved_1;
+		u8 config_enable;
+		u8 reserved_multi_host;
+		u8 reserved_2;
+		u8 vsb[2];
+		u8 reserved_3[56];
+	};
+};
+
 /*
  * Request Descriptor Definitions
  */
@@ -337,6 +396,7 @@ enum {
 	UTP_NATIVE_UFS_COMMAND		= 0x10000000,
 	UTP_DEVICE_MANAGEMENT_FUNCTION	= 0x20000000,
 	UTP_REQ_DESC_INT_CMD		= 0x01000000,
+	UTP_REQ_DESC_CRYPTO_ENABLE_CMD	= 0x00800000,
 };
 
 /* UTP Transfer Request Data Direction (DD) */
@@ -356,6 +416,9 @@ enum {
 	OCS_PEER_COMM_FAILURE		= 0x5,
 	OCS_ABORTED			= 0x6,
 	OCS_FATAL_ERROR			= 0x7,
+	OCS_DEVICE_FATAL_ERROR		= 0x8,
+	OCS_INVALID_CRYPTO_CONFIG	= 0x9,
+	OCS_GENERAL_CRYPTO_ERROR	= 0xA,
 	OCS_INVALID_COMMAND_STATUS	= 0x0F,
 	MASK_OCS			= 0x0F,
 };
-- 
2.21.0.1020.gf2820cf01a-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH 3/4] fscrypt: wire up fscrypt to use blk-crypto
  2019-05-06 22:35 [RFC PATCH 0/4] Inline Encryption Support Satya Tangirala
  2019-05-06 22:35 ` [RFC PATCH 1/4] block: Block Layer changes for " Satya Tangirala
  2019-05-06 22:35 ` [RFC PATCH 2/4] scsi: ufs: UFS driver v2.1 crypto support Satya Tangirala
@ 2019-05-06 22:35 ` Satya Tangirala
  2019-05-07  1:23   ` Bart Van Assche
  2019-05-06 22:35 ` [RFC PATCH 4/4] f2fs: Wire up f2fs to use inline encryption via fscrypt Satya Tangirala
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 16+ messages in thread
From: Satya Tangirala @ 2019-05-06 22:35 UTC (permalink / raw)
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang, Satya Tangirala

Introduce fscrypt_set_bio_crypt_ctx for filesystems to call to set up
encryption contexts in bios, and fscrypt_evict_crypt_key to evict
the encryption context associated with an inode.
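
Roughly, a filesystem that submits I/O for an inline-encrypted inode is
expected to attach the encryption context to each bio before submission, as
in the sketch below. The helper is hypothetical (the actual f2fs wiring is
in patch 4); the data_unit_num follows what fscrypt_zeroout_range does in
this patch, i.e. the starting physical block of the bio.

  static void fs_submit_encrypted_bio(struct inode *inode, struct bio *bio,
                                      sector_t pblk)
  {
          /*
           * Attach the inode's key, mode and data unit number to the bio;
           * this is a no-op for inodes that aren't inline encrypted.
           */
          fscrypt_set_bio_crypt_ctx(inode, bio, pblk);
          submit_bio(bio);
  }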

Inline encryption is controlled by a policy flag in the fscrypt_info
in the inode, and filesystems may check if an inode should use inline
encryption by calling fscrypt_inode_is_hw_encrypted. Files can be marked
as inline encrypted from userspace by OR-ing FS_POLICY_FLAGS_HW_ENCRYPTION
into the flags field of the fscrypt_policy passed to
fscrypt_ioctl_set_policy, for example as sketched below.
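
A rough userspace sketch (assuming the new FS_POLICY_FLAGS_HW_ENCRYPTION
flag is visible in the installed uapi headers, and that the master key has
already been added to the keyring in the usual fscrypt way; the key
descriptor and padding flag below are just examples):

  #include <fcntl.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/fs.h>

  static int set_hw_encryption_policy(int dirfd, const __u8 *key_desc)
  {
          struct fscrypt_policy policy;

          memset(&policy, 0, sizeof(policy));
          policy.version = 0;
          policy.contents_encryption_mode = FS_ENCRYPTION_MODE_AES_256_XTS;
          policy.filenames_encryption_mode = FS_ENCRYPTION_MODE_AES_256_CTS;
          policy.flags = FS_POLICY_FLAGS_PAD_16 | FS_POLICY_FLAGS_HW_ENCRYPTION;
          memcpy(policy.master_key_descriptor, key_desc,
                 FS_KEY_DESCRIPTOR_SIZE);

          return ioctl(dirfd, FS_IOC_SET_ENCRYPTION_POLICY, &policy);
  }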

To test inline encryption with the fscrypt dummy context, add
ctx.flags |= FS_POLICY_FLAGS_HW_ENCRYPTION
when setting up the dummy context in fs/crypto/keyinfo.c.

Note that blk-crypto will fall back to software en/decryption in the
absence of inline crypto hardware, so setting this flag in the dummy
context on a system without inline crypto hardware serves as a test of
the software fallback in blk-crypto.

Signed-off-by: Satya Tangirala <satyat@google.com>
---
 block/blk-crypto.c          |   1 -
 fs/crypto/Kconfig           |   7 ++
 fs/crypto/bio.c             | 156 +++++++++++++++++++++++++++++++-----
 fs/crypto/crypto.c          |   9 +++
 fs/crypto/fscrypt_private.h |  10 +++
 fs/crypto/keyinfo.c         |  69 +++++++++++-----
 fs/crypto/policy.c          |  10 +++
 include/linux/fscrypt.h     |  58 ++++++++++++++
 include/uapi/linux/fs.h     |  12 ++-
 9 files changed, 287 insertions(+), 45 deletions(-)

diff --git a/block/blk-crypto.c b/block/blk-crypto.c
index 503f9e3a770b..eb3a9736939f 100644
--- a/block/blk-crypto.c
+++ b/block/blk-crypto.c
@@ -77,7 +77,6 @@ static int blk_crypto_keyslot_program(void *priv, const u8 *key,
 		slot_mem[slot].tfm = tfm;
 	}
 
-
 	err = crypto_skcipher_setkey(tfm, key, keysize);
 
 	if (err) {
diff --git a/fs/crypto/Kconfig b/fs/crypto/Kconfig
index f0de238000c0..54412f0c48be 100644
--- a/fs/crypto/Kconfig
+++ b/fs/crypto/Kconfig
@@ -14,3 +14,10 @@ config FS_ENCRYPTION
 	  efficient since it avoids caching the encrypted and
 	  decrypted pages in the page cache.  Currently Ext4,
 	  F2FS and UBIFS make use of this feature.
+
+config FS_ENCRYPTION_HW_CRYPT
+	tristate "Enable fscrypt to use inline crypto"
+	default n
+	depends on FS_ENCRYPTION && BLK_CRYPT_CTX
+	help
+	  Enables fscrypt to use inline crypto hardware if available.
diff --git a/fs/crypto/bio.c b/fs/crypto/bio.c
index 5759bcd018cd..8e8706694246 100644
--- a/fs/crypto/bio.c
+++ b/fs/crypto/bio.c
@@ -24,9 +24,12 @@
 #include <linux/module.h>
 #include <linux/bio.h>
 #include <linux/namei.h>
+#include <linux/keyslot-manager.h>
+#include <linux/blkdev.h>
+#include <crypto/algapi.h>
 #include "fscrypt_private.h"
 
-static void __fscrypt_decrypt_bio(struct bio *bio, bool done)
+static void __fscrypt_decrypt_bio(struct bio *bio, bool done, bool decrypt)
 {
 	struct bio_vec *bv;
 	int i;
@@ -34,9 +37,12 @@ static void __fscrypt_decrypt_bio(struct bio *bio, bool done)
 
 	bio_for_each_segment_all(bv, bio, i, iter_all) {
 		struct page *page = bv->bv_page;
-		int ret = fscrypt_decrypt_page(page->mapping->host, page,
-				PAGE_SIZE, 0, page->index);
+		int ret = 0;
 
+		if (decrypt) {
+			ret = fscrypt_decrypt_page(page->mapping->host, page,
+						   PAGE_SIZE, 0, page->index);
+		}
 		if (ret) {
 			WARN_ON_ONCE(1);
 			SetPageError(page);
@@ -50,7 +56,7 @@ static void __fscrypt_decrypt_bio(struct bio *bio, bool done)
 
 void fscrypt_decrypt_bio(struct bio *bio)
 {
-	__fscrypt_decrypt_bio(bio, false);
+	__fscrypt_decrypt_bio(bio, false, true);
 }
 EXPORT_SYMBOL(fscrypt_decrypt_bio);
 
@@ -60,16 +66,27 @@ static void completion_pages(struct work_struct *work)
 		container_of(work, struct fscrypt_ctx, r.work);
 	struct bio *bio = ctx->r.bio;
 
-	__fscrypt_decrypt_bio(bio, true);
+	__fscrypt_decrypt_bio(bio, true, true);
+	fscrypt_release_ctx(ctx);
+	bio_put(bio);
+}
+
+static void decrypt_bio_hwcrypt(struct fscrypt_ctx *ctx, struct bio *bio)
+{
+	__fscrypt_decrypt_bio(bio, true, false);
 	fscrypt_release_ctx(ctx);
 	bio_put(bio);
 }
 
 void fscrypt_enqueue_decrypt_bio(struct fscrypt_ctx *ctx, struct bio *bio)
 {
-	INIT_WORK(&ctx->r.work, completion_pages);
-	ctx->r.bio = bio;
-	fscrypt_enqueue_decrypt_work(&ctx->r.work);
+	if (bio_is_encrypted(bio)) {
+		decrypt_bio_hwcrypt(ctx, bio);
+	} else {
+		INIT_WORK(&ctx->r.work, completion_pages);
+		ctx->r.bio = bio;
+		fscrypt_enqueue_decrypt_work(&ctx->r.work);
+	}
 }
 EXPORT_SYMBOL(fscrypt_enqueue_decrypt_bio);
 
@@ -97,29 +114,33 @@ EXPORT_SYMBOL(fscrypt_pullback_bio_page);
 int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
 				sector_t pblk, unsigned int len)
 {
-	struct fscrypt_ctx *ctx;
+	struct fscrypt_ctx *ctx = NULL;
 	struct page *ciphertext_page = NULL;
 	struct bio *bio;
 	int ret, err = 0;
 
 	BUG_ON(inode->i_sb->s_blocksize != PAGE_SIZE);
 
-	ctx = fscrypt_get_ctx(inode, GFP_NOFS);
-	if (IS_ERR(ctx))
-		return PTR_ERR(ctx);
+	if (!fscrypt_inode_is_hw_encrypted(inode)) {
+		ctx = fscrypt_get_ctx(inode, GFP_NOFS);
+		if (IS_ERR(ctx))
+			return PTR_ERR(ctx);
 
-	ciphertext_page = fscrypt_alloc_bounce_page(ctx, GFP_NOWAIT);
-	if (IS_ERR(ciphertext_page)) {
-		err = PTR_ERR(ciphertext_page);
-		goto errout;
+		ciphertext_page = fscrypt_alloc_bounce_page(ctx, GFP_NOWAIT);
+		if (IS_ERR(ciphertext_page)) {
+			err = PTR_ERR(ciphertext_page);
+			goto errout;
+		}
 	}
 
 	while (len--) {
-		err = fscrypt_do_page_crypto(inode, FS_ENCRYPT, lblk,
+		if (!fscrypt_inode_is_hw_encrypted(inode)) {
+			err = fscrypt_do_page_crypto(inode, FS_ENCRYPT, lblk,
 					     ZERO_PAGE(0), ciphertext_page,
 					     PAGE_SIZE, 0, GFP_NOFS);
-		if (err)
-			goto errout;
+			if (err)
+				goto errout;
+		}
 
 		bio = bio_alloc(GFP_NOWAIT, 1);
 		if (!bio) {
@@ -130,8 +151,14 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
 		bio->bi_iter.bi_sector =
 			pblk << (inode->i_sb->s_blocksize_bits - 9);
 		bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
-		ret = bio_add_page(bio, ciphertext_page,
-					inode->i_sb->s_blocksize, 0);
+		if (!fscrypt_inode_is_hw_encrypted(inode)) {
+			ret = bio_add_page(bio, ciphertext_page,
+						inode->i_sb->s_blocksize, 0);
+		} else {
+			ret = bio_add_page(bio, ZERO_PAGE(0),
+						inode->i_sb->s_blocksize, 0);
+		}
+
 		if (ret != inode->i_sb->s_blocksize) {
 			/* should never happen! */
 			WARN_ON(1);
@@ -139,6 +166,7 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
 			err = -EIO;
 			goto errout;
 		}
+		fscrypt_set_bio_crypt_ctx(inode, bio, pblk);
 		err = submit_bio_wait(bio);
 		if (err == 0 && bio->bi_status)
 			err = -EIO;
@@ -150,7 +178,91 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
 	}
 	err = 0;
 errout:
-	fscrypt_release_ctx(ctx);
+	if (!fscrypt_inode_is_hw_encrypted(inode))
+		fscrypt_release_ctx(ctx);
 	return err;
 }
 EXPORT_SYMBOL(fscrypt_zeroout_range);
+
+#if IS_ENABLED(CONFIG_FS_ENCRYPTION_HW_CRYPT)
+static enum blk_crypt_mode_index
+get_blk_crypto_alg_for_fscryptalg(u8 fscrypt_alg)
+{
+	switch (fscrypt_alg) {
+	case FS_ENCRYPTION_MODE_AES_256_XTS:
+		return BLK_ENCRYPTION_MODE_AES_256_XTS;
+	default: return -EINVAL;
+	}
+}
+
+int fscrypt_set_bio_crypt_ctx(const struct inode *inode,
+				 struct bio *bio, u64 data_unit_num)
+{
+	struct fscrypt_info *ci = inode->i_crypt_info;
+
+	/* If inode is not hw encrypted, nothing to do. */
+	if (!fscrypt_inode_is_hw_encrypted(inode))
+		return 0;
+
+	if (!fscrypt_valid_enc_modes(ci->ci_data_mode, ci->ci_filename_mode))
+		return -1;
+
+	bio_crypt_set_ctx(bio, ci->ci_master_key->mk_raw,
+			  get_blk_crypto_alg_for_fscryptalg(ci->ci_data_mode),
+			  data_unit_num,
+			  PAGE_SHIFT);
+	return 0;
+}
+EXPORT_SYMBOL(fscrypt_set_bio_crypt_ctx);
+
+int fscrypt_evict_crypt_key(struct inode *inode)
+{
+	struct request_queue *q;
+	struct fscrypt_info *ci;
+
+	if (!inode)
+		return 0;
+
+	q = inode->i_sb->s_bdev->bd_queue;
+	ci = inode->i_crypt_info;
+
+	if (!q || !q->ksm || !ci ||
+	    !fscrypt_inode_is_hw_encrypted(inode)) {
+		return 0;
+	}
+
+	return keyslot_manager_evict_key(q->ksm,
+					 ci->ci_master_key->mk_raw,
+					 get_blk_crypto_alg_for_fscryptalg(
+						ci->ci_data_mode),
+					 PAGE_SIZE);
+}
+EXPORT_SYMBOL(fscrypt_evict_crypt_key);
+
+bool fscrypt_inode_crypt_mergeable(const struct inode *inode_1,
+				   const struct inode *inode_2)
+{
+	struct fscrypt_info *ci_1, *ci_2;
+	bool enc_1 = fscrypt_inode_is_hw_encrypted(inode_1);
+	bool enc_2 = fscrypt_inode_is_hw_encrypted(inode_2);
+
+	if (enc_1 != enc_2)
+		return false;
+
+	if (!enc_1)
+		return true;
+
+	if (inode_1 == inode_2)
+		return true;
+
+	ci_1 = inode_1->i_crypt_info;
+	ci_2 = inode_2->i_crypt_info;
+
+	return ci_1->ci_data_mode == ci_2->ci_data_mode &&
+	       crypto_memneq(ci_1->ci_master_key->mk_raw,
+			     ci_2->ci_master_key->mk_raw,
+			     ci_1->ci_master_key->mk_mode->keysize) == 0;
+}
+EXPORT_SYMBOL(fscrypt_inode_crypt_mergeable);
+
+#endif /* CONFIG_FS_ENCRYPTION_HW_CRYPT */
diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c
index 4dc788e3bc96..164824d2ea3c 100644
--- a/fs/crypto/crypto.c
+++ b/fs/crypto/crypto.c
@@ -245,6 +245,11 @@ struct page *fscrypt_encrypt_page(const struct inode *inode,
 
 	BUG_ON(len % FS_CRYPTO_BLOCK_SIZE != 0);
 
+	/* If HW encryption is used, pretend we did in-place encryption */
+	if (fscrypt_inode_is_hw_encrypted(inode)) {
+		return ciphertext_page;
+	}
+
 	if (inode->i_sb->s_cop->flags & FS_CFLG_OWN_PAGES) {
 		/* with inplace-encryption we just encrypt the page */
 		err = fscrypt_do_page_crypto(inode, FS_ENCRYPT, lblk_num, page,
@@ -307,6 +312,10 @@ int fscrypt_decrypt_page(const struct inode *inode, struct page *page,
 	if (!(inode->i_sb->s_cop->flags & FS_CFLG_OWN_PAGES))
 		BUG_ON(!PageLocked(page));
 
+	/* If we have HW encryption, then this page is already decrypted */
+	if (fscrypt_inode_is_hw_encrypted(inode))
+		return 0;
+
 	return fscrypt_do_page_crypto(inode, FS_DECRYPT, lblk_num, page, page,
 				      len, offs, GFP_NOFS);
 }
diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h
index 7da276159593..d6d65c88a629 100644
--- a/fs/crypto/fscrypt_private.h
+++ b/fs/crypto/fscrypt_private.h
@@ -49,6 +49,16 @@ struct fscrypt_symlink_data {
 	char encrypted_path[1];
 } __packed;
 
+/* Master key referenced by FS_POLICY_FLAG_DIRECT_KEY policy */
+struct fscrypt_master_key {
+	struct hlist_node mk_node;
+	refcount_t mk_refcount;
+	const struct fscrypt_mode *mk_mode;
+	struct crypto_skcipher *mk_ctfm;
+	u8 mk_descriptor[FS_KEY_DESCRIPTOR_SIZE];
+	u8 mk_raw[FS_MAX_KEY_SIZE];
+};
+
 /*
  * fscrypt_info - the "encryption key" for an inode
  *
diff --git a/fs/crypto/keyinfo.c b/fs/crypto/keyinfo.c
index 322ce9686bdb..04d808d8ff30 100644
--- a/fs/crypto/keyinfo.c
+++ b/fs/crypto/keyinfo.c
@@ -25,6 +25,21 @@ static struct crypto_shash *essiv_hash_tfm;
 static DEFINE_HASHTABLE(fscrypt_master_keys, 6); /* 6 bits = 64 buckets */
 static DEFINE_SPINLOCK(fscrypt_master_keys_lock);
 
+#if IS_ENABLED(CONFIG_FS_ENCRYPTION_HW_CRYPT)
+static inline bool __flags_hw_encrypted(u8 flags,
+					const struct inode *inode)
+{
+	return inode && (flags & FS_POLICY_FLAGS_HW_ENCRYPTION) &&
+	       S_ISREG(inode->i_mode);
+}
+#else
+static inline bool __flags_hw_encrypted(u8 flags,
+					const struct inode *inode)
+{
+	return false;
+}
+#endif /* CONFIG_FS_ENCRYPTION_HW_CRYPT */
+
 /*
  * Key derivation function.  This generates the derived key by encrypting the
  * master key with AES-128-ECB using the inode's nonce as the AES key.
@@ -220,6 +235,9 @@ static int find_and_derive_key(const struct inode *inode,
 			memcpy(derived_key, payload->raw, mode->keysize);
 			err = 0;
 		}
+	} else if (__flags_hw_encrypted(ctx->flags, inode)) {
+		memcpy(derived_key, payload->raw, mode->keysize);
+		err = 0;
 	} else {
 		err = derive_key_aes(payload->raw, ctx, derived_key,
 				     mode->keysize);
@@ -269,16 +287,6 @@ allocate_skcipher_for_mode(struct fscrypt_mode *mode, const u8 *raw_key,
 	return ERR_PTR(err);
 }
 
-/* Master key referenced by FS_POLICY_FLAG_DIRECT_KEY policy */
-struct fscrypt_master_key {
-	struct hlist_node mk_node;
-	refcount_t mk_refcount;
-	const struct fscrypt_mode *mk_mode;
-	struct crypto_skcipher *mk_ctfm;
-	u8 mk_descriptor[FS_KEY_DESCRIPTOR_SIZE];
-	u8 mk_raw[FS_MAX_KEY_SIZE];
-};
-
 static void free_master_key(struct fscrypt_master_key *mk)
 {
 	if (mk) {
@@ -287,13 +295,15 @@ static void free_master_key(struct fscrypt_master_key *mk)
 	}
 }
 
-static void put_master_key(struct fscrypt_master_key *mk)
+static void put_master_key(struct fscrypt_master_key *mk,
+			   struct inode *inode)
 {
 	if (!refcount_dec_and_lock(&mk->mk_refcount, &fscrypt_master_keys_lock))
 		return;
 	hash_del(&mk->mk_node);
 	spin_unlock(&fscrypt_master_keys_lock);
 
+	fscrypt_evict_crypt_key(inode);
 	free_master_key(mk);
 }
 
@@ -360,11 +370,13 @@ fscrypt_get_master_key(const struct fscrypt_info *ci, struct fscrypt_mode *mode,
 		return ERR_PTR(-ENOMEM);
 	refcount_set(&mk->mk_refcount, 1);
 	mk->mk_mode = mode;
-	mk->mk_ctfm = allocate_skcipher_for_mode(mode, raw_key, inode);
-	if (IS_ERR(mk->mk_ctfm)) {
-		err = PTR_ERR(mk->mk_ctfm);
-		mk->mk_ctfm = NULL;
-		goto err_free_mk;
+	if (!__flags_hw_encrypted(ci->ci_flags, inode)) {
+		mk->mk_ctfm = allocate_skcipher_for_mode(mode, raw_key, inode);
+		if (IS_ERR(mk->mk_ctfm)) {
+			err = PTR_ERR(mk->mk_ctfm);
+			mk->mk_ctfm = NULL;
+			goto err_free_mk;
+		}
 	}
 	memcpy(mk->mk_descriptor, ci->ci_master_key_descriptor,
 	       FS_KEY_DESCRIPTOR_SIZE);
@@ -457,7 +469,8 @@ static int setup_crypto_transform(struct fscrypt_info *ci,
 	struct crypto_skcipher *ctfm;
 	int err;
 
-	if (ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY) {
+	if ((ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY) ||
+	    __flags_hw_encrypted(ci->ci_flags, inode)) {
 		mk = fscrypt_get_master_key(ci, mode, raw_key, inode);
 		if (IS_ERR(mk))
 			return PTR_ERR(mk);
@@ -487,13 +500,13 @@ static int setup_crypto_transform(struct fscrypt_info *ci,
 	return 0;
 }
 
-static void put_crypt_info(struct fscrypt_info *ci)
+static void put_crypt_info(struct fscrypt_info *ci, struct inode *inode)
 {
 	if (!ci)
 		return;
 
 	if (ci->ci_master_key) {
-		put_master_key(ci->ci_master_key);
+		put_master_key(ci->ci_master_key, inode);
 	} else {
 		crypto_free_skcipher(ci->ci_ctfm);
 		crypto_free_cipher(ci->ci_essiv_tfm);
@@ -578,7 +591,7 @@ int fscrypt_get_encryption_info(struct inode *inode)
 out:
 	if (res == -ENOKEY)
 		res = 0;
-	put_crypt_info(crypt_info);
+	put_crypt_info(crypt_info, NULL);
 	kzfree(raw_key);
 	return res;
 }
@@ -586,7 +599,21 @@ EXPORT_SYMBOL(fscrypt_get_encryption_info);
 
 void fscrypt_put_encryption_info(struct inode *inode)
 {
-	put_crypt_info(inode->i_crypt_info);
+	put_crypt_info(inode->i_crypt_info, inode);
 	inode->i_crypt_info = NULL;
 }
 EXPORT_SYMBOL(fscrypt_put_encryption_info);
+
+#if IS_ENABLED(CONFIG_FS_ENCRYPTION_HW_CRYPT)
+bool fscrypt_inode_is_hw_encrypted(const struct inode *inode)
+{
+	struct fscrypt_info *ci;
+
+	if (!inode)
+		return false;
+	ci = inode->i_crypt_info;
+
+	return ci && __flags_hw_encrypted(ci->ci_flags, inode);
+}
+EXPORT_SYMBOL(fscrypt_inode_is_hw_encrypted);
+#endif /* CONFIG_FS_ENCRYPTION_HW_CRYPT */
diff --git a/fs/crypto/policy.c b/fs/crypto/policy.c
index bd7eaf9b3f00..210116fb2173 100644
--- a/fs/crypto/policy.c
+++ b/fs/crypto/policy.c
@@ -36,6 +36,7 @@ static int create_encryption_context_from_policy(struct inode *inode,
 	struct fscrypt_context ctx;
 
 	ctx.format = FS_ENCRYPTION_CONTEXT_FORMAT_V1;
+
 	memcpy(ctx.master_key_descriptor, policy->master_key_descriptor,
 					FS_KEY_DESCRIPTOR_SIZE);
 
@@ -46,8 +47,17 @@ static int create_encryption_context_from_policy(struct inode *inode,
 	if (policy->flags & ~FS_POLICY_FLAGS_VALID)
 		return -EINVAL;
 
+	/**
+	 * TODO: expose HW encryption via some toggleable knob
+	 * instead of as a policy?
+	 */
+	if (!inode->i_sb->s_cop->hw_crypt_supp &&
+	    (policy->flags & FS_POLICY_FLAGS_HW_ENCRYPTION))
+		return -EINVAL;
+
 	ctx.contents_encryption_mode = policy->contents_encryption_mode;
 	ctx.filenames_encryption_mode = policy->filenames_encryption_mode;
+
 	ctx.flags = policy->flags;
 	BUILD_BUG_ON(sizeof(ctx.nonce) != FS_KEY_DERIVATION_NONCE_SIZE);
 	get_random_bytes(ctx.nonce, FS_KEY_DERIVATION_NONCE_SIZE);
diff --git a/include/linux/fscrypt.h b/include/linux/fscrypt.h
index e5194fc3983e..a357a13bec27 100644
--- a/include/linux/fscrypt.h
+++ b/include/linux/fscrypt.h
@@ -60,6 +60,9 @@ struct fscrypt_operations {
 	bool (*dummy_context)(struct inode *);
 	bool (*empty_dir)(struct inode *);
 	unsigned int max_namelen;
+#if IS_ENABLED(CONFIG_FS_ENCRYPTION_HW_CRYPT)
+	bool hw_crypt_supp;
+#endif
 };
 
 struct fscrypt_ctx {
@@ -115,6 +118,22 @@ extern int fscrypt_inherit_context(struct inode *, struct inode *,
 extern int fscrypt_get_encryption_info(struct inode *);
 extern void fscrypt_put_encryption_info(struct inode *);
 
+#if IS_ENABLED(CONFIG_FS_ENCRYPTION_HW_CRYPT)
+extern bool fscrypt_inode_is_hw_encrypted(const struct inode *inode);
+extern bool fscrypt_inode_crypt_mergeable(const struct inode *inode_1,
+					  const struct inode *inode_2);
+#else
+static inline bool fscrypt_inode_is_hw_encrypted(const struct inode *inode)
+{
+	return false;
+}
+static inline bool fscrypt_inode_crypt_mergeable(const struct inode *inode_1,
+						 const struct inode *inode_2)
+{
+	return true;
+}
+#endif /* CONFIG_FS_ENCRYPTION_HW_CRYPT */
+
 /* fname.c */
 extern int fscrypt_setup_filename(struct inode *, const struct qstr *,
 				int lookup, struct fscrypt_name *);
@@ -211,6 +230,22 @@ extern void fscrypt_enqueue_decrypt_bio(struct fscrypt_ctx *ctx,
 extern void fscrypt_pullback_bio_page(struct page **, bool);
 extern int fscrypt_zeroout_range(const struct inode *, pgoff_t, sector_t,
 				 unsigned int);
+#if IS_ENABLED(CONFIG_FS_ENCRYPTION_HW_CRYPT)
+extern int fscrypt_set_bio_crypt_ctx(const struct inode *inode,
+				     struct bio *bio, u64 data_unit_num);
+extern int fscrypt_evict_crypt_key(struct inode *inode);
+#else
+static inline int fscrypt_set_bio_crypt_ctx(const struct inode *inode,
+					    struct bio *bio, u64 data_unit_num)
+{
+	return 0;
+}
+
+static inline int fscrypt_evict_crypt_key(struct inode *inode)
+{
+	return 0;
+}
+#endif
 
 /* hooks.c */
 extern int fscrypt_file_open(struct inode *inode, struct file *filp);
@@ -322,6 +357,17 @@ static inline void fscrypt_put_encryption_info(struct inode *inode)
 	return;
 }
 
+static inline bool fscrypt_inode_is_hw_encrypted(const struct inode *inode)
+{
+	return false;
+}
+
+static inline bool fscrypt_inode_crypt_mergeable(const struct inode *inode_1,
+						 const struct inode *inode_2)
+{
+	return true;
+}
+
  /* fname.c */
 static inline int fscrypt_setup_filename(struct inode *dir,
 					 const struct qstr *iname,
@@ -392,6 +438,18 @@ static inline int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
 	return -EOPNOTSUPP;
 }
 
+static inline int fscrypt_set_bio_crypt_ctx(const struct inode *inode,
+					    struct bio *bio,
+					    u64 data_unit_num)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int fscrypt_evict_crypt_key(struct inode *inode)
+{
+	return 0;
+}
+
 /* hooks.c */
 
 static inline int fscrypt_file_open(struct inode *inode, struct file *filp)
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index 121e82ce296b..60d0963c389c 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -224,7 +224,17 @@ struct fsxattr {
 #define FS_POLICY_FLAGS_PAD_32		0x03
 #define FS_POLICY_FLAGS_PAD_MASK	0x03
 #define FS_POLICY_FLAG_DIRECT_KEY	0x04	/* use master key directly */
-#define FS_POLICY_FLAGS_VALID		0x07
+#define FS_POLICY_FLAGS_VALID_BASE	0x07
+
+#if IS_ENABLED(CONFIG_FS_ENCRYPTION_HW_CRYPT)
+#define FS_POLICY_FLAGS_HW_ENCRYPTION	0x08
+#else
+#define FS_POLICY_FLAGS_HW_ENCRYPTION	0x00
+#endif
+
+
+#define FS_POLICY_FLAGS_VALID (FS_POLICY_FLAGS_VALID_BASE | \
+			       FS_POLICY_FLAGS_HW_ENCRYPTION)
 
 /* Encryption algorithms */
 #define FS_ENCRYPTION_MODE_INVALID		0
-- 
2.21.0.1020.gf2820cf01a-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH 4/4] f2fs: Wire up f2fs to use inline encryption via fscrypt
  2019-05-06 22:35 [RFC PATCH 0/4] Inline Encryption Support Satya Tangirala
                   ` (2 preceding siblings ...)
  2019-05-06 22:35 ` [RFC PATCH 3/4] fscrypt: wire up fscrypt to use blk-crypto Satya Tangirala
@ 2019-05-06 22:35 ` Satya Tangirala
  2019-05-07  1:25   ` Bart Van Assche
  2019-05-08  3:02   ` Chao Yu
  2019-05-07  0:26 ` [RFC PATCH 0/4] Inline Encryption Support Bart Van Assche
  2019-05-07  9:35 ` Chao Yu
  5 siblings, 2 replies; 16+ messages in thread
From: Satya Tangirala @ 2019-05-06 22:35 UTC (permalink / raw)
  To: linux-block, linux-scsi, linux-fscrypt, linux-fsdevel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang, Satya Tangirala

Signed-off-by: Satya Tangirala <satyat@google.com>
---
 fs/f2fs/data.c  | 69 ++++++++++++++++++++++++++++++++++++++++++++++---
 fs/f2fs/super.c |  1 +
 2 files changed, 67 insertions(+), 3 deletions(-)

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 9727944139f2..7ac6768a52a5 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -279,9 +279,18 @@ static struct bio *__bio_alloc(struct f2fs_sb_info *sbi, block_t blk_addr,
 	return bio;
 }
 
+static inline u64 hw_crypt_dun(struct inode *inode, struct page *page)
+{
+	return (((u64)inode->i_ino) << 32) | (page->index & 0xFFFFFFFF);
+}
+
 static inline void __submit_bio(struct f2fs_sb_info *sbi,
 				struct bio *bio, enum page_type type)
 {
+	struct page *page;
+	struct inode *inode;
+	int err = 0;
+
 	if (!is_read_io(bio_op(bio))) {
 		unsigned int start;
 
@@ -323,7 +332,21 @@ static inline void __submit_bio(struct f2fs_sb_info *sbi,
 		trace_f2fs_submit_read_bio(sbi->sb, type, bio);
 	else
 		trace_f2fs_submit_write_bio(sbi->sb, type, bio);
-	submit_bio(bio);
+
+	if (bio_has_data(bio)) {
+		page = bio_page(bio);
+		if (page && page->mapping && page->mapping->host) {
+			inode = page->mapping->host;
+			err = fscrypt_set_bio_crypt_ctx(inode, bio,
+						hw_crypt_dun(inode, page));
+		}
+	}
+	if (err) {
+		bio->bi_status = BLK_STS_IOERR;
+		bio_endio(bio);
+	} else {
+		submit_bio(bio);
+	}
 }
 
 static void __submit_merged_bio(struct f2fs_bio_info *io)
@@ -484,6 +507,9 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
 	enum page_type btype = PAGE_TYPE_OF_BIO(fio->type);
 	struct f2fs_bio_info *io = sbi->write_io[btype] + fio->temp;
 	struct page *bio_page;
+	struct inode *fio_inode, *bio_inode;
+	struct page *first_page;
+	u64 next_dun = 0;
 
 	f2fs_bug_on(sbi, is_read_io(fio->op));
 
@@ -512,10 +538,29 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
 
 	inc_page_count(sbi, WB_DATA_TYPE(bio_page));
 
+	fio_inode = fio->page->mapping->host;
+	bio_inode = NULL;
+	first_page = NULL;
+	next_dun = 0;
+	if (io->bio) {
+		first_page = bio_page(io->bio);
+		if (first_page->mapping) {
+			bio_inode = first_page->mapping->host;
+			if (fscrypt_inode_is_hw_encrypted(bio_inode)) {
+				next_dun =
+					hw_crypt_dun(bio_inode, first_page) +
+				    (io->bio->bi_iter.bi_size >> PAGE_SHIFT);
+			}
+		}
+	}
 	if (io->bio && (io->last_block_in_bio != fio->new_blkaddr - 1 ||
 	    (io->fio.op != fio->op || io->fio.op_flags != fio->op_flags) ||
-			!__same_bdev(sbi, fio->new_blkaddr, io->bio)))
+			!__same_bdev(sbi, fio->new_blkaddr, io->bio) ||
+			!fscrypt_inode_crypt_mergeable(bio_inode, fio_inode) ||
+			(fscrypt_inode_is_hw_encrypted(bio_inode) &&
+			 next_dun != hw_crypt_dun(fio_inode, fio->page))))
 		__submit_merged_bio(io);
+
 alloc_new:
 	if (io->bio == NULL) {
 		if ((fio->type == DATA || fio->type == NODE) &&
@@ -570,7 +615,7 @@ static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr,
 	bio->bi_end_io = f2fs_read_end_io;
 	bio_set_op_attrs(bio, REQ_OP_READ, op_flag);
 
-	if (f2fs_encrypted_file(inode))
+	if (f2fs_encrypted_file(inode) && !fscrypt_inode_is_hw_encrypted(inode))
 		post_read_steps |= 1 << STEP_DECRYPT;
 	if (post_read_steps) {
 		ctx = mempool_alloc(bio_post_read_ctx_pool, GFP_NOFS);
@@ -1525,6 +1570,7 @@ static int f2fs_mpage_readpages(struct address_space *mapping,
 	sector_t last_block_in_file;
 	sector_t block_nr;
 	struct f2fs_map_blocks map;
+	u64 next_dun = 0;
 
 	map.m_pblk = 0;
 	map.m_lblk = 0;
@@ -1606,6 +1652,13 @@ static int f2fs_mpage_readpages(struct address_space *mapping,
 			__submit_bio(F2FS_I_SB(inode), bio, DATA);
 			bio = NULL;
 		}
+
+		if (bio && fscrypt_inode_is_hw_encrypted(inode) &&
+		    next_dun != hw_crypt_dun(inode, page)) {
+			__submit_bio(F2FS_I_SB(inode), bio, DATA);
+			bio = NULL;
+		}
+
 		if (bio == NULL) {
 			bio = f2fs_grab_read_bio(inode, block_nr, nr_pages,
 					is_readahead ? REQ_RAHEAD : 0);
@@ -1624,6 +1677,9 @@ static int f2fs_mpage_readpages(struct address_space *mapping,
 		if (bio_add_page(bio, page, blocksize, 0) < blocksize)
 			goto submit_and_realloc;
 
+		if (fscrypt_inode_is_hw_encrypted(inode))
+			next_dun = hw_crypt_dun(inode, page) + 1;
+
 		inc_page_count(F2FS_I_SB(inode), F2FS_RD_DATA);
 		ClearPageError(page);
 		last_block_in_bio = block_nr;
@@ -2591,12 +2647,19 @@ static void f2fs_dio_submit_bio(struct bio *bio, struct inode *inode,
 {
 	struct f2fs_private_dio *dio;
 	bool write = (bio_op(bio) == REQ_OP_WRITE);
+	u64 data_unit_num = (((u64)inode->i_ino) << 32) |
+			    ((file_offset >> PAGE_SHIFT) & 0xFFFFFFFF);
 
 	dio = f2fs_kzalloc(F2FS_I_SB(inode),
 			sizeof(struct f2fs_private_dio), GFP_NOFS);
 	if (!dio)
 		goto out;
 
+	if (fscrypt_set_bio_crypt_ctx(inode, bio, data_unit_num) != 0) {
+		kvfree(dio);
+		goto out;
+	}
+
 	dio->inode = inode;
 	dio->orig_end_io = bio->bi_end_io;
 	dio->orig_private = bio->bi_private;
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index f2aaa2cc6b3e..e98c85d42e8d 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -2225,6 +2225,7 @@ static const struct fscrypt_operations f2fs_cryptops = {
 	.dummy_context	= f2fs_dummy_context,
 	.empty_dir	= f2fs_empty_dir,
 	.max_namelen	= F2FS_NAME_LEN,
+	.hw_crypt_supp	= true,
 };
 #endif
 
-- 
2.21.0.1020.gf2820cf01a-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 2/4] scsi: ufs: UFS driver v2.1 crypto support
  2019-05-06 22:35 ` [RFC PATCH 2/4] scsi: ufs: UFS driver v2.1 crypto support Satya Tangirala
@ 2019-05-06 23:51   ` Randy Dunlap
  2019-05-07  0:39   ` Bart Van Assche
  2019-05-07  9:23   ` Avri Altman
  2 siblings, 0 replies; 16+ messages in thread
From: Randy Dunlap @ 2019-05-06 23:51 UTC (permalink / raw)
  To: Satya Tangirala, linux-block, linux-scsi, linux-fscrypt, linux-fsdevel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang

On 5/6/19 3:35 PM, Satya Tangirala wrote:
> diff --git a/drivers/scsi/ufs/Kconfig b/drivers/scsi/ufs/Kconfig
> index 6db37cf306b0..c14f445a2522 100644
> --- a/drivers/scsi/ufs/Kconfig
> +++ b/drivers/scsi/ufs/Kconfig
> @@ -135,3 +135,13 @@ config SCSI_UFS_BSG
>  
>  	  Select this if you need a bsg device node for your UFS controller.
>  	  If unsure, say N.
> +
> +config SCSI_UFS_CRYPTO
> +	bool "UFS Crypto Engine Support"
> +	depends on SCSI_UFSHCD && BLK_KEYSLOT_MANAGER
> +	help
> +	Enable Crypto Engine Support in UFS.
> +	Enabling this makes it possible for the kernel to use the crypto
> +	capabilities of the UFS device (if present) to perform crypto
> +	operations on data being transferred into/out of the device.

	                        (maybe:)     to/from the device.
> +

Help text should be indented with 1 tab + 2 spaces, please.

-- 
~Randy

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 1/4] block: Block Layer changes for Inline Encryption Support
  2019-05-06 22:35 ` [RFC PATCH 1/4] block: Block Layer changes for " Satya Tangirala
@ 2019-05-06 23:54   ` Randy Dunlap
  2019-05-07  0:37   ` Bart Van Assche
  2019-05-08  2:12   ` Randy Dunlap
  2 siblings, 0 replies; 16+ messages in thread
From: Randy Dunlap @ 2019-05-06 23:54 UTC (permalink / raw)
  To: Satya Tangirala, linux-block, linux-scsi, linux-fscrypt, linux-fsdevel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang

On 5/6/19 3:35 PM, Satya Tangirala wrote:
> diff --git a/block/Kconfig b/block/Kconfig
> index 028bc085dac8..65213769d2a2 100644
> --- a/block/Kconfig
> +++ b/block/Kconfig
> @@ -187,6 +187,22 @@ config BLK_SED_OPAL
>  	Enabling this option enables users to setup/unlock/lock
>  	Locking ranges for SED devices using the Opal protocol.
>  
> +config BLK_CRYPT_CTX
> +	bool
> +
> +config BLK_KEYSLOT_MANAGER
> +	bool
> +
> +config BLK_CRYPTO
> +	bool "Enable encryption in block layer"
> +	select BLK_CRYPT_CTX
> +	select BLK_KEYSLOT_MANAGER
> +	help
> +	Build the blk-crypto subsystem.
> +	Enabling this lets the block layer handle encryption,
> +	so users can take advantage of inline encryption
> +	hardware if present.

Last 4 lines should be indented with 1 tab + 2 spaces, please.


-- 
~Randy

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 0/4] Inline Encryption Support
  2019-05-06 22:35 [RFC PATCH 0/4] Inline Encryption Support Satya Tangirala
                   ` (3 preceding siblings ...)
  2019-05-06 22:35 ` [RFC PATCH 4/4] f2fs: Wire up f2fs to use inline encryption via fscrypt Satya Tangirala
@ 2019-05-07  0:26 ` Bart Van Assche
  2019-05-07  9:35 ` Chao Yu
  5 siblings, 0 replies; 16+ messages in thread
From: Bart Van Assche @ 2019-05-07  0:26 UTC (permalink / raw)
  To: Satya Tangirala, linux-block, linux-scsi, linux-fscrypt, linux-fsdevel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang

On 5/6/19 3:35 PM, Satya Tangirala wrote:
> This patch series adds support for Inline Encryption to the block layer,
> fscrypt and f2fs.

The worst time for posting a patch series is during the merge window.

Please address the checkpatch warnings triggered by this patch series.

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 1/4] block: Block Layer changes for Inline Encryption Support
  2019-05-06 22:35 ` [RFC PATCH 1/4] block: Block Layer changes for " Satya Tangirala
  2019-05-06 23:54   ` Randy Dunlap
@ 2019-05-07  0:37   ` Bart Van Assche
  2019-05-08  2:12   ` Randy Dunlap
  2 siblings, 0 replies; 16+ messages in thread
From: Bart Van Assche @ 2019-05-07  0:37 UTC (permalink / raw)
  To: Satya Tangirala, linux-block, linux-scsi, linux-fscrypt, linux-fsdevel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang

On 5/6/19 3:35 PM, Satya Tangirala wrote:
> +#ifdef CONFIG_BLK_CRYPT_CTX
> +static inline void bio_crypt_advance(struct bio *bio, unsigned int bytes)
> +{
> +	if (bio_is_encrypted(bio)) {
> +		bio->bi_crypt_context.data_unit_num +=
> +			bytes >> bio->bi_crypt_context.data_unit_size_bits;
> +	}
> +}
> +
> +void bio_clone_crypt_context(struct bio *dst, struct bio *src)
> +{
> +	if (bio_crypt_swhandled(src))
> +		return;
> +	dst->bi_crypt_context = src->bi_crypt_context;
> +
> +	if (!bio_crypt_has_keyslot(src))
> +		return;
> +
> +	/**

Please use "/*" to start comment blocks other than kernel-doc headers.

> +	 * This should always succeed because the src bio should already
> +	 * have a reference to the keyslot.
> +	 */
> +	BUG_ON(!keyslot_manager_get_slot(src->bi_crypt_context.processing_ksm,
> +					  src->bi_crypt_context.keyslot));

Are you aware that using BUG_ON() if there is a reasonable way to
recover is not acceptable?
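
For example, something along these lines (completely untested sketch; it
assumes bio_clone_crypt_context() may grow an error return and that callers
are taught to check it):

int bio_clone_crypt_context(struct bio *dst, struct bio *src)
{
	if (bio_crypt_swhandled(src))
		return 0;

	dst->bi_crypt_context = src->bi_crypt_context;

	if (!bio_crypt_has_keyslot(src))
		return 0;

	/* The src bio should already hold a reference to the keyslot. */
	if (!keyslot_manager_get_slot(src->bi_crypt_context.processing_ksm,
				      src->bi_crypt_context.keyslot)) {
		WARN_ONCE(1, "cloning a bio that lost its keyslot reference\n");
		return -EIO;
	}

	return 0;
}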

> +}
> +
> +bool bio_crypt_should_process(struct bio *bio, struct request_queue *q)
> +{
> +	if (!bio_is_encrypted(bio))
> +		return false;
> +
> +	WARN_ON(!bio_crypt_has_keyslot(bio));
> +	return q->ksm == bio->bi_crypt_context.processing_ksm;
> +}
> +EXPORT_SYMBOL(bio_crypt_should_process);
> +
> +#endif /* CONFIG_BLK_CRYPT_CTX */

Please move these new functions into a separate source file instead of
using #ifdef / #endif. I think the coding style documentation mentions
this explicitly.

> +static struct blk_crypto_keyslot {
> +	struct crypto_skcipher *tfm;
> +	int crypto_alg_id;
> +	union {
> +		u8 key[BLK_CRYPTO_MAX_KEY_SIZE];
> +		u32 key_words[BLK_CRYPTO_MAX_KEY_SIZE/4];
> +	};
> +} *slot_mem;

What is the purpose of the key_words[] member? Is it used anywhere? If
not, can it be left out?

> diff --git a/block/blk-merge.c b/block/blk-merge.c
> index 1c9d4f0f96ea..55133c547bdf 100644
> --- a/block/blk-merge.c
> +++ b/block/blk-merge.c
> @@ -614,6 +614,59 @@ int blk_rq_map_sg(struct request_queue *q, struct request *rq,
>  }
>  EXPORT_SYMBOL(blk_rq_map_sg);
>  
> +#ifdef CONFIG_BLK_CRYPT_CTX
> +/*
> + * Checks that two bio crypt contexts are compatible - i.e. that
> + * they are mergeable except for data_unit_num continuity.
> + */
> +static inline bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2)
> +{
> +	struct bio_crypt_ctx *bc1 = &b_1->bi_crypt_context;
> +	struct bio_crypt_ctx *bc2 = &b_2->bi_crypt_context;
> +
> +	if (bio_is_encrypted(b_1) != bio_is_encrypted(b_2) ||
> +	    bc1->keyslot != bc2->keyslot)
> +		return false;
> +
> +	return !bio_is_encrypted(b_1) ||
> +		bc1->data_unit_size_bits == bc2->data_unit_size_bits;
> +}
> +
> +/*
> + * Checks that two bio crypt contexts are compatible, and also
> + * that their data_unit_nums are continuous (and can hence be merged)
> + */
> +static inline bool bio_crypt_ctx_back_mergeable(struct bio *b_1,
> +						unsigned int b1_sectors,
> +						struct bio *b_2)
> +{
> +	struct bio_crypt_ctx *bc1 = &b_1->bi_crypt_context;
> +	struct bio_crypt_ctx *bc2 = &b_2->bi_crypt_context;
> +
> +	if (!bio_crypt_ctx_compatible(b_1, b_2))
> +		return false;
> +
> +	return !bio_is_encrypted(b_1) ||
> +		(bc1->data_unit_num +
> +		(b1_sectors >> (bc1->data_unit_size_bits - 9)) ==
> +		bc2->data_unit_num);
> +}
> +
> +#else /* CONFIG_BLK_CRYPT_CTX */
> +static inline bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2)
> +{
> +	return true;
> +}
> +
> +static inline bool bio_crypt_ctx_back_mergeable(struct bio *b_1,
> +						unsigned int b1_sectors,
> +						struct bio *b_2)
> +{
> +	return true;
> +}
> +
> +#endif /* CONFIG_BLK_CRYPT_CTX */

Can the above functions be moved into a new file such that the
#ifdef/#endif construct can be avoided?
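
One way to do that (rough sketch, file and header names made up) would be to
define these once in e.g. block/bio-crypt-ctx.c and keep only static inline
stubs for the !CONFIG_BLK_CRYPT_CTX case in a header:

#ifdef CONFIG_BLK_CRYPT_CTX
bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2);
bool bio_crypt_ctx_back_mergeable(struct bio *b_1, unsigned int b1_sectors,
				  struct bio *b_2);
#else
static inline bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2)
{
	return true;
}

static inline bool bio_crypt_ctx_back_mergeable(struct bio *b_1,
						unsigned int b1_sectors,
						struct bio *b_2)
{
	return true;
}
#endif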

> +	/* Wait till there is a free slot available */
> +	while (atomic_read(&ksm->num_idle_slots) == 0) {
> +		mutex_unlock(&ksm->lock);
> +		wait_event(ksm->wait_queue,
> +			   (atomic_read(&ksm->num_idle_slots) > 0));
> +		mutex_lock(&ksm->lock);
> +	}

Using an atomic_read() inside code protected by a mutex is suspicious.
Would protecting all ksm->num_idle_slots manipulations with ksm->lock
and making ksm->num_idle_slots a regular integer have a negative
performance impact?
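
For reference, a rough (untested) sketch of that alternative, with
num_idle_slots as a plain int that is only modified with ksm->lock held and
with the slot-release path doing the increment and wake_up() under the lock:

	mutex_lock(&ksm->lock);
	while (ksm->num_idle_slots == 0) {
		mutex_unlock(&ksm->lock);
		/* Lockless peek only; the loop re-checks under the lock. */
		wait_event(ksm->wait_queue,
			   READ_ONCE(ksm->num_idle_slots) > 0);
		mutex_lock(&ksm->lock);
	}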

> +struct keyslot_mgmt_ll_ops {
> +	int (*keyslot_program)(void *ll_priv_data, const u8 *key,
> +			       unsigned int data_unit_size,
> +			       /* crypto_alg_id returned by crypto_alg_find */
> +			       unsigned int crypto_alg_id,
> +			       unsigned int slot);
> +	/**
> +	 * Evict key from all keyslots in the keyslot manager.
> +	 * The key, data_unit_size and crypto_alg_id are also passed down
> +	 * so that for e.g. dm layers that have their own keyslot
> +	 * managers can evict keys from the devices that they map over.
> +	 * Returns 0 on success, -errno otherwise.
> +	 */
> +	int (*keyslot_evict)(void *ll_priv_data, unsigned int slot,
> +			     const u8 *key, unsigned int data_unit_size,
> +			     unsigned int crypto_alg_id);
> +	/**
> +	 * Get a crypto_alg_id (used internally by the lower layer driver) that
> +	 * represents the given blk-crypto crypt_mode and data_unit_size. The
> +	 * returned crypto_alg_id will be used in future calls to the lower
> +	 * layer driver (in keyslot_program and keyslot_evict) to reference
> +	 * this crypt_mode, data_unit_size combo. Returns negative error code
> +	 * if a crypt_mode, data_unit_size combo is not supported.
> +	 */
> +	int (*crypto_alg_find)(void *ll_priv_data,
> +			       enum blk_crypt_mode_index crypt_mode,
> +			       unsigned int data_unit_size);
> +	/**
> +	 * Returns the slot number that matches the key,
> +	 * or -ENOKEY if no match found, or negative on error
> +	 */
> +	int (*keyslot_find)(void *ll_priv_data, const u8 *key,
> +			    unsigned int data_unit_size,
> +			    unsigned int crypto_alg_id);
> +};

Have you considered using kernel-doc format for documenting the members
of the keyslot_mgmt_ll_ops structure?
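
E.g. something along these lines (descriptions shortened, just to show the
format):

/**
 * struct keyslot_mgmt_ll_ops - keyslot operations provided by the lower layer
 * @keyslot_program: Program @key into @slot. Returns 0 on success or -errno.
 * @keyslot_evict: Evict the key in @slot. Returns 0 on success or -errno.
 * @crypto_alg_find: Translate a blk-crypto crypt_mode and data_unit_size into
 *	the driver-internal crypto_alg_id, or return a negative errno if the
 *	combination is not supported.
 * @keyslot_find: Return the slot already programmed with @key, -ENOKEY if
 *	there is none, or a negative errno on error.
 */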

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 2/4] scsi: ufs: UFS driver v2.1 crypto support
  2019-05-06 22:35 ` [RFC PATCH 2/4] scsi: ufs: UFS driver v2.1 crypto support Satya Tangirala
  2019-05-06 23:51   ` Randy Dunlap
@ 2019-05-07  0:39   ` Bart Van Assche
  2019-05-07  9:23   ` Avri Altman
  2 siblings, 0 replies; 16+ messages in thread
From: Bart Van Assche @ 2019-05-07  0:39 UTC (permalink / raw)
  To: Satya Tangirala, linux-block, linux-scsi, linux-fscrypt, linux-fsdevel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang

On 5/6/19 3:35 PM, Satya Tangirala wrote:
> +/*TODO: worry about endianness and cpu_to_le32 */

Please fix endianness issues before reposting this patch series and
please also make sure that this patch series is sparse-clean.
Instructions for how to use sparse are available at
https://kernelnewbies.org/Sparse.

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 3/4] fscrypt: wire up fscrypt to use blk-crypto
  2019-05-06 22:35 ` [RFC PATCH 3/4] fscrypt: wire up fscrypt to use blk-crypto Satya Tangirala
@ 2019-05-07  1:23   ` Bart Van Assche
  0 siblings, 0 replies; 16+ messages in thread
From: Bart Van Assche @ 2019-05-07  1:23 UTC (permalink / raw)
  To: Satya Tangirala, linux-block, linux-scsi, linux-fscrypt, linux-fsdevel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang

On 5/6/19 3:35 PM, Satya Tangirala wrote:
[ ... ]
> diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h
> index 7da276159593..d6d65c88a629 100644
> --- a/fs/crypto/fscrypt_private.h
> +++ b/fs/crypto/fscrypt_private.h
> @@ -49,6 +49,16 @@ struct fscrypt_symlink_data {
>  	char encrypted_path[1];
>  } __packed;
>  
> +/* Master key referenced by FS_POLICY_FLAG_DIRECT_KEY policy */
> +struct fscrypt_master_key {
> +	struct hlist_node mk_node;
> +	refcount_t mk_refcount;
> +	const struct fscrypt_mode *mk_mode;
> +	struct crypto_skcipher *mk_ctfm;
> +	u8 mk_descriptor[FS_KEY_DESCRIPTOR_SIZE];
> +	u8 mk_raw[FS_MAX_KEY_SIZE];
> +};
> [ ... ]
> -/* Master key referenced by FS_POLICY_FLAG_DIRECT_KEY policy */
> -struct fscrypt_master_key {
> -	struct hlist_node mk_node;
> -	refcount_t mk_refcount;
> -	const struct fscrypt_mode *mk_mode;
> -	struct crypto_skcipher *mk_ctfm;
> -	u8 mk_descriptor[FS_KEY_DESCRIPTOR_SIZE];
> -	u8 mk_raw[FS_MAX_KEY_SIZE];
> -};

How about introducing the file fs/crypto/fscrypt_private.h in patch 2/4
such that the fscrypt_master_key definition does not have to be moved
around?

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 4/4] f2fs: Wire up f2fs to use inline encryption via fscrypt
  2019-05-06 22:35 ` [RFC PATCH 4/4] f2fs: Wire up f2fs to use inline encryption via fscrypt Satya Tangirala
@ 2019-05-07  1:25   ` Bart Van Assche
  2019-05-08  3:02   ` Chao Yu
  1 sibling, 0 replies; 16+ messages in thread
From: Bart Van Assche @ 2019-05-07  1:25 UTC (permalink / raw)
  To: Satya Tangirala, linux-block, linux-scsi, linux-fscrypt, linux-fsdevel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang

On 5/6/19 3:35 PM, Satya Tangirala wrote:
> +static inline u64 hw_crypt_dun(struct inode *inode, struct page *page)
> +{
> +	return (((u64)inode->i_ino) << 32) | (page->index & 0xFFFFFFFF);
> +}

How about using lower_32_bits() instead of "& 0xFFFFFFFF"?
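
i.e. something like (lower_32_bits() just masks to the low 32 bits, so the
behavior stays the same):

static inline u64 hw_crypt_dun(struct inode *inode, struct page *page)
{
	return ((u64)inode->i_ino << 32) | lower_32_bits(page->index);
}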

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* RE: [RFC PATCH 2/4] scsi: ufs: UFS driver v2.1 crypto support
  2019-05-06 22:35 ` [RFC PATCH 2/4] scsi: ufs: UFS driver v2.1 crypto support Satya Tangirala
  2019-05-06 23:51   ` Randy Dunlap
  2019-05-07  0:39   ` Bart Van Assche
@ 2019-05-07  9:23   ` Avri Altman
  2 siblings, 0 replies; 16+ messages in thread
From: Avri Altman @ 2019-05-07  9:23 UTC (permalink / raw)
  To: Satya Tangirala, linux-block, linux-scsi, linux-fscrypt, linux-fsdevel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang

Hi,
> 
> Uses the UFSHCI v2.1 spec to manage keys in inline crypto engine
> hardware, and exposes that functionality through the keyslot manager it
> sets up in the device's request_queue. Uses the keyslot in the
> bio_crypt_ctx of the bio, if specified, as the encryption context.
> 
> Known Issues: In the current implementation, multiple keyslot managers
> may be allocated for a single UFS host. We should tie keyslot managers
> to hosts to avoid this issue.
> 
> Signed-off-by: Satya Tangirala <satyat@google.com>

I think this patch should be split into a minimum of 3 patches:
1) introduce the new UFSHCI crypto registers
2) add the ufshcd-crypto API
3) add whatever functionality is needed in ufshcd

> ---
>  drivers/scsi/ufs/Kconfig         |  10 +
>  drivers/scsi/ufs/Makefile        |   1 +
>  drivers/scsi/ufs/ufshcd-crypto.c | 449 +++++++++++++++++++++++++++++++
>  drivers/scsi/ufs/ufshcd-crypto.h |  92 +++++++
>  drivers/scsi/ufs/ufshcd.c        |  85 +++++-
>  drivers/scsi/ufs/ufshcd.h        |  23 ++
>  drivers/scsi/ufs/ufshci.h        |  67 ++++-
>  7 files changed, 720 insertions(+), 7 deletions(-)
>  create mode 100644 drivers/scsi/ufs/ufshcd-crypto.c
>  create mode 100644 drivers/scsi/ufs/ufshcd-crypto.h
> 
> diff --git a/drivers/scsi/ufs/Kconfig b/drivers/scsi/ufs/Kconfig
> index 6db37cf306b0..c14f445a2522 100644
> --- a/drivers/scsi/ufs/Kconfig
> +++ b/drivers/scsi/ufs/Kconfig
> @@ -135,3 +135,13 @@ config SCSI_UFS_BSG
> 
>  	  Select this if you need a bsg device node for your UFS controller.
>  	  If unsure, say N.
> +
> +config SCSI_UFS_CRYPTO
> +	bool "UFS Crypto Engine Support"
> +	depends on SCSI_UFSHCD && BLK_KEYSLOT_MANAGER
> +	help
> +	Enable Crypto Engine Support in UFS.
> +	Enabling this makes it possible for the kernel to use the crypto
> +	capabilities of the UFS device (if present) to perform crypto
> +	operations on data being transferred into/out of the device.
> +
> diff --git a/drivers/scsi/ufs/Makefile b/drivers/scsi/ufs/Makefile
> index a3bd70c3652c..5b52463e8abf 100644
> --- a/drivers/scsi/ufs/Makefile
> +++ b/drivers/scsi/ufs/Makefile
> @@ -10,3 +10,4 @@ ufshcd-core-$(CONFIG_SCSI_UFS_BSG)	+= ufs_bsg.o
>  obj-$(CONFIG_SCSI_UFSHCD_PCI) += ufshcd-pci.o
>  obj-$(CONFIG_SCSI_UFSHCD_PLATFORM) += ufshcd-pltfrm.o
>  obj-$(CONFIG_SCSI_UFS_HISI) += ufs-hisi.o
> +ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO) += ufshcd-crypto.o
> \ No newline at end of file
> diff --git a/drivers/scsi/ufs/ufshcd-crypto.c b/drivers/scsi/ufs/ufshcd-crypto.c
> new file mode 100644
> index 000000000000..af1da161d53e
> --- /dev/null
> +++ b/drivers/scsi/ufs/ufshcd-crypto.c
> @@ -0,0 +1,449 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright 2019 Google LLC
> + */
> +
> +#include <crypto/algapi.h>
> +
> +#include "ufshcd.h"
> +#include "ufshcd-crypto.h"
> +
> +/*TODO: worry about endianness and cpu_to_le32 */
?

> +
> +bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba)
> +{
> +	return hba->crypto_capabilities.reg_val != 0;
> +}
> +
> +bool ufshcd_is_crypto_enabled(struct ufs_hba *hba)
> +{
> +	return hba->caps & UFSHCD_CAP_CRYPTO;
> +}
> +
> +static bool ufshcd_cap_idx_valid(struct ufs_hba *hba, unsigned int cap_idx)
> +{
> +	return cap_idx < hba->crypto_capabilities.num_crypto_cap;
> +}
> +
> +bool ufshcd_keyslot_valid(struct ufs_hba *hba, unsigned int slot)
> +{
> +	/**
Not a kernel doc

> +	 * The actual number of configurations supported is (CFGC+1), so slot
> +	 * numbers range from 0 to config_count inclusive.
> +	 */
> +	return slot <= hba->crypto_capabilities.config_count;
> +}
> +
> +static u8 get_data_unit_size_mask(unsigned int data_unit_size)
> +{
> +	if (data_unit_size < 512 || data_unit_size > 65536 ||
> +	    !is_power_of_2(data_unit_size)) {
> +		return 0;
> +	}
> +
> +	return data_unit_size / 512;
> +}
> +
> +static size_t get_keysize_bytes(enum ufs_crypto_key_size size)
> +{
> +	switch (size) {
> +	case UFS_CRYPTO_KEY_SIZE_128: return 16;
> +	case UFS_CRYPTO_KEY_SIZE_192: return 24;
> +	case UFS_CRYPTO_KEY_SIZE_256: return 32;
> +	case UFS_CRYPTO_KEY_SIZE_512: return 64;
> +	default: return 0;
> +	}
> +}
> +
> +/**
> + * ufshcd_crypto_cfg_entry_write_key - Write a key into a crypto_cfg_entry
> + *
> + *	Writes the key with the appropriate format - for AES_XTS,
> + *	the first half of the key is copied as is, the second half is
> + *	copied with an offset halfway into the cfg->crypto_key array.
> + *	For the other supported crypto algs, the key is just copied.
> + *
> + * @cfg: The crypto config to write to
> + * @key: The key to write
> + * @cap: The crypto capability (which specifies the crypto alg and key size)
> + *
> + * Returns 0 on success, or -errno
Or -EINVAL

> + */
> +static int ufshcd_crypto_cfg_entry_write_key(union ufs_crypto_cfg_entry *cfg,
static int
ufshcd_crypto_cfg_entry_write_key(union ufs_crypto_cfg_entry *cfg,

> +					     const u8 *key,
> +					     union ufs_crypto_cap_entry cap)
> +{
> +	size_t key_size_bytes = get_keysize_bytes(cap.key_size);
> +
> +	if (key_size_bytes == 0)
> +		return -EINVAL;
> +
> +	switch (cap.algorithm_id) {
> +	case UFS_CRYPTO_ALG_AES_XTS:
> +		key_size_bytes *= 2;
> +		if (key_size_bytes > UFS_CRYPTO_KEY_MAX_SIZE)
> +			return -EINVAL;
> +
> +		memcpy(cfg->crypto_key, key, key_size_bytes/2);
> +		memcpy(cfg->crypto_key + UFS_CRYPTO_KEY_MAX_SIZE/2,
> +		       key + key_size_bytes/2, key_size_bytes/2);
> +		return 0;
> +	case UFS_CRYPTO_ALG_BITLOCKER_AES_CBC: // fallthrough
> +	case UFS_CRYPTO_ALG_AES_ECB: // fallthrough
> +	case UFS_CRYPTO_ALG_ESSIV_AES_CBC:
> +		memcpy(cfg->crypto_key, key, key_size_bytes);
> +		return 0;
> +	}
> +
> +	return -EINVAL;
> +}
> +
> +static void program_key(struct ufs_hba *hba,
> +			const union ufs_crypto_cfg_entry *cfg,
> +			int slot)
> +{
> +	int i;
> +	u32 slot_offset = hba->crypto_cfg_register + slot * sizeof(*cfg);
> +
> +	/* Clear the dword 16 */
> +	ufshcd_writel(hba, 0, slot_offset + 16 * sizeof(cfg->reg_val[0]));
> +	/* Ensure that CFGE is cleared before programming the key */
Why is that needed?

> +	wmb();
> +	/* TODO: swab32 on the key? */
?

> +	for (i = 0; i < 16; i++) {
> +		ufshcd_writel(hba, cfg->reg_val[i],
> +			      slot_offset + i * sizeof(cfg->reg_val[0]));
> +		/* Spec says each dword in key must be written sequentially */
the spec also says the write must be done atomically:
"When configuring CRYPTOKEY field software shall write the entire
 key from DW0 to DW15, sequentially, in one atomic set of operations."

> +		wmb();
> +	}
> +	/* Write dword 17 */
> +	ufshcd_writel(hba, cfg->reg_val[17],
> +		      slot_offset + 17 * sizeof(cfg->reg_val[0]));
> +	/* Dword 16 must be written last */
> +	wmb();
> +	/* Write dword 16 */
> +	ufshcd_writel(hba, cfg->reg_val[16],
> +		      slot_offset + 16 * sizeof(cfg->reg_val[0]));
> +	wmb();
> +}
> +
> +static int ufshcd_crypto_keyslot_program(void *hba_p, const u8 *key,
> +			      unsigned int data_unit_size,
> +			      unsigned int crypto_alg_id,
> +			      unsigned int slot)
> +{
> +	struct ufs_hba *hba = hba_p;
> +	int err = 0;
> +	u8 data_unit_mask;
> +	union ufs_crypto_cfg_entry cfg;
> +	union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs;
> +
> +	if (!ufshcd_is_crypto_enabled(hba) ||
> +	    !ufshcd_keyslot_valid(hba, slot) ||
> +	    !ufshcd_cap_idx_valid(hba, crypto_alg_id)) {
> +		return -EINVAL;
> +	}
> +
> +	data_unit_mask = get_data_unit_size_mask(data_unit_size);
> +
> +	if (!(data_unit_mask &
> +	      hba->crypto_cap_array[crypto_alg_id].sdus_mask)) {
> +		return -EINVAL;
> +	}
> +
> +	memset(&cfg, 0, sizeof(cfg));
> +	cfg.data_unit_size = data_unit_mask;
> +	cfg.crypto_cap_idx = crypto_alg_id;
> +	cfg.config_enable |= UFS_CRYPTO_CONFIGURATION_ENABLE;
> +
> +	err = ufshcd_crypto_cfg_entry_write_key(&cfg, key,
> +					hba-
> >crypto_cap_array[crypto_alg_id]);
Slipped to next line

> +	if (err)
> +		return err;
> +
> +	program_key(hba, &cfg, slot);
> +
> +	memcpy(&cfg_arr[slot], &cfg, sizeof(cfg));
> +	memzero_explicit(&cfg, sizeof(cfg));
> +
> +	return 0;
> +}
> +
> +static int ufshcd_crypto_keyslot_find(void *hba_p,
> +				      const u8 *key,
> +				      unsigned int data_unit_size,
> +				      unsigned int crypto_alg_id)
> +{
> +	struct ufs_hba *hba = hba_p;
> +	int err = 0;
> +	int slot;
> +	u8 data_unit_mask;
> +	union ufs_crypto_cfg_entry cfg;
> +	union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs;
> +
> +	if (!ufshcd_is_crypto_enabled(hba) ||
> +	    crypto_alg_id >= hba->crypto_capabilities.num_crypto_cap) {
> +		return -EINVAL;
> +	}
> +
> +	data_unit_mask = get_data_unit_size_mask(data_unit_size);
> +
> +	if (!(data_unit_mask &
> +	      hba->crypto_cap_array[crypto_alg_id].sdus_mask)) {
> +		return -EINVAL;
> +	}
> +
> +	memset(&cfg, 0, sizeof(cfg));
> +	err = ufshcd_crypto_cfg_entry_write_key(&cfg, key,
> +					hba->crypto_cap_array[crypto_alg_id]);
> +
> +	if (err)
> +		return -EINVAL;
> +
> +	for (slot = 0; slot <= hba->crypto_capabilities.config_count; slot++) {
> +		if ((cfg_arr[slot].config_enable &
> +		     UFS_CRYPTO_CONFIGURATION_ENABLE) &&
> +		    data_unit_mask == cfg_arr[slot].data_unit_size &&
> +		    crypto_alg_id == cfg_arr[slot].crypto_cap_idx &&
> +		    crypto_memneq(&cfg.crypto_key, cfg_arr[slot].crypto_key,
> +				  UFS_CRYPTO_KEY_MAX_SIZE) == 0) {
> +			memzero_explicit(&cfg, sizeof(cfg));
> +			return slot;
> +		}
> +	}
> +
> +	memzero_explicit(&cfg, sizeof(cfg));
> +	return -ENOKEY;
> +}
> +
> +static int ufshcd_crypto_keyslot_evict(void *hba_p, unsigned int slot,
> +				       const u8 *key,
> +				       unsigned int data_unit_size,
> +				       unsigned int crypto_alg_id)
> +{
> +	struct ufs_hba *hba = hba_p;
> +	int i = 0;
> +	u32 reg_base;
> +	union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs;
> +
> +	if (!ufshcd_is_crypto_enabled(hba) ||
> +	    !ufshcd_keyslot_valid(hba, slot)) {
> +		return -EINVAL;
> +	}
> +
> +	memset(&cfg_arr[slot], 0, sizeof(cfg_arr[slot]));
> +	reg_base = hba->crypto_cfg_register +
> +			slot * sizeof(cfg_arr[0]);
> +
> +	/**
> +	 * Clear the crypto cfg on the device. Clearing CFGE
> +	 * might not be sufficient, so just clear the entire cfg.
> +	 */
> +	for (i = 0; i < sizeof(cfg_arr[0]); i += sizeof(__le32))
> +		ufshcd_writel(hba, 0, reg_base + i);
> +	wmb();
> +
> +	return 0;
> +}
> +
> +static int ufshcd_crypto_alg_find(void *hba_p,
> +			   enum blk_crypt_mode_index crypt_mode,
> +			   unsigned int data_unit_size)
> +{
> +	struct ufs_hba *hba = hba_p;
> +	enum ufs_crypto_alg ufs_alg;
> +	u8 data_unit_mask;
> +	int cap_idx;
> +	enum ufs_crypto_key_size ufs_key_size;
> +	union ufs_crypto_cap_entry *ccap_array = hba->crypto_cap_array;
> +
> +	if (!ufshcd_hba_is_crypto_supported(hba))
> +		return -EINVAL;
> +
> +	switch (crypt_mode) {
> +	case BLK_ENCRYPTION_MODE_AES_256_XTS:
> +		ufs_alg = UFS_CRYPTO_ALG_AES_XTS;
> +		ufs_key_size = UFS_CRYPTO_KEY_SIZE_256;
> +		break;
> +	/**
> +	 * case BLK_CRYPTO_ALG_BITLOCKER_AES_CBC:
> +	 *	ufs_alg = UFS_CRYPTO_ALG_BITLOCKER_AES_CBC;
> +	 *	break;
> +	 * case INLINECRYPT_ALG_AES_ECB:
> +	 *	ufs_alg = UFS_CRYPTO_ALG_AES_ECB;
> +	 *	break;
> +	 * case INLINECRYPT_ALG_ESSIV_AES_CBC:
> +	 *	ufs_alg = UFS_CRYPTO_ALG_ESSIV_AES_CBC;
> +	 *	break;
> +	 */
> +	default: return -EINVAL;
> +	}
> +
> +	data_unit_mask = get_data_unit_size_mask(data_unit_size);
> +
> +	/**
> +	 * TODO: We can replace this for loop entirely by constructing
> +	 * a table on init that translates blk_crypt_mode_index to
> +	 * ufs crypt alg numbers. (By assuming that each alg/keysize combo
> +	 * appears only once in the ufs crypto caps array.)
> +	 */
> +	for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap;
> +	     cap_idx++) {
> +		if (ccap_array[cap_idx].algorithm_id == ufs_alg &&
> +		    (ccap_array[cap_idx].sdus_mask & data_unit_mask) &&
> +		    ccap_array[cap_idx].key_size == ufs_key_size) {
> +			return cap_idx;
> +		}
> +	}
> +
> +	return -EINVAL;
> +}
> +
> +int ufshcd_crypto_enable(struct ufs_hba *hba)
> +{
> +	union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs;
> +	int slot;
> +
> +	if (!ufshcd_hba_is_crypto_supported(hba))
> +		return -EINVAL;
> +
> +	hba->caps |= UFSHCD_CAP_CRYPTO;
> +	/**
> +	 * Reset might clear all keys, so reprogram all the keys.
> +	 * Also serves to clear keys on driver init.
> +	 */
> +	for (slot = 0; slot <= hba->crypto_capabilities.config_count; slot++)
> +		program_key(hba, &cfg_arr[slot], slot);
> +
> +	return 0;
> +}
> +
> +int ufshcd_crypto_disable(struct ufs_hba *hba)
> +{
> +	if (!ufshcd_hba_is_crypto_supported(hba))
> +		return -EINVAL;
> +
> +	hba->caps &= ~UFSHCD_CAP_CRYPTO;
> +
> +	return 0;
> +}
> +
> +
> +/**
> + * ufshcd_hba_init_crypto - Read crypto capabilities, init crypto fields in hba
> + * @hba: Per adapter instance
> + *
> + * Returns 0 on success. Returns -ENODEV if such capabilties don't exist, and
> + * -ENOMEM upon OOM.
> + */
> +int ufshcd_hba_init_crypto(struct ufs_hba *hba)
> +{
> +	int cap_idx = 0;
> +	int err = 0;
One line space please

> +	/* Default to disabling crypto */
> +	hba->caps &= ~UFSHCD_CAP_CRYPTO;
> +
> +	if (!(hba->capabilities & MASK_CRYPTO_SUPPORT)) {
> +		err = -ENODEV;
> +		goto out;
> +	}
> +
> +	/**
> +	 * Crypto Capabilities should never be 0, because the
> +	 * config_array_ptr > 04h. So we use a 0 value to indicate that
> +	 * crypto init failed, and can't be enabled.
> +	 */
> +	hba->crypto_capabilities.reg_val = ufshcd_readl(hba, REG_UFS_CCAP);
> +	hba->crypto_cfg_register =
> +		(u32)hba->crypto_capabilities.config_array_ptr * 0x100;
> +	hba->crypto_cap_array =
> +		devm_kcalloc(hba->dev,
> +			     hba->crypto_capabilities.num_crypto_cap,
> +			     sizeof(hba->crypto_cap_array[0]),
> +			     GFP_KERNEL);
> +	if (!hba->crypto_cap_array) {
> +		err = -ENOMEM;
> +		goto out;
> +	}
> +
> +	hba->crypto_cfgs =
> +		devm_kcalloc(hba->dev,
> +			     hba->crypto_capabilities.config_count + 1,
> +			     sizeof(union ufs_crypto_cfg_entry),
> +			     GFP_KERNEL);
> +	if (!hba->crypto_cfgs) {
> +		err = -ENOMEM;
> +		goto out_cfg_mem;
> +	}
> +
> +	/**
> +	 * Store all the capabilities now so that we don't need to repeatedly
> +	 * access the device each time we want to know its capabilities
> +	 */
> +	for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap;
> +	     cap_idx++) {
> +		hba->crypto_cap_array[cap_idx].reg_val =
> +			ufshcd_readl(hba,
> +				     REG_UFS_CRYPTOCAP +
> +				     cap_idx * sizeof(__le32));
> +	}
> +
> +	return 0;
> +out_cfg_mem:
> +	devm_kfree(hba->dev, hba->crypto_cap_array);
> +out:
> +	// TODO: print error?
> +	/* Indicate that init failed by setting crypto_capabilities to 0 */
> +	hba->crypto_capabilities.reg_val = 0;
> +	return err;
> +}
> +
> +const struct keyslot_mgmt_ll_ops ufshcd_ksm_ops = {
> +	.keyslot_program	= ufshcd_crypto_keyslot_program,
> +	.keyslot_evict		= ufshcd_crypto_keyslot_evict,
> +	.keyslot_find		= ufshcd_crypto_keyslot_find,
> +	.crypto_alg_find	= ufshcd_crypto_alg_find,
> +};
> +
> +int ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
> +					   struct request_queue *q)
> +{
> +	int err = 0;
> +
> +	if (!ufshcd_hba_is_crypto_supported(hba))
> +		return 0;
> +
> +	if (!q) {
> +		err = -ENODEV;
> +		goto out_no_q;
> +	}
> +
> +	q->ksm = keyslot_manager_create(
> +	    hba->crypto_capabilities.config_count+1,
> +	    &ufshcd_ksm_ops, hba);
> +	/*
> +	 * If we fail we make it look like
> +	 * crypto is not supported, which will avoid issues
> +	 * with reset
> +	 */
> +	if (!q->ksm) {
> +		err = -ENOMEM;
> +out_no_q:
> +		ufshcd_crypto_disable(hba);
> +		hba->crypto_capabilities.reg_val = 0;
> +		devm_kfree(hba->dev, hba->crypto_cap_array);
> +		devm_kfree(hba->dev, hba->crypto_cfgs);
> +		return err;
> +	}
> +
> +	return 0;
> +}
> +
> +int ufshcd_crypto_destroy_rq_keyslot_manager(struct request_queue *q)
> +{
> +	if (q && q->ksm)
> +		keyslot_manager_destroy(q->ksm);
> +
> +	return 0;
> +}
> +
> diff --git a/drivers/scsi/ufs/ufshcd-crypto.h b/drivers/scsi/ufs/ufshcd-crypto.h
> new file mode 100644
> index 000000000000..16445efe3666
> --- /dev/null
> +++ b/drivers/scsi/ufs/ufshcd-crypto.h
> @@ -0,0 +1,92 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright 2019 Google LLC
> + */
> +
> +#ifndef _UFSHCD_CRYPTO_H
> +#define _UFSHCD_CRYPTO_H
> +
> +struct ufs_hba;
> +
> +#ifdef CONFIG_SCSI_UFS_CRYPTO
> +#include <linux/keyslot-manager.h>
> +
> +#include "ufshci.h"
> +
> +bool ufshcd_keyslot_valid(struct ufs_hba *hba, unsigned int slot);
> +
> +bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba);
> +
> +bool ufshcd_is_crypto_enabled(struct ufs_hba *hba);
> +
> +int ufshcd_crypto_set_enable_slot(struct ufs_hba *hba,
> +				  unsigned int slot,
> +				  bool enable);
> +
> +int ufshcd_crypto_enable(struct ufs_hba *hba);
> +
> +int ufshcd_crypto_disable(struct ufs_hba *hba);
> +
> +int ufshcd_hba_init_crypto(struct ufs_hba *hba);
> +
> +int ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
> +					   struct request_queue *q);
> +
> +int ufshcd_crypto_destroy_rq_keyslot_manager(struct request_queue *q);
> +
> +#else /* CONFIG_UFS_CRYPTO */
> +
> +static inline bool ufshcd_keyslot_valid(struct ufs_hba *hba,
> +					unsigned int slot)
> +{
> +	return false;
> +}
> +
> +static inline bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba)
> +{
> +	return false;
> +}
> +
> +static inline bool ufshcd_is_crypto_enabled(struct ufs_hba *hba)
> +{
> +	return false;
> +}
> +
> +static inline int ufshcd_crypto_set_enable_slot(struct ufs_hba *hba,
> +				  unsigned int slot,
> +				  bool enable)
> +{
> +	return -1;
> +}
> +
> +static inline int ufshcd_crypto_enable(struct ufs_hba *hba)
> +{
> +	return -1;
> +}
> +
> +static inline int ufshcd_crypto_disable(struct ufs_hba *hba)
> +{
> +	return -1;
> +}
> +
> +static inline int ufshcd_hba_init_crypto(struct ufs_hba *hba)
> +{
> +	return -1;
> +}
> +
> +static inline int ufshcd_crypto_setup_rq_keyslot_manager(
> +					struct ufs_hba *hba,
> +					struct request_queue *q)
> +{
> +	return -1;
> +}
> +
> +static inline int ufshcd_crypto_destroy_rq_keyslot_manager(
> +				struct request_queue *q)
> +{
> +	return -1;
> +}
> +
> +#endif /* CONFIG_SCSI_UFS_CRYPTO */
> +
> +#endif /* _UFSHCD_CRYPTO_H */
> diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
> index e040f9dd9ff3..65c51943e331 100644
> --- a/drivers/scsi/ufs/ufshcd.c
> +++ b/drivers/scsi/ufs/ufshcd.c
> @@ -47,6 +47,7 @@
>  #include "unipro.h"
>  #include "ufs-sysfs.h"
>  #include "ufs_bsg.h"
> +#include "ufshcd-crypto.h"
> 
>  #define CREATE_TRACE_POINTS
>  #include <trace/events/ufs.h>
> @@ -855,7 +856,14 @@ static void ufshcd_enable_run_stop_reg(struct ufs_hba *hba)
>   */
>  static inline void ufshcd_hba_start(struct ufs_hba *hba)
>  {
> -	ufshcd_writel(hba, CONTROLLER_ENABLE, REG_CONTROLLER_ENABLE);
> +	u32 val = CONTROLLER_ENABLE;
> +
> +	if (ufshcd_hba_is_crypto_supported(hba)) {
> +		ufshcd_crypto_enable(hba);
> +		val |= CRYPTO_GENERAL_ENABLE;
> +	}
> +
> +	ufshcd_writel(hba, val, REG_CONTROLLER_ENABLE);
>  }
> 
>  /**
> @@ -2208,9 +2216,21 @@ static void ufshcd_prepare_req_desc_hdr(struct ufshcd_lrb *lrbp,
>  		dword_0 |= UTP_REQ_DESC_INT_CMD;
> 
>  	/* Transfer request descriptor header fields */
> +	if (lrbp->crypto_enable) {
> +		dword_0 |= UTP_REQ_DESC_CRYPTO_ENABLE_CMD;
> +		dword_0 |= lrbp->crypto_key_slot;
> +		req_desc->header.dword_1 =
> +			cpu_to_le32((u32)lrbp->data_unit_num);
> +		req_desc->header.dword_3 =
> +			cpu_to_le32((u32)(lrbp->data_unit_num >> 32));
> +	} else {
> +		/* dword_1 and dword_3 are reserved, hence they are set to 0 */
> +		req_desc->header.dword_1 = 0;
> +		req_desc->header.dword_3 = 0;
> +	}
> +
>  	req_desc->header.dword_0 = cpu_to_le32(dword_0);
> -	/* dword_1 is reserved, hence it is set to 0 */
> -	req_desc->header.dword_1 = 0;
> +
>  	/*
>  	 * assigning invalid value for command status. Controller
>  	 * updates OCS on command completion, with the command
> @@ -2218,8 +2238,6 @@ static void ufshcd_prepare_req_desc_hdr(struct ufshcd_lrb *lrbp,
>  	 */
>  	req_desc->header.dword_2 =
>  		cpu_to_le32(OCS_INVALID_COMMAND_STATUS);
> -	/* dword_3 is reserved, hence it is set to 0 */
> -	req_desc->header.dword_3 = 0;
> 
>  	req_desc->prd_table_length = 0;
>  }
> @@ -2379,6 +2397,38 @@ static inline u16 ufshcd_upiu_wlun_to_scsi_wlun(u8 upiu_wlun_id)
>  	return (upiu_wlun_id & ~UFS_UPIU_WLUN_ID) | SCSI_W_LUN_BASE;
>  }
> 
> +static inline int ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba,
> +					     struct scsi_cmnd *cmd,
> +					     struct ufshcd_lrb *lrbp)
> +{
> +	int key_slot;
> +
> +	if (!bio_crypt_should_process(cmd->request->bio,
> +					cmd->request->q)) {
> +		lrbp->crypto_enable = false;
> +		return 0;
> +	}
> +
> +	if (WARN_ON(!ufshcd_is_crypto_enabled(hba))) {
> +		/**
> +		 * Upper layer asked us to do inline encryption
> +		 * but that isn't enabled, so we fail this request.
> +		 */
> +		return -EINVAL;
> +	}
> +	key_slot = bio_crypt_get_slot(cmd->request->bio);
> +	if (!ufshcd_keyslot_valid(hba, key_slot))
> +		return -EINVAL;
> +
> +	lrbp->crypto_enable = true;
> +	lrbp->crypto_key_slot = key_slot;
> +	lrbp->data_unit_num =
> +		bio_crypt_data_unit_num(cmd->request->bio);
> +
> +	return 0;
> +}
> +
> +
>  /**
>   * ufshcd_queuecommand - main entry point for SCSI requests
>   * @host: SCSI host pointer
> @@ -2466,6 +2516,13 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
>  	lrbp->task_tag = tag;
>  	lrbp->lun = ufshcd_scsi_to_upiu_lun(cmd->device->lun);
>  	lrbp->intr_cmd = !ufshcd_is_intr_aggr_allowed(hba) ? true : false;
> +
> +	err = ufshcd_prepare_lrbp_crypto(hba, cmd, lrbp);
> +	if (err) {
> +		lrbp->cmd = NULL;
> +		clear_bit_unlock(tag, &hba->lrb_in_use);
> +		goto out;
> +	}
>  	lrbp->req_abort_skip = false;
> 
>  	ufshcd_comp_scsi_upiu(hba, lrbp);
> @@ -2499,6 +2556,7 @@ static int ufshcd_compose_dev_cmd(struct ufs_hba *hba,
>  	lrbp->task_tag = tag;
>  	lrbp->lun = 0; /* device management cmd is not specific to any LUN */
>  	lrbp->intr_cmd = true; /* No interrupt aggregation */
> +	lrbp->crypto_enable = false; /* No crypto operations */
>  	hba->dev_cmd.type = cmd_type;
> 
>  	return ufshcd_comp_devman_upiu(hba, lrbp);
> @@ -4191,6 +4249,8 @@ static inline void ufshcd_hba_stop(struct ufs_hba *hba, bool can_sleep)
>  {
>  	int err;
> 
> +	ufshcd_crypto_disable(hba);
> +
>  	ufshcd_writel(hba, CONTROLLER_DISABLE, REG_CONTROLLER_ENABLE);
>  	err = ufshcd_wait_for_register(hba, REG_CONTROLLER_ENABLE,
>  					CONTROLLER_ENABLE, CONTROLLER_DISABLE,
> @@ -4584,10 +4644,13 @@ static int ufshcd_change_queue_depth(struct scsi_device *sdev, int depth)
>  static int ufshcd_slave_configure(struct scsi_device *sdev)
>  {
>  	struct request_queue *q = sdev->request_queue;
> +	struct ufs_hba *hba = shost_priv(sdev->host);
> 
>  	blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1);
>  	blk_queue_max_segment_size(q, PRDT_DATA_BYTE_COUNT_MAX);
> 
> +	ufshcd_crypto_setup_rq_keyslot_manager(hba, q);
> +
>  	return 0;
>  }
> 
> @@ -4598,6 +4661,7 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
>  static void ufshcd_slave_destroy(struct scsi_device *sdev)
>  {
>  	struct ufs_hba *hba;
> +	struct request_queue *q = sdev->request_queue;
> 
>  	hba = shost_priv(sdev->host);
>  	/* Drop the reference as it won't be needed anymore */
> @@ -4608,6 +4672,8 @@ static void ufshcd_slave_destroy(struct scsi_device *sdev)
>  		hba->sdev_ufs_device = NULL;
>  		spin_unlock_irqrestore(hba->host->host_lock, flags);
>  	}
> +
> +	ufshcd_crypto_destroy_rq_keyslot_manager(q);
>  }
> 
>  /**
> @@ -4723,6 +4789,8 @@ ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
>  	case OCS_MISMATCH_RESP_UPIU_SIZE:
>  	case OCS_PEER_COMM_FAILURE:
>  	case OCS_FATAL_ERROR:
> +	case OCS_INVALID_CRYPTO_CONFIG:
> +	case OCS_GENERAL_CRYPTO_ERROR:
>  	default:
>  		result |= DID_ERROR << 16;
>  		dev_err(hba->dev,
> @@ -8287,6 +8355,13 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
>  		goto exit_gating;
>  	}
> 
> +	/* Init crypto */
> +	err = ufshcd_hba_init_crypto(hba);
> +	if (err) {
> +		dev_err(hba->dev, "crypto setup failed\n");
> +		goto out_remove_scsi_host;
> +	}
> +
>  	/* Host controller enable */
>  	err = ufshcd_hba_enable(hba);
>  	if (err) {
> diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
> index ecfa898b9ccc..283014e0924f 100644
> --- a/drivers/scsi/ufs/ufshcd.h
> +++ b/drivers/scsi/ufs/ufshcd.h
> @@ -167,6 +167,9 @@ struct ufs_pm_lvl_states {
>   * @intr_cmd: Interrupt command (doesn't participate in interrupt aggregation)
>   * @issue_time_stamp: time stamp for debug purposes
>   * @compl_time_stamp: time stamp for statistics
> + * @crypto_enable: whether or not the request needs inline crypto operations
> + * @crypto_key_slot: the key slot to use for inline crypto
> + * @data_unit_num: the data unit number for the first block for inline crypto
>   * @req_abort_skip: skip request abort task flag
>   */
>  struct ufshcd_lrb {
> @@ -191,6 +194,9 @@ struct ufshcd_lrb {
>  	bool intr_cmd;
>  	ktime_t issue_time_stamp;
>  	ktime_t compl_time_stamp;
> +	bool crypto_enable;
> +	u8 crypto_key_slot;
> +	u64 data_unit_num;
> 
>  	bool req_abort_skip;
>  };
> @@ -501,6 +507,10 @@ struct ufs_stats {
>   * @is_urgent_bkops_lvl_checked: keeps track if the urgent bkops level for
>   *  device is known or not.
>   * @scsi_block_reqs_cnt: reference counting for scsi block requests
> + * @crypto_capabilities: Content of crypto capabilities register (0x100)
> + * @crypto_cap_array: Array of crypto capabilities
> + * @crypto_cfg_register: Start of the crypto cfg array
> + * @crypto_cfgs: Array of crypto configurations (i.e. config for each slot)
>   */
>  struct ufs_hba {
>  	void __iomem *mmio_base;
> @@ -692,6 +702,11 @@ struct ufs_hba {
>  	 * the performance of ongoing read/write operations.
>  	 */
>  #define UFSHCD_CAP_KEEP_AUTO_BKOPS_ENABLED_EXCEPT_SUSPEND (1 << 5)
> +	/*
> +	 * This capability allows the host controller driver to use the
> +	 * inline crypto engine, if it is present
> +	 */
> +#define UFSHCD_CAP_CRYPTO (1 << 6)
> 
>  	struct devfreq *devfreq;
>  	struct ufs_clk_scaling clk_scaling;
> @@ -706,6 +721,14 @@ struct ufs_hba {
> 
>  	struct device		bsg_dev;
>  	struct request_queue	*bsg_queue;
> +
> +#ifdef CONFIG_SCSI_UFS_CRYPTO
> +	/* crypto */
> +	union ufs_crypto_capabilities crypto_capabilities;
> +	union ufs_crypto_cap_entry *crypto_cap_array;
> +	u32 crypto_cfg_register;
> +	union ufs_crypto_cfg_entry *crypto_cfgs;
> +#endif /* CONFIG_SCSI_UFS_CRYPTO */
>  };
> 
>  /* Returns true if clocks can be gated. Otherwise false */
> diff --git a/drivers/scsi/ufs/ufshci.h b/drivers/scsi/ufs/ufshci.h
> index 6fa889de5ee5..a757eaf99a19 100644
> --- a/drivers/scsi/ufs/ufshci.h
> +++ b/drivers/scsi/ufs/ufshci.h
> @@ -90,6 +90,7 @@ enum {
>  	MASK_64_ADDRESSING_SUPPORT		= 0x01000000,
>  	MASK_OUT_OF_ORDER_DATA_DELIVERY_SUPPORT	= 0x02000000,
>  	MASK_UIC_DME_TEST_MODE_SUPPORT		= 0x04000000,
> +	MASK_CRYPTO_SUPPORT			= 0x10000000,
>  };
> 
>  #define UFS_MASK(mask, offset)		((mask) << (offset))
> @@ -143,6 +144,7 @@ enum {
>  #define DEVICE_FATAL_ERROR			0x800
>  #define CONTROLLER_FATAL_ERROR			0x10000
>  #define SYSTEM_BUS_FATAL_ERROR			0x20000
> +#define CRYPTO_ENGINE_FATAL_ERROR		0x40000
> 
>  #define UFSHCD_UIC_PWR_MASK	(UIC_HIBERNATE_ENTER |\
>  				UIC_HIBERNATE_EXIT |\
> @@ -153,11 +155,13 @@ enum {
>  #define UFSHCD_ERROR_MASK	(UIC_ERROR |\
>  				DEVICE_FATAL_ERROR |\
>  				CONTROLLER_FATAL_ERROR |\
> -				SYSTEM_BUS_FATAL_ERROR)
> +				SYSTEM_BUS_FATAL_ERROR |\
> +				CRYPTO_ENGINE_FATAL_ERROR)
> 
>  #define INT_FATAL_ERRORS	(DEVICE_FATAL_ERROR |\
>  				CONTROLLER_FATAL_ERROR |\
> -				SYSTEM_BUS_FATAL_ERROR)
> +				SYSTEM_BUS_FATAL_ERROR |\
> +				CRYPTO_ENGINE_FATAL_ERROR)
> 
>  /* HCS - Host Controller Status 30h */
>  #define DEVICE_PRESENT				0x1
> @@ -316,6 +320,61 @@ enum {
>  	INTERRUPT_MASK_ALL_VER_21	= 0x71FFF,
>  };
> 
> +/* CCAP - Crypto Capability 100h */
> +union ufs_crypto_capabilities {
> +	__le32 reg_val;
> +	struct {
> +		u8 num_crypto_cap;
> +		u8 config_count;
> +		u8 reserved;
> +		u8 config_array_ptr;
> +	};
> +};
> +
> +enum ufs_crypto_key_size {
> +	UFS_CRYPTO_KEY_SIZE_INVALID	= 0x0,
> +	UFS_CRYPTO_KEY_SIZE_128		= 0x1,
> +	UFS_CRYPTO_KEY_SIZE_192		= 0x2,
> +	UFS_CRYPTO_KEY_SIZE_256		= 0x3,
> +	UFS_CRYPTO_KEY_SIZE_512		= 0x4,
> +};
> +
> +enum ufs_crypto_alg {
> +	UFS_CRYPTO_ALG_AES_XTS			= 0x0,
> +	UFS_CRYPTO_ALG_BITLOCKER_AES_CBC	= 0x1,
> +	UFS_CRYPTO_ALG_AES_ECB			= 0x2,
> +	UFS_CRYPTO_ALG_ESSIV_AES_CBC		= 0x3,
> +};
> +
> +/* x-CRYPTOCAP - Crypto Capability X */
> +union ufs_crypto_cap_entry {
> +	__le32 reg_val;
> +	struct {
> +		u8 algorithm_id;
> +		u8 sdus_mask; /* Supported data unit size mask */
> +		u8 key_size;
> +		u8 reserved;
> +	};
> +};
> +
> +#define UFS_CRYPTO_CONFIGURATION_ENABLE (1 << 7)
> +#define UFS_CRYPTO_KEY_MAX_SIZE 64
> +/* x-CRYPTOCFG - Crypto Configuration X */
> +union ufs_crypto_cfg_entry {
> +	__le32 reg_val[32];
> +	struct {
> +		u8 crypto_key[UFS_CRYPTO_KEY_MAX_SIZE];
> +		u8 data_unit_size;
> +		u8 crypto_cap_idx;
> +		u8 reserved_1;
> +		u8 config_enable;
> +		u8 reserved_multi_host;
> +		u8 reserved_2;
> +		u8 vsb[2];
> +		u8 reserved_3[56];
> +	};
> +};
> +
>  /*
>   * Request Descriptor Definitions
>   */
> @@ -337,6 +396,7 @@ enum {
>  	UTP_NATIVE_UFS_COMMAND		= 0x10000000,
>  	UTP_DEVICE_MANAGEMENT_FUNCTION	= 0x20000000,
>  	UTP_REQ_DESC_INT_CMD		= 0x01000000,
> +	UTP_REQ_DESC_CRYPTO_ENABLE_CMD	= 0x00800000,
>  };
> 
>  /* UTP Transfer Request Data Direction (DD) */
> @@ -356,6 +416,9 @@ enum {
>  	OCS_PEER_COMM_FAILURE		= 0x5,
>  	OCS_ABORTED			= 0x6,
>  	OCS_FATAL_ERROR			= 0x7,
> +	OCS_DEVICE_FATAL_ERROR		= 0x8,
> +	OCS_INVALID_CRYPTO_CONFIG	= 0x9,
> +	OCS_GENERAL_CRYPTO_ERROR	= 0xA,
>  	OCS_INVALID_COMMAND_STATUS	= 0x0F,
>  	MASK_OCS			= 0x0F,
>  };
> --
> 2.21.0.1020.gf2820cf01a-goog


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 0/4] Inline Encryption Support
  2019-05-06 22:35 [RFC PATCH 0/4] Inline Encryption Support Satya Tangirala
                   ` (4 preceding siblings ...)
  2019-05-07  0:26 ` [RFC PATCH 0/4] Inline Encryption Support Bart Van Assche
@ 2019-05-07  9:35 ` Chao Yu
  5 siblings, 0 replies; 16+ messages in thread
From: Chao Yu @ 2019-05-07  9:35 UTC (permalink / raw)
  To: Satya Tangirala, linux-block, linux-scsi, linux-fscrypt, linux-fsdevel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang

On 2019/5/7 6:35, Satya Tangirala wrote:
> This patch series adds support for Inline Encryption to the block layer,
> fscrypt and f2fs.

Err.. this should be sent to the f2fs mailing list as well.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 1/4] block: Block Layer changes for Inline Encryption Support
  2019-05-06 22:35 ` [RFC PATCH 1/4] block: Block Layer changes for " Satya Tangirala
  2019-05-06 23:54   ` Randy Dunlap
  2019-05-07  0:37   ` Bart Van Assche
@ 2019-05-08  2:12   ` Randy Dunlap
  2 siblings, 0 replies; 16+ messages in thread
From: Randy Dunlap @ 2019-05-08  2:12 UTC (permalink / raw)
  To: Satya Tangirala, linux-block, linux-scsi, linux-fscrypt, linux-fsdevel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang

Hi,
These are documentation comments...

On 5/6/19 3:35 PM, Satya Tangirala wrote:
> diff --git a/Documentation/block/blk-crypto.txt b/Documentation/block/blk-crypto.txt
> new file mode 100644
> index 000000000000..a1b82361cb16
> --- /dev/null
> +++ b/Documentation/block/blk-crypto.txt
> @@ -0,0 +1,185 @@
> +BLK-CRYPTO and KEYSLOT MANAGER
> +===========================
> +
> +CONTENTS
> +1. Objective
> +2. Constraints and notes
> +3. Design
> +4. Blk-crypto
> + 4-1 What does blk-crypto do on bio submission
> +5. Layered Devices
> +6. Future optimizations for layered devices
> +
> +1. Objective
> +============
> +
> +We want to support inline encryption (IE) in the kernel.
> +To allow for testing, we also want a software fallback when actual
> +IE hardware is absent. We also want IE to work with layered devices
> +like dm and loopback (i.e. we want to be able to use the IE hardware
> +of the underlying devices if present, or else fall back to software
> +en/decryption).
> +
> +
> +2. Constraints and notes
> +========================
> +
> +1) IE hardware have a limited number of “keyslots” that can be programmed
> +with an encryption context (key, algorithm, data unit size, etc.) at any time.
> +One can specify a keyslot in a data requests made to the device, and when the

                             in data requests
or
                             in a data request

> +device will en/decrypt the data using the encryption context programmed into
> +that specified keyslot. Of course, when possible, we want to make multiple
> +requests with the the same encryption context share the same keyslot.
> +
> +2) We need a way for filesystems to specify an encryption context to use for
> +en/decrypting a struct bio, and a device driver (like UFS) needs to be able
> +to use that encryption context when it processes the bio.
> +
> +3) We need a way for device drivers to expose their capabilities in a unified
> +way to the upper layers.
> +
> +
> +3. Design
> +=========
> +
> +We add a struct bio_crypt_context to struct bio that can represent an
> +encryption context, because we need to able to pass this encryption context
> +from the FS layer to the device driver to act upon.
> +
> +While IE hardware works on the notion of keyslots, the FS layer has no
> +knowledge of keyslots - it simply wants to specify an encryption context to
> +use while en/decrypting a bio.
> +
> +We introduce a keyslot manager (KSM) that handles the translation from
> +encryption contexts specified by the FS to keyslots on the IE hardware.
> +This KSM also serves as the way IE hardware can expose its capabilities to
> +upper layers. The generic mode of operation is: each device driver that wants
> +to support IE will construct a KSM and set it up in its struct request_queue.
> +Upper layers that want to use IE on this device can then use this KSM in
> +the device’s struct request_queue to translate an encryption context into
> +a keyslot. The presence of the KSM in the request queue shall be used to mean
> +that the device supports IE.
> +
> +On the device driver end of the interface, the device driver needs to tell the
> +KSM how to actually manipulate the IE hardware in the device to do things like
> +programming the crypto key into a particular keyslot in the IE hardware. All
> +this is achieved through the struct keyslot_mgmt_ll_ops that the device driver
> +passes to the KSM when creating it.
> +
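
As an illustration only, such an ops table might look roughly like this; the
member names and prototypes are guesses, and the struct in the patch may well
differ:

struct keyslot_mgmt_ll_ops {
	/* program an encryption context into the given keyslot */
	int (*keyslot_program)(void *ll_priv_data, const u8 *key,
			       unsigned int data_unit_size, unsigned int slot);
	/* evict whatever context currently occupies the given keyslot */
	int (*keyslot_evict)(void *ll_priv_data, unsigned int slot);
};
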
> +The KSM uses refcounts to track which keyslots are idle (either they have no
> +encryption context programmed, or there are no in flight struct bios
> +referencing that keyslot). When a new encryption context needs a keyslot, it
> +tries to find a keyslot that has already been programmed with the same
> +encryption context, and if there is no such keyslot, it evicts the least
> +recently used idle keyslot and programs the new encryption context into that
> +one. If no idle keyslots are available, then the caller will sleep until there
> +is at least one.
> +
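
The lookup/eviction flow described above, very roughly (the helper names are
hypothetical and locking is elided for brevity):

static int ksm_get_keyslot(struct keyslot_manager *ksm,
			   const struct bio_crypt_context *ctx)
{
	/* reuse a keyslot already programmed with this context, if any */
	int slot = find_slot_with_context(ksm, ctx);

	if (slot < 0) {
		/*
		 * Sleeps until at least one idle (refcount == 0) keyslot
		 * exists, then returns the least recently used idle one.
		 */
		slot = wait_for_lru_idle_keyslot(ksm);
		program_context_into_slot(ksm, slot, ctx);	/* via the ll_ops */
	}
	atomic_inc(&ksm->slot_refcounts[slot]);	/* this bio now holds a reference */
	return slot;
}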
> +
> +4. Blk-crypto
> +=============
> +
> +The above is sufficient for simple cases, but does not work if there is a
> +need for a software fallback, or if we want to use IE with layered devices.
> +To these ends, we introduce blk-crypto. Blk-crypto allows us to present a
> +unified view of encryption to the FS (so FS only needs to specify an
> +encryption context and not worry about keyslots at all), and block crypto can
> +decide whether to delegate the en/decryption to IE hardware or to software
> +(i.e. to the kernel crypto API). Block crypto maintains an internal KSM that
> +serves as the software fallback to the kernel crypto API.
> +
> +Blk-crypto needs to ensure that the encryption context is programmed into the
> +"correct" keyslot manager for IE. If a bio is submitted to a layered device
> +that eventually passes the bio down to a device that really does support IE, we
> +want the encryption context to be programmed into a keyslot for the KSM of the
> +device with IE support. However, blk-crypto does not know a-priori whether a

                                                             a priori

> +particular device is the final device in the layering structure for a bio or
> +not. So in the case that a particular device does not support IE, since it is
> +possibly the final destination device for the bio, if the bio requires
> +encryption (i.e. the bio is doing a write operation), blk-crypto must fall back
> +to software *before* sending the bio to the device.
> +
> +Blk-crypto ensures that
> +1) The bio’s encryption context is programmed into a keyslot in the KSM of the
> +request queue that the bio is being submitted to (or the software fallback KSM
> +if the request queue doesn’t have a KSM), and that the processing_ksm in the
> +bi_crypt_context is set to this KSM
> +
> +2) That the bio has its own individual reference to the keyslot in this KSM.
> +Once the bio passes through block crypto, its encryption context is programmed

                          is   block crypto
                 the same as   blk-crypto?
If so, consistency would be Good.

> +in some KSM. The “its own individual reference to the keyslot” ensures that
> +keyslots can be released by each bio independently of other bios while ensuring
> +that the bio has a valid reference to the keyslot when, e.g., the software
> +fallback KSM in blk-crypto performs crypto on the device’s behalf. The
> +individual references are ensured by increasing the refcount for the keyslot in
> +the processing_ksm when a bio with a programmed encryption context is cloned.
> +
> +
> +4-1. What blk-crypto does on bio submission
> +-------------------------------------------
> +
> +Case 1: blk-crypto is given a bio with only an encryption context that hasn’t
> +been programmed into any keyslot in any KSM (e.g. a bio from the FS). In
> +this case, blk-crypto will program the encryption context into the KSM of the
> +request queue the bio is being submitted to (and if this KSM does not exist,
> +then it will program it into blk-crypto’s internal KSM for software fallback).
> +The KSM that this encryption context was programmed into is stored as the
> +processing_ksm in the bio’s bi_crypt_context.
> +
> +Case 2: blk-crypto is given a bio whose encryption context has already been
> +programmed into a keyslot in the *software fallback KSM*. In this case,
> +blk-crypto does nothing; it treats the bio as not having specified an
> +encryption context. Note that we cannot do what we will do in Case 3 here
> +because we would have already encrypted the bio in software by this point.
> +
> +Case 3: blk-crypto is given a bio whose encryption context has already been
> +programmed into a keyslot in some KSM (that is *not* the software fallback
> +KSM). In this case, blk-crypto first releases that keyslot from that KSM and
> +then treats the bio as in Case 1.
> +
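
Putting the three cases together, the dispatch on submission is roughly as
follows; bi_crypt_context and processing_ksm are named in this document, the
remaining names are hypothetical:

struct bio_crypt_context *ctx = bio->bi_crypt_context;

if (ctx && ctx->processing_ksm != blk_crypto_fallback_ksm) {
	if (ctx->processing_ksm)		/* Case 3: drop the old keyslot */
		keyslot_manager_put_slot(ctx->processing_ksm, ctx->keyslot);
	/* Case 1: program into the queue's KSM, or the fallback KSM if none */
	blk_crypto_program(q->ksm ? q->ksm : blk_crypto_fallback_ksm, ctx);
}
/* Case 2 (context already in the fallback KSM) and bios without a context:
 * nothing to do here. */
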
> +This way, when a device driver is processing a bio, it can be sure that
> +the bio’s encryption context has been programmed into some KSM (either the
> +device driver’s request queue’s KSM, or blk-crypto’s software fallback KSM).
> +It then simply needs to check if the bio’s processing_ksm is the device’s
> +request queue’s KSM. If so, then it should proceed with IE. If not, it should
> +simply do nothing with respect to crypto, because some other KSM (perhaps the
> +blk-crypto software fallback KSM) is handling the en/decryption.
> +
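
So on the driver side, the check reduces to something like this (illustrative
only, not the patch's actual code):

if (bio->bi_crypt_context &&
    bio->bi_crypt_context->processing_ksm == q->ksm) {
	/* our keyslot: hand the slot number to the inline encryption hardware */
} else {
	/* some other KSM (e.g. the software fallback) does the crypto, so
	 * treat this bio as ordinary I/O as far as crypto is concerned */
}
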
> +Blk-crypto will release the keyslot that is being held by the bio (and also
> +decrypt it if the bio is using the software fallback KSM) once
> +bio_remaining_done returns true for the bio.
> +
> +
> +5. Layered Devices
> +==================
> +
> +Layered devices that wish to support IE need to create their own keyslot
> +manager for their request queue, and expose whatever functionality they choose.
> +When a layered device wants to pass a bio to another layer (either by
> +resubmitting the same bio, or by submitting a clone), it doesn’t need to do
> +anything special because the bio (or the clone) will once again pass through
> +blk-crypto, which will work as described in Case 3. If a layered device wants
> +for some reason to do the IO by itself instead of passing it on to a child
> +device, but it also chose to expose IE capabilities by setting up a KSM in its
> +request queue, it is then responsible for en/decrypting the data itself. In
> +such cases, the device can choose to call the blk-crypto function
> +blk_crypto_fallback_to_software (TODO: Not yet implemented), which will
> +cause the en/decryption to be done via software fallback.
> +
> +
> +6. Future Optimizations for layered devices
> +===========================================
> +
> +Creating a keyslot manager for the layered device uses up memory for each
> +keyslot, and in general, a layered device (like dm-linear) merely passes the
> +request on to a “child” device, so the keyslots in the layered device itself
> +might be completely unused. We can instead define a new type of KSM: the
> +“passthrough KSM”, which layered devices can use to let blk-crypto know that
> +this layered device *will* pass the bio to some child device (and hence
> +through blk-crypto again, at which point blk-crypto can program the encryption
> +context, instead of programming it into the layered device’s KSM). Again, if
> +the device “lies” and decides to do the IO itself instead of passing it on to
> +a child device, it is responsible for doing the en/decryption (and can choose
> +to call blk_crypto_fallback_to_software). Another use case for the
> +"passthrough KSM" is for IE devices that want to manage their own keyslots/do
> +not have a limited number of keyslots.


-- 
~Randy

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 4/4] f2fs: Wire up f2fs to use inline encryption via fscrypt
  2019-05-06 22:35 ` [RFC PATCH 4/4] f2fs: Wire up f2fs to use inline encryption via fscrypt Satya Tangirala
  2019-05-07  1:25   ` Bart Van Assche
@ 2019-05-08  3:02   ` Chao Yu
  1 sibling, 0 replies; 16+ messages in thread
From: Chao Yu @ 2019-05-08  3:02 UTC (permalink / raw)
  To: Satya Tangirala, linux-block, linux-scsi, linux-fscrypt, linux-fsdevel
  Cc: Parshuram Raju Thombare, Ladvine D Almeida, Barani Muthukumaran,
	Kuohong Wang, linux-f2fs-devel

Hi Satya,

+Cc f2fs mailing list.

On 2019/5/7 6:35, Satya Tangirala wrote:
> Signed-off-by: Satya Tangirala <satyat@google.com>
> ---
>  fs/f2fs/data.c  | 69 ++++++++++++++++++++++++++++++++++++++++++++++---
>  fs/f2fs/super.c |  1 +
>  2 files changed, 67 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index 9727944139f2..7ac6768a52a5 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -279,9 +279,18 @@ static struct bio *__bio_alloc(struct f2fs_sb_info *sbi, block_t blk_addr,
>  	return bio;
>  }
>  
> +static inline u64 hw_crypt_dun(struct inode *inode, struct page *page)
> +{
> +	return (((u64)inode->i_ino) << 32) | (page->index & 0xFFFFFFFF);
> +}
> +
>  static inline void __submit_bio(struct f2fs_sb_info *sbi,
>  				struct bio *bio, enum page_type type)
>  {
> +	struct page *page;
> +	struct inode *inode;
> +	int err = 0;
> +
>  	if (!is_read_io(bio_op(bio))) {
>  		unsigned int start;
>  
> @@ -323,7 +332,21 @@ static inline void __submit_bio(struct f2fs_sb_info *sbi,
>  		trace_f2fs_submit_read_bio(sbi->sb, type, bio);
>  	else
>  		trace_f2fs_submit_write_bio(sbi->sb, type, bio);
> -	submit_bio(bio);
> +
> +	if (bio_has_data(bio)) {
> +		page = bio_page(bio);
> +		if (page && page->mapping && page->mapping->host) {
> +			inode = page->mapping->host;
> +			err = fscrypt_set_bio_crypt_ctx(inode, bio,

Is the sanity check in fscrypt_set_bio_crypt_ctx() necessary? We have already
done the same check in fscrypt_get_encryption_info().

If it's not necessary, we could relax this and skip the error handling here.

> +						hw_crypt_dun(inode, page));
> +		}
> +	}
> +	if (err) {
> +		bio->bi_status = BLK_STS_IOERR;
> +		bio_endio(bio);
> +	} else {
> +		submit_bio(bio);
> +	}
>  }
>  
>  static void __submit_merged_bio(struct f2fs_bio_info *io)
> @@ -484,6 +507,9 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
>  	enum page_type btype = PAGE_TYPE_OF_BIO(fio->type);
>  	struct f2fs_bio_info *io = sbi->write_io[btype] + fio->temp;
>  	struct page *bio_page;
> +	struct inode *fio_inode, *bio_inode;
> +	struct page *first_page;
> +	u64 next_dun = 0;
>  
>  	f2fs_bug_on(sbi, is_read_io(fio->op));
>  
> @@ -512,10 +538,29 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
>  
>  	inc_page_count(sbi, WB_DATA_TYPE(bio_page));
>  
> +	fio_inode = fio->page->mapping->host;
> +	bio_inode = NULL;
> +	first_page = NULL;
> +	next_dun = 0;
> +	if (io->bio) {
> +		first_page = bio_page(io->bio);
> +		if (first_page->mapping) {
> +			bio_inode = first_page->mapping->host;
> +			if (fscrypt_inode_is_hw_encrypted(bio_inode)) {
> +				next_dun =
> +					hw_crypt_dun(bio_inode, first_page) +
> +				    (io->bio->bi_iter.bi_size >> PAGE_SHIFT);
> +			}
> +		}
> +	}
>  	if (io->bio && (io->last_block_in_bio != fio->new_blkaddr - 1 ||
>  	    (io->fio.op != fio->op || io->fio.op_flags != fio->op_flags) ||
> -			!__same_bdev(sbi, fio->new_blkaddr, io->bio)))
> +			!__same_bdev(sbi, fio->new_blkaddr, io->bio) ||
> +			!fscrypt_inode_crypt_mergeable(bio_inode, fio_inode) ||
> +			(fscrypt_inode_is_hw_encrypted(bio_inode) &&
> +			 next_dun != hw_crypt_dun(fio_inode, fio->page))))

The merge condition is becoming complicated and hard to read. I'd suggest
introducing a single static inline function that wraps all of the inline
encryption conditions, which would keep the code cleaner.
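
For example, something along these lines (the helper name is only a suggestion):

static inline bool f2fs_crypt_allow_bio_merge(struct inode *bio_inode,
					      struct inode *fio_inode,
					      u64 next_dun, struct page *page)
{
	if (!fscrypt_inode_crypt_mergeable(bio_inode, fio_inode))
		return false;
	if (fscrypt_inode_is_hw_encrypted(bio_inode) &&
	    next_dun != hw_crypt_dun(fio_inode, page))
		return false;
	return true;
}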

>  		__submit_merged_bio(io);
> +
>  alloc_new:
>  	if (io->bio == NULL) {
>  		if ((fio->type == DATA || fio->type == NODE) &&
> @@ -570,7 +615,7 @@ static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr,
>  	bio->bi_end_io = f2fs_read_end_io;
>  	bio_set_op_attrs(bio, REQ_OP_READ, op_flag);
>  
> -	if (f2fs_encrypted_file(inode))
> +	if (f2fs_encrypted_file(inode) && !fscrypt_inode_is_hw_encrypted(inode))
>  		post_read_steps |= 1 << STEP_DECRYPT;
>  	if (post_read_steps) {
>  		ctx = mempool_alloc(bio_post_read_ctx_pool, GFP_NOFS);
> @@ -1525,6 +1570,7 @@ static int f2fs_mpage_readpages(struct address_space *mapping,
>  	sector_t last_block_in_file;
>  	sector_t block_nr;
>  	struct f2fs_map_blocks map;
> +	u64 next_dun = 0;
>  
>  	map.m_pblk = 0;
>  	map.m_lblk = 0;
> @@ -1606,6 +1652,13 @@ static int f2fs_mpage_readpages(struct address_space *mapping,
>  			__submit_bio(F2FS_I_SB(inode), bio, DATA);
>  			bio = NULL;
>  		}
> +
> +		if (bio && fscrypt_inode_is_hw_encrypted(inode) &&
> +		    next_dun != hw_crypt_dun(inode, page)) {
> +			__submit_bio(F2FS_I_SB(inode), bio, DATA);
> +			bio = NULL;
> +		}
> +
>  		if (bio == NULL) {
>  			bio = f2fs_grab_read_bio(inode, block_nr, nr_pages,
>  					is_readahead ? REQ_RAHEAD : 0);
> @@ -1624,6 +1677,9 @@ static int f2fs_mpage_readpages(struct address_space *mapping,
>  		if (bio_add_page(bio, page, blocksize, 0) < blocksize)
>  			goto submit_and_realloc;
>  
> +		if (fscrypt_inode_is_hw_encrypted(inode))
> +			next_dun = hw_crypt_dun(inode, page) + 1;
> +
>  		inc_page_count(F2FS_I_SB(inode), F2FS_RD_DATA);
>  		ClearPageError(page);
>  		last_block_in_bio = block_nr;
> @@ -2591,12 +2647,19 @@ static void f2fs_dio_submit_bio(struct bio *bio, struct inode *inode,
>  {
>  	struct f2fs_private_dio *dio;
>  	bool write = (bio_op(bio) == REQ_OP_WRITE);
> +	u64 data_unit_num = (((u64)inode->i_ino) << 32) |
> +			    ((file_offset >> PAGE_SHIFT) & 0xFFFFFFFF);

Can we allow hw_crypt_dun() to accept @offset as a parameter instead of @page?
Then we could call hw_crypt_dun() here instead of open-coding the calculation.
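
A minimal sketch of that change (keeping the existing DUN layout):

static inline u64 hw_crypt_dun(struct inode *inode, pgoff_t offset)
{
	return (((u64)inode->i_ino) << 32) | (offset & 0xFFFFFFFF);
}

so that callers can pass either page->index or (file_offset >> PAGE_SHIFT).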

Thanks,

>  
>  	dio = f2fs_kzalloc(F2FS_I_SB(inode),
>  			sizeof(struct f2fs_private_dio), GFP_NOFS);
>  	if (!dio)
>  		goto out;
>  
> +	if (fscrypt_set_bio_crypt_ctx(inode, bio, data_unit_num) != 0) {
> +		kvfree(dio);
> +		goto out;
> +	}
> +
>  	dio->inode = inode;
>  	dio->orig_end_io = bio->bi_end_io;
>  	dio->orig_private = bio->bi_private;
> diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
> index f2aaa2cc6b3e..e98c85d42e8d 100644
> --- a/fs/f2fs/super.c
> +++ b/fs/f2fs/super.c
> @@ -2225,6 +2225,7 @@ static const struct fscrypt_operations f2fs_cryptops = {
>  	.dummy_context	= f2fs_dummy_context,
>  	.empty_dir	= f2fs_empty_dir,
>  	.max_namelen	= F2FS_NAME_LEN,
> +	.hw_crypt_supp	= true,
>  };
>  #endif
>  
> 

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2019-05-08  3:02 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-05-06 22:35 [RFC PATCH 0/4] Inline Encryption Support Satya Tangirala
2019-05-06 22:35 ` [RFC PATCH 1/4] block: Block Layer changes for " Satya Tangirala
2019-05-06 23:54   ` Randy Dunlap
2019-05-07  0:37   ` Bart Van Assche
2019-05-08  2:12   ` Randy Dunlap
2019-05-06 22:35 ` [RFC PATCH 2/4] scsi: ufs: UFS driver v2.1 crypto support Satya Tangirala
2019-05-06 23:51   ` Randy Dunlap
2019-05-07  0:39   ` Bart Van Assche
2019-05-07  9:23   ` Avri Altman
2019-05-06 22:35 ` [RFC PATCH 3/4] fscrypt: wire up fscrypt to use blk-crypto Satya Tangirala
2019-05-07  1:23   ` Bart Van Assche
2019-05-06 22:35 ` [RFC PATCH 4/4] f2fs: Wire up f2fs to use inline encryption via fscrypt Satya Tangirala
2019-05-07  1:25   ` Bart Van Assche
2019-05-08  3:02   ` Chao Yu
2019-05-07  0:26 ` [RFC PATCH 0/4] Inline Encryption Support Bart Van Assche
2019-05-07  9:35 ` Chao Yu

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).