From: Eric Biggers <ebiggers@kernel.org>
To: Satya Tangirala <satyat@google.com>
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	Jens Axboe <axboe@kernel.dk>
Subject: Re: [PATCH v3 05/10] block: keyslot-manager: introduce blk_ksm_restrict_dus_to_queue_limits()
Date: Wed, 16 Jun 2021 18:58:35 -0700	[thread overview]
Message-ID: <YMqsS0ehsVlyySOQ@sol.localdomain> (raw)
In-Reply-To: <20210604195900.2096121-6-satyat@google.com>

On Fri, Jun 04, 2021 at 07:58:55PM +0000, Satya Tangirala wrote:
> Not all crypto data unit sizes might be supported by the block layer due to
> certain queue limits. This new function checks the queue limits and
> appropriately modifies the keyslot manager to reflect only the supported
> crypto data unit sizes. blk_ksm_register() runs any given ksm through this
> function before actually registering the ksm with a queue.
> 
> Signed-off-by: Satya Tangirala <satyat@google.com>
> ---
>  block/keyslot-manager.c | 91 +++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 91 insertions(+)
> 
> diff --git a/block/keyslot-manager.c b/block/keyslot-manager.c
> index 88211581141a..6a355867be59 100644
> --- a/block/keyslot-manager.c
> +++ b/block/keyslot-manager.c
> @@ -458,12 +458,103 @@ bool blk_ksm_is_empty(struct blk_keyslot_manager *ksm)
>  }
>  EXPORT_SYMBOL_GPL(blk_ksm_is_empty);
>  
> +/*
> + * Restrict the supported data unit sizes of the ksm based on the request queue
> + * limits
> + */
> +static void
> +blk_ksm_restrict_dus_to_queue_limits(struct blk_keyslot_manager *ksm,
> +				     struct request_queue *q)
> +{
> +	/* The largest possible data unit size we support is PAGE_SIZE. */
> +	unsigned long largest_dus = PAGE_SIZE;
> +	unsigned int dus_allowed_mask;
> +	int i;
> +	bool dus_was_restricted = false;
> +	struct queue_limits *limits = &q->limits;
> +
> +	/*
> +	 * If the queue doesn't support SG gaps, a bio might get split in the
> +	 * middle of a data unit. So require SG gap support for inline
> +	 * encryption for any data unit size larger than a single sector.
> +	 *
> +	 * A crypto data unit might straddle an SG gap, and only a single sector
> +	 * of that data unit might be before the gap - the block layer will need
> +	 * to split that bio at the gap, which will result in an incomplete
> +	 * crypto data unit unless the crypto data unit size is a single sector.
> +	 */
> +	if (limits->virt_boundary_mask)
> +		largest_dus = SECTOR_SIZE;

This seems unnecessarily pessimistic, as the length of each bio_vec will still
be aligned to logical_block_size.  virt_boundary_mask only causes splits between
bio_vec's, not within a bio_vec.

I think we want something like:

	/*
	 * If the queue doesn't support SG gaps, then a bio may have to be split
	 * between any two bio_vecs.  Since the size of each bio_vec is only
	 * guaranteed to be a multiple of logical_block_size, logical_block_size
	 * is also the maximum crypto data unit size that can be supported in
	 * this case, as bios must not be split in the middle of a data unit.
	 */
	if (limits->virt_boundary_mask)
		largest_dus = queue_logical_block_size(q);

> +	/*
> +	 * If the queue has chunk_sectors, the bio might be split within a data
> +	 * unit if the data unit size is larger than a single sector. So only
> +	 * support a single sector data unit size in this case.
> +	 *
> +	 * Just like the SG gap case above, a crypto data unit might straddle a
> +	 * chunk sector boundary, and in the worst case, only a single sector of
> +	 * the data unit might be before/after the boundary.
> +	 */
> +	if (limits->chunk_sectors)
> +		largest_dus = SECTOR_SIZE;

I think the same applies here.  As I understand it, chunk_sectors has to be a
multiple of logical_block_size.  Here's what I'm thinking:

	/*
	 * Similarly, if chunk_sectors is set and a bio is submitted that
	 * crosses a chunk boundary, then that bio may have to be split at a
	 * boundary that is only logical_block_size aligned.  So that limits the
	 * crypto data unit size to logical_block_size as well.
	 */
	if (limits->chunk_sectors)
		largest_dus = queue_logical_block_size(q);

Although, that also raises the question of whether we should require that
'bi_sector' be crypto_data_size aligned for inline encryption to be used.  Then
I think we could remove the above limitation.

I suppose the main concern with that is that if someone was to e.g. create a
filesystem on a partition which starts at a location that isn't 4K aligned, they
wouldn't be able to use inline encryption on that filesystem...  I'm not sure
how much of a problem that would be in practice.

> +
> +	/*
> +	 * Any bio sent to the queue must be allowed to contain at least a
> +	 * data_unit_size worth of data. Since each segment in a bio contains
> +	 * at least a SECTOR_SIZE worth of data, it's sufficient that
> +	 * queue_max_segments(q) * SECTOR_SIZE >= data_unit_size. So disable
> +	 * all data_unit_sizes not satisfiable.
> +	 *
> +	 * We assume the worst case of only SECTOR_SIZE bytes of data in each
> +	 * segment since users of the block layer are free to construct bios
> +	 * with such segments.
> +	 */
> +	largest_dus = min(largest_dus,
> +			1UL << (fls(limits->max_segments) - 1 + SECTOR_SHIFT));

And similarly here too.  As far as I can tell, the minimum size of a segment is
logical_block_size, which is not necessarily SECTOR_SIZE.

We can also make use of rounddown_pow_of_two() here.

Here is what I'm thinking:

	/*
	 * Each bio_vec can be as small as logical_block_size.  Therefore the
	 * crypto data unit size can't be greater than 'max_segments *
	 * logical_block_size', as otherwise in the worst case there would be no
	 * way to process the first data unit without exceeding max_segments.
	 */
	largest_dus = min(largest_dus,
			  rounddown_pow_of_two(limits->max_segments) *
			  queue_logical_block_size(q));

> +	/* Clear all unsupported data unit sizes. */
> +	dus_allowed_mask = (largest_dus << 1) - 1;
> +	for (i = 0; i < ARRAY_SIZE(ksm->crypto_modes_supported); i++) {
> +		if (ksm->crypto_modes_supported[i] & (~dus_allowed_mask))
> +			dus_was_restricted = true;
> +		ksm->crypto_modes_supported[i] &= dus_allowed_mask;
> +	}
> +
> +	if (dus_was_restricted) {
> +		pr_warn("Disallowed use of encryption data unit sizes above %lu bytes with inline encryption hardware because of device request queue limits on device %s.\n",
> +			largest_dus, q->backing_dev_info->dev_name);
> +	}

The disk name should go at the beginning of the log message.
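E.g. something like this (keeping the same dev_name source as in the patch;
the exact wording is just illustrative):

	if (dus_was_restricted) {
		pr_warn("%s: only data unit sizes up to %lu bytes are usable with inline encryption due to request queue limits\n",
			q->backing_dev_info->dev_name, largest_dus);
	}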

> +/**
> + * blk_ksm_register() - Sets the queue's keyslot manager to the provided ksm, if
> + *			compatible
> + * @ksm: The ksm to register
> + * @q: The request_queue to register the ksm to
> + *
> + * Checks if the keyslot manager provided is compatible with the request queue
> + * (i.e. the queue shouldn't also support integrity). After that, the crypto
> + * capabilities of the given keyslot manager are restricted to what the queue
> + * can support based on its limits. Note that if @ksm doesn't support any
> + * crypto capabilities after the capability restriction, the queue's ksm is
> + * set to NULL, instead of being set to a pointer to the now "empty" @ksm.
> + *
> + * Return: true if @q's ksm is set to the provided @ksm, false otherwise
> + *	   (including the case when @ksm becomes "empty" due to crypto
> + *	    capability restrictions)
> + */
>  bool blk_ksm_register(struct blk_keyslot_manager *ksm, struct request_queue *q)
>  {
>  	if (blk_integrity_queue_supports_integrity(q)) {
>  		pr_warn("Integrity and hardware inline encryption are not supported together. Disabling hardware inline encryption.\n");
>  		return false;
>  	}
> +
> +	blk_ksm_restrict_dus_to_queue_limits(ksm, q);
> +
> +	if (blk_ksm_is_empty(ksm))
> +		return false;
> +
>  	q->ksm = ksm;
>  	return true;
>  }

The behavior of this function is a bit odd.  If no crypto capabilities can be
registered, it returns false, but it may or may not modify @ksm.  It should
probably leave @ksm unmodified in that case (which we could do by turning
blk_ksm_restrict_dus_to_queue_limits() into something that just calculates the
largest supported data unit size, and making blk_ksm_register() do the rest).
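I.e., roughly the following, where blk_ksm_max_dus_for_queue() is a
hypothetical name for the reworked helper that just computes the largest
supported data unit size without touching @ksm:

	bool blk_ksm_register(struct blk_keyslot_manager *ksm, struct request_queue *q)
	{
		unsigned int dus_allowed_mask;
		bool any_supported = false;
		int i;

		if (blk_integrity_queue_supports_integrity(q)) {
			pr_warn("Integrity and hardware inline encryption are not supported together. Disabling hardware inline encryption.\n");
			return false;
		}

		dus_allowed_mask = (blk_ksm_max_dus_for_queue(q) << 1) - 1;

		/* Check first, so that @ksm is left unmodified on failure. */
		for (i = 0; i < ARRAY_SIZE(ksm->crypto_modes_supported); i++) {
			if (ksm->crypto_modes_supported[i] & dus_allowed_mask)
				any_supported = true;
		}
		if (!any_supported)
			return false;

		for (i = 0; i < ARRAY_SIZE(ksm->crypto_modes_supported); i++)
			ksm->crypto_modes_supported[i] &= dus_allowed_mask;

		q->ksm = ksm;
		return true;
	}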

- Eric
