From: Christoph Hellwig <hch@infradead.org>
To: Satya Tangirala <satyat@google.com>
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-ext4@vger.kernel.org,
	Barani Muthukumaran <bmuthuku@qti.qualcomm.com>,
	Kuohong Wang <kuohong.wang@mediatek.com>,
	Kim Boojin <boojin.kim@samsung.com>
Subject: Re: [PATCH v7 2/9] block: Inline encryption support for blk-mq
Date: Fri, 21 Feb 2020 09:22:05 -0800
Message-ID: <20200221172205.GB438@infradead.org>
In-Reply-To: <20200221115050.238976-3-satyat@google.com>

> index bf62c25cde8f..bce563031e7c 100644
> --- a/block/bio-integrity.c
> +++ b/block/bio-integrity.c
> @@ -42,6 +42,11 @@ struct bio_integrity_payload *bio_integrity_alloc(struct bio *bio,
>  	struct bio_set *bs = bio->bi_pool;
>  	unsigned inline_vecs;
>  
> +	if (bio_has_crypt_ctx(bio)) {
> +		pr_warn("blk-integrity can't be used together with blk-crypto en/decryption.");
> +		return ERR_PTR(-EOPNOTSUPP);
> +	}

What is the rationale for this limitation?  Restricting unrelated
features from being used together is a pretty bad design pattern and
should be avoided where possible.  If it can't be, it needs to be
documented very clearly.

> +#ifdef CONFIG_BLK_INLINE_ENCRYPTION
> +	rq->rq_crypt_ctx.keyslot = -EINVAL;
> +#endif

All the other core block calls to the crypto code are in helpers that
are stubbed out.  It might make sense to follow that style here.
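
For example, something along these lines (untested sketch; the helper
name here is made up):

	#ifdef CONFIG_BLK_INLINE_ENCRYPTION
	static inline void blk_crypto_rq_init(struct request *rq)
	{
		rq->rq_crypt_ctx.keyslot = -EINVAL;
	}
	#else
	static inline void blk_crypto_rq_init(struct request *rq)
	{
	}
	#endif

which keeps the ifdef out of the core block code entirely.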

>  
>  free_and_out:
> @@ -1813,5 +1826,8 @@ int __init blk_dev_init(void)
>  	blk_debugfs_root = debugfs_create_dir("block", NULL);
>  #endif
>  
> +	if (bio_crypt_ctx_init() < 0)
> +		panic("Failed to allocate mem for bio crypt ctxs\n");

Maybe move that panic into bio_crypt_ctx_init itself?
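
I.e. something like this (untested, guessing at the mempool setup used
elsewhere in this patch):

	void __init bio_crypt_ctx_init(void)
	{
		bio_crypt_ctx_pool = mempool_create_kmalloc_pool(
				num_prealloc_crypt_ctxs,
				sizeof(struct bio_crypt_ctx));
		if (!bio_crypt_ctx_pool)
			panic("Failed to allocate mem for bio crypt ctxs\n");
	}

at which point the function can return void and the call in
blk_dev_init becomes a plain function call.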

> +static int num_prealloc_crypt_ctxs = 128;
> +
> +module_param(num_prealloc_crypt_ctxs, int, 0444);
> +MODULE_PARM_DESC(num_prealloc_crypt_ctxs,
> +		"Number of bio crypto contexts to preallocate");

Please write a comment explaining why this is a tunable, how the
default was chosen, and why someone might want to change it.

> +struct bio_crypt_ctx *bio_crypt_alloc_ctx(gfp_t gfp_mask)
> +{
> +	return mempool_alloc(bio_crypt_ctx_pool, gfp_mask);
> +}

I'd rather move bio_crypt_set_ctx out of line, at which point we don't
really need this helper.
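
Roughly (untested sketch, guessing at the body of the current inline
bio_crypt_set_ctx):

	void bio_crypt_set_ctx(struct bio *bio,
			       const struct blk_crypto_key *key,
			       const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
			       gfp_t gfp_mask)
	{
		/*
		 * mempool_alloc() can't fail here as long as gfp_mask
		 * includes __GFP_DIRECT_RECLAIM.
		 */
		struct bio_crypt_ctx *bc =
			mempool_alloc(bio_crypt_ctx_pool, gfp_mask);

		bc->bc_key = key;
		memcpy(bc->bc_dun, dun, sizeof(bc->bc_dun));
		bio->bi_crypt_context = bc;
	}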

> +/* Return: 0 on success, negative on error */
> +int rq_crypt_ctx_acquire_keyslot(struct bio_crypt_ctx *bc,
> +				  struct keyslot_manager *ksm,
> +				  struct rq_crypt_ctx *rc)
> +{
> +	rc->keyslot = blk_ksm_get_slot_for_key(ksm, bc->bc_key);
> +	return rc->keyslot >= 0 ? 0 : rc->keyslot;
> +}
> +
> +void rq_crypt_ctx_release_keyslot(struct keyslot_manager *ksm,
> +				  struct rq_crypt_ctx *rc)
> +{
> +	if (rc->keyslot >= 0)
> +		blk_ksm_put_slot(ksm, rc->keyslot);
> +	rc->keyslot = -EINVAL;
> +}

Is there really much of a need for these helpers?  I think the
callers would generally be simpler without them.  In particular,
without the helpers the fallback code can avoid having to declare
rq_crypt_ctx variables on the stack.
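
E.g. the acquire side open-coded in the caller (sketch, using the
field names from this patch):

	rq->rq_crypt_ctx.bc = bc;
	rq->rq_crypt_ctx.keyslot =
		blk_ksm_get_slot_for_key(q->ksm, bc->bc_key);
	if (rq->rq_crypt_ctx.keyslot < 0)
		return rq->rq_crypt_ctx.keyslot;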

> +int blk_crypto_init_request(struct request *rq, struct request_queue *q,
> +			    struct bio *bio)

We can always derive the request_queue from rq->q, so there is no need
to pass it explicitly (a lot of legacy block code still does, but that
is slowly getting cleaned up).
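
I.e. just:

	int blk_crypto_init_request(struct request *rq, struct bio *bio)

and use rq->q inside the function.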

> +{
> +	struct rq_crypt_ctx *rc = &rq->rq_crypt_ctx;
> +	struct bio_crypt_ctx *bc;
> +	int err;
> +
> +	rc->bc = NULL;
> +	rc->keyslot = -EINVAL;
> +
> +	if (!bio)
> +		return 0;
> +
> +	bc = bio->bi_crypt_context;
> +	if (!bc)
> +		return 0;

Shouldn't the check whether the bio actually requires crypto handling
be done by the caller, based on a new helper along the lines of:

static inline bool bio_is_encrypted(struct bio *bio)
{
	return bio && bio->bi_crypt_context;
}

and maybe some inline helpers to reduce the clutter?

That way a kernel built with blk-crypto support, but doing non-crypto
I/O, saves all the function calls into blk-crypto.
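
E.g. a caller-side sketch:

	/* skip all blk-crypto work for plain, unencrypted I/O: */
	if (bio_is_encrypted(bio)) {
		err = blk_crypto_init_request(rq, bio);
		if (err)
			goto fail;
	}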

> +	err = bio_crypt_check_alignment(bio);
> +	if (err)
> +		goto fail;

This seems pretty late to check the alignment; it would be more
useful in bio_add_page.  Then again, Jens didn't like alignment checks
in the block layer at all, even for the normal non-crypto alignment,
so I don't see why we'd have them here but not for the general case
(I'd actually like to see them for the general case, btw).

> +int blk_crypto_evict_key(struct request_queue *q,
> +			 const struct blk_crypto_key *key)
> +{
> +	if (q->ksm && blk_ksm_crypto_mode_supported(q->ksm, key))
> +		return blk_ksm_evict_key(q->ksm, key);
> +
> +	return 0;
> +}

Is there any point in this wrapper that just has a single caller?
Also, why doesn't blk_ksm_evict_key have the
blk_ksm_crypto_mode_supported sanity check itself?
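
I.e. something like (sketch, including a NULL check so the caller can
pass q->ksm unconditionally):

	int blk_ksm_evict_key(struct keyslot_manager *ksm,
			      const struct blk_crypto_key *key)
	{
		if (!ksm || !blk_ksm_crypto_mode_supported(ksm, key))
			return 0;
		/* ... existing eviction logic ... */
	}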

> @@ -1998,6 +2007,13 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
>  
>  	cookie = request_to_qc_t(data.hctx, rq);
>  
> +	if (blk_crypto_init_request(rq, q, bio)) {
> +		bio->bi_status = BLK_STS_RESOURCE;
> +		bio_endio(bio);
> +		blk_mq_end_request(rq, BLK_STS_RESOURCE);
> +		return BLK_QC_T_NONE;
> +	}

This looks fundamentally wrong given that layers above blk-mq
can't handle BLK_STS_RESOURCE.  It will just show up as an error
in the caller instead of being requeued.

That being said, I think the only error return from
blk_crypto_init_request is an actual hardware error.  So failing here
might be OK, but it should be BLK_STS_IOERR, or even better an error
directly propagated from the driver.
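
E.g. (sketch, assuming blk_crypto_init_request is changed to return a
blk_status_t straight from the driver):

	blk_status_t status = blk_crypto_init_request(rq, bio);

	if (status != BLK_STS_OK) {
		bio->bi_status = status;
		bio_endio(bio);
		blk_mq_end_request(rq, status);
		return BLK_QC_T_NONE;
	}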

> +int bio_crypt_ctx_init(void);
> +
> +struct bio_crypt_ctx *bio_crypt_alloc_ctx(gfp_t gfp_mask);
> +
> +void bio_crypt_free_ctx(struct bio *bio);

These can go into block layer internal headers.

> +static inline bool bio_crypt_dun_is_contiguous(const struct bio_crypt_ctx *bc,
> +					       unsigned int bytes,
> +					u64 next_dun[BLK_CRYPTO_DUN_ARRAY_SIZE])
> +{
> +	int i = 0;
> +	unsigned int inc = bytes >> bc->bc_key->data_unit_size_bits;
> +
> +	while (i < BLK_CRYPTO_DUN_ARRAY_SIZE) {
> +		if (bc->bc_dun[i] + inc != next_dun[i])
> +			return false;
> +		inc = ((bc->bc_dun[i] + inc)  < inc);
> +		i++;
> +	}
> +
> +	return true;
> +}
> +
> +static inline void bio_crypt_dun_increment(u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
> +					   unsigned int inc)
> +{
> +	int i = 0;
> +
> +	while (inc && i < BLK_CRYPTO_DUN_ARRAY_SIZE) {
> +		dun[i] += inc;
> +		inc = (dun[i] < inc);
> +		i++;
> +	}
> +}

Should these really be inline?

> +bool bio_crypt_rq_ctx_compatible(struct request *rq, struct bio *bio);
> +
> +bool bio_crypt_ctx_front_mergeable(struct request *req, struct bio *bio);
> +
> +bool bio_crypt_ctx_back_mergeable(struct request *req, struct bio *bio);
> +
> +bool bio_crypt_ctx_merge_rq(struct request *req, struct request *next);
> +
> +void blk_crypto_bio_back_merge(struct request *req, struct bio *bio);
> +
> +void blk_crypto_bio_front_merge(struct request *req, struct bio *bio);
> +
> +void blk_crypto_free_request(struct request *rq);
> +
> +int blk_crypto_init_request(struct request *rq, struct request_queue *q,
> +			    struct bio *bio);
> +
> +int blk_crypto_bio_prep(struct bio **bio_ptr);
> +
> +void blk_crypto_rq_bio_prep(struct request *rq, struct bio *bio);
> +
> +void blk_crypto_rq_prep_clone(struct request *dst, struct request *src);
> +
> +int blk_crypto_insert_cloned_request(struct request_queue *q,
> +				     struct request *rq);
> +
> +int blk_crypto_init_key(struct blk_crypto_key *blk_key, const u8 *raw_key,
> +			enum blk_crypto_mode_num crypto_mode,
> +			unsigned int blk_crypto_dun_bytes,
> +			unsigned int data_unit_size);
> +
> +int blk_crypto_evict_key(struct request_queue *q,
> +			 const struct blk_crypto_key *key);

Most of this should be block layer private.

> +struct rq_crypt_ctx {
> +	struct bio_crypt_ctx *bc;
> +	int keyslot;
> +};
> +
>  /*
>   * Try to put the fields that are referenced together in the same cacheline.
>   *
> @@ -224,6 +230,10 @@ struct request {
>  	unsigned short nr_integrity_segments;
>  #endif
>  
> +#ifdef CONFIG_BLK_INLINE_ENCRYPTION
> +	struct rq_crypt_ctx rq_crypt_ctx;
> +#endif

I'd be tempted to just add

	struct bio_crypt_ctx *crypt_ctx;
	int crypt_keyslot;

directly to struct request.
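
I.e. (sketch):

	#ifdef CONFIG_BLK_INLINE_ENCRYPTION
		struct bio_crypt_ctx *crypt_ctx;
		int crypt_keyslot;
	#endif

which gets rid of the extra struct and one level of naming
indirection in all the accessors.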
