* [PATCHv4 1/8] block: support pi with extended metadata
2022-03-03 20:13 [PATCHv4 0/8] 64-bit data integrity field support Keith Busch
@ 2022-03-03 20:13 ` Keith Busch
2022-03-03 20:13 ` [PATCHv4 2/8] nvme: allow integrity on extended metadata formats Keith Busch
` (8 subsequent siblings)
9 siblings, 0 replies; 32+ messages in thread
From: Keith Busch @ 2022-03-03 20:13 UTC (permalink / raw)
To: linux-nvme, linux-block, linux-crypto, linux-kernel
Cc: axboe, hch, martin.petersen, Keith Busch, Hannes Reinecke
The NVMe spec allows protection information formats with metadata
extending beyond the PI field. Use the actual size of the metadata tuple
when incrementing the protection buffer.
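The effect of the change can be illustrated with a userspace sketch (the struct and names below are illustrative stand-ins, not the kernel's): with an extended metadata format the per-interval tuple can be larger than the 8-byte PI structure, so the protection-buffer cursor must advance by the full tuple size.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical mirror of the iterator fields this patch touches. */
struct pi_iter {
	const uint8_t *prot_buf;	/* protection buffer cursor */
	size_t tuple_size;		/* full metadata tuple, may exceed 8 */
	size_t intervals;		/* number of data intervals */
};

/* Offset of the PI tuple for interval n. The old code effectively used
 * n * sizeof(struct t10_pi_tuple) == n * 8, which lands in the wrong
 * place whenever tuple_size > 8. */
static size_t pi_offset(const struct pi_iter *it, size_t n)
{
	return n * it->tuple_size;
}
```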
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
block/bio-integrity.c | 1 +
block/t10-pi.c | 4 ++--
include/linux/blk-integrity.h | 1 +
3 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/block/bio-integrity.c b/block/bio-integrity.c
index 6996e7bd66e9..32929c89ba8a 100644
--- a/block/bio-integrity.c
+++ b/block/bio-integrity.c
@@ -165,6 +165,7 @@ static blk_status_t bio_integrity_process(struct bio *bio,
iter.disk_name = bio->bi_bdev->bd_disk->disk_name;
iter.interval = 1 << bi->interval_exp;
+ iter.tuple_size = bi->tuple_size;
iter.seed = proc_iter->bi_sector;
iter.prot_buf = bvec_virt(bip->bip_vec);
diff --git a/block/t10-pi.c b/block/t10-pi.c
index 25a52a2a09a8..758a76518854 100644
--- a/block/t10-pi.c
+++ b/block/t10-pi.c
@@ -44,7 +44,7 @@ static blk_status_t t10_pi_generate(struct blk_integrity_iter *iter,
pi->ref_tag = 0;
iter->data_buf += iter->interval;
- iter->prot_buf += sizeof(struct t10_pi_tuple);
+ iter->prot_buf += iter->tuple_size;
iter->seed++;
}
@@ -93,7 +93,7 @@ static blk_status_t t10_pi_verify(struct blk_integrity_iter *iter,
next:
iter->data_buf += iter->interval;
- iter->prot_buf += sizeof(struct t10_pi_tuple);
+ iter->prot_buf += iter->tuple_size;
iter->seed++;
}
diff --git a/include/linux/blk-integrity.h b/include/linux/blk-integrity.h
index 8a038ea0717e..378b2459efe2 100644
--- a/include/linux/blk-integrity.h
+++ b/include/linux/blk-integrity.h
@@ -19,6 +19,7 @@ struct blk_integrity_iter {
sector_t seed;
unsigned int data_size;
unsigned short interval;
+ unsigned char tuple_size;
const char *disk_name;
};
--
2.25.4
* [PATCHv4 2/8] nvme: allow integrity on extended metadata formats
From: Keith Busch @ 2022-03-03 20:13 UTC (permalink / raw)
To: linux-nvme, linux-block, linux-crypto, linux-kernel
Cc: axboe, hch, martin.petersen, Keith Busch, Hannes Reinecke
The block integrity subsystem knows how to construct protection
information buffers with metadata beyond the protection information
fields. Remove the driver restriction.
Note, this can only work if the PI field appears first in the metadata,
as the integrity subsystem doesn't calculate guard tags on preceding
metadata.
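The patched condition can be sketched in isolation. The DPS bit values below (bit 3 = PI transferred first, bits 2:0 = PI type) follow my reading of the NVMe spec and are assumptions here, not copied from the driver:

```c
#include <assert.h>
#include <stdint.h>

#define DPS_PI_FIRST 0x8	/* assumed: PI is the first 8 bytes of metadata */
#define DPS_PI_MASK  0x7	/* assumed: protection information type field */

/* Mirror of the patched nvme_configure_metadata() decision: PI is usable
 * either when metadata is exactly the 8-byte t10 tuple, or when the
 * format flags PI as leading the (larger) metadata. */
static int pi_type(uint8_t dps, uint16_t ms)
{
	if ((dps & DPS_PI_FIRST) || ms == 8 /* sizeof(struct t10_pi_tuple) */)
		return dps & DPS_PI_MASK;
	return 0;
}
```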
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
drivers/nvme/host/core.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index ace8c61850b1..ace5f30acaf6 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1754,12 +1754,9 @@ static void nvme_configure_metadata(struct nvme_ns *ns, struct nvme_id_ns *id)
{
struct nvme_ctrl *ctrl = ns->ctrl;
- /*
- * The PI implementation requires the metadata size to be equal to the
- * t10 pi tuple size.
- */
ns->ms = le16_to_cpu(id->lbaf[id->flbas & NVME_NS_FLBAS_LBA_MASK].ms);
- if (ns->ms == sizeof(struct t10_pi_tuple))
+ if (id->dps & NVME_NS_DPS_PI_FIRST ||
+ ns->ms == sizeof(struct t10_pi_tuple))
ns->pi_type = id->dps & NVME_NS_DPS_PI_MASK;
else
ns->pi_type = 0;
--
2.25.4
* [PATCHv4 3/8] asm-generic: introduce be48 unaligned accessors
From: Keith Busch @ 2022-03-03 20:13 UTC (permalink / raw)
To: linux-nvme, linux-block, linux-crypto, linux-kernel
Cc: axboe, hch, martin.petersen, Keith Busch, Hannes Reinecke,
Chaitanya Kulkarni, Arnd Bergmann
The NVMe protocol extended the data integrity fields with unaligned
48-bit reference tags. Provide some helper accessors in
preparation for these.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
include/asm-generic/unaligned.h | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
diff --git a/include/asm-generic/unaligned.h b/include/asm-generic/unaligned.h
index 1c4242416c9f..8fc637379899 100644
--- a/include/asm-generic/unaligned.h
+++ b/include/asm-generic/unaligned.h
@@ -126,4 +126,30 @@ static inline void put_unaligned_le24(const u32 val, void *p)
__put_unaligned_le24(val, p);
}
+static inline void __put_unaligned_be48(const u64 val, __u8 *p)
+{
+ *p++ = val >> 40;
+ *p++ = val >> 32;
+ *p++ = val >> 24;
+ *p++ = val >> 16;
+ *p++ = val >> 8;
+ *p++ = val;
+}
+
+static inline void put_unaligned_be48(const u64 val, void *p)
+{
+ __put_unaligned_be48(val, p);
+}
+
+static inline u64 __get_unaligned_be48(const u8 *p)
+{
+ return (u64)p[0] << 40 | (u64)p[1] << 32 | (u64)p[2] << 24 |
+ p[3] << 16 | p[4] << 8 | p[5];
+}
+
+static inline u64 get_unaligned_be48(const void *p)
+{
+ return __get_unaligned_be48(p);
+}
+
#endif /* __ASM_GENERIC_UNALIGNED_H */
--
2.25.4
* Re: [PATCHv4 3/8] asm-generic: introduce be48 unaligned accessors
From: Bart Van Assche @ 2022-03-03 20:47 UTC (permalink / raw)
To: Keith Busch, linux-nvme, linux-block, linux-crypto, linux-kernel
Cc: axboe, hch, martin.petersen, Hannes Reinecke, Chaitanya Kulkarni,
Arnd Bergmann
On 3/3/22 12:13, Keith Busch wrote:
> The NVMe protocol extended the data integrity fields with unaligned
> 48-bit reference tags. Provide some helper accessors in
> preparation for these.
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
* RE: [PATCHv4 3/8] asm-generic: introduce be48 unaligned accessors
From: David Laight @ 2022-03-04 1:31 UTC (permalink / raw)
To: 'Keith Busch',
linux-nvme, linux-block, linux-crypto, linux-kernel
Cc: axboe, hch, martin.petersen, Hannes Reinecke, Chaitanya Kulkarni,
Arnd Bergmann
From: Keith Busch
> Sent: 03 March 2022 20:13
>
> The NVMe protocol extended the data integrity fields with unaligned
> 48-bit reference tags.
If they are reference tags, are they only interpreted by the
sending system?
In which case they don't need to be big-endian since the
actual value doesn't really matter.
> Provide some helper accessors in preparation for these.
>
...
> diff --git a/include/asm-generic/unaligned.h b/include/asm-generic/unaligned.h
> index 1c4242416c9f..8fc637379899 100644
> --- a/include/asm-generic/unaligned.h
> +++ b/include/asm-generic/unaligned.h
> @@ -126,4 +126,30 @@ static inline void put_unaligned_le24(const u32 val, void *p)
> __put_unaligned_le24(val, p);
> }
>
> +static inline void __put_unaligned_be48(const u64 val, __u8 *p)
> +{
> + *p++ = val >> 40;
> + *p++ = val >> 32;
> + *p++ = val >> 24;
> + *p++ = val >> 16;
> + *p++ = val >> 8;
> + *p++ = val;
> +}
Although that matches __put_unaligned_be24() I think I'd use
array indexing not pointer increments.
The compiler will probably generate the same code anyway.
However it is probably better to do:
put_unaligned_be16(val >> 32, p);
put_unaligned_be32(val, p + 2);
so you get 2 memory accesses on x86 (etc) instead of 6.
Similarly for __get_unaligned_be48(), where it is likely
to make a bigger difference.
David
-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
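David's two-store composition can be checked against the byte-wise version from the patch in userspace. The be16/be32 helpers here are plain byte-wise stand-ins for the kernel's accessors, so this only demonstrates that the two layouts agree, not the access-count benefit:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

static void put_be16(uint16_t v, uint8_t *p) { p[0] = v >> 8; p[1] = v; }
static void put_be32(uint32_t v, uint8_t *p)
{
	p[0] = v >> 24; p[1] = v >> 16; p[2] = v >> 8; p[3] = v;
}

/* Byte-wise version, as in the patch. */
static void put_be48_bytes(uint64_t val, uint8_t *p)
{
	p[0] = val >> 40; p[1] = val >> 32; p[2] = val >> 24;
	p[3] = val >> 16; p[4] = val >> 8; p[5] = val;
}

/* David's composition: one 16-bit store of the high bits, then one
 * 32-bit store of the low 32 bits. */
static void put_be48_split(uint64_t val, uint8_t *p)
{
	put_be16(val >> 32, p);
	put_be32(val, p + 2);
}
```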
* Re: [PATCHv4 3/8] asm-generic: introduce be48 unaligned accessors
From: Martin K. Petersen @ 2022-03-04 2:40 UTC (permalink / raw)
To: David Laight
Cc: 'Keith Busch',
linux-nvme, linux-block, linux-crypto, linux-kernel, axboe, hch,
martin.petersen, Hannes Reinecke, Chaitanya Kulkarni,
Arnd Bergmann
David,
> If they are reference tags, are they only interpreted by the sending
> system? In which case they don't need to be big-endian since the
> actual value doesn't really matter.
The tags are validated by the storage device.
--
Martin K. Petersen Oracle Linux Engineering
* Re: [PATCHv4 3/8] asm-generic: introduce be48 unaligned accessors
From: Keith Busch @ 2022-03-04 2:56 UTC (permalink / raw)
To: David Laight
Cc: linux-nvme, linux-block, linux-crypto, linux-kernel, axboe, hch,
martin.petersen, Hannes Reinecke, Chaitanya Kulkarni,
Arnd Bergmann
On Fri, Mar 04, 2022 at 01:31:26AM +0000, David Laight wrote:
> From: Keith Busch
> > Sent: 03 March 2022 20:13
> >
> > The NVMe protocol extended the data integrity fields with unaligned
> > 48-bit reference tags.
>
> If they are reference tags, are they only interpreted by the
> sending system?
No, this field participates in end-to-end data protection formats, so it
is verified on both sides.
* [PATCHv4 4/8] linux/kernel: introduce lower_48_bits function
From: Keith Busch @ 2022-03-03 20:13 UTC (permalink / raw)
To: linux-nvme, linux-block, linux-crypto, linux-kernel
Cc: axboe, hch, martin.petersen, Keith Busch, Bart Van Assche
Recent data integrity field enhancements allow reference tags to be up
to 48 bits. Introduce an inline helper function since this will be a
repeated operation.
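The mask arithmetic is easy to sanity-check in userspace with the same expression the helper uses:

```c
#include <assert.h>
#include <stdint.h>

/* Same expression as the proposed kernel helper. */
static uint64_t lower_48_bits(uint64_t n)
{
	return n & ((1ull << 48) - 1);
}
```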
Suggested-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
include/linux/kernel.h | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index 33f47a996513..3877478681b9 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -63,6 +63,15 @@
} \
)
+/**
+ * lower_48_bits() - return bits 0-47 of a number
+ * @n: the number we're accessing
+ */
+static inline u64 lower_48_bits(u64 n)
+{
+ return n & ((1ull << 48) - 1);
+}
+
/**
* upper_32_bits - return bits 32-63 of a number
* @n: the number we're accessing
--
2.25.4
* [PATCHv4 5/8] lib: add rocksoft model crc64
From: Keith Busch @ 2022-03-03 20:13 UTC (permalink / raw)
To: linux-nvme, linux-block, linux-crypto, linux-kernel
Cc: axboe, hch, martin.petersen, Keith Busch, Hannes Reinecke
The NVM Express specification extended data integrity fields to 64 bits
using the Rocksoft parameters. Add the poly to the crc64 table
generation, and provide a generic library routine implementing the
algorithm.
The Rocksoft 64-bit CRC model parameters are as follows:
Poly: 0xAD93D23594C93659
Initial value: 0xFFFFFFFFFFFFFFFF
Reflected Input: True
Reflected Output: True
Xor Final: 0xFFFFFFFFFFFFFFFF
Since this model uses reflected bits, the implementation generates the
reflected table so the result is ordered consistently.
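The model can be exercised end to end in userspace: generate the reflected table from the reversed polynomial exactly as the gen_crc64table.c change below does, run the byte-wise loop with the all-ones init/xorout folded in, and the results match the test vectors added to crypto/testmgr.c later in this series (4096 zero bytes -> 0x6482d367eb22b64e, 4096 0xff bytes -> 0xc0ddba7302eca3ac):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define ROCKSOFT_POLY 0x9A6C9329AC4BC9B5ULL	/* bit-reversed 0xAD93D23594C93659 */

static uint64_t table[256];

/* Reflected table generation, mirroring gen_crc64table.c. */
static void gen_table(void)
{
	for (uint64_t i = 0; i < 256; i++) {
		uint64_t crc = 0;
		for (int j = 0; j < 8; j++) {
			if ((crc ^ (i >> j)) & 1)
				crc = (crc >> 1) ^ ROCKSOFT_POLY;
			else
				crc >>= 1;
		}
		table[i] = crc;
	}
}

/* Byte-wise computation with init/xorout of all-ones folded in, so the
 * caller seeds with 0 and may chain calls, as in crc64_rocksoft_generic(). */
static uint64_t crc64_rocksoft(uint64_t crc, const void *p, size_t len)
{
	const unsigned char *b = p;

	crc = ~crc;
	while (len--)
		crc = (crc >> 8) ^ table[(crc ^ *b++) & 0xff];
	return ~crc;
}

/* Convenience for the testmgr-style vectors: CRC of 4096 bytes of "fill". */
static uint64_t crc_of_filled(unsigned char fill)
{
	static unsigned char buf[4096];

	for (int i = 0; i < 4096; i++)
		buf[i] = fill;
	gen_table();
	return crc64_rocksoft(0, buf, sizeof(buf));
}
```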
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
v3->v4:
Require the caller to provide 0 for the initial value. The calculation
is now responsible for inverting it, which makes updating the crc in
multiple chunks easier.
include/linux/crc64.h | 2 ++
lib/crc64.c | 28 ++++++++++++++++++++++++
lib/gen_crc64table.c | 51 +++++++++++++++++++++++++++++++++----------
3 files changed, 70 insertions(+), 11 deletions(-)
diff --git a/include/linux/crc64.h b/include/linux/crc64.h
index c756e65a1b58..9480f38cc7cf 100644
--- a/include/linux/crc64.h
+++ b/include/linux/crc64.h
@@ -8,4 +8,6 @@
#include <linux/types.h>
u64 __pure crc64_be(u64 crc, const void *p, size_t len);
+u64 __pure crc64_rocksoft_generic(u64 crc, const void *p, size_t len);
+
#endif /* _LINUX_CRC64_H */
diff --git a/lib/crc64.c b/lib/crc64.c
index 9f852a89ee2a..61ae8dfb6a1c 100644
--- a/lib/crc64.c
+++ b/lib/crc64.c
@@ -22,6 +22,13 @@
* x^24 + x^23 + x^22 + x^21 + x^19 + x^17 + x^13 + x^12 + x^10 + x^9 +
* x^7 + x^4 + x + 1
*
+ * crc64rocksoft[256] table is from the Rocksoft specification polynomial
+ * defined as,
+ *
+ * x^64 + x^63 + x^61 + x^59 + x^58 + x^56 + x^55 + x^52 + x^49 + x^48 + x^47 +
+ * x^46 + x^44 + x^41 + x^37 + x^36 + x^34 + x^32 + x^31 + x^28 + x^26 + x^23 +
+ * x^22 + x^19 + x^16 + x^13 + x^12 + x^10 + x^9 + x^6 + x^4 + x^3 + 1
+ *
* Copyright 2018 SUSE Linux.
* Author: Coly Li <colyli@suse.de>
*/
@@ -55,3 +62,24 @@ u64 __pure crc64_be(u64 crc, const void *p, size_t len)
return crc;
}
EXPORT_SYMBOL_GPL(crc64_be);
+
+/**
+ * crc64_rocksoft_generic - Calculate bitwise Rocksoft CRC64
+ * @crc: seed value for computation. 0 for a new CRC calculation, or the
+ * previous crc64 value if computing incrementally.
+ * @p: pointer to buffer over which CRC64 is run
+ * @len: length of buffer @p
+ */
+u64 __pure crc64_rocksoft_generic(u64 crc, const void *p, size_t len)
+{
+ const unsigned char *_p = p;
+ size_t i;
+
+ crc = ~crc;
+
+ for (i = 0; i < len; i++)
+ crc = (crc >> 8) ^ crc64rocksofttable[(crc & 0xff) ^ *_p++];
+
+ return ~crc;
+}
+EXPORT_SYMBOL_GPL(crc64_rocksoft_generic);
diff --git a/lib/gen_crc64table.c b/lib/gen_crc64table.c
index 094b43aef8db..55e222acd0b8 100644
--- a/lib/gen_crc64table.c
+++ b/lib/gen_crc64table.c
@@ -17,10 +17,30 @@
#include <stdio.h>
#define CRC64_ECMA182_POLY 0x42F0E1EBA9EA3693ULL
+#define CRC64_ROCKSOFT_POLY 0x9A6C9329AC4BC9B5ULL
static uint64_t crc64_table[256] = {0};
+static uint64_t crc64_rocksoft_table[256] = {0};
-static void generate_crc64_table(void)
+static void generate_reflected_crc64_table(uint64_t table[256], uint64_t poly)
+{
+ uint64_t i, j, c, crc;
+
+ for (i = 0; i < 256; i++) {
+ crc = 0ULL;
+ c = i;
+
+ for (j = 0; j < 8; j++) {
+ if ((crc ^ (c >> j)) & 1)
+ crc = (crc >> 1) ^ poly;
+ else
+ crc >>= 1;
+ }
+ table[i] = crc;
+ }
+}
+
+static void generate_crc64_table(uint64_t table[256], uint64_t poly)
{
uint64_t i, j, c, crc;
@@ -30,26 +50,22 @@ static void generate_crc64_table(void)
for (j = 0; j < 8; j++) {
if ((crc ^ c) & 0x8000000000000000ULL)
- crc = (crc << 1) ^ CRC64_ECMA182_POLY;
+ crc = (crc << 1) ^ poly;
else
crc <<= 1;
c <<= 1;
}
- crc64_table[i] = crc;
+ table[i] = crc;
}
}
-static void print_crc64_table(void)
+static void output_table(uint64_t table[256])
{
int i;
- printf("/* this file is generated - do not edit */\n\n");
- printf("#include <linux/types.h>\n");
- printf("#include <linux/cache.h>\n\n");
- printf("static const u64 ____cacheline_aligned crc64table[256] = {\n");
for (i = 0; i < 256; i++) {
- printf("\t0x%016" PRIx64 "ULL", crc64_table[i]);
+ printf("\t0x%016" PRIx64 "ULL", table[i]);
if (i & 0x1)
printf(",\n");
else
@@ -58,9 +74,22 @@ static void print_crc64_table(void)
printf("};\n");
}
+static void print_crc64_tables(void)
+{
+ printf("/* this file is generated - do not edit */\n\n");
+ printf("#include <linux/types.h>\n");
+ printf("#include <linux/cache.h>\n\n");
+ printf("static const u64 ____cacheline_aligned crc64table[256] = {\n");
+ output_table(crc64_table);
+
+ printf("\nstatic const u64 ____cacheline_aligned crc64rocksofttable[256] = {\n");
+ output_table(crc64_rocksoft_table);
+}
+
int main(int argc, char *argv[])
{
- generate_crc64_table();
- print_crc64_table();
+ generate_crc64_table(crc64_table, CRC64_ECMA182_POLY);
+ generate_reflected_crc64_table(crc64_rocksoft_table, CRC64_ROCKSOFT_POLY);
+ print_crc64_tables();
return 0;
}
--
2.25.4
* Re: [PATCHv4 5/8] lib: add rocksoft model crc64
From: Martin K. Petersen @ 2022-03-04 2:41 UTC (permalink / raw)
To: Keith Busch
Cc: linux-nvme, linux-block, linux-crypto, linux-kernel, axboe, hch,
martin.petersen, Hannes Reinecke
Keith,
> The NVM Express specification extended data integrity fields to 64
> bits using the Rocksoft parameters. Add the poly to the crc64 table
> generation, and provide a generic library routine implementing the
> algorithm.
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
--
Martin K. Petersen Oracle Linux Engineering
* RE: [PATCHv4 5/8] lib: add rocksoft model crc64
From: David Laight @ 2022-03-04 7:53 UTC (permalink / raw)
To: 'Keith Busch',
linux-nvme, linux-block, linux-crypto, linux-kernel
Cc: axboe, hch, martin.petersen, Hannes Reinecke
From: Keith Busch
> Sent: 03 March 2022 20:13
>
> The NVM Express specification extended data integrity fields to 64 bits
> using the Rocksoft parameters. Add the poly to the crc64 table
> generation, and provide a generic library routine implementing the
> algorithm.
>
> The Rocksoft 64-bit CRC model parameters are as follows:
> Poly: 0xAD93D23594C93659
> Initial value: 0xFFFFFFFFFFFFFFFF
> Reflected Input: True
> Reflected Output: True
> Xor Final: 0xFFFFFFFFFFFFFFFF
>
> Since this model used reflected bits, the implementation generates the
> reflected table so the result is ordered consistently.
Since the data is processed least significant bit first, the table
must be set up slightly differently.
...
> + * crc64rocksoft[256] table is from the Rocksoft specification polynomial
> + * defined as,
> + *
> + * x^64 + x^63 + x^61 + x^59 + x^58 + x^56 + x^55 + x^52 + x^49 + x^48 + x^47 +
> + * x^46 + x^44 + x^41 + x^37 + x^36 + x^34 + x^32 + x^31 + x^28 + x^26 + x^23 +
> + * x^22 + x^19 + x^16 + x^13 + x^12 + x^10 + x^9 + x^6 + x^4 + x^3 + 1
Which matches the Poly: 0xAD93D23594C93659 above.
...
> +#define CRC64_ROCKSOFT_POLY 0x9A6C9329AC4BC9B5ULL
But that value is clearly different.
You really ought to add a comment that each byte of the constant
has to be bit reversed from the polynomial coefficients.
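The relationship between the two constants can be checked directly: the table-generation constant is the full 64-bit bit reversal of the polynomial coefficients quoted in the comment.

```c
#include <assert.h>
#include <stdint.h>

/* Reverse all 64 bits of v (bit 0 <-> bit 63, bit 1 <-> bit 62, ...). */
static uint64_t bitrev64(uint64_t v)
{
	uint64_t r = 0;

	for (int i = 0; i < 64; i++)
		r = (r << 1) | ((v >> i) & 1);
	return r;
}
```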
> -static void generate_crc64_table(void)
> +static void generate_reflected_crc64_table(uint64_t table[256], uint64_t poly)
> +{
> + uint64_t i, j, c, crc;
> +
> + for (i = 0; i < 256; i++) {
> + crc = 0ULL;
> + c = i;
> +
> + for (j = 0; j < 8; j++) {
> + if ((crc ^ (c >> j)) & 1)
> + crc = (crc >> 1) ^ poly;
> + else
> + crc >>= 1;
> + }
> + table[i] = crc;
> + }
> +}
That can be speeded up by using the identity:
table[x ^ y] == table[x] ^ table[y]
something like:
crc = poly; /* actually crc(1) */
table[0] = 0;
table[1] = crc;
for (i = 1; i < 8; i++) {
crc = crc & 1 ? (crc >> 1) ^ poly : crc >> 1;
for (j = 0; j < 1u << i; j++)
table[j + (1u << i)] = table[j] ^ crc;
}
I think the same code can be used for a normal MSB first crc
provided both the polynomial and crc(1) are passed in.
OTOH initialisation speed may not matter.
David
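David's identity does hold (the per-byte CRC map with a zero seed is linear over GF(2)), but for the reflected table the single-bit entries must be seeded from the high bit down: table[0x80] is the raw reversed polynomial, and each lower bit is one more right-shift step. A sketch that checks itself against the straightforward generator:

```c
#include <assert.h>
#include <stdint.h>

#define POLY 0x9A6C9329AC4BC9B5ULL

static uint64_t step(uint64_t crc)
{
	return (crc & 1) ? (crc >> 1) ^ POLY : crc >> 1;
}

/* Straightforward reflected generation, as in gen_crc64table.c. */
static void gen_slow(uint64_t t[256])
{
	for (uint64_t i = 0; i < 256; i++) {
		uint64_t crc = 0;
		for (int j = 0; j < 8; j++)
			crc = ((crc ^ (i >> j)) & 1) ? (crc >> 1) ^ POLY
						     : crc >> 1;
		t[i] = crc;
	}
}

/* Identity-based generation: seed the eight single-bit entries from the
 * high bit down, then use t[x ^ y] == t[x] ^ t[y] to fill the rest. */
static void gen_fast(uint64_t t[256])
{
	uint64_t crc = POLY;	/* CRC of the single byte 0x80 */
	int i, j;

	t[0] = 0;
	t[0x80] = crc;
	for (i = 6; i >= 0; i--) {
		crc = step(crc);
		t[1u << i] = crc;
	}
	for (i = 1; i < 8; i++)
		for (j = 1; j < (1 << i); j++)
			t[(1 << i) + j] = t[1 << i] ^ t[j];
}
```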
* Re: [PATCHv4 5/8] lib: add rocksoft model crc64
From: Keith Busch @ 2022-03-04 15:02 UTC (permalink / raw)
To: David Laight
Cc: linux-nvme, linux-block, linux-crypto, linux-kernel, axboe, hch,
martin.petersen, Hannes Reinecke
On Fri, Mar 04, 2022 at 07:53:44AM +0000, David Laight wrote:
>
> That can be speeded up by using the identity:
> table[x ^ y] == table[x] ^ table[y]
>
> something like:
> crc = poly; /* actually crc(1) */
> table[0] = 0;
> table[1] = crc;
> for (i = 2; i < 8; i++) [
> crc = crc & 1 ? (crc >> 1) ^ poly : crc >> 1;
> for (j = 0; j < 1u << i; j++)
> table[j + (1i << i)] = table[j] ^ crc;
> }
>
> I think the same code can be used for a normal MSB first crc
> provided both the polynomial and crc(1) are passed in.
>
> OTOH initialisation speed may not matter.
I don't think speed is an issue here. This part is executed just once
for the initial kernel compile, then never again.
* [PATCHv4 6/8] crypto: add rocksoft 64b crc guard tag framework
From: Keith Busch @ 2022-03-03 20:13 UTC (permalink / raw)
To: linux-nvme, linux-block, linux-crypto, linux-kernel
Cc: axboe, hch, martin.petersen, Keith Busch
Hardware-specific features may be able to calculate a crc64, so provide
a framework for drivers to register their implementation. If nothing is
registered, fall back to the generic table lookup implementation. The
implementation is modeled after the crct10dif equivalent.
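The shape of the wrapper (generic fallback until a faster provider registers) can be sketched with a plain function pointer; the kernel version uses a static branch plus an RCU-protected shash handle instead, and the "generic" routine here is a dummy mixing function, not the real Rocksoft CRC:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t (*crc64_fn)(uint64_t, const void *, size_t);

/* Stand-in for the generic table-lookup routine; a real one would be
 * the Rocksoft byte-wise loop. */
static uint64_t crc_generic(uint64_t crc, const void *p, size_t len)
{
	const unsigned char *b = p;

	while (len--)
		crc = (crc << 5) - crc + *b++;	/* dummy mix, not a CRC */
	return crc;
}

static crc64_fn crc_impl;	/* NULL until a provider registers */

static void crc_register(crc64_fn fn) { crc_impl = fn; }

/* Dispatch mirroring crc64_rocksoft_update(): use the registered
 * provider when present, else fall back to the generic routine. */
static uint64_t crc64_update(uint64_t crc, const void *p, size_t len)
{
	return crc_impl ? crc_impl(crc, p, len) : crc_generic(crc, p, len);
}
```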
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
v3->v4:
Updated naming, and Kconfig help texts, and Kconfig dependency
Added crc64 test vectors to crypto/testmgr.c
Use unaligned le64 macros for setting the result
Use _GPL for EXPORT_SYMBOL
crypto/Kconfig | 5 ++
crypto/Makefile | 1 +
crypto/crc64_rocksoft_generic.c | 89 ++++++++++++++++++++++
crypto/testmgr.c | 7 ++
crypto/testmgr.h | 15 ++++
include/linux/crc64.h | 5 ++
lib/Kconfig | 9 +++
lib/Makefile | 1 +
lib/crc64-rocksoft.c | 129 ++++++++++++++++++++++++++++++++
9 files changed, 261 insertions(+)
create mode 100644 crypto/crc64_rocksoft_generic.c
create mode 100644 lib/crc64-rocksoft.c
diff --git a/crypto/Kconfig b/crypto/Kconfig
index 442765219c37..e88e2d00e33d 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -735,6 +735,11 @@ config CRYPTO_CRCT10DIF_VPMSUM
multiply-sum (vpmsum) instructions, introduced in POWER8. Enable on
POWER8 and newer processors for improved performance.
+config CRYPTO_CRC64_ROCKSOFT
+ tristate "Rocksoft Model CRC64 algorithm"
+ depends on CRC64
+ select CRYPTO_HASH
+
config CRYPTO_VPMSUM_TESTER
tristate "Powerpc64 vpmsum hardware acceleration tester"
depends on CRYPTO_CRCT10DIF_VPMSUM && CRYPTO_CRC32C_VPMSUM
diff --git a/crypto/Makefile b/crypto/Makefile
index d76bff8d0ffd..f754c4d17d6b 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -152,6 +152,7 @@ obj-$(CONFIG_CRYPTO_MICHAEL_MIC) += michael_mic.o
obj-$(CONFIG_CRYPTO_CRC32C) += crc32c_generic.o
obj-$(CONFIG_CRYPTO_CRC32) += crc32_generic.o
obj-$(CONFIG_CRYPTO_CRCT10DIF) += crct10dif_common.o crct10dif_generic.o
+obj-$(CONFIG_CRYPTO_CRC64_ROCKSOFT) += crc64_rocksoft_generic.o
obj-$(CONFIG_CRYPTO_AUTHENC) += authenc.o authencesn.o
obj-$(CONFIG_CRYPTO_LZO) += lzo.o lzo-rle.o
obj-$(CONFIG_CRYPTO_LZ4) += lz4.o
diff --git a/crypto/crc64_rocksoft_generic.c b/crypto/crc64_rocksoft_generic.c
new file mode 100644
index 000000000000..9e812bb26dba
--- /dev/null
+++ b/crypto/crc64_rocksoft_generic.c
@@ -0,0 +1,89 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <linux/crc64.h>
+#include <linux/module.h>
+#include <crypto/internal/hash.h>
+#include <asm/unaligned.h>
+
+static int chksum_init(struct shash_desc *desc)
+{
+ u64 *crc = shash_desc_ctx(desc);
+
+ *crc = 0;
+
+ return 0;
+}
+
+static int chksum_update(struct shash_desc *desc, const u8 *data,
+ unsigned int length)
+{
+ u64 *crc = shash_desc_ctx(desc);
+
+ *crc = crc64_rocksoft_generic(*crc, data, length);
+
+ return 0;
+}
+
+static int chksum_final(struct shash_desc *desc, u8 *out)
+{
+ u64 *crc = shash_desc_ctx(desc);
+
+ put_unaligned_le64(*crc, out);
+ return 0;
+}
+
+static int __chksum_finup(u64 crc, const u8 *data, unsigned int len, u8 *out)
+{
+ crc = crc64_rocksoft_generic(crc, data, len);
+ put_unaligned_le64(crc, out);
+ return 0;
+}
+
+static int chksum_finup(struct shash_desc *desc, const u8 *data,
+ unsigned int len, u8 *out)
+{
+ u64 *crc = shash_desc_ctx(desc);
+
+ return __chksum_finup(*crc, data, len, out);
+}
+
+static int chksum_digest(struct shash_desc *desc, const u8 *data,
+ unsigned int length, u8 *out)
+{
+ return __chksum_finup(0, data, length, out);
+}
+
+static struct shash_alg alg = {
+ .digestsize = sizeof(u64),
+ .init = chksum_init,
+ .update = chksum_update,
+ .final = chksum_final,
+ .finup = chksum_finup,
+ .digest = chksum_digest,
+ .descsize = sizeof(u64),
+ .base = {
+ .cra_name = CRC64_ROCKSOFT_STRING,
+ .cra_driver_name = "crc64-rocksoft-generic",
+ .cra_priority = 200,
+ .cra_blocksize = 1,
+ .cra_module = THIS_MODULE,
+ }
+};
+
+static int __init crc64_rocksoft_init(void)
+{
+ return crypto_register_shash(&alg);
+}
+
+static void __exit crc64_rocksoft_exit(void)
+{
+ crypto_unregister_shash(&alg);
+}
+
+module_init(crc64_rocksoft_init);
+module_exit(crc64_rocksoft_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Rocksoft model CRC64 calculation.");
+MODULE_ALIAS_CRYPTO("crc64-rocksoft");
+MODULE_ALIAS_CRYPTO("crc64-rocksoft-generic");
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 5831d4bbc64f..2e120eea10b1 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -4526,6 +4526,13 @@ static const struct alg_test_desc alg_test_descs[] = {
.suite = {
.hash = __VECS(crc32c_tv_template)
}
+ }, {
+ .alg = "crc64-rocksoft",
+ .test = alg_test_hash,
+ .fips_allowed = 1,
+ .suite = {
+ .hash = __VECS(crc64_rocksoft_tv_template)
+ }
}, {
.alg = "crct10dif",
.test = alg_test_hash,
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index a253d66ba1c1..f1a22794c404 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -3679,6 +3679,21 @@ static const struct hash_testvec rmd160_tv_template[] = {
}
};
+static const u8 zeroes[4096] = { [0 ... 4095] = 0 };
+static const u8 ones[4096] = { [0 ... 4095] = 0xff };
+
+static const struct hash_testvec crc64_rocksoft_tv_template[] = {
+ {
+ .plaintext = zeroes,
+ .psize = 4096,
+ .digest = (u8 *)(u64[]){ 0x6482d367eb22b64eull },
+ }, {
+ .plaintext = ones,
+ .psize = 4096,
+ .digest = (u8 *)(u64[]){ 0xc0ddba7302eca3acull },
+ }
+};
+
static const struct hash_testvec crct10dif_tv_template[] = {
{
.plaintext = "abc",
diff --git a/include/linux/crc64.h b/include/linux/crc64.h
index 9480f38cc7cf..e044c60d1e61 100644
--- a/include/linux/crc64.h
+++ b/include/linux/crc64.h
@@ -7,7 +7,12 @@
#include <linux/types.h>
+#define CRC64_ROCKSOFT_STRING "crc64-rocksoft"
+
u64 __pure crc64_be(u64 crc, const void *p, size_t len);
u64 __pure crc64_rocksoft_generic(u64 crc, const void *p, size_t len);
+u64 crc64_rocksoft(const unsigned char *buffer, size_t len);
+u64 crc64_rocksoft_update(u64 crc, const unsigned char *buffer, size_t len);
+
#endif /* _LINUX_CRC64_H */
diff --git a/lib/Kconfig b/lib/Kconfig
index c80fde816a7e..da3e03579666 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -146,6 +146,15 @@ config CRC_T10DIF
kernel tree needs to calculate CRC checks for use with the
SCSI data integrity subsystem.
+config CRC64_ROCKSOFT
+ tristate "CRC calculation for the Rocksoft model CRC64"
+ select CRC64
+ select CRYPTO
+ select CRYPTO_CRC64_ROCKSOFT
+ help
+ This option provides a CRC64 API to a registered crypto driver.
+ This is used with the block layer's data integrity subsystem.
+
config CRC_ITU_T
tristate "CRC ITU-T V.41 functions"
help
diff --git a/lib/Makefile b/lib/Makefile
index 300f569c626b..7f7ae7458b6c 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -175,6 +175,7 @@ obj-$(CONFIG_CRC4) += crc4.o
obj-$(CONFIG_CRC7) += crc7.o
obj-$(CONFIG_LIBCRC32C) += libcrc32c.o
obj-$(CONFIG_CRC8) += crc8.o
+obj-$(CONFIG_CRC64_ROCKSOFT) += crc64-rocksoft.o
obj-$(CONFIG_XXHASH) += xxhash.o
obj-$(CONFIG_GENERIC_ALLOCATOR) += genalloc.o
diff --git a/lib/crc64-rocksoft.c b/lib/crc64-rocksoft.c
new file mode 100644
index 000000000000..55d32872778a
--- /dev/null
+++ b/lib/crc64-rocksoft.c
@@ -0,0 +1,126 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/crc64.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <crypto/hash.h>
+#include <crypto/algapi.h>
+#include <linux/static_key.h>
+#include <linux/notifier.h>
+
+static struct crypto_shash __rcu *crc64_rocksoft_tfm;
+static DEFINE_STATIC_KEY_TRUE(crc64_rocksoft_fallback);
+static DEFINE_MUTEX(crc64_rocksoft_mutex);
+static struct work_struct crc64_rocksoft_rehash_work;
+
+static int crc64_rocksoft_notify(struct notifier_block *self, unsigned long val, void *data)
+{
+ struct crypto_alg *alg = data;
+
+ if (val != CRYPTO_MSG_ALG_LOADED ||
+ strcmp(alg->cra_name, CRC64_ROCKSOFT_STRING))
+ return NOTIFY_DONE;
+
+ schedule_work(&crc64_rocksoft_rehash_work);
+ return NOTIFY_OK;
+}
+
+static void crc64_rocksoft_rehash(struct work_struct *work)
+{
+ struct crypto_shash *new, *old;
+
+ mutex_lock(&crc64_rocksoft_mutex);
+ old = rcu_dereference_protected(crc64_rocksoft_tfm,
+ lockdep_is_held(&crc64_rocksoft_mutex));
+ new = crypto_alloc_shash(CRC64_ROCKSOFT_STRING, 0, 0);
+ if (IS_ERR(new)) {
+ mutex_unlock(&crc64_rocksoft_mutex);
+ return;
+ }
+ rcu_assign_pointer(crc64_rocksoft_tfm, new);
+ mutex_unlock(&crc64_rocksoft_mutex);
+
+ if (old) {
+ synchronize_rcu();
+ crypto_free_shash(old);
+ } else {
+ static_branch_disable(&crc64_rocksoft_fallback);
+ }
+}
+
+static struct notifier_block crc64_rocksoft_nb = {
+ .notifier_call = crc64_rocksoft_notify,
+};
+
+u64 crc64_rocksoft_update(u64 crc, const unsigned char *buffer, size_t len)
+{
+ struct {
+ struct shash_desc shash;
+ u64 crc;
+ } desc;
+ int err;
+
+ if (static_branch_unlikely(&crc64_rocksoft_fallback))
+ return crc64_rocksoft_generic(crc, buffer, len);
+
+ rcu_read_lock();
+ desc.shash.tfm = rcu_dereference(crc64_rocksoft_tfm);
+ desc.crc = crc;
+ err = crypto_shash_update(&desc.shash, buffer, len);
+ rcu_read_unlock();
+
+ BUG_ON(err);
+
+ return desc.crc;
+}
+EXPORT_SYMBOL_GPL(crc64_rocksoft_update);
+
+u64 crc64_rocksoft(const unsigned char *buffer, size_t len)
+{
+ return crc64_rocksoft_update(0, buffer, len);
+}
+EXPORT_SYMBOL_GPL(crc64_rocksoft);
+
+static int __init crc64_rocksoft_mod_init(void)
+{
+ INIT_WORK(&crc64_rocksoft_rehash_work, crc64_rocksoft_rehash);
+ crypto_register_notifier(&crc64_rocksoft_nb);
+ crc64_rocksoft_rehash(&crc64_rocksoft_rehash_work);
+ return 0;
+}
+
+static void __exit crc64_rocksoft_mod_fini(void)
+{
+ crypto_unregister_notifier(&crc64_rocksoft_nb);
+ cancel_work_sync(&crc64_rocksoft_rehash_work);
+ crypto_free_shash(rcu_dereference_protected(crc64_rocksoft_tfm, 1));
+}
+
+module_init(crc64_rocksoft_mod_init);
+module_exit(crc64_rocksoft_mod_fini);
+
+static int crc64_rocksoft_transform_show(char *buffer, const struct kernel_param *kp)
+{
+ struct crypto_shash *tfm;
+ int len;
+
+ if (static_branch_unlikely(&crc64_rocksoft_fallback))
+ return sprintf(buffer, "fallback\n");
+
+ rcu_read_lock();
+ tfm = rcu_dereference(crc64_rocksoft_tfm);
+ len = snprintf(buffer, PAGE_SIZE, "%s\n",
+ crypto_shash_driver_name(tfm));
+ rcu_read_unlock();
+
+ return len;
+}
+
+module_param_call(transform, NULL, crc64_rocksoft_transform_show, NULL, 0444);
+
+MODULE_AUTHOR("Keith Busch <kbusch@kernel.org>");
+MODULE_DESCRIPTION("Rocksoft model CRC64 calculation (library API)");
+MODULE_LICENSE("GPL");
+MODULE_SOFTDEP("pre: crc64");
--
2.25.4
^ permalink raw reply related [flat|nested] 32+ messages in thread
* Re: [PATCHv4 6/8] crypto: add rocksoft 64b crc guard tag framework
2022-03-03 20:13 ` [PATCHv4 6/8] crypto: add rocksoft 64b crc guard tag framework Keith Busch
@ 2022-03-08 20:21 ` Vasily Gorbik
2022-03-08 20:27 ` Keith Busch
0 siblings, 1 reply; 32+ messages in thread
From: Vasily Gorbik @ 2022-03-08 20:21 UTC (permalink / raw)
To: Keith Busch
Cc: linux-nvme, linux-block, linux-crypto, linux-kernel, axboe, hch,
martin.petersen
On Thu, Mar 03, 2022 at 12:13:10PM -0800, Keith Busch wrote:
> Hardware specific features may be able to calculate a crc64, so provide
> a framework for drivers to register their implementation. If nothing is
> registered, fallback to the generic table lookup implementation. The
> implementation is modeled after the crct10dif equivalent.
Hi Keith,
this is failing on big-endian systems. I get the following on s390:
[ 0.551573] crc32: CRC_LE_BITS = 64, CRC_BE BITS = 64
[ 0.551575] crc32: self tests passed, processed 225944 bytes in 118879 nsec
[ 0.551697] crc32c: CRC_LE_BITS = 64
[ 0.551698] crc32c: self tests passed, processed 112972 bytes in 58963 nsec
[ 0.577325] crc32_combine: 8373 self tests passed
[ 0.603321] crc32c_combine: 8373 self tests passed
[ 0.603502] alg: shash: crc64-rocksoft-generic test failed (wrong result) on test vector 0, cfg="init+update+final aligned buffer"
[ 0.603506] ------------[ cut here ]------------
[ 0.603507] alg: self-tests for crc64-rocksoft-generic (crc64-rocksoft) failed (rc=-22)
[ 0.603542] WARNING: CPU: 0 PID: 43 at crypto/testmgr.c:5726 alg_test+0x3c2/0x638
[ 0.603554] Modules linked in:
[ 0.603557] CPU: 0 PID: 43 Comm: cryptomgr_test Not tainted 5.17.0-rc7-next-20220308-118584-gcb153b68ff91 #168
[ 0.603560] Hardware name: IBM 8561 T01 701 (KVM/Linux)
[ 0.603562] Krnl PSW : 0704e00180000000 00000000007d2286 (alg_test+0x3c6/0x638)
[ 0.603565] R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:2 PM:0 RI:0 EA:3
[ 0.603568] Krnl GPRS: 00000000ffffffea 000000000177c350 000000000000004b 00000000ffffefff
[ 0.603570] 0000000001663ed0 0000038000000001 0000000001ed1720 0000000081a4e480
[ 0.603572] 0000000081a4e400 000003800000003e ffffffffffffffea 000000000000003e
[ 0.603611] 0000000081a5a100 000000000000003e 00000000007d2282 00000380001b7cf0
[ 0.603618] Krnl Code: 00000000007d2276: c02000495b9e larl %r2,00000000010fd9b2
[ 0.603618] 00000000007d227c: c0e50026dbc6 brasl %r14,0000000000cada08
[ 0.603618] #00000000007d2282: af000000 mc 0,0
[ 0.603618] >00000000007d2286: b904002a lgr %r2,%r10
[ 0.603618] 00000000007d228a: eb6ff1380004 lmg %r6,%r15,312(%r15)
[ 0.603618] 00000000007d2290: 07fe bcr 15,%r14
[ 0.603618] 00000000007d2292: 47000700 bc 0,1792
[ 0.603618] 00000000007d2296: 1842 lr %r4,%r2
[ 0.603632] Call Trace:
[ 0.603634] [<00000000007d2286>] alg_test+0x3c6/0x638
[ 0.603636] ([<00000000007d2282>] alg_test+0x3c2/0x638)
[ 0.603638] [<00000000007cfff8>] cryptomgr_test+0x68/0x70
[ 0.603641] [<000000000017b228>] kthread+0x108/0x110
[ 0.603646] [<0000000000103374>] __ret_from_fork+0x3c/0x58
[ 0.603650] [<0000000000ccc3ba>] ret_from_fork+0xa/0x40
[ 0.603658] Last Breaking-Event-Address:
[ 0.603659] [<0000000000cada68>] __warn_printk+0x60/0x68
[ 0.603663] ---[ end trace 0000000000000000 ]---
* Re: [PATCHv4 6/8] crypto: add rocksoft 64b crc guard tag framework
2022-03-08 20:21 ` Vasily Gorbik
@ 2022-03-08 20:27 ` Keith Busch
2022-03-08 21:46 ` Keith Busch
2022-03-09 4:57 ` Eric Biggers
0 siblings, 2 replies; 32+ messages in thread
From: Keith Busch @ 2022-03-08 20:27 UTC (permalink / raw)
To: Vasily Gorbik
Cc: linux-nvme, linux-block, linux-crypto, linux-kernel, axboe, hch,
martin.petersen
On Tue, Mar 08, 2022 at 09:21:41PM +0100, Vasily Gorbik wrote:
> On Thu, Mar 03, 2022 at 12:13:10PM -0800, Keith Busch wrote:
> > Hardware specific features may be able to calculate a crc64, so provide
> > a framework for drivers to register their implementation. If nothing is
> > registered, fallback to the generic table lookup implementation. The
> > implementation is modeled after the crct10dif equivalent.
>
> Hi Keith,
>
> this is failing on big-endian systems. I get the following on s390:
Oh, I see the put_unaligned_le64() in chksum_final() was not the correct
action. I'll send an update, thank you for the report.
* Re: [PATCHv4 6/8] crypto: add rocksoft 64b crc guard tag framework
2022-03-08 20:27 ` Keith Busch
@ 2022-03-08 21:46 ` Keith Busch
2022-03-08 22:03 ` Vasily Gorbik
2022-03-09 4:57 ` Eric Biggers
1 sibling, 1 reply; 32+ messages in thread
From: Keith Busch @ 2022-03-08 21:46 UTC (permalink / raw)
To: Vasily Gorbik
Cc: linux-nvme, linux-block, linux-crypto, linux-kernel, axboe, hch,
martin.petersen
On Tue, Mar 08, 2022 at 12:27:47PM -0800, Keith Busch wrote:
> On Tue, Mar 08, 2022 at 09:21:41PM +0100, Vasily Gorbik wrote:
> > On Thu, Mar 03, 2022 at 12:13:10PM -0800, Keith Busch wrote:
> > > Hardware specific features may be able to calculate a crc64, so provide
> > > a framework for drivers to register their implementation. If nothing is
> > > registered, fallback to the generic table lookup implementation. The
> > > implementation is modeled after the crct10dif equivalent.
> >
> > Hi Keith,
> >
> > this is failing on big-endian systems. I get the following on s390:
>
> Oh, I see the put_unaligned_le64() in chksum_final() was not the correct
> action. I'll send an update, thank you for the report.
I'll set up a BE qemu target this week, but in the meantime, would you
be able to confirm if the following is successful?
---
diff --git a/crypto/crc64_rocksoft_generic.c b/crypto/crc64_rocksoft_generic.c
index 9e812bb26dba..12a8b0575ad1 100644
--- a/crypto/crc64_rocksoft_generic.c
+++ b/crypto/crc64_rocksoft_generic.c
@@ -28,14 +28,14 @@ static int chksum_final(struct shash_desc *desc, u8 *out)
{
u64 *crc = shash_desc_ctx(desc);
- put_unaligned_le64(*crc, out);
+ put_unaligned(*crc, (u64 *)out);
return 0;
}
static int __chksum_finup(u64 crc, const u8 *data, unsigned int len, u8 *out)
{
crc = crc64_rocksoft_generic(crc, data, len);
- put_unaligned_le64(crc, out);
+ put_unaligned(crc, (u64 *)out);
return 0;
}
--
* Re: [PATCHv4 6/8] crypto: add rocksoft 64b crc guard tag framework
2022-03-08 21:46 ` Keith Busch
@ 2022-03-08 22:03 ` Vasily Gorbik
0 siblings, 0 replies; 32+ messages in thread
From: Vasily Gorbik @ 2022-03-08 22:03 UTC (permalink / raw)
To: Keith Busch
Cc: linux-nvme, linux-block, linux-crypto, linux-kernel, axboe, hch,
martin.petersen
On Tue, Mar 08, 2022 at 01:46:12PM -0800, Keith Busch wrote:
> On Tue, Mar 08, 2022 at 12:27:47PM -0800, Keith Busch wrote:
> > On Tue, Mar 08, 2022 at 09:21:41PM +0100, Vasily Gorbik wrote:
> > > On Thu, Mar 03, 2022 at 12:13:10PM -0800, Keith Busch wrote:
> > > > Hardware specific features may be able to calculate a crc64, so provide
> > > > a framework for drivers to register their implementation. If nothing is
> > > > registered, fallback to the generic table lookup implementation. The
> > > > implementation is modeled after the crct10dif equivalent.
> > >
> > > Hi Keith,
> > >
> > > this is failing on big-endian systems. I get the following on s390:
> >
> > Oh, I see the put_unaligned_le64() in chksum_final() was not the correct
> > action. I'll send an update, thank you for the report.
>
> I'll set up a BE qemu target this week, but in the meantime, would you
> be able to confirm if the following is successful?
Sure,
[ 0.543862] crc32: CRC_LE_BITS = 64, CRC_BE BITS = 64
[ 0.543864] crc32: self tests passed, processed 225944 bytes in 118678 nsec
[ 0.543986] crc32c: CRC_LE_BITS = 64
[ 0.543987] crc32c: self tests passed, processed 112972 bytes in 58932 nsec
[ 0.569479] crc32_combine: 8373 self tests passed
[ 0.595330] crc32c_combine: 8373 self tests passed
it does the trick. Thanks!
* Re: [PATCHv4 6/8] crypto: add rocksoft 64b crc guard tag framework
2022-03-08 20:27 ` Keith Busch
2022-03-08 21:46 ` Keith Busch
@ 2022-03-09 4:57 ` Eric Biggers
2022-03-09 19:31 ` Keith Busch
1 sibling, 1 reply; 32+ messages in thread
From: Eric Biggers @ 2022-03-09 4:57 UTC (permalink / raw)
To: Keith Busch
Cc: Vasily Gorbik, linux-nvme, linux-block, linux-crypto,
linux-kernel, axboe, hch, martin.petersen
On Tue, Mar 08, 2022 at 12:27:47PM -0800, Keith Busch wrote:
> On Tue, Mar 08, 2022 at 09:21:41PM +0100, Vasily Gorbik wrote:
> > On Thu, Mar 03, 2022 at 12:13:10PM -0800, Keith Busch wrote:
> > > Hardware specific features may be able to calculate a crc64, so provide
> > > a framework for drivers to register their implementation. If nothing is
> > > registered, fallback to the generic table lookup implementation. The
> > > implementation is modeled after the crct10dif equivalent.
> >
> > Hi Keith,
> >
> > this is failing on big-endian systems. I get the following on s390:
>
> Oh, I see the put_unaligned_le64() in chksum_final() was not the correct
> action. I'll send an update, thank you for the report.
Or you could make the digests in your test vectors have a consistent byte
order, probably little endian. That's how "shash" algorithms in the crypto API
normally work, including crc32 and crc32c; they produce bytes as output. I see
that crct10dif violates that convention, and I assume you copied it from there.
I'm not sure you should do that; crct10dif might be more of a one-off quirk.
- Eric
* Re: [PATCHv4 6/8] crypto: add rocksoft 64b crc guard tag framework
2022-03-09 4:57 ` Eric Biggers
@ 2022-03-09 19:31 ` Keith Busch
2022-03-09 19:49 ` Eric Biggers
0 siblings, 1 reply; 32+ messages in thread
From: Keith Busch @ 2022-03-09 19:31 UTC (permalink / raw)
To: Eric Biggers
Cc: Vasily Gorbik, linux-nvme, linux-block, linux-crypto,
linux-kernel, axboe, hch, martin.petersen
On Tue, Mar 08, 2022 at 08:57:04PM -0800, Eric Biggers wrote:
> On Tue, Mar 08, 2022 at 12:27:47PM -0800, Keith Busch wrote:
> > On Tue, Mar 08, 2022 at 09:21:41PM +0100, Vasily Gorbik wrote:
> > > On Thu, Mar 03, 2022 at 12:13:10PM -0800, Keith Busch wrote:
> > > > Hardware specific features may be able to calculate a crc64, so provide
> > > > a framework for drivers to register their implementation. If nothing is
> > > > registered, fallback to the generic table lookup implementation. The
> > > > implementation is modeled after the crct10dif equivalent.
> > >
> > > Hi Keith,
> > >
> > > this is failing on big-endian systems. I get the following on s390:
> >
> > Oh, I see the put_unaligned_le64() in chksum_final() was not the correct
> > action. I'll send an update, thank you for the report.
>
> Or you could make the digests in your test vectors have a consistent byte
> order, probably little endian. That's how "shash" algorithms in the crypto API
> normally work, including crc32 and crc32c; they produce bytes as output. I see
> that crct10dif violates that convention, and I assume you copied it from there.
> I'm not sure you should do that; crct10dif might be more of a one-off quirk.
Right, I started with the t10dif implementation. I changed it to the
unaligned accessor since you indicated the output buffer doesn't have an
alignment guarantee.
Perhaps I'm missing something, but it looks easier to just use the CPU
native endianness here. The only users for t10 and rocksoft transform to
big-endian for the wire protocol at the end, but there's no need to
maintain a specific byte order before setting the payload.
* Re: [PATCHv4 6/8] crypto: add rocksoft 64b crc guard tag framework
2022-03-09 19:31 ` Keith Busch
@ 2022-03-09 19:49 ` Eric Biggers
2022-03-10 15:39 ` Keith Busch
0 siblings, 1 reply; 32+ messages in thread
From: Eric Biggers @ 2022-03-09 19:49 UTC (permalink / raw)
To: Keith Busch
Cc: Vasily Gorbik, linux-nvme, linux-block, linux-crypto,
linux-kernel, axboe, hch, martin.petersen, Herbert Xu
On Wed, Mar 09, 2022 at 11:31:26AM -0800, Keith Busch wrote:
> On Tue, Mar 08, 2022 at 08:57:04PM -0800, Eric Biggers wrote:
> > On Tue, Mar 08, 2022 at 12:27:47PM -0800, Keith Busch wrote:
> > > On Tue, Mar 08, 2022 at 09:21:41PM +0100, Vasily Gorbik wrote:
> > > > On Thu, Mar 03, 2022 at 12:13:10PM -0800, Keith Busch wrote:
> > > > > Hardware specific features may be able to calculate a crc64, so provide
> > > > > a framework for drivers to register their implementation. If nothing is
> > > > > registered, fallback to the generic table lookup implementation. The
> > > > > implementation is modeled after the crct10dif equivalent.
> > > >
> > > > Hi Keith,
> > > >
> > > > this is failing on big-endian systems. I get the following on s390:
> > >
> > > Oh, I see the put_unaligned_le64() in chksum_final() was not the correct
> > > action. I'll send an update, thank you for the report.
> >
> > Or you could make the digests in your test vectors have a consistent byte
> > order, probably little endian. That's how "shash" algorithms in the crypto API
> > normally work, including crc32 and crc32c; they produce bytes as output. I see
> > that crct10dif violates that convention, and I assume you copied it from there.
> > I'm not sure you should do that; crct10dif might be more of a one-off quirk.
>
> Right, I started with the t10dif implementation. I changed it to the
> unaligned accessor since you indicated the output buffer doesn't have an
> alignment guarantee.
>
> Perhaps I'm missing something, but it looks easier to just use the CPU
> native endianness here. The only users for t10 and rocksoft transform to
> big-endian for the wire protocol at the end, but there's no need to
> maintain a specific byte order before setting the payload.
The issue is that every other "shash" algorithm besides crct10dif, including
crc32 and crc32c, follow the convention of treating the digest as bytes. Doing
otherwise is unusual for the crypto API. So I have a slight preference for
treating it as bytes. Perhaps see what Herbert Xu (maintainer of the crypto
API, Cc'ed) recommends?
- Eric
* Re: [PATCHv4 6/8] crypto: add rocksoft 64b crc guard tag framework
2022-03-09 19:49 ` Eric Biggers
@ 2022-03-10 15:39 ` Keith Busch
2022-03-10 18:36 ` Eric Biggers
0 siblings, 1 reply; 32+ messages in thread
From: Keith Busch @ 2022-03-10 15:39 UTC (permalink / raw)
To: Eric Biggers
Cc: Vasily Gorbik, linux-nvme, linux-block, linux-crypto,
linux-kernel, axboe, hch, martin.petersen, Herbert Xu
On Wed, Mar 09, 2022 at 07:49:07PM +0000, Eric Biggers wrote:
> The issue is that every other "shash" algorithm besides crct10dif, including
> crc32 and crc32c, follow the convention of treating the digest as bytes. Doing
> otherwise is unusual for the crypto API. So I have a slight preference for
> treating it as bytes. Perhaps see what Herbert Xu (maintainer of the crypto
> API, Cc'ed) recommends?
I'm okay either way, they're both simple enough. Here is an update atop
this series to match the other shash conventions if this is preferred
over my previous fix:
---
diff --git a/block/t10-pi.c b/block/t10-pi.c
index 914d8cddd43a..f9eb45571bc7 100644
--- a/block/t10-pi.c
+++ b/block/t10-pi.c
@@ -282,7 +282,7 @@ EXPORT_SYMBOL(t10_pi_type3_ip);
static __be64 ext_pi_crc64(void *data, unsigned int len)
{
- return cpu_to_be64(crc64_rocksoft(data, len));
+ return cpu_to_be64(le64_to_cpu(crc64_rocksoft(data, len)));
}
static blk_status_t ext_pi_crc64_generate(struct blk_integrity_iter *iter,
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index f1a22794c404..f9e5f601c657 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -3686,11 +3686,11 @@ static const struct hash_testvec crc64_rocksoft_tv_template[] = {
{
.plaintext = zeroes,
.psize = 4096,
- .digest = (u8 *)(u64[]){ 0x6482d367eb22b64eull },
+ .digest = "\x4e\xb6\x22\xeb\x67\xd3\x82\x64",
}, {
.plaintext = ones,
.psize = 4096,
- .digest = (u8 *)(u64[]){ 0xc0ddba7302eca3acull },
+ .digest = "\xac\xa3\xec\x02\x73\xba\xdd\xc0",
}
};
diff --git a/include/linux/crc64.h b/include/linux/crc64.h
index e044c60d1e61..5319f9a9fc19 100644
--- a/include/linux/crc64.h
+++ b/include/linux/crc64.h
@@ -12,7 +12,7 @@
u64 __pure crc64_be(u64 crc, const void *p, size_t len);
u64 __pure crc64_rocksoft_generic(u64 crc, const void *p, size_t len);
-u64 crc64_rocksoft(const unsigned char *buffer, size_t len);
-u64 crc64_rocksoft_update(u64 crc, const unsigned char *buffer, size_t len);
+__le64 crc64_rocksoft(const unsigned char *buffer, size_t len);
+__le64 crc64_rocksoft_update(u64 crc, const unsigned char *buffer, size_t len);
#endif /* _LINUX_CRC64_H */
diff --git a/lib/crc64-rocksoft.c b/lib/crc64-rocksoft.c
index fc9ae0da5df7..215acb79a15d 100644
--- a/lib/crc64-rocksoft.c
+++ b/lib/crc64-rocksoft.c
@@ -54,16 +54,16 @@ static struct notifier_block crc64_rocksoft_nb = {
.notifier_call = crc64_rocksoft_notify,
};
-u64 crc64_rocksoft_update(u64 crc, const unsigned char *buffer, size_t len)
+__le64 crc64_rocksoft_update(u64 crc, const unsigned char *buffer, size_t len)
{
struct {
struct shash_desc shash;
- u64 crc;
+ __le64 crc;
} desc;
int err;
if (static_branch_unlikely(&crc64_rocksoft_fallback))
- return crc64_rocksoft_generic(crc, buffer, len);
+ return cpu_to_le64(crc64_rocksoft_generic(crc, buffer, len));
rcu_read_lock();
desc.shash.tfm = rcu_dereference(crc64_rocksoft_tfm);
@@ -77,7 +77,7 @@ u64 crc64_rocksoft_update(u64 crc, const unsigned char *buffer, size_t len)
}
EXPORT_SYMBOL_GPL(crc64_rocksoft_update);
-u64 crc64_rocksoft(const unsigned char *buffer, size_t len)
+__le64 crc64_rocksoft(const unsigned char *buffer, size_t len)
{
return crc64_rocksoft_update(0, buffer, len);
}
--
* Re: [PATCHv4 6/8] crypto: add rocksoft 64b crc guard tag framework
2022-03-10 15:39 ` Keith Busch
@ 2022-03-10 18:36 ` Eric Biggers
2022-03-11 20:00 ` Keith Busch
0 siblings, 1 reply; 32+ messages in thread
From: Eric Biggers @ 2022-03-10 18:36 UTC (permalink / raw)
To: Keith Busch
Cc: Vasily Gorbik, linux-nvme, linux-block, linux-crypto,
linux-kernel, axboe, hch, martin.petersen, Herbert Xu
On Thu, Mar 10, 2022 at 07:39:59AM -0800, Keith Busch wrote:
> On Wed, Mar 09, 2022 at 07:49:07PM +0000, Eric Biggers wrote:
> > The issue is that every other "shash" algorithm besides crct10dif, including
> > crc32 and crc32c, follow the convention of treating the digest as bytes. Doing
> > otherwise is unusual for the crypto API. So I have a slight preference for
> > treating it as bytes. Perhaps see what Herbert Xu (maintainer of the crypto
> > API, Cc'ed) recommends?
>
> I'm okay either way, they're both simple enough. Here is an update atop
> this series to match the other shash conventions if this is preferred
> over my previous fix:
>
> ---
> diff --git a/block/t10-pi.c b/block/t10-pi.c
> index 914d8cddd43a..f9eb45571bc7 100644
> --- a/block/t10-pi.c
> +++ b/block/t10-pi.c
> @@ -282,7 +282,7 @@ EXPORT_SYMBOL(t10_pi_type3_ip);
>
> static __be64 ext_pi_crc64(void *data, unsigned int len)
> {
> - return cpu_to_be64(crc64_rocksoft(data, len));
> + return cpu_to_be64(le64_to_cpu(crc64_rocksoft(data, len)));
> }
>
> static blk_status_t ext_pi_crc64_generate(struct blk_integrity_iter *iter,
> diff --git a/crypto/testmgr.h b/crypto/testmgr.h
> index f1a22794c404..f9e5f601c657 100644
> --- a/crypto/testmgr.h
> +++ b/crypto/testmgr.h
> @@ -3686,11 +3686,11 @@ static const struct hash_testvec crc64_rocksoft_tv_template[] = {
> {
> .plaintext = zeroes,
> .psize = 4096,
> - .digest = (u8 *)(u64[]){ 0x6482d367eb22b64eull },
> + .digest = "\x4e\xb6\x22\xeb\x67\xd3\x82\x64",
> }, {
> .plaintext = ones,
> .psize = 4096,
> - .digest = (u8 *)(u64[]){ 0xc0ddba7302eca3acull },
> + .digest = "\xac\xa3\xec\x02\x73\xba\xdd\xc0",
> }
> };
>
> diff --git a/include/linux/crc64.h b/include/linux/crc64.h
> index e044c60d1e61..5319f9a9fc19 100644
> --- a/include/linux/crc64.h
> +++ b/include/linux/crc64.h
> @@ -12,7 +12,7 @@
> u64 __pure crc64_be(u64 crc, const void *p, size_t len);
> u64 __pure crc64_rocksoft_generic(u64 crc, const void *p, size_t len);
>
> -u64 crc64_rocksoft(const unsigned char *buffer, size_t len);
> -u64 crc64_rocksoft_update(u64 crc, const unsigned char *buffer, size_t len);
> +__le64 crc64_rocksoft(const unsigned char *buffer, size_t len);
> +__le64 crc64_rocksoft_update(u64 crc, const unsigned char *buffer, size_t len);
>
> #endif /* _LINUX_CRC64_H */
> diff --git a/lib/crc64-rocksoft.c b/lib/crc64-rocksoft.c
> index fc9ae0da5df7..215acb79a15d 100644
> --- a/lib/crc64-rocksoft.c
> +++ b/lib/crc64-rocksoft.c
> @@ -54,16 +54,16 @@ static struct notifier_block crc64_rocksoft_nb = {
> .notifier_call = crc64_rocksoft_notify,
> };
>
> -u64 crc64_rocksoft_update(u64 crc, const unsigned char *buffer, size_t len)
> +__le64 crc64_rocksoft_update(u64 crc, const unsigned char *buffer, size_t len)
> {
> struct {
> struct shash_desc shash;
> - u64 crc;
> + __le64 crc;
> } desc;
> int err;
>
> if (static_branch_unlikely(&crc64_rocksoft_fallback))
> - return crc64_rocksoft_generic(crc, buffer, len);
> + return cpu_to_le64(crc64_rocksoft_generic(crc, buffer, len));
>
> rcu_read_lock();
> desc.shash.tfm = rcu_dereference(crc64_rocksoft_tfm);
> @@ -77,7 +77,7 @@ u64 crc64_rocksoft_update(u64 crc, const unsigned char *buffer, size_t len)
> }
> EXPORT_SYMBOL_GPL(crc64_rocksoft_update);
>
> -u64 crc64_rocksoft(const unsigned char *buffer, size_t len)
> +__le64 crc64_rocksoft(const unsigned char *buffer, size_t len)
> {
> return crc64_rocksoft_update(0, buffer, len);
> }
> --
I think the lib functions should still use native endianness, like what crc32
does.
- Eric
* Re: [PATCHv4 6/8] crypto: add rocksoft 64b crc guard tag framework
2022-03-10 18:36 ` Eric Biggers
@ 2022-03-11 20:00 ` Keith Busch
0 siblings, 0 replies; 32+ messages in thread
From: Keith Busch @ 2022-03-11 20:00 UTC (permalink / raw)
To: Eric Biggers
Cc: Vasily Gorbik, linux-nvme, linux-block, linux-crypto,
linux-kernel, axboe, hch, martin.petersen, Herbert Xu
On Thu, Mar 10, 2022 at 06:36:47PM +0000, Eric Biggers wrote:
> I think the lib functions should still use native endianness, like what crc32
> does.
Gotcha, then it should simply be this patch:
---
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index f1a22794c404..f9e5f601c657 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -3686,11 +3686,11 @@ static const struct hash_testvec crc64_rocksoft_tv_template[] = {
{
.plaintext = zeroes,
.psize = 4096,
- .digest = (u8 *)(u64[]){ 0x6482d367eb22b64eull },
+ .digest = "\x4e\xb6\x22\xeb\x67\xd3\x82\x64",
}, {
.plaintext = ones,
.psize = 4096,
- .digest = (u8 *)(u64[]){ 0xc0ddba7302eca3acull },
+ .digest = "\xac\xa3\xec\x02\x73\xba\xdd\xc0",
}
};
--
* [PATCHv4 7/8] block: add pi for extended integrity
2022-03-03 20:13 [PATCHv4 0/8] 64-bit data integrity field support Keith Busch
` (5 preceding siblings ...)
2022-03-03 20:13 ` [PATCHv4 6/8] crypto: add rocksoft 64b crc guard tag framework Keith Busch
@ 2022-03-03 20:13 ` Keith Busch
2022-03-04 2:41 ` Martin K. Petersen
2022-03-03 20:13 ` [PATCHv4 8/8] nvme: add support for enhanced metadata Keith Busch
` (2 subsequent siblings)
9 siblings, 1 reply; 32+ messages in thread
From: Keith Busch @ 2022-03-03 20:13 UTC (permalink / raw)
To: linux-nvme, linux-block, linux-crypto, linux-kernel
Cc: axboe, hch, martin.petersen, Keith Busch, Hannes Reinecke
The NVMe specification defines new data integrity formats beyond the
t10 tuple. Add support for the specification defined CRC64 formats,
assuming the reference tag does not need to be split with the "storage
tag".
Cc: Hannes Reinecke <hare@suse.de>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
v3->v4:
Renamed nvme_ prefixes with ext_ for extended protection information.
Use _GPL for EXPORT_SYMBOL
block/Kconfig | 1 +
block/t10-pi.c | 194 +++++++++++++++++++++++++++++++++++++++++
include/linux/t10-pi.h | 20 +++++
3 files changed, 215 insertions(+)
diff --git a/block/Kconfig b/block/Kconfig
index 7eb5d6d53b3f..50b17e260fa2 100644
--- a/block/Kconfig
+++ b/block/Kconfig
@@ -73,6 +73,7 @@ config BLK_DEV_INTEGRITY_T10
tristate
depends on BLK_DEV_INTEGRITY
select CRC_T10DIF
+ select CRC64_ROCKSOFT
config BLK_DEV_ZONED
bool "Zoned block device support"
diff --git a/block/t10-pi.c b/block/t10-pi.c
index 758a76518854..3c35266edb66 100644
--- a/block/t10-pi.c
+++ b/block/t10-pi.c
@@ -7,8 +7,10 @@
#include <linux/t10-pi.h>
#include <linux/blk-integrity.h>
#include <linux/crc-t10dif.h>
+#include <linux/crc64.h>
#include <linux/module.h>
#include <net/checksum.h>
+#include <asm/unaligned.h>
typedef __be16 (csum_fn) (void *, unsigned int);
@@ -278,4 +280,196 @@ const struct blk_integrity_profile t10_pi_type3_ip = {
};
EXPORT_SYMBOL(t10_pi_type3_ip);
+static __be64 ext_pi_crc64(void *data, unsigned int len)
+{
+ return cpu_to_be64(crc64_rocksoft(data, len));
+}
+
+static blk_status_t ext_pi_crc64_generate(struct blk_integrity_iter *iter,
+ enum t10_dif_type type)
+{
+ unsigned int i;
+
+ for (i = 0 ; i < iter->data_size ; i += iter->interval) {
+ struct crc64_pi_tuple *pi = iter->prot_buf;
+
+ pi->guard_tag = ext_pi_crc64(iter->data_buf, iter->interval);
+ pi->app_tag = 0;
+
+ if (type == T10_PI_TYPE1_PROTECTION)
+ put_unaligned_be48(iter->seed, pi->ref_tag);
+ else
+ put_unaligned_be48(0ULL, pi->ref_tag);
+
+ iter->data_buf += iter->interval;
+ iter->prot_buf += iter->tuple_size;
+ iter->seed++;
+ }
+
+ return BLK_STS_OK;
+}
+
+static bool ext_pi_ref_escape(u8 *ref_tag)
+{
+ static u8 ref_escape[6] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
+
+ return memcmp(ref_tag, ref_escape, sizeof(ref_escape)) == 0;
+}
+
+static blk_status_t ext_pi_crc64_verify(struct blk_integrity_iter *iter,
+ enum t10_dif_type type)
+{
+ unsigned int i;
+
+ for (i = 0; i < iter->data_size; i += iter->interval) {
+ struct crc64_pi_tuple *pi = iter->prot_buf;
+ u64 ref, seed;
+ __be64 csum;
+
+ if (type == T10_PI_TYPE1_PROTECTION) {
+ if (pi->app_tag == T10_PI_APP_ESCAPE)
+ goto next;
+
+ ref = get_unaligned_be48(pi->ref_tag);
+ seed = lower_48_bits(iter->seed);
+ if (ref != seed) {
+ pr_err("%s: ref tag error at location %llu (rcvd %llu)\n",
+ iter->disk_name, seed, ref);
+ return BLK_STS_PROTECTION;
+ }
+ } else if (type == T10_PI_TYPE3_PROTECTION) {
+ if (pi->app_tag == T10_PI_APP_ESCAPE &&
+ ext_pi_ref_escape(pi->ref_tag))
+ goto next;
+ }
+
+ csum = ext_pi_crc64(iter->data_buf, iter->interval);
+ if (pi->guard_tag != csum) {
+ pr_err("%s: guard tag error at sector %llu " \
+ "(rcvd %016llx, want %016llx)\n",
+ iter->disk_name, (unsigned long long)iter->seed,
+ be64_to_cpu(pi->guard_tag), be64_to_cpu(csum));
+ return BLK_STS_PROTECTION;
+ }
+
+next:
+ iter->data_buf += iter->interval;
+ iter->prot_buf += iter->tuple_size;
+ iter->seed++;
+ }
+
+ return BLK_STS_OK;
+}
+
+static blk_status_t ext_pi_type1_verify_crc64(struct blk_integrity_iter *iter)
+{
+ return ext_pi_crc64_verify(iter, T10_PI_TYPE1_PROTECTION);
+}
+
+static blk_status_t ext_pi_type1_generate_crc64(struct blk_integrity_iter *iter)
+{
+ return ext_pi_crc64_generate(iter, T10_PI_TYPE1_PROTECTION);
+}
+
+static void ext_pi_type1_prepare(struct request *rq)
+{
+ const int tuple_sz = rq->q->integrity.tuple_size;
+ u64 ref_tag = ext_pi_ref_tag(rq);
+ struct bio *bio;
+
+ __rq_for_each_bio(bio, rq) {
+ struct bio_integrity_payload *bip = bio_integrity(bio);
+ u64 virt = lower_48_bits(bip_get_seed(bip));
+ struct bio_vec iv;
+ struct bvec_iter iter;
+
+ /* Already remapped? */
+ if (bip->bip_flags & BIP_MAPPED_INTEGRITY)
+ break;
+
+ bip_for_each_vec(iv, bip, iter) {
+ unsigned int j;
+ void *p;
+
+ p = bvec_kmap_local(&iv);
+ for (j = 0; j < iv.bv_len; j += tuple_sz) {
+ struct crc64_pi_tuple *pi = p;
+ u64 ref = get_unaligned_be48(pi->ref_tag);
+
+ if (ref == virt)
+ put_unaligned_be48(ref_tag, pi->ref_tag);
+ virt++;
+ ref_tag++;
+ p += tuple_sz;
+ }
+ kunmap_local(p);
+ }
+
+ bip->bip_flags |= BIP_MAPPED_INTEGRITY;
+ }
+}
+
+static void ext_pi_type1_complete(struct request *rq, unsigned int nr_bytes)
+{
+ unsigned intervals = nr_bytes >> rq->q->integrity.interval_exp;
+ const int tuple_sz = rq->q->integrity.tuple_size;
+ u64 ref_tag = ext_pi_ref_tag(rq);
+ struct bio *bio;
+
+ __rq_for_each_bio(bio, rq) {
+ struct bio_integrity_payload *bip = bio_integrity(bio);
+ u64 virt = lower_48_bits(bip_get_seed(bip));
+ struct bio_vec iv;
+ struct bvec_iter iter;
+
+ bip_for_each_vec(iv, bip, iter) {
+ unsigned int j;
+ void *p;
+
+ p = bvec_kmap_local(&iv);
+ for (j = 0; j < iv.bv_len && intervals; j += tuple_sz) {
+ struct crc64_pi_tuple *pi = p;
+ u64 ref = get_unaligned_be48(pi->ref_tag);
+
+ if (ref == ref_tag)
+ put_unaligned_be48(virt, pi->ref_tag);
+ virt++;
+ ref_tag++;
+ intervals--;
+ p += tuple_sz;
+ }
+ kunmap_local(p);
+ }
+ }
+}
+
+static blk_status_t ext_pi_type3_verify_crc64(struct blk_integrity_iter *iter)
+{
+ return ext_pi_crc64_verify(iter, T10_PI_TYPE3_PROTECTION);
+}
+
+static blk_status_t ext_pi_type3_generate_crc64(struct blk_integrity_iter *iter)
+{
+ return ext_pi_crc64_generate(iter, T10_PI_TYPE3_PROTECTION);
+}
+
+const struct blk_integrity_profile ext_pi_type1_crc64 = {
+ .name = "EXT-DIF-TYPE1-CRC64",
+ .generate_fn = ext_pi_type1_generate_crc64,
+ .verify_fn = ext_pi_type1_verify_crc64,
+ .prepare_fn = ext_pi_type1_prepare,
+ .complete_fn = ext_pi_type1_complete,
+};
+EXPORT_SYMBOL_GPL(ext_pi_type1_crc64);
+
+const struct blk_integrity_profile ext_pi_type3_crc64 = {
+ .name = "EXT-DIF-TYPE3-CRC64",
+ .generate_fn = ext_pi_type3_generate_crc64,
+ .verify_fn = ext_pi_type3_verify_crc64,
+ .prepare_fn = t10_pi_type3_prepare,
+ .complete_fn = t10_pi_type3_complete,
+};
+EXPORT_SYMBOL_GPL(ext_pi_type3_crc64);
+
+MODULE_LICENSE("GPL");
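The prepare/complete helpers above rely on the 48-bit big-endian accessors introduced by patch 3/8. A minimal userspace sketch of their behavior (plain C with names mirroring the kernel helpers; the kernel versions live in asm-generic unaligned headers and build on the be16/be32 accessors):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace sketch of the be48 accessors from patch 3/8: a 48-bit
 * big-endian integer stored in six bytes, as used for the CRC64 PI
 * tuple's ref_tag field. */
static uint64_t get_unaligned_be48(const uint8_t *p)
{
	return ((uint64_t)p[0] << 40) | ((uint64_t)p[1] << 32) |
	       ((uint64_t)p[2] << 24) | ((uint64_t)p[3] << 16) |
	       ((uint64_t)p[4] << 8)  |  (uint64_t)p[5];
}

static void put_unaligned_be48(uint64_t val, uint8_t *p)
{
	p[0] = val >> 40;
	p[1] = val >> 32;
	p[2] = val >> 24;
	p[3] = val >> 16;
	p[4] = val >> 8;
	p[5] = val;
}
```

The remapping loops above simply round-trip through these: read the stored ref tag, compare, and write back the remapped value in the same six bytes.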
diff --git a/include/linux/t10-pi.h b/include/linux/t10-pi.h
index c635c2e014e3..bc2c365373b9 100644
--- a/include/linux/t10-pi.h
+++ b/include/linux/t10-pi.h
@@ -53,4 +53,24 @@ extern const struct blk_integrity_profile t10_pi_type1_ip;
extern const struct blk_integrity_profile t10_pi_type3_crc;
extern const struct blk_integrity_profile t10_pi_type3_ip;
+struct crc64_pi_tuple {
+ __be64 guard_tag;
+ __be16 app_tag;
+ __u8 ref_tag[6];
+};
+
+static inline u64 ext_pi_ref_tag(struct request *rq)
+{
+ unsigned int shift = ilog2(queue_logical_block_size(rq->q));
+
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+ if (rq->q->integrity.interval_exp)
+ shift = rq->q->integrity.interval_exp;
+#endif
+ return lower_48_bits(blk_rq_pos(rq) >> (shift - SECTOR_SHIFT));
+}
+
+extern const struct blk_integrity_profile ext_pi_type1_crc64;
+extern const struct blk_integrity_profile ext_pi_type3_crc64;
+
#endif
--
2.25.4
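The ext_pi_ref_tag() helper added to t10-pi.h above scales the request's starting position, which the block layer keeps in 512-byte sectors, down to protection-interval units and truncates the result to the 48 bits available in the CRC64 tuple. A userspace sketch of that arithmetic (the helper name is varied here to mark it as a sketch; SECTOR_SHIFT is 9 as in the kernel):

```c
#include <assert.h>
#include <stdint.h>

#define SECTOR_SHIFT 9

/* As in patch 4/8's lower_48_bits(): keep only bits 47:0. */
static uint64_t lower_48_bits(uint64_t n)
{
	return n & ((1ull << 48) - 1);
}

/* Sketch of ext_pi_ref_tag(): sector is the request position in
 * 512-byte units, interval_exp is log2 of the protection interval
 * (the logical block size unless the queue overrides it). */
static uint64_t ext_pi_ref_tag48(uint64_t sector, unsigned int interval_exp)
{
	return lower_48_bits(sector >> (interval_exp - SECTOR_SHIFT));
}
```

So with a 4K interval, every eight sectors advance the reference tag by one, and anything past bit 47 wraps silently, which is why the seed is masked with lower_48_bits() in the prepare/complete paths as well.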
* Re: [PATCHv4 7/8] block: add pi for extended integrity
2022-03-03 20:13 ` [PATCHv4 7/8] block: add pi for extended integrity Keith Busch
@ 2022-03-04 2:41 ` Martin K. Petersen
0 siblings, 0 replies; 32+ messages in thread
From: Martin K. Petersen @ 2022-03-04 2:41 UTC (permalink / raw)
To: Keith Busch
Cc: linux-nvme, linux-block, linux-crypto, linux-kernel, axboe, hch,
martin.petersen, Hannes Reinecke
Keith,
> The NVMe specification defines new data integrity formats beyond the
> t10 tuple. Add support for the specification defined CRC64 formats,
> assuming the reference tag does not need to be split with the "storage
> tag".
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
--
Martin K. Petersen Oracle Linux Engineering
* [PATCHv4 8/8] nvme: add support for enhanced metadata
2022-03-03 20:13 [PATCHv4 0/8] 64-bit data integrity field support Keith Busch
` (6 preceding siblings ...)
2022-03-03 20:13 ` [PATCHv4 7/8] block: add pi for extended integrity Keith Busch
@ 2022-03-03 20:13 ` Keith Busch
2022-03-04 2:38 ` Martin K. Petersen
2022-03-07 19:34 ` [PATCHv4 0/8] 64-bit data integrity field support Keith Busch
2022-03-07 19:50 ` Jens Axboe
9 siblings, 1 reply; 32+ messages in thread
From: Keith Busch @ 2022-03-03 20:13 UTC (permalink / raw)
To: linux-nvme, linux-block, linux-crypto, linux-kernel
Cc: axboe, hch, martin.petersen, Keith Busch, Hannes Reinecke, Klaus Jensen
NVM Express ratified TP 4068 defines new protection information formats.
Implement support for the CRC64 guard tags.
Since the block layer doesn't support variable length reference tags,
the driver does not support the Storage Tag space at this time.
Cc: Hannes Reinecke <hare@suse.de>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Klaus Jensen <its@irrelevant.dk>
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
v3->v4:
Removed inline for setting ref_tag
Updated new exported names from prior patches
drivers/nvme/host/core.c | 162 +++++++++++++++++++++++++++++++++------
drivers/nvme/host/nvme.h | 4 +-
include/linux/nvme.h | 53 +++++++++++--
3 files changed, 188 insertions(+), 31 deletions(-)
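The CRC64 guard this patch wires up comes from the Rocksoft-model CRC64 added by patches 5/8 and 6/8. For reference, a straightforward bit-at-a-time userspace sketch (the kernel implementation is table-driven; the reflected polynomial and the all-ones init/final XOR follow the NVMe NVM Command Set specification's 64-bit guard definition):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Reflected form of the NVMe 64b guard polynomial 0xad93d23594c93659,
 * matching the CRC64_ROCKSOFT_POLY constant from patch 5/8. */
#define CRC64_NVME_POLY 0x9a6c9329ac4bc9b5ull

/* Unoptimized bitwise sketch: init all ones, reflected shift-right
 * update per input byte, final XOR with all ones. */
static uint64_t crc64_rocksoft(const unsigned char *p, size_t len)
{
	uint64_t crc = ~0ull;
	size_t i;
	int bit;

	for (i = 0; i < len; i++) {
		crc ^= p[i];
		for (bit = 0; bit < 8; bit++)
			crc = (crc >> 1) ^ ((crc & 1) ? CRC64_NVME_POLY : 0);
	}
	return ~crc;
}
```

This is only a model for checking values by hand; the in-kernel path goes through the crypto API wrapper from patch 6/8 so accelerated implementations can be substituted.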
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index ace5f30acaf6..f3f9d48e0d32 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -919,6 +919,30 @@ static blk_status_t nvme_setup_discard(struct nvme_ns *ns, struct request *req,
return BLK_STS_OK;
}
+static void nvme_set_ref_tag(struct nvme_ns *ns, struct nvme_command *cmnd,
+ struct request *req)
+{
+ u32 upper, lower;
+ u64 ref48;
+
+ /* both rw and write zeroes share the same reftag format */
+ switch (ns->guard_type) {
+ case NVME_NVM_NS_16B_GUARD:
+ cmnd->rw.reftag = cpu_to_le32(t10_pi_ref_tag(req));
+ break;
+ case NVME_NVM_NS_64B_GUARD:
+ ref48 = ext_pi_ref_tag(req);
+ lower = lower_32_bits(ref48);
+ upper = upper_32_bits(ref48);
+
+ cmnd->rw.reftag = cpu_to_le32(lower);
+ cmnd->rw.cdw3 = cpu_to_le32(upper);
+ break;
+ default:
+ break;
+ }
+}
+
static inline blk_status_t nvme_setup_write_zeroes(struct nvme_ns *ns,
struct request *req, struct nvme_command *cmnd)
{
@@ -940,8 +964,7 @@ static inline blk_status_t nvme_setup_write_zeroes(struct nvme_ns *ns,
switch (ns->pi_type) {
case NVME_NS_DPS_PI_TYPE1:
case NVME_NS_DPS_PI_TYPE2:
- cmnd->write_zeroes.reftag =
- cpu_to_le32(t10_pi_ref_tag(req));
+ nvme_set_ref_tag(ns, cmnd, req);
break;
}
}
@@ -968,7 +991,8 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
cmnd->rw.opcode = op;
cmnd->rw.flags = 0;
cmnd->rw.nsid = cpu_to_le32(ns->head->ns_id);
- cmnd->rw.rsvd2 = 0;
+ cmnd->rw.cdw2 = 0;
+ cmnd->rw.cdw3 = 0;
cmnd->rw.metadata = 0;
cmnd->rw.slba = cpu_to_le64(nvme_sect_to_lba(ns, blk_rq_pos(req)));
cmnd->rw.length = cpu_to_le16((blk_rq_bytes(req) >> ns->lba_shift) - 1);
@@ -1002,7 +1026,7 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
NVME_RW_PRINFO_PRCHK_REF;
if (op == nvme_cmd_zone_append)
control |= NVME_RW_APPEND_PIREMAP;
- cmnd->rw.reftag = cpu_to_le32(t10_pi_ref_tag(req));
+ nvme_set_ref_tag(ns, cmnd, req);
break;
}
}
@@ -1654,33 +1678,58 @@ int nvme_getgeo(struct block_device *bdev, struct hd_geometry *geo)
}
#ifdef CONFIG_BLK_DEV_INTEGRITY
-static void nvme_init_integrity(struct gendisk *disk, u16 ms, u8 pi_type,
+static void nvme_init_integrity(struct gendisk *disk, struct nvme_ns *ns,
u32 max_integrity_segments)
{
struct blk_integrity integrity = { };
- switch (pi_type) {
+ switch (ns->pi_type) {
case NVME_NS_DPS_PI_TYPE3:
- integrity.profile = &t10_pi_type3_crc;
- integrity.tag_size = sizeof(u16) + sizeof(u32);
- integrity.flags |= BLK_INTEGRITY_DEVICE_CAPABLE;
+ switch (ns->guard_type) {
+ case NVME_NVM_NS_16B_GUARD:
+ integrity.profile = &t10_pi_type3_crc;
+ integrity.tag_size = sizeof(u16) + sizeof(u32);
+ integrity.flags |= BLK_INTEGRITY_DEVICE_CAPABLE;
+ break;
+ case NVME_NVM_NS_64B_GUARD:
+ integrity.profile = &ext_pi_type3_crc64;
+ integrity.tag_size = sizeof(u16) + 6;
+ integrity.flags |= BLK_INTEGRITY_DEVICE_CAPABLE;
+ break;
+ default:
+ integrity.profile = NULL;
+ break;
+ }
break;
case NVME_NS_DPS_PI_TYPE1:
case NVME_NS_DPS_PI_TYPE2:
- integrity.profile = &t10_pi_type1_crc;
- integrity.tag_size = sizeof(u16);
- integrity.flags |= BLK_INTEGRITY_DEVICE_CAPABLE;
+ switch (ns->guard_type) {
+ case NVME_NVM_NS_16B_GUARD:
+ integrity.profile = &t10_pi_type1_crc;
+ integrity.tag_size = sizeof(u16);
+ integrity.flags |= BLK_INTEGRITY_DEVICE_CAPABLE;
+ break;
+ case NVME_NVM_NS_64B_GUARD:
+ integrity.profile = &ext_pi_type1_crc64;
+ integrity.tag_size = sizeof(u16);
+ integrity.flags |= BLK_INTEGRITY_DEVICE_CAPABLE;
+ break;
+ default:
+ integrity.profile = NULL;
+ break;
+ }
break;
default:
integrity.profile = NULL;
break;
}
- integrity.tuple_size = ms;
+
+ integrity.tuple_size = ns->ms;
blk_integrity_register(disk, &integrity);
blk_queue_max_integrity_segments(disk->queue, max_integrity_segments);
}
#else
-static void nvme_init_integrity(struct gendisk *disk, u16 ms, u8 pi_type,
+static void nvme_init_integrity(struct gendisk *disk, struct nvme_ns *ns,
u32 max_integrity_segments)
{
}
@@ -1750,17 +1799,73 @@ static int nvme_setup_streams_ns(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
return 0;
}
-static void nvme_configure_metadata(struct nvme_ns *ns, struct nvme_id_ns *id)
+static int nvme_init_ms(struct nvme_ns *ns, struct nvme_id_ns *id)
{
+ bool first = id->dps & NVME_NS_DPS_PI_FIRST;
+ unsigned lbaf = nvme_lbaf_index(id->flbas);
struct nvme_ctrl *ctrl = ns->ctrl;
+ struct nvme_command c = { };
+ struct nvme_id_ns_nvm *nvm;
+ int ret = 0;
+ u32 elbaf;
+
+ ns->pi_size = 0;
+ ns->ms = le16_to_cpu(id->lbaf[lbaf].ms);
+ if (!(ctrl->ctratt & NVME_CTRL_ATTR_ELBAS)) {
+ ns->pi_size = sizeof(struct t10_pi_tuple);
+ ns->guard_type = NVME_NVM_NS_16B_GUARD;
+ goto set_pi;
+ }
+
+ nvm = kzalloc(sizeof(*nvm), GFP_KERNEL);
+ if (!nvm)
+ return -ENOMEM;
- ns->ms = le16_to_cpu(id->lbaf[id->flbas & NVME_NS_FLBAS_LBA_MASK].ms);
- if (id->dps & NVME_NS_DPS_PI_FIRST ||
- ns->ms == sizeof(struct t10_pi_tuple))
+ c.identify.opcode = nvme_admin_identify;
+ c.identify.nsid = cpu_to_le32(ns->head->ns_id);
+ c.identify.cns = NVME_ID_CNS_CS_NS;
+ c.identify.csi = NVME_CSI_NVM;
+
+ ret = nvme_submit_sync_cmd(ns->ctrl->admin_q, &c, nvm, sizeof(*nvm));
+ if (ret)
+ goto free_data;
+
+ elbaf = le32_to_cpu(nvm->elbaf[lbaf]);
+
+ /* no support for storage tag formats right now */
+ if (nvme_elbaf_sts(elbaf))
+ goto free_data;
+
+ ns->guard_type = nvme_elbaf_guard_type(elbaf);
+ switch (ns->guard_type) {
+ case NVME_NVM_NS_64B_GUARD:
+ ns->pi_size = sizeof(struct crc64_pi_tuple);
+ break;
+ case NVME_NVM_NS_16B_GUARD:
+ ns->pi_size = sizeof(struct t10_pi_tuple);
+ break;
+ default:
+ break;
+ }
+
+free_data:
+ kfree(nvm);
+set_pi:
+ if (ns->pi_size && (first || ns->ms == ns->pi_size))
ns->pi_type = id->dps & NVME_NS_DPS_PI_MASK;
else
ns->pi_type = 0;
+ return ret;
+}
+
+static void nvme_configure_metadata(struct nvme_ns *ns, struct nvme_id_ns *id)
+{
+ struct nvme_ctrl *ctrl = ns->ctrl;
+
+ if (nvme_init_ms(ns, id))
+ return;
+
ns->features &= ~(NVME_NS_METADATA_SUPPORTED | NVME_NS_EXT_LBAS);
if (!ns->ms || !(ctrl->ops->flags & NVME_F_METADATA_SUPPORTED))
return;
@@ -1877,7 +1982,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
if (ns->ms) {
if (IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY) &&
(ns->features & NVME_NS_METADATA_SUPPORTED))
- nvme_init_integrity(disk, ns->ms, ns->pi_type,
+ nvme_init_integrity(disk, ns,
ns->ctrl->max_integrity_segments);
else if (!nvme_ns_has_pi(ns))
capacity = 0;
@@ -1932,7 +2037,7 @@ static void nvme_set_chunk_sectors(struct nvme_ns *ns, struct nvme_id_ns *id)
static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id)
{
- unsigned lbaf = id->flbas & NVME_NS_FLBAS_LBA_MASK;
+ unsigned lbaf = nvme_lbaf_index(id->flbas);
int ret;
blk_mq_freeze_queue(ns->disk->queue);
@@ -2277,20 +2382,27 @@ static int nvme_configure_timestamp(struct nvme_ctrl *ctrl)
return ret;
}
-static int nvme_configure_acre(struct nvme_ctrl *ctrl)
+static int nvme_configure_host_options(struct nvme_ctrl *ctrl)
{
struct nvme_feat_host_behavior *host;
+ u8 acre = 0, lbafee = 0;
int ret;
/* Don't bother enabling the feature if retry delay is not reported */
- if (!ctrl->crdt[0])
+ if (ctrl->crdt[0])
+ acre = NVME_ENABLE_ACRE;
+ if (ctrl->ctratt & NVME_CTRL_ATTR_ELBAS)
+ lbafee = NVME_ENABLE_LBAFEE;
+
+ if (!acre && !lbafee)
return 0;
host = kzalloc(sizeof(*host), GFP_KERNEL);
if (!host)
return 0;
- host->acre = NVME_ENABLE_ACRE;
+ host->acre = acre;
+ host->lbafee = lbafee;
ret = nvme_set_features(ctrl, NVME_FEAT_HOST_BEHAVIOR, 0,
host, sizeof(*host), NULL);
kfree(host);
@@ -3132,7 +3244,7 @@ int nvme_init_ctrl_finish(struct nvme_ctrl *ctrl)
if (ret < 0)
return ret;
- ret = nvme_configure_acre(ctrl);
+ ret = nvme_configure_host_options(ctrl);
if (ret < 0)
return ret;
@@ -4846,12 +4958,14 @@ static inline void _nvme_check_size(void)
BUILD_BUG_ON(sizeof(struct nvme_id_ctrl) != NVME_IDENTIFY_DATA_SIZE);
BUILD_BUG_ON(sizeof(struct nvme_id_ns) != NVME_IDENTIFY_DATA_SIZE);
BUILD_BUG_ON(sizeof(struct nvme_id_ns_zns) != NVME_IDENTIFY_DATA_SIZE);
+ BUILD_BUG_ON(sizeof(struct nvme_id_ns_nvm) != NVME_IDENTIFY_DATA_SIZE);
BUILD_BUG_ON(sizeof(struct nvme_id_ctrl_zns) != NVME_IDENTIFY_DATA_SIZE);
BUILD_BUG_ON(sizeof(struct nvme_id_ctrl_nvm) != NVME_IDENTIFY_DATA_SIZE);
BUILD_BUG_ON(sizeof(struct nvme_lba_range_type) != 64);
BUILD_BUG_ON(sizeof(struct nvme_smart_log) != 512);
BUILD_BUG_ON(sizeof(struct nvme_dbbuf) != 64);
BUILD_BUG_ON(sizeof(struct nvme_directive_cmd) != 64);
+ BUILD_BUG_ON(sizeof(struct nvme_feat_host_behavior) != 512);
}
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 587d92df118b..77257e02ef03 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -454,9 +454,11 @@ struct nvme_ns {
int lba_shift;
u16 ms;
+ u16 pi_size;
u16 sgs;
u32 sws;
u8 pi_type;
+ u8 guard_type;
#ifdef CONFIG_BLK_DEV_ZONED
u64 zsze;
#endif
@@ -479,7 +481,7 @@ struct nvme_ns {
/* NVMe ns supports metadata actions by the controller (generate/strip) */
static inline bool nvme_ns_has_pi(struct nvme_ns *ns)
{
- return ns->pi_type && ns->ms == sizeof(struct t10_pi_tuple);
+ return ns->pi_type && ns->ms == ns->pi_size;
}
struct nvme_ctrl_ops {
diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index 9dbc3ef4daf7..4f44f83817a9 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -244,6 +244,7 @@ enum {
enum nvme_ctrl_attr {
NVME_CTRL_ATTR_HID_128_BIT = (1 << 0),
NVME_CTRL_ATTR_TBKAS = (1 << 6),
+ NVME_CTRL_ATTR_ELBAS = (1 << 15),
};
struct nvme_id_ctrl {
@@ -399,8 +400,7 @@ struct nvme_id_ns {
__le16 endgid;
__u8 nguid[16];
__u8 eui64[8];
- struct nvme_lbaf lbaf[16];
- __u8 rsvd192[192];
+ struct nvme_lbaf lbaf[64];
__u8 vs[3712];
};
@@ -418,8 +418,7 @@ struct nvme_id_ns_zns {
__le32 rrl;
__le32 frl;
__u8 rsvd20[2796];
- struct nvme_zns_lbafe lbafe[16];
- __u8 rsvd3072[768];
+ struct nvme_zns_lbafe lbafe[64];
__u8 vs[256];
};
@@ -428,6 +427,30 @@ struct nvme_id_ctrl_zns {
__u8 rsvd1[4095];
};
+struct nvme_id_ns_nvm {
+ __le64 lbstm;
+ __u8 pic;
+ __u8 rsvd9[3];
+ __le32 elbaf[64];
+ __u8 rsvd268[3828];
+};
+
+enum {
+ NVME_ID_NS_NVM_STS_MASK = 0x3f,
+ NVME_ID_NS_NVM_GUARD_SHIFT = 7,
+ NVME_ID_NS_NVM_GUARD_MASK = 0x3,
+};
+
+static inline __u8 nvme_elbaf_sts(__u32 elbaf)
+{
+ return elbaf & NVME_ID_NS_NVM_STS_MASK;
+}
+
+static inline __u8 nvme_elbaf_guard_type(__u32 elbaf)
+{
+ return (elbaf >> NVME_ID_NS_NVM_GUARD_SHIFT) & NVME_ID_NS_NVM_GUARD_MASK;
+}
+
struct nvme_id_ctrl_nvm {
__u8 vsl;
__u8 wzsl;
@@ -478,6 +501,8 @@ enum {
NVME_NS_FEAT_IO_OPT = 1 << 4,
NVME_NS_ATTR_RO = 1 << 0,
NVME_NS_FLBAS_LBA_MASK = 0xf,
+ NVME_NS_FLBAS_LBA_UMASK = 0x60,
+ NVME_NS_FLBAS_LBA_SHIFT = 1,
NVME_NS_FLBAS_META_EXT = 0x10,
NVME_NS_NMIC_SHARED = 1 << 0,
NVME_LBAF_RP_BEST = 0,
@@ -496,6 +521,18 @@ enum {
NVME_NS_DPS_PI_TYPE3 = 3,
};
+enum {
+ NVME_NVM_NS_16B_GUARD = 0,
+ NVME_NVM_NS_32B_GUARD = 1,
+ NVME_NVM_NS_64B_GUARD = 2,
+};
+
+static inline __u8 nvme_lbaf_index(__u8 flbas)
+{
+ return (flbas & NVME_NS_FLBAS_LBA_MASK) |
+ ((flbas & NVME_NS_FLBAS_LBA_UMASK) >> NVME_NS_FLBAS_LBA_SHIFT);
+}
+
/* Identify Namespace Metadata Capabilities (MC): */
enum {
NVME_MC_EXTENDED_LBA = (1 << 0),
@@ -842,7 +879,8 @@ struct nvme_rw_command {
__u8 flags;
__u16 command_id;
__le32 nsid;
- __u64 rsvd2;
+ __le32 cdw2;
+ __le32 cdw3;
__le64 metadata;
union nvme_data_ptr dptr;
__le64 slba;
@@ -996,11 +1034,14 @@ enum {
struct nvme_feat_host_behavior {
__u8 acre;
- __u8 resv1[511];
+ __u8 etdas;
+ __u8 lbafee;
+ __u8 resv1[509];
};
enum {
NVME_ENABLE_ACRE = 1,
+ NVME_ENABLE_LBAFEE = 1,
};
/* Admin commands */
--
2.25.4
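Two of the small helpers this patch adds lend themselves to a quick userspace check: nvme_lbaf_index() widens the format index to six bits by folding FLBAS bits 6:5 down next to bits 3:0, and the 64-bit guard branch of nvme_set_ref_tag() splits the 48-bit reference tag across CDW14 (the traditional reftag field, low 32 bits) and CDW3. A sketch of both, mirroring the definitions in the diff above (split_ref48 is a name invented here for illustration):

```c
#include <assert.h>
#include <stdint.h>

#define NVME_NS_FLBAS_LBA_MASK  0xf
#define NVME_NS_FLBAS_LBA_UMASK 0x60
#define NVME_NS_FLBAS_LBA_SHIFT 1

/* Mirrors nvme_lbaf_index(): bits 3:0 plus bits 6:5 shifted down by
 * one form a 6-bit index into the 64-entry lbaf[] array. */
static uint8_t nvme_lbaf_index(uint8_t flbas)
{
	return (flbas & NVME_NS_FLBAS_LBA_MASK) |
	       ((flbas & NVME_NS_FLBAS_LBA_UMASK) >> NVME_NS_FLBAS_LBA_SHIFT);
}

/* Mirrors the NVME_NVM_NS_64B_GUARD case of nvme_set_ref_tag(): the
 * 48-bit reference tag is split into CDW14 (reftag) and CDW3. */
static void split_ref48(uint64_t ref48, uint32_t *cdw14, uint32_t *cdw3)
{
	*cdw14 = (uint32_t)ref48;         /* lower_32_bits() */
	*cdw3  = (uint32_t)(ref48 >> 32); /* upper_32_bits() */
}
```

The CDW2/CDW3 fields only became addressable here because the command layout change above turns the old rsvd2 area into explicit cdw2/cdw3 words.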
* Re: [PATCHv4 8/8] nvme: add support for enhanced metadata
2022-03-03 20:13 ` [PATCHv4 8/8] nvme: add support for enhanced metadata Keith Busch
@ 2022-03-04 2:38 ` Martin K. Petersen
0 siblings, 0 replies; 32+ messages in thread
From: Martin K. Petersen @ 2022-03-04 2:38 UTC (permalink / raw)
To: Keith Busch
Cc: linux-nvme, linux-block, linux-crypto, linux-kernel, axboe, hch,
martin.petersen, Hannes Reinecke, Klaus Jensen
Keith,
> NVM Express ratified TP 4068 defines new protection information
> formats. Implement support for the CRC64 guard tags.
>
> Since the block layer doesn't support variable length reference tags,
> the driver does not support the Storage Tag space at this time.
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
--
Martin K. Petersen Oracle Linux Engineering
* Re: [PATCHv4 0/8] 64-bit data integrity field support
2022-03-03 20:13 [PATCHv4 0/8] 64-bit data integrity field support Keith Busch
` (7 preceding siblings ...)
2022-03-03 20:13 ` [PATCHv4 8/8] nvme: add support for enhanced metadata Keith Busch
@ 2022-03-07 19:34 ` Keith Busch
2022-03-07 19:50 ` Jens Axboe
2022-03-07 19:50 ` Jens Axboe
9 siblings, 1 reply; 32+ messages in thread
From: Keith Busch @ 2022-03-07 19:34 UTC (permalink / raw)
To: linux-nvme, linux-block, linux-crypto, linux-kernel
Cc: axboe, hch, martin.petersen
Hi Jens,
I believe this is looking ready at this point. Would it be too ambitious
to see this applied ahead of 5.18? The block layer provides the focal
point for new capabilities in this series, so I feel it should go
through that tree if you're okay with this.
Thanks,
Keith
* Re: [PATCHv4 0/8] 64-bit data integrity field support
2022-03-07 19:34 ` [PATCHv4 0/8] 64-bit data integrity field support Keith Busch
@ 2022-03-07 19:50 ` Jens Axboe
0 siblings, 0 replies; 32+ messages in thread
From: Jens Axboe @ 2022-03-07 19:50 UTC (permalink / raw)
To: Keith Busch, linux-nvme, linux-block, linux-crypto, linux-kernel
Cc: hch, martin.petersen
On 3/7/22 12:34 PM, Keith Busch wrote:
> Hi Jens,
>
> I believe this is looking ready at this point. Would it be too ambitious
> to see this applied ahead of 5.18? The block layer provides the focal
> point for new capabilities in this series, so I feel it should go
> through that tree if you're okay with this.
Layered it on top of the write-streams changes to avoid a silly
conflict.
--
Jens Axboe
* Re: [PATCHv4 0/8] 64-bit data integrity field support
2022-03-03 20:13 [PATCHv4 0/8] 64-bit data integrity field support Keith Busch
` (8 preceding siblings ...)
2022-03-07 19:34 ` [PATCHv4 0/8] 64-bit data integrity field support Keith Busch
@ 2022-03-07 19:50 ` Jens Axboe
9 siblings, 0 replies; 32+ messages in thread
From: Jens Axboe @ 2022-03-07 19:50 UTC (permalink / raw)
To: linux-kernel, Keith Busch, linux-nvme, linux-crypto, linux-block
Cc: martin.petersen, hch
On Thu, 3 Mar 2022 12:13:04 -0800, Keith Busch wrote:
> The NVM Express protocol added enhancements to the data integrity field
> formats beyond the T10 defined protection information. A detailed
> description of the new formats can be found in the NVMe's NVM Command
> Set Specification, section 5.2, available at:
>
> https://nvmexpress.org/wp-content/uploads/NVM-Command-Set-Specification-1.0b-2021.12.18-Ratified.pdf
>
> [...]
Applied, thanks!
[1/8] block: support pi with extended metadata
commit: c340b990d58c856c1636e0c10abb9e4351ad852a
[2/8] nvme: allow integrity on extended metadata formats
commit: 84b735429f5fe6f57fc0b3fff3932dce1471e668
[3/8] asm-generic: introduce be48 unaligned accessors
commit: c2ea5fcf53d5f21e6aff0de11d55bc202822df6a
[4/8] linux/kernel: introduce lower_48_bits function
commit: 7ee8809df990d1de379002973baee1681e8d7dd3
[5/8] lib: add rocksoft model crc64
commit: cbc0a40e17da361a2ada8d669413ccfbd2028f2d
[6/8] crypto: add rocksoft 64b crc guard tag framework
commit: f3813f4b287e480b1fcd62ca798d8556644b8278
[7/8] block: add pi for extended integrity
commit: a7d4383f17e10f338ea757a849f02298790d24fb
[8/8] nvme: add support for enhanced metadata
commit: 4020aad85c6785ddac8d51f345ff9e3328ce773a
Best regards,
--
Jens Axboe