* [PATCH v4 0/3] aspeed/hace: Support AST2600 HACE
@ 2022-03-31  7:48 Steven Lee
  2022-03-31  7:48 ` [PATCH v4 1/3] aspeed/hace: Support HMAC Key Buffer register Steven Lee
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Steven Lee @ 2022-03-31  7:48 UTC (permalink / raw)
  To: Cédric Le Goater, Peter Maydell, Andrew Jeffery,
	Joel Stanley, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	open list:ASPEED BMCs, open list:All patches CC here
  Cc: jamin_lin, troy_lee, steven_lee

This patch series implements the AST2600 HACE engine's accumulative mode
and adds unit tests for it.

Verified with the following models:
- AST2600 with OpenBMC VERSION_ID=2.12.0-dev-660-g4c7b3e692-dirty
  - checked hash verification in U-Boot and checked that QEMU does not
    crash during the OpenBMC web GUI login.
- AST1030 with the ASPEED Zephyr SDK v1.04
  - ran the `hash sha256` command in the Zephyr shell to verify the
    ASPEED HACE model.

Please help to review.

Thanks,
Steven

Changes in v4:
- Separate HACE28 support into its own patch.
- Refactor the acc_mode message handling flow.

Steven Lee (3):
  aspeed/hace: Support HMAC Key Buffer register.
  aspeed/hace: Support AST2600 HACE
  tests/qtest: Add test for Aspeed HACE accumulative mode

 hw/misc/aspeed_hace.c          | 147 ++++++++++++++++++++++++++++++++-
 include/hw/misc/aspeed_hace.h  |   1 +
 tests/qtest/aspeed_hace-test.c | 145 ++++++++++++++++++++++++++++++++
 3 files changed, 289 insertions(+), 4 deletions(-)

-- 
2.17.1




* [PATCH v4 1/3] aspeed/hace: Support HMAC Key Buffer register.
  2022-03-31  7:48 [PATCH v4 0/3] aspeed/hace: Support AST2600 HACE Steven Lee
@ 2022-03-31  7:48 ` Steven Lee
  2022-04-19 21:35   ` Cédric Le Goater
  2022-03-31  7:48 ` [PATCH v4 2/3] aspeed/hace: Support AST2600 HACE Steven Lee
  2022-03-31  7:48 ` [PATCH v4 3/3] tests/qtest: Add test for Aspeed HACE accumulative mode Steven Lee
  2 siblings, 1 reply; 10+ messages in thread
From: Steven Lee @ 2022-03-31  7:48 UTC (permalink / raw)
  To: Cédric Le Goater, Peter Maydell, Andrew Jeffery,
	Joel Stanley, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	open list:ASPEED BMCs, open list:All patches CC here
  Cc: jamin_lin, troy_lee, steven_lee

Support HACE28: Hash HMAC Key Buffer Base Address Register.
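
With the AST2600 mask added below, for example, a guest write of 0xFFFFFFFF
to HACE28 reads back as 0x7FFFFFF8 (an 8-byte aligned address below 2 GiB).
A trivial sketch of the masking, illustrative only, mirroring what
aspeed_hace_write() does with the new key_mask:

    #include <stdint.h>

    /* value added by this patch in aspeed_ast2600_hace_class_init() */
    #define AST2600_HACE_KEY_MASK 0x7FFFFFF8u

    /* illustrative helper, not part of the patch */
    static inline uint32_t hace28_write(uint32_t data)
    {
        return data & AST2600_HACE_KEY_MASK;   /* 0xFFFFFFFF -> 0x7FFFFFF8 */
    }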

Signed-off-by: Troy Lee <troy_lee@aspeedtech.com>
Signed-off-by: Steven Lee <steven_lee@aspeedtech.com>
---
 hw/misc/aspeed_hace.c         | 7 +++++++
 include/hw/misc/aspeed_hace.h | 1 +
 2 files changed, 8 insertions(+)

diff --git a/hw/misc/aspeed_hace.c b/hw/misc/aspeed_hace.c
index 10f00e65f4..59fe5bfca2 100644
--- a/hw/misc/aspeed_hace.c
+++ b/hw/misc/aspeed_hace.c
@@ -27,6 +27,7 @@
 
 #define R_HASH_SRC      (0x20 / 4)
 #define R_HASH_DEST     (0x24 / 4)
+#define R_HASH_KEY_BUFF (0x28 / 4)
 #define R_HASH_SRC_LEN  (0x2c / 4)
 
 #define R_HASH_CMD      (0x30 / 4)
@@ -210,6 +211,9 @@ static void aspeed_hace_write(void *opaque, hwaddr addr, uint64_t data,
     case R_HASH_DEST:
         data &= ahc->dest_mask;
         break;
+    case R_HASH_KEY_BUFF:
+        data &= ahc->key_mask;
+        break;
     case R_HASH_SRC_LEN:
         data &= 0x0FFFFFFF;
         break;
@@ -333,6 +337,7 @@ static void aspeed_ast2400_hace_class_init(ObjectClass *klass, void *data)
 
     ahc->src_mask = 0x0FFFFFFF;
     ahc->dest_mask = 0x0FFFFFF8;
+    ahc->key_mask = 0x0FFFFFC0;
     ahc->hash_mask = 0x000003ff; /* No SG or SHA512 modes */
 }
 
@@ -351,6 +356,7 @@ static void aspeed_ast2500_hace_class_init(ObjectClass *klass, void *data)
 
     ahc->src_mask = 0x3fffffff;
     ahc->dest_mask = 0x3ffffff8;
+    ahc->key_mask = 0x3FFFFFC0;
     ahc->hash_mask = 0x000003ff; /* No SG or SHA512 modes */
 }
 
@@ -369,6 +375,7 @@ static void aspeed_ast2600_hace_class_init(ObjectClass *klass, void *data)
 
     ahc->src_mask = 0x7FFFFFFF;
     ahc->dest_mask = 0x7FFFFFF8;
+    ahc->key_mask = 0x7FFFFFF8;
     ahc->hash_mask = 0x00147FFF;
 }
 
diff --git a/include/hw/misc/aspeed_hace.h b/include/hw/misc/aspeed_hace.h
index 94d5ada95f..2242945eb4 100644
--- a/include/hw/misc/aspeed_hace.h
+++ b/include/hw/misc/aspeed_hace.h
@@ -37,6 +37,7 @@ struct AspeedHACEClass {
 
     uint32_t src_mask;
     uint32_t dest_mask;
+    uint32_t key_mask;
     uint32_t hash_mask;
 };
 
-- 
2.17.1




* [PATCH v4 2/3] aspeed/hace: Support AST2600 HACE
  2022-03-31  7:48 [PATCH v4 0/3] aspeed/hace: Support AST2600 HACE Steven Lee
  2022-03-31  7:48 ` [PATCH v4 1/3] aspeed/hace: Support HMAC Key Buffer register Steven Lee
@ 2022-03-31  7:48 ` Steven Lee
  2022-04-20 12:53   ` Cédric Le Goater
  2022-03-31  7:48 ` [PATCH v4 3/3] tests/qtest: Add test for Aspeed HACE accumulative mode Steven Lee
  2 siblings, 1 reply; 10+ messages in thread
From: Steven Lee @ 2022-03-31  7:48 UTC (permalink / raw)
  To: Cédric Le Goater, Peter Maydell, Andrew Jeffery,
	Joel Stanley, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	open list:ASPEED BMCs, open list:All patches CC here
  Cc: jamin_lin, troy_lee, steven_lee

The aspeed ast2600 accumulative mode is described in datasheet
ast2600v10.pdf section 25.6.4:
 1. Allocating and initiating accumulative hash digest write buffer
    with initial state.
    * Since QEMU crypto/hash api doesn't provide the API to set initial
      state of hash library, and the initial state is already setted by
      crypto library (gcrypt/glib/...), so skip this step.
 2. Calculating accumulative hash digest.
    (a) When receiving the last accumulative data, software need to add
        padding message at the end of the accumulative data. Padding
        message described in specific of MD5, SHA-1, SHA224, SHA256,
        SHA512, SHA512/224, SHA512/256.
        * Since the crypto library (gcrypt/glib) already pad the
          padding message internally.
        * This patch removes the padding message fed by the guest
          machine driver.
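
For reference, here is a minimal standalone sketch of the padding layout
described in step 2.(a) and of the check done by has_padding() in this
patch. The helper names (sha256_pad, looks_padded) are illustrative only,
and only the 64-bit length field is shown, which is also all the model
reads:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /*
     * SHA-256 style padding as fed by the guest driver: msg || 0x80 ||
     * zeros || L, where L is the big-endian bit length of msg and the
     * total is a multiple of the 64-byte block size (128 bytes and a
     * 16-byte L for SHA-512).  "out" must hold the padded size.
     */
    static size_t sha256_pad(const uint8_t *msg, size_t len, uint8_t *out)
    {
        size_t padded = ((len + 8) / 64 + 1) * 64;  /* room for 0x80 and L */
        uint64_t bits = (uint64_t)len * 8;
        int i;

        memcpy(out, msg, len);
        out[len] = 0x80;
        memset(out + len + 1, 0, padded - len - 9);
        for (i = 0; i < 8; i++) {                   /* big-endian length */
            out[padded - 1 - i] = bits >> (8 * i);
        }
        return padded;
    }

    /*
     * What has_padding() does for a single request: read the trailing
     * 64-bit big-endian bit count, convert it to bytes, and check that
     * the byte right after the message is the 0x80 marker.  Assumes
     * req_len >= 8.
     */
    static bool looks_padded(const uint8_t *buf, size_t req_len,
                             size_t *msg_len)
    {
        uint64_t bits = 0;
        int i;

        for (i = 0; i < 8; i++) {
            bits = (bits << 8) | buf[req_len - 8 + i];
        }
        *msg_len = bits / 8;
        return *msg_len < req_len && buf[*msg_len] == 0x80;
    }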

Signed-off-by: Troy Lee <troy_lee@aspeedtech.com>
Signed-off-by: Steven Lee <steven_lee@aspeedtech.com>
---
 hw/misc/aspeed_hace.c | 140 ++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 136 insertions(+), 4 deletions(-)

diff --git a/hw/misc/aspeed_hace.c b/hw/misc/aspeed_hace.c
index 59fe5bfca2..5a7a144602 100644
--- a/hw/misc/aspeed_hace.c
+++ b/hw/misc/aspeed_hace.c
@@ -95,12 +95,115 @@ static int hash_algo_lookup(uint32_t reg)
     return -1;
 }
 
-static void do_hash_operation(AspeedHACEState *s, int algo, bool sg_mode)
+/**
+ * Check whether the request contains padding message.
+ *
+ * @param iov           iov of current request
+ * @param id            index of iov of current request
+ * @param total_req_len length of all acc_mode requests(including padding msg)
+ * @param req_len       length of the current request
+ * @param total_msg_len length of all acc_mode requests(excluding padding msg)
+ * @param pad_offset    start offset of padding message
+ */
+static bool has_padding(struct iovec *iov, uint32_t total_req_len,
+                        hwaddr req_len, uint32_t *total_msg_len,
+                        uint32_t *pad_offset)
+{
+    *total_msg_len = (uint32_t)(ldq_be_p(iov->iov_base + req_len - 8) / 8);
+    /*
+     * SG_LIST_LEN_LAST asserted in the request length doesn't mean it is the
+     * last request. The last request should contain padding message.
+     * We check whether message contains padding by
+     *   1. Get total message length. If the current message contains
+     *      padding, the last 8 bytes are total message length.
+     *   2. Check whether the total message length is valid.
+     *      If it is valid, the value should less than or eaual to
+     *      total_req_len.
+     *   3. Current request len - padding_size to get padding offset.
+     *      The padding message's first byte should be 0x80
+     */
+    if (*total_msg_len <= total_req_len) {
+        uint32_t padding_size = total_req_len - *total_msg_len;
+        uint8_t *padding = iov->iov_base;
+        *pad_offset = req_len - padding_size;
+        if (padding[*pad_offset] == 0x80) {
+            return true;
+        }
+    }
+
+    return false;
+}
+
+static int reconstruct_iov(struct iovec *cache, struct iovec *iov, int id,
+                           uint32_t *total_req_len,
+                           uint32_t *pad_offset,
+                           int *count)
+{
+    int i, iov_count;
+    if (pad_offset != 0) {
+        (cache + *count)->iov_base = (iov + id)->iov_base;
+        (cache + *count)->iov_len = *pad_offset;
+        ++*count;
+    }
+    for (i = 0; i < *count; i++) {
+        (iov + i)->iov_base = (cache + i)->iov_base;
+        (iov + i)->iov_len = (cache + i)->iov_len;
+    }
+    iov_count = *count;
+    *count = 0;
+    *total_req_len = 0;
+    return iov_count;
+}
+
+/**
+ * Generate iov for accumulative mode.
+ *
+ * @param cache         cached iov
+ * @param iov           iov of current request
+ * @param id            index of iov of current request
+ * @param total_req_len total length of the request(including padding)
+ * @param req_len       length of the current request
+ * @param count         count of cached iov
+ */
+static int gen_acc_mode_iov(struct iovec *cache, struct iovec *iov, int id,
+                            uint32_t *total_req_len, hwaddr *req_len,
+                            int *count)
+{
+    uint32_t pad_offset;
+    uint32_t total_msg_len;
+    *total_req_len += *req_len;
+
+    if (has_padding(&iov[id], *total_req_len, *req_len, &total_msg_len,
+                    &pad_offset)) {
+        if (*count) {
+            return reconstruct_iov(cache, iov, id, total_req_len,
+                    &pad_offset, count);
+        }
+
+        *req_len -= *total_req_len - total_msg_len;
+        *total_req_len = 0;
+        (iov + id)->iov_len = *req_len;
+        return id + 1;
+    } else {
+        (cache + *count)->iov_base = iov->iov_base;
+        (cache + *count)->iov_len = *req_len;
+        ++*count;
+    }
+
+    return 0;
+}
+
+static void do_hash_operation(AspeedHACEState *s, int algo, bool sg_mode,
+                              bool acc_mode)
 {
     struct iovec iov[ASPEED_HACE_MAX_SG];
     g_autofree uint8_t *digest_buf;
     size_t digest_len = 0;
+    int niov = 0;
     int i;
+    static struct iovec iov_cache[ASPEED_HACE_MAX_SG];
+    static int count;
+    static uint32_t total_len;
 
     if (sg_mode) {
         uint32_t len = 0;
@@ -124,10 +227,17 @@ static void do_hash_operation(AspeedHACEState *s, int algo, bool sg_mode)
                                         MEMTXATTRS_UNSPECIFIED, NULL);
             addr &= SG_LIST_ADDR_MASK;
 
-            iov[i].iov_len = len & SG_LIST_LEN_MASK;
-            plen = iov[i].iov_len;
+            plen = len & SG_LIST_LEN_MASK;
             iov[i].iov_base = address_space_map(&s->dram_as, addr, &plen, false,
                                                 MEMTXATTRS_UNSPECIFIED);
+
+            if (acc_mode) {
+                niov = gen_acc_mode_iov(
+                        iov_cache, iov, i, &total_len, &plen, &count);
+
+            } else {
+                iov[i].iov_len = plen;
+            }
         }
     } else {
         hwaddr len = s->regs[R_HASH_SRC_LEN];
@@ -137,6 +247,27 @@ static void do_hash_operation(AspeedHACEState *s, int algo, bool sg_mode)
                                             &len, false,
                                             MEMTXATTRS_UNSPECIFIED);
         i = 1;
+
+        if (count) {
+            /*
+             * In aspeed sdk kernel driver, sg_mode is disabled in hash_final().
+             * Thus if we received a request with sg_mode disabled, it is
+             * required to check whether cache is empty. If no, we should
+             * combine cached iov and the current iov.
+             */
+            uint32_t total_msg_len;
+            uint32_t pad_offset;
+            total_len += len;
+            if (has_padding(iov, total_len, len, &total_msg_len,
+                            &pad_offset)) {
+                niov = reconstruct_iov(iov_cache, iov, 0, &total_len,
+                        &pad_offset, &count);
+            }
+        }
+    }
+
+    if (niov) {
+        i = niov;
     }
 
     if (qcrypto_hash_bytesv(algo, iov, i, &digest_buf, &digest_len, NULL) < 0) {
@@ -238,7 +369,8 @@ static void aspeed_hace_write(void *opaque, hwaddr addr, uint64_t data,
                         __func__, data & ahc->hash_mask);
                 break;
         }
-        do_hash_operation(s, algo, data & HASH_SG_EN);
+        do_hash_operation(s, algo, data & HASH_SG_EN,
+                ((data & HASH_HMAC_MASK) == HASH_DIGEST_ACCUM));
 
         if (data & HASH_IRQ_EN) {
             qemu_irq_raise(s->irq);
-- 
2.17.1




* [PATCH v4 3/3] tests/qtest: Add test for Aspeed HACE accumulative mode
  2022-03-31  7:48 [PATCH v4 0/3] aspeed/hace: Support AST2600 HACE Steven Lee
  2022-03-31  7:48 ` [PATCH v4 1/3] aspeed/hace: Support HMAC Key Buffer register Steven Lee
  2022-03-31  7:48 ` [PATCH v4 2/3] aspeed/hace: Support AST2600 HACE Steven Lee
@ 2022-03-31  7:48 ` Steven Lee
  2022-04-20  9:50   ` Thomas Huth
  2022-04-21  5:55   ` Joel Stanley
  2 siblings, 2 replies; 10+ messages in thread
From: Steven Lee @ 2022-03-31  7:48 UTC (permalink / raw)
  To: Cédric Le Goater, Peter Maydell, Andrew Jeffery,
	Joel Stanley, Thomas Huth, Laurent Vivier, Paolo Bonzini,
	open list:ASPEED BMCs, open list:All patches CC here
  Cc: jamin_lin, troy_lee, steven_lee

This adds two additional test cases for accumulative mode with SG enabled.

The input vector was manually crafted as "abc" + bit 1 + padding zeros + L.
The padding length depends on the algorithm, i.e. SHA512 (1024 bit),
SHA256 (512 bit).

The result was calculated by the command line sha512sum/sha256sum utilities
without padding, i.e. only the "abc" ASCII text.
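
As a quick cross-check of the sizes (a sketch, not part of the test; the
*_BLOCK names are made up for illustration), each padded vector below
fills exactly one hash block:

    enum {
        SHA256_BLOCK = 64,    /* 512-bit block,  64-bit length field  */
        SHA512_BLOCK = 128,   /* 1024-bit block, 128-bit length field */
    };

    /* "abc" (3) + 0x80 (1) + zero padding + big-endian bit length L (24) */
    _Static_assert(3 + 1 + 52 + 8   == SHA256_BLOCK, "one SHA-256 block");
    _Static_assert(3 + 1 + 108 + 16 == SHA512_BLOCK, "one SHA-512 block");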

Signed-off-by: Troy Lee <troy_lee@aspeedtech.com>
Signed-off-by: Steven Lee <steven_lee@aspeedtech.com>
---
 tests/qtest/aspeed_hace-test.c | 145 +++++++++++++++++++++++++++++++++
 1 file changed, 145 insertions(+)

diff --git a/tests/qtest/aspeed_hace-test.c b/tests/qtest/aspeed_hace-test.c
index 09ee31545e..6a2f404b93 100644
--- a/tests/qtest/aspeed_hace-test.c
+++ b/tests/qtest/aspeed_hace-test.c
@@ -21,6 +21,7 @@
 #define  HACE_ALGO_SHA512        (BIT(5) | BIT(6))
 #define  HACE_ALGO_SHA384        (BIT(5) | BIT(6) | BIT(10))
 #define  HACE_SG_EN              BIT(18)
+#define  HACE_ACCUM_EN           BIT(8)
 
 #define HACE_STS                 0x1c
 #define  HACE_RSA_ISR            BIT(13)
@@ -96,6 +97,57 @@ static const uint8_t test_result_sg_sha256[] = {
     0x55, 0x1e, 0x1e, 0xc5, 0x80, 0xdd, 0x6d, 0x5a, 0x6e, 0xcd, 0xe9, 0xf3,
     0xd3, 0x5e, 0x6e, 0x4a, 0x71, 0x7f, 0xbd, 0xe4};
 
+/*
+ * The accumulative mode requires firmware to provide internal initial state
+ * and message padding (including length L at the end of padding).
+ *
+ * This test vector is the ASCII text "abc" with its padding message.
+ *
+ * Expected results were generated using command line utilities:
+ *
+ *  echo -n -e 'abc' | dd of=/tmp/test
+ *  for hash in sha512sum sha256sum; do $hash /tmp/test; done
+ */
+static const uint8_t test_vector_accum_512[] = {
+    0x61, 0x62, 0x63, 0x80, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x18};
+
+static const uint8_t test_vector_accum_256[] = {
+    0x61, 0x62, 0x63, 0x80, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x18};
+
+static const uint8_t test_result_accum_sha512[] = {
+    0xdd, 0xaf, 0x35, 0xa1, 0x93, 0x61, 0x7a, 0xba, 0xcc, 0x41, 0x73, 0x49,
+    0xae, 0x20, 0x41, 0x31, 0x12, 0xe6, 0xfa, 0x4e, 0x89, 0xa9, 0x7e, 0xa2,
+    0x0a, 0x9e, 0xee, 0xe6, 0x4b, 0x55, 0xd3, 0x9a, 0x21, 0x92, 0x99, 0x2a,
+    0x27, 0x4f, 0xc1, 0xa8, 0x36, 0xba, 0x3c, 0x23, 0xa3, 0xfe, 0xeb, 0xbd,
+    0x45, 0x4d, 0x44, 0x23, 0x64, 0x3c, 0xe8, 0x0e, 0x2a, 0x9a, 0xc9, 0x4f,
+    0xa5, 0x4c, 0xa4, 0x9f};
+
+static const uint8_t test_result_accum_sha256[] = {
+    0xba, 0x78, 0x16, 0xbf, 0x8f, 0x01, 0xcf, 0xea, 0x41, 0x41, 0x40, 0xde,
+    0x5d, 0xae, 0x22, 0x23, 0xb0, 0x03, 0x61, 0xa3, 0x96, 0x17, 0x7a, 0x9c,
+    0xb4, 0x10, 0xff, 0x61, 0xf2, 0x00, 0x15, 0xad};
 
 static void write_regs(QTestState *s, uint32_t base, uint32_t src,
                        uint32_t length, uint32_t out, uint32_t method)
@@ -308,6 +360,86 @@ static void test_sha512_sg(const char *machine, const uint32_t base,
     qtest_quit(s);
 }
 
+static void test_sha256_accum(const char *machine, const uint32_t base,
+                        const uint32_t src_addr)
+{
+    QTestState *s = qtest_init(machine);
+
+    const uint32_t buffer_addr = src_addr + 0x1000000;
+    const uint32_t digest_addr = src_addr + 0x4000000;
+    uint8_t digest[32] = {0};
+    struct AspeedSgList array[] = {
+        {  cpu_to_le32(sizeof(test_vector_accum_256) | SG_LIST_LEN_LAST),
+           cpu_to_le32(buffer_addr) },
+    };
+
+    /* Check engine is idle, no busy or irq bits set */
+    g_assert_cmphex(qtest_readl(s, base + HACE_STS), ==, 0);
+
+    /* Write test vector into memory */
+    qtest_memwrite(s, buffer_addr, test_vector_accum_256, sizeof(test_vector_accum_256));
+    qtest_memwrite(s, src_addr, array, sizeof(array));
+
+    write_regs(s, base, src_addr, sizeof(test_vector_accum_256),
+               digest_addr, HACE_ALGO_SHA256 | HACE_SG_EN | HACE_ACCUM_EN);
+
+    /* Check hash IRQ status is asserted */
+    g_assert_cmphex(qtest_readl(s, base + HACE_STS), ==, 0x00000200);
+
+    /* Clear IRQ status and check status is deasserted */
+    qtest_writel(s, base + HACE_STS, 0x00000200);
+    g_assert_cmphex(qtest_readl(s, base + HACE_STS), ==, 0);
+
+    /* Read computed digest from memory */
+    qtest_memread(s, digest_addr, digest, sizeof(digest));
+
+    /* Check result of computation */
+    g_assert_cmpmem(digest, sizeof(digest),
+                    test_result_accum_sha256, sizeof(digest));
+
+    qtest_quit(s);
+}
+
+static void test_sha512_accum(const char *machine, const uint32_t base,
+                        const uint32_t src_addr)
+{
+    QTestState *s = qtest_init(machine);
+
+    const uint32_t buffer_addr = src_addr + 0x1000000;
+    const uint32_t digest_addr = src_addr + 0x4000000;
+    uint8_t digest[64] = {0};
+    struct AspeedSgList array[] = {
+        {  cpu_to_le32(sizeof(test_vector_accum_512) | SG_LIST_LEN_LAST),
+           cpu_to_le32(buffer_addr) },
+    };
+
+    /* Check engine is idle, no busy or irq bits set */
+    g_assert_cmphex(qtest_readl(s, base + HACE_STS), ==, 0);
+
+    /* Write test vector into memory */
+    qtest_memwrite(s, buffer_addr, test_vector_accum_512, sizeof(test_vector_accum_512));
+    qtest_memwrite(s, src_addr, array, sizeof(array));
+
+    write_regs(s, base, src_addr, sizeof(test_vector_accum_512),
+               digest_addr, HACE_ALGO_SHA512 | HACE_SG_EN | HACE_ACCUM_EN);
+
+    /* Check hash IRQ status is asserted */
+    g_assert_cmphex(qtest_readl(s, base + HACE_STS), ==, 0x00000200);
+
+    /* Clear IRQ status and check status is deasserted */
+    qtest_writel(s, base + HACE_STS, 0x00000200);
+    g_assert_cmphex(qtest_readl(s, base + HACE_STS), ==, 0);
+
+    /* Read computed digest from memory */
+    qtest_memread(s, digest_addr, digest, sizeof(digest));
+
+    /* Check result of computation */
+    g_assert_cmpmem(digest, sizeof(digest),
+                    test_result_accum_sha512, sizeof(digest));
+
+    qtest_quit(s);
+}
+
 struct masks {
     uint32_t src;
     uint32_t dest;
@@ -396,6 +528,16 @@ static void test_sha512_sg_ast2600(void)
     test_sha512_sg("-machine ast2600-evb", 0x1e6d0000, 0x80000000);
 }
 
+static void test_sha256_accum_ast2600(void)
+{
+    test_sha256_accum("-machine ast2600-evb", 0x1e6d0000, 0x80000000);
+}
+
+static void test_sha512_accum_ast2600(void)
+{
+    test_sha512_accum("-machine ast2600-evb", 0x1e6d0000, 0x80000000);
+}
+
 static void test_addresses_ast2600(void)
 {
     test_addresses("-machine ast2600-evb", 0x1e6d0000, &ast2600_masks);
@@ -455,6 +597,9 @@ int main(int argc, char **argv)
     qtest_add_func("ast2600/hace/sha512_sg", test_sha512_sg_ast2600);
     qtest_add_func("ast2600/hace/sha256_sg", test_sha256_sg_ast2600);
 
+    qtest_add_func("ast2600/hace/sha512_accum", test_sha512_accum_ast2600);
+    qtest_add_func("ast2600/hace/sha256_accum", test_sha256_accum_ast2600);
+
     qtest_add_func("ast2500/hace/addresses", test_addresses_ast2500);
     qtest_add_func("ast2500/hace/sha512", test_sha512_ast2500);
     qtest_add_func("ast2500/hace/sha256", test_sha256_ast2500);
-- 
2.17.1




* Re: [PATCH v4 1/3] aspeed/hace: Support HMAC Key Buffer register.
  2022-03-31  7:48 ` [PATCH v4 1/3] aspeed/hace: Support HMAC Key Buffer register Steven Lee
@ 2022-04-19 21:35   ` Cédric Le Goater
  0 siblings, 0 replies; 10+ messages in thread
From: Cédric Le Goater @ 2022-04-19 21:35 UTC (permalink / raw)
  To: Steven Lee, Peter Maydell, Andrew Jeffery, Joel Stanley,
	Thomas Huth, Laurent Vivier, Paolo Bonzini,
	open list:ASPEED BMCs, open list:All patches CC here
  Cc: troy_lee, jamin_lin

On 3/31/22 09:48, Steven Lee wrote:
> Support HACE28: Hash HMAC Key Buffer Base Address Register.
> 
> Signed-off-by: Troy Lee <troy_lee@aspeedtech.com>
> Signed-off-by: Steven Lee <steven_lee@aspeedtech.com>

Reviewed-by: Cédric Le Goater <clg@kaod.org>

Thanks,

C.
> ---
>   hw/misc/aspeed_hace.c         | 7 +++++++
>   include/hw/misc/aspeed_hace.h | 1 +
>   2 files changed, 8 insertions(+)
> 
> diff --git a/hw/misc/aspeed_hace.c b/hw/misc/aspeed_hace.c
> index 10f00e65f4..59fe5bfca2 100644
> --- a/hw/misc/aspeed_hace.c
> +++ b/hw/misc/aspeed_hace.c
> @@ -27,6 +27,7 @@
>   
>   #define R_HASH_SRC      (0x20 / 4)
>   #define R_HASH_DEST     (0x24 / 4)
> +#define R_HASH_KEY_BUFF (0x28 / 4)
>   #define R_HASH_SRC_LEN  (0x2c / 4)
>   
>   #define R_HASH_CMD      (0x30 / 4)
> @@ -210,6 +211,9 @@ static void aspeed_hace_write(void *opaque, hwaddr addr, uint64_t data,
>       case R_HASH_DEST:
>           data &= ahc->dest_mask;
>           break;
> +    case R_HASH_KEY_BUFF:
> +        data &= ahc->key_mask;
> +        break;
>       case R_HASH_SRC_LEN:
>           data &= 0x0FFFFFFF;
>           break;
> @@ -333,6 +337,7 @@ static void aspeed_ast2400_hace_class_init(ObjectClass *klass, void *data)
>   
>       ahc->src_mask = 0x0FFFFFFF;
>       ahc->dest_mask = 0x0FFFFFF8;
> +    ahc->key_mask = 0x0FFFFFC0;
>       ahc->hash_mask = 0x000003ff; /* No SG or SHA512 modes */
>   }
>   
> @@ -351,6 +356,7 @@ static void aspeed_ast2500_hace_class_init(ObjectClass *klass, void *data)
>   
>       ahc->src_mask = 0x3fffffff;
>       ahc->dest_mask = 0x3ffffff8;
> +    ahc->key_mask = 0x3FFFFFC0;
>       ahc->hash_mask = 0x000003ff; /* No SG or SHA512 modes */
>   }
>   
> @@ -369,6 +375,7 @@ static void aspeed_ast2600_hace_class_init(ObjectClass *klass, void *data)
>   
>       ahc->src_mask = 0x7FFFFFFF;
>       ahc->dest_mask = 0x7FFFFFF8;
> +    ahc->key_mask = 0x7FFFFFF8;
>       ahc->hash_mask = 0x00147FFF;
>   }
>   
> diff --git a/include/hw/misc/aspeed_hace.h b/include/hw/misc/aspeed_hace.h
> index 94d5ada95f..2242945eb4 100644
> --- a/include/hw/misc/aspeed_hace.h
> +++ b/include/hw/misc/aspeed_hace.h
> @@ -37,6 +37,7 @@ struct AspeedHACEClass {
>   
>       uint32_t src_mask;
>       uint32_t dest_mask;
> +    uint32_t key_mask;
>       uint32_t hash_mask;
>   };
>   




* Re: [PATCH v4 3/3] tests/qtest: Add test for Aspeed HACE accumulative mode
  2022-03-31  7:48 ` [PATCH v4 3/3] tests/qtest: Add test for Aspeed HACE accumulative mode Steven Lee
@ 2022-04-20  9:50   ` Thomas Huth
  2022-04-21  5:55   ` Joel Stanley
  1 sibling, 0 replies; 10+ messages in thread
From: Thomas Huth @ 2022-04-20  9:50 UTC (permalink / raw)
  To: Steven Lee, Cédric Le Goater, Peter Maydell, Andrew Jeffery,
	Joel Stanley, Laurent Vivier, Paolo Bonzini,
	open list:ASPEED BMCs, open list:All patches CC here
  Cc: troy_lee, jamin_lin

On 31/03/2022 09.48, Steven Lee wrote:
> This adds two additional test cases for accumulative mode with SG enabled.
>
> The input vector was manually crafted as "abc" + bit 1 + padding zeros + L.
> The padding length depends on the algorithm, i.e. SHA512 (1024 bit),
> SHA256 (512 bit).
>
> The result was calculated by the command line sha512sum/sha256sum utilities
> without padding, i.e. only the "abc" ASCII text.
> 
> Signed-off-by: Troy Lee <troy_lee@aspeedtech.com>
> Signed-off-by: Steven Lee <steven_lee@aspeedtech.com>
> ---
>   tests/qtest/aspeed_hace-test.c | 145 +++++++++++++++++++++++++++++++++
>   1 file changed, 145 insertions(+)

Acked-by: Thomas Huth <thuth@redhat.com>




* Re: [PATCH v4 2/3] aspeed/hace: Support AST2600 HACE
  2022-03-31  7:48 ` [PATCH v4 2/3] aspeed/hace: Support AST2600 HACE Steven Lee
@ 2022-04-20 12:53   ` Cédric Le Goater
  2022-04-21  2:07     ` Steven Lee
  0 siblings, 1 reply; 10+ messages in thread
From: Cédric Le Goater @ 2022-04-20 12:53 UTC (permalink / raw)
  To: Steven Lee, Peter Maydell, Andrew Jeffery, Joel Stanley,
	Thomas Huth, Laurent Vivier, Paolo Bonzini,
	open list:ASPEED BMCs, open list:All patches CC here
  Cc: troy_lee, jamin_lin

On 3/31/22 09:48, Steven Lee wrote:
> The aspeed ast2600 accumulative mode is described in datasheet
> ast2600v10.pdf section 25.6.4:
>   1. Allocating and initiating accumulative hash digest write buffer
>      with initial state.
>      * Since QEMU crypto/hash api doesn't provide the API to set initial
>        state of hash library, and the initial state is already setted by

s/setted/set/

>        crypto library (gcrypt/glib/...), so skip this step.
>   2. Calculating accumulative hash digest.
>      (a) When receiving the last accumulative data, software need to add
>          padding message at the end of the accumulative data. Padding
>          message described in specific of MD5, SHA-1, SHA224, SHA256,
>          SHA512, SHA512/224, SHA512/256.
>          * Since the crypto library (gcrypt/glib) already pad the
>            padding message internally.
>          * This patch removes the padding message fed by the guest
>            machine driver.
> 
> Signed-off-by: Troy Lee <troy_lee@aspeedtech.com>
> Signed-off-by: Steven Lee <steven_lee@aspeedtech.com>
> ---
>   hw/misc/aspeed_hace.c | 140 ++++++++++++++++++++++++++++++++++++++++--
>   1 file changed, 136 insertions(+), 4 deletions(-)
> 
> diff --git a/hw/misc/aspeed_hace.c b/hw/misc/aspeed_hace.c
> index 59fe5bfca2..5a7a144602 100644
> --- a/hw/misc/aspeed_hace.c
> +++ b/hw/misc/aspeed_hace.c
> @@ -95,12 +95,115 @@ static int hash_algo_lookup(uint32_t reg)
>       return -1;
>   }
>   
> -static void do_hash_operation(AspeedHACEState *s, int algo, bool sg_mode)
> +/**
> + * Check whether the request contains padding message.
> + *
> + * @param iov           iov of current request
> + * @param id            index of iov of current request
> + * @param total_req_len length of all acc_mode requests(including padding msg)
> + * @param req_len       length of the current request
> + * @param total_msg_len length of all acc_mode requests(excluding padding msg)
> + * @param pad_offset    start offset of padding message
> + */
> +static bool has_padding(struct iovec *iov, uint32_t total_req_len,
> +                        hwaddr req_len, uint32_t *total_msg_len,
> +                        uint32_t *pad_offset)
> +{
> +    *total_msg_len = (uint32_t)(ldq_be_p(iov->iov_base + req_len - 8) / 8);
> +    /*
> +     * SG_LIST_LEN_LAST asserted in the request length doesn't mean it is the
> +     * last request. The last request should contain padding message.
> +     * We check whether message contains padding by
> +     *   1. Get total message length. If the current message contains
> +     *      padding, the last 8 bytes are total message length.
> +     *   2. Check whether the total message length is valid.
> +     *      If it is valid, the value should less than or eaual to

s/eaual/equal/

> +     *      total_req_len.
> +     *   3. Current request len - padding_size to get padding offset.
> +     *      The padding message's first byte should be 0x80
> +     */
> +    if (*total_msg_len <= total_req_len) {
> +        uint32_t padding_size = total_req_len - *total_msg_len;
> +        uint8_t *padding = iov->iov_base;
> +        *pad_offset = req_len - padding_size;
> +        if (padding[*pad_offset] == 0x80) {
> +            return true;
> +        }
> +    }
> +
> +    return false;
> +}
> +
> +static int reconstruct_iov(struct iovec *cache, struct iovec *iov, int id,
> +                           uint32_t *total_req_len,
> +                           uint32_t *pad_offset,
> +                           int *count)
> +{
> +    int i, iov_count;
> +    if (pad_offset != 0) {
> +        (cache + *count)->iov_base = (iov + id)->iov_base;

I would prefer the array notation iov[i], like elsewhere in this file..
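
For instance, these two forms are equivalent, and the second matches the
rest of the file:

    (cache + *count)->iov_base = (iov + id)->iov_base;   /* pointer arithmetic */
    cache[*count].iov_base     = iov[id].iov_base;       /* array notation */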

> +        (cache + *count)->iov_len = *pad_offset;
> +        ++*count;
> +    }
> +    for (i = 0; i < *count; i++) {
> +        (iov + i)->iov_base = (cache + i)->iov_base;
> +        (iov + i)->iov_len = (cache + i)->iov_len;

ditto.

> +    }
> +    iov_count = *count;
> +    *count = 0;
> +    *total_req_len = 0;
> +    return iov_count;
> +}
> +
> +/**
> + * Generate iov for accumulative mode.
> + *
> + * @param cache         cached iov
> + * @param iov           iov of current request
> + * @param id            index of iov of current request
> + * @param total_req_len total length of the request(including padding)
> + * @param req_len       length of the current request
> + * @param count         count of cached iov
> + */
> +static int gen_acc_mode_iov(struct iovec *cache, struct iovec *iov, int id,
> +                            uint32_t *total_req_len, hwaddr *req_len,
> +                            int *count)
> +{
> +    uint32_t pad_offset;
> +    uint32_t total_msg_len;
> +    *total_req_len += *req_len;
> +
> +    if (has_padding(&iov[id], *total_req_len, *req_len, &total_msg_len,
> +                    &pad_offset)) {
> +        if (*count) {
> +            return reconstruct_iov(cache, iov, id, total_req_len,
> +                    &pad_offset, count);
> +        }
> +
> +        *req_len -= *total_req_len - total_msg_len;
> +        *total_req_len = 0;
> +        (iov + id)->iov_len = *req_len;
> +        return id + 1;
> +    } else {
> +        (cache + *count)->iov_base = iov->iov_base;
> +        (cache + *count)->iov_len = *req_len;
> +        ++*count;
> +    }
> +
> +    return 0;
> +}
> +
> +static void do_hash_operation(AspeedHACEState *s, int algo, bool sg_mode,
> +                              bool acc_mode)
>   {
>       struct iovec iov[ASPEED_HACE_MAX_SG];
>       g_autofree uint8_t *digest_buf;
>       size_t digest_len = 0;
> +    int niov = 0;
>       int i;
> +    static struct iovec iov_cache[ASPEED_HACE_MAX_SG];
> +    static int count;
> +    static uint32_t total_len;

Why static ? Shouldn't these be AspeedHACEState attributes instead ?


>       if (sg_mode) {
>           uint32_t len = 0;
> @@ -124,10 +227,17 @@ static void do_hash_operation(AspeedHACEState *s, int algo, bool sg_mode)
>                                           MEMTXATTRS_UNSPECIFIED, NULL);
>               addr &= SG_LIST_ADDR_MASK;
>   
> -            iov[i].iov_len = len & SG_LIST_LEN_MASK;
> -            plen = iov[i].iov_len;
> +            plen = len & SG_LIST_LEN_MASK;
>               iov[i].iov_base = address_space_map(&s->dram_as, addr, &plen, false,
>                                                   MEMTXATTRS_UNSPECIFIED);
> +
> +            if (acc_mode) {
> +                niov = gen_acc_mode_iov(
> +                        iov_cache, iov, i, &total_len, &plen, &count);
> +
> +            } else {
> +                iov[i].iov_len = plen;
> +            }
>           }
>       } else {
>           hwaddr len = s->regs[R_HASH_SRC_LEN];
> @@ -137,6 +247,27 @@ static void do_hash_operation(AspeedHACEState *s, int algo, bool sg_mode)
>                                               &len, false,
>                                               MEMTXATTRS_UNSPECIFIED);
>           i = 1;
> +
> +        if (count) {
> +            /*
> +             * In aspeed sdk kernel driver, sg_mode is disabled in hash_final().
> +             * Thus if we received a request with sg_mode disabled, it is
> +             * required to check whether cache is empty. If no, we should
> +             * combine cached iov and the current iov.
> +             */
> +            uint32_t total_msg_len;
> +            uint32_t pad_offset;
> +            total_len += len;
> +            if (has_padding(iov, total_len, len, &total_msg_len,
> +                            &pad_offset)) {
> +                niov = reconstruct_iov(iov_cache, iov, 0, &total_len,
> +                        &pad_offset, &count);
> +            }
> +        }
> +    }
> +
> +    if (niov) {
> +        i = niov;
>       }
>   
>       if (qcrypto_hash_bytesv(algo, iov, i, &digest_buf, &digest_len, NULL) < 0) {
> @@ -238,7 +369,8 @@ static void aspeed_hace_write(void *opaque, hwaddr addr, uint64_t data,
>                           __func__, data & ahc->hash_mask);
>                   break;
>           }
> -        do_hash_operation(s, algo, data & HASH_SG_EN);
> +        do_hash_operation(s, algo, data & HASH_SG_EN,
> +                ((data & HASH_HMAC_MASK) == HASH_DIGEST_ACCUM));
>   
>           if (data & HASH_IRQ_EN) {
>               qemu_irq_raise(s->irq);




* Re: [PATCH v4 2/3] aspeed/hace: Support AST2600 HACE
  2022-04-20 12:53   ` Cédric Le Goater
@ 2022-04-21  2:07     ` Steven Lee
  2022-04-21  7:01       ` Cédric Le Goater
  0 siblings, 1 reply; 10+ messages in thread
From: Steven Lee @ 2022-04-21  2:07 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: Laurent Vivier, Peter Maydell, Thomas Huth, Jamin Lin,
	Andrew Jeffery, Troy Lee, open list:All patches CC here,
	open list:ASPEED BMCs, Joel Stanley, Paolo Bonzini

The 04/20/2022 20:53, Cédric Le Goater wrote:
> On 3/31/22 09:48, Steven Lee wrote:
> > The aspeed ast2600 accumulative mode is described in datasheet
> > ast2600v10.pdf section 25.6.4:
> >   1. Allocating and initiating accumulative hash digest write buffer
> >      with initial state.
> >      * Since QEMU crypto/hash api doesn't provide the API to set initial
> >        state of hash library, and the initial state is already setted by
> 
> s/setted/set/
> 

will fix it.

> >        crypto library (gcrypt/glib/...), so skip this step.
> >   2. Calculating accumulative hash digest.
> >      (a) When receiving the last accumulative data, software need to add
> >          padding message at the end of the accumulative data. Padding
> >          message described in specific of MD5, SHA-1, SHA224, SHA256,
> >          SHA512, SHA512/224, SHA512/256.
> >          * Since the crypto library (gcrypt/glib) already pad the
> >            padding message internally.
> >          * This patch removes the padding message fed by the guest
> >            machine driver.
> > 
> > Signed-off-by: Troy Lee <troy_lee@aspeedtech.com>
> > Signed-off-by: Steven Lee <steven_lee@aspeedtech.com>
> > ---
> >   hw/misc/aspeed_hace.c | 140 ++++++++++++++++++++++++++++++++++++++++--
> >   1 file changed, 136 insertions(+), 4 deletions(-)
> > 
> > diff --git a/hw/misc/aspeed_hace.c b/hw/misc/aspeed_hace.c
> > index 59fe5bfca2..5a7a144602 100644
> > --- a/hw/misc/aspeed_hace.c
> > +++ b/hw/misc/aspeed_hace.c
> > @@ -95,12 +95,115 @@ static int hash_algo_lookup(uint32_t reg)
> >       return -1;
> >   }
> >   
> > -static void do_hash_operation(AspeedHACEState *s, int algo, bool sg_mode)
> > +/**
> > + * Check whether the request contains padding message.
> > + *
> > + * @param iov           iov of current request
> > + * @param id            index of iov of current request
> > + * @param total_req_len length of all acc_mode requests(including padding msg)
> > + * @param req_len       length of the current request
> > + * @param total_msg_len length of all acc_mode requests(excluding padding msg)
> > + * @param pad_offset    start offset of padding message
> > + */
> > +static bool has_padding(struct iovec *iov, uint32_t total_req_len,
> > +                        hwaddr req_len, uint32_t *total_msg_len,
> > +                        uint32_t *pad_offset)
> > +{
> > +    *total_msg_len = (uint32_t)(ldq_be_p(iov->iov_base + req_len - 8) / 8);
> > +    /*
> > +     * SG_LIST_LEN_LAST asserted in the request length doesn't mean it is the
> > +     * last request. The last request should contain padding message.
> > +     * We check whether message contains padding by
> > +     *   1. Get total message length. If the current message contains
> > +     *      padding, the last 8 bytes are total message length.
> > +     *   2. Check whether the total message length is valid.
> > +     *      If it is valid, the value should less than or eaual to
> 
> s/eaual/equal/
> 

will fix it.

> > +     *      total_req_len.
> > +     *   3. Current request len - padding_size to get padding offset.
> > +     *      The padding message's first byte should be 0x80
> > +     */
> > +    if (*total_msg_len <= total_req_len) {
> > +        uint32_t padding_size = total_req_len - *total_msg_len;
> > +        uint8_t *padding = iov->iov_base;
> > +        *pad_offset = req_len - padding_size;
> > +        if (padding[*pad_offset] == 0x80) {
> > +            return true;
> > +        }
> > +    }
> > +
> > +    return false;
> > +}
> > +
> > +static int reconstruct_iov(struct iovec *cache, struct iovec *iov, int id,
> > +                           uint32_t *total_req_len,
> > +                           uint32_t *pad_offset,
> > +                           int *count)
> > +{
> > +    int i, iov_count;
> > +    if (pad_offset != 0) {
> > +        (cache + *count)->iov_base = (iov + id)->iov_base;
> 
> I would prefer the array notation iov[i], like elsewhere in this file..
> 

will use iov[i] instead of (iov + i).

> > +        (cache + *count)->iov_len = *pad_offset;
> > +        ++*count;
> > +    }
> > +    for (i = 0; i < *count; i++) {
> > +        (iov + i)->iov_base = (cache + i)->iov_base;
> > +        (iov + i)->iov_len = (cache + i)->iov_len;
> 
> ditto.
> 

will use iov[i] instead of (iov + i).

> > +    }
> > +    iov_count = *count;
> > +    *count = 0;
> > +    *total_req_len = 0;
> > +    return iov_count;
> > +}
> > +
> > +/**
> > + * Generate iov for accumulative mode.
> > + *
> > + * @param cache         cached iov
> > + * @param iov           iov of current request
> > + * @param id            index of iov of current request
> > + * @param total_req_len total length of the request(including padding)
> > + * @param req_len       length of the current request
> > + * @param count         count of cached iov
> > + */
> > +static int gen_acc_mode_iov(struct iovec *cache, struct iovec *iov, int id,
> > +                            uint32_t *total_req_len, hwaddr *req_len,
> > +                            int *count)
> > +{
> > +    uint32_t pad_offset;
> > +    uint32_t total_msg_len;
> > +    *total_req_len += *req_len;
> > +
> > +    if (has_padding(&iov[id], *total_req_len, *req_len, &total_msg_len,
> > +                    &pad_offset)) {
> > +        if (*count) {
> > +            return reconstruct_iov(cache, iov, id, total_req_len,
> > +                    &pad_offset, count);
> > +        }
> > +
> > +        *req_len -= *total_req_len - total_msg_len;
> > +        *total_req_len = 0;
> > +        (iov + id)->iov_len = *req_len;
> > +        return id + 1;
> > +    } else {
> > +        (cache + *count)->iov_base = iov->iov_base;
> > +        (cache + *count)->iov_len = *req_len;
> > +        ++*count;
> > +    }
> > +
> > +    return 0;
> > +}
> > +
> > +static void do_hash_operation(AspeedHACEState *s, int algo, bool sg_mode,
> > +                              bool acc_mode)
> >   {
> >       struct iovec iov[ASPEED_HACE_MAX_SG];
> >       g_autofree uint8_t *digest_buf;
> >       size_t digest_len = 0;
> > +    int niov = 0;
> >       int i;
> > +    static struct iovec iov_cache[ASPEED_HACE_MAX_SG];
> > +    static int count;
> > +    static uint32_t total_len;
> 
> Why static ? Shouldn't these be AspeedHACEState attributes instead ?
> 
> 

will add these static variables in AspeedHACEState.
Thanks for your review.

Steven

> >       if (sg_mode) {
> >           uint32_t len = 0;
> > @@ -124,10 +227,17 @@ static void do_hash_operation(AspeedHACEState *s, int algo, bool sg_mode)
> >                                           MEMTXATTRS_UNSPECIFIED, NULL);
> >               addr &= SG_LIST_ADDR_MASK;
> >   
> > -            iov[i].iov_len = len & SG_LIST_LEN_MASK;
> > -            plen = iov[i].iov_len;
> > +            plen = len & SG_LIST_LEN_MASK;
> >               iov[i].iov_base = address_space_map(&s->dram_as, addr, &plen, false,
> >                                                   MEMTXATTRS_UNSPECIFIED);
> > +
> > +            if (acc_mode) {
> > +                niov = gen_acc_mode_iov(
> > +                        iov_cache, iov, i, &total_len, &plen, &count);
> > +
> > +            } else {
> > +                iov[i].iov_len = plen;
> > +            }
> >           }
> >       } else {
> >           hwaddr len = s->regs[R_HASH_SRC_LEN];
> > @@ -137,6 +247,27 @@ static void do_hash_operation(AspeedHACEState *s, int algo, bool sg_mode)
> >                                               &len, false,
> >                                               MEMTXATTRS_UNSPECIFIED);
> >           i = 1;
> > +
> > +        if (count) {
> > +            /*
> > +             * In aspeed sdk kernel driver, sg_mode is disabled in hash_final().
> > +             * Thus if we received a request with sg_mode disabled, it is
> > +             * required to check whether cache is empty. If no, we should
> > +             * combine cached iov and the current iov.
> > +             */
> > +            uint32_t total_msg_len;
> > +            uint32_t pad_offset;
> > +            total_len += len;
> > +            if (has_padding(iov, total_len, len, &total_msg_len,
> > +                            &pad_offset)) {
> > +                niov = reconstruct_iov(iov_cache, iov, 0, &total_len,
> > +                        &pad_offset, &count);
> > +            }
> > +        }
> > +    }
> > +
> > +    if (niov) {
> > +        i = niov;
> >       }
> >   
> >       if (qcrypto_hash_bytesv(algo, iov, i, &digest_buf, &digest_len, NULL) < 0) {
> > @@ -238,7 +369,8 @@ static void aspeed_hace_write(void *opaque, hwaddr addr, uint64_t data,
> >                           __func__, data & ahc->hash_mask);
> >                   break;
> >           }
> > -        do_hash_operation(s, algo, data & HASH_SG_EN);
> > +        do_hash_operation(s, algo, data & HASH_SG_EN,
> > +                ((data & HASH_HMAC_MASK) == HASH_DIGEST_ACCUM));
> >   
> >           if (data & HASH_IRQ_EN) {
> >               qemu_irq_raise(s->irq);
> 



* Re: [PATCH v4 3/3] tests/qtest: Add test for Aspeed HACE accumulative mode
  2022-03-31  7:48 ` [PATCH v4 3/3] tests/qtest: Add test for Aspeed HACE accumulative mode Steven Lee
  2022-04-20  9:50   ` Thomas Huth
@ 2022-04-21  5:55   ` Joel Stanley
  1 sibling, 0 replies; 10+ messages in thread
From: Joel Stanley @ 2022-04-21  5:55 UTC (permalink / raw)
  To: Steven Lee
  Cc: Laurent Vivier, Peter Maydell, Thomas Huth, Jamin Lin,
	Andrew Jeffery, Troy Lee, open list:All patches CC here,
	open list:ASPEED BMCs, Cédric Le Goater, Paolo Bonzini

On Thu, 31 Mar 2022 at 07:49, Steven Lee <steven_lee@aspeedtech.com> wrote:
>
> This adds two additional test cases for accumulative mode with SG enabled.
>
> The input vector was manually crafted as "abc" + bit 1 + padding zeros + L.
> The padding length depends on the algorithm, i.e. SHA512 (1024 bit),
> SHA256 (512 bit).
>
> The result was calculated by the command line sha512sum/sha256sum utilities
> without padding, i.e. only the "abc" ASCII text.
>
> Signed-off-by: Troy Lee <troy_lee@aspeedtech.com>
> Signed-off-by: Steven Lee <steven_lee@aspeedtech.com>

Reviewed-by: Joel Stanley <joel@jms.id.au>

Thanks for sending this series. I will try to find time to review the
model updates soon.

> ---
>  tests/qtest/aspeed_hace-test.c | 145 +++++++++++++++++++++++++++++++++
>  1 file changed, 145 insertions(+)
>
> diff --git a/tests/qtest/aspeed_hace-test.c b/tests/qtest/aspeed_hace-test.c
> index 09ee31545e..6a2f404b93 100644
> --- a/tests/qtest/aspeed_hace-test.c
> +++ b/tests/qtest/aspeed_hace-test.c
> @@ -21,6 +21,7 @@
>  #define  HACE_ALGO_SHA512        (BIT(5) | BIT(6))
>  #define  HACE_ALGO_SHA384        (BIT(5) | BIT(6) | BIT(10))
>  #define  HACE_SG_EN              BIT(18)
> +#define  HACE_ACCUM_EN           BIT(8)
>
>  #define HACE_STS                 0x1c
>  #define  HACE_RSA_ISR            BIT(13)
> @@ -96,6 +97,57 @@ static const uint8_t test_result_sg_sha256[] = {
>      0x55, 0x1e, 0x1e, 0xc5, 0x80, 0xdd, 0x6d, 0x5a, 0x6e, 0xcd, 0xe9, 0xf3,
>      0xd3, 0x5e, 0x6e, 0x4a, 0x71, 0x7f, 0xbd, 0xe4};
>
> +/*
> + * The accumulative mode requires firmware to provide internal initial state
> + * and message padding (including length L at the end of padding).
> + *
> + * This test vector is the ASCII text "abc" with its padding message.
> + *
> + * Expected results were generated using command line utilities:
> + *
> + *  echo -n -e 'abc' | dd of=/tmp/test
> + *  for hash in sha512sum sha256sum; do $hash /tmp/test; done
> + */
> +static const uint8_t test_vector_accum_512[] = {
> +    0x61, 0x62, 0x63, 0x80, 0x00, 0x00, 0x00, 0x00,
> +    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> +    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> +    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> +    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> +    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> +    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> +    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> +    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> +    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> +    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> +    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> +    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> +    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> +    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> +    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x18};
> +
> +static const uint8_t test_vector_accum_256[] = {
> +    0x61, 0x62, 0x63, 0x80, 0x00, 0x00, 0x00, 0x00,
> +    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> +    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> +    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> +    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> +    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> +    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> +    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x18};
> +
> +static const uint8_t test_result_accum_sha512[] = {
> +    0xdd, 0xaf, 0x35, 0xa1, 0x93, 0x61, 0x7a, 0xba, 0xcc, 0x41, 0x73, 0x49,
> +    0xae, 0x20, 0x41, 0x31, 0x12, 0xe6, 0xfa, 0x4e, 0x89, 0xa9, 0x7e, 0xa2,
> +    0x0a, 0x9e, 0xee, 0xe6, 0x4b, 0x55, 0xd3, 0x9a, 0x21, 0x92, 0x99, 0x2a,
> +    0x27, 0x4f, 0xc1, 0xa8, 0x36, 0xba, 0x3c, 0x23, 0xa3, 0xfe, 0xeb, 0xbd,
> +    0x45, 0x4d, 0x44, 0x23, 0x64, 0x3c, 0xe8, 0x0e, 0x2a, 0x9a, 0xc9, 0x4f,
> +    0xa5, 0x4c, 0xa4, 0x9f};
> +
> +static const uint8_t test_result_accum_sha256[] = {
> +    0xba, 0x78, 0x16, 0xbf, 0x8f, 0x01, 0xcf, 0xea, 0x41, 0x41, 0x40, 0xde,
> +    0x5d, 0xae, 0x22, 0x23, 0xb0, 0x03, 0x61, 0xa3, 0x96, 0x17, 0x7a, 0x9c,
> +    0xb4, 0x10, 0xff, 0x61, 0xf2, 0x00, 0x15, 0xad};
>
>  static void write_regs(QTestState *s, uint32_t base, uint32_t src,
>                         uint32_t length, uint32_t out, uint32_t method)
> @@ -308,6 +360,86 @@ static void test_sha512_sg(const char *machine, const uint32_t base,
>      qtest_quit(s);
>  }
>
> +static void test_sha256_accum(const char *machine, const uint32_t base,
> +                        const uint32_t src_addr)
> +{
> +    QTestState *s = qtest_init(machine);
> +
> +    const uint32_t buffer_addr = src_addr + 0x1000000;
> +    const uint32_t digest_addr = src_addr + 0x4000000;
> +    uint8_t digest[32] = {0};
> +    struct AspeedSgList array[] = {
> +        {  cpu_to_le32(sizeof(test_vector_accum_256) | SG_LIST_LEN_LAST),
> +           cpu_to_le32(buffer_addr) },
> +    };
> +
> +    /* Check engine is idle, no busy or irq bits set */
> +    g_assert_cmphex(qtest_readl(s, base + HACE_STS), ==, 0);
> +
> +    /* Write test vector into memory */
> +    qtest_memwrite(s, buffer_addr, test_vector_accum_256, sizeof(test_vector_accum_256));
> +    qtest_memwrite(s, src_addr, array, sizeof(array));
> +
> +    write_regs(s, base, src_addr, sizeof(test_vector_accum_256),
> +               digest_addr, HACE_ALGO_SHA256 | HACE_SG_EN | HACE_ACCUM_EN);
> +
> +    /* Check hash IRQ status is asserted */
> +    g_assert_cmphex(qtest_readl(s, base + HACE_STS), ==, 0x00000200);
> +
> +    /* Clear IRQ status and check status is deasserted */
> +    qtest_writel(s, base + HACE_STS, 0x00000200);
> +    g_assert_cmphex(qtest_readl(s, base + HACE_STS), ==, 0);
> +
> +    /* Read computed digest from memory */
> +    qtest_memread(s, digest_addr, digest, sizeof(digest));
> +
> +    /* Check result of computation */
> +    g_assert_cmpmem(digest, sizeof(digest),
> +                    test_result_accum_sha256, sizeof(digest));
> +
> +    qtest_quit(s);
> +}
> +
> +static void test_sha512_accum(const char *machine, const uint32_t base,
> +                        const uint32_t src_addr)
> +{
> +    QTestState *s = qtest_init(machine);
> +
> +    const uint32_t buffer_addr = src_addr + 0x1000000;
> +    const uint32_t digest_addr = src_addr + 0x4000000;
> +    uint8_t digest[64] = {0};
> +    struct AspeedSgList array[] = {
> +        {  cpu_to_le32(sizeof(test_vector_accum_512) | SG_LIST_LEN_LAST),
> +           cpu_to_le32(buffer_addr) },
> +    };
> +
> +    /* Check engine is idle, no busy or irq bits set */
> +    g_assert_cmphex(qtest_readl(s, base + HACE_STS), ==, 0);
> +
> +    /* Write test vector into memory */
> +    qtest_memwrite(s, buffer_addr, test_vector_accum_512, sizeof(test_vector_accum_512));
> +    qtest_memwrite(s, src_addr, array, sizeof(array));
> +
> +    write_regs(s, base, src_addr, sizeof(test_vector_accum_512),
> +               digest_addr, HACE_ALGO_SHA512 | HACE_SG_EN | HACE_ACCUM_EN);
> +
> +    /* Check hash IRQ status is asserted */
> +    g_assert_cmphex(qtest_readl(s, base + HACE_STS), ==, 0x00000200);
> +
> +    /* Clear IRQ status and check status is deasserted */
> +    qtest_writel(s, base + HACE_STS, 0x00000200);
> +    g_assert_cmphex(qtest_readl(s, base + HACE_STS), ==, 0);
> +
> +    /* Read computed digest from memory */
> +    qtest_memread(s, digest_addr, digest, sizeof(digest));
> +
> +    /* Check result of computation */
> +    g_assert_cmpmem(digest, sizeof(digest),
> +                    test_result_accum_sha512, sizeof(digest));
> +
> +    qtest_quit(s);
> +}
> +
>  struct masks {
>      uint32_t src;
>      uint32_t dest;
> @@ -396,6 +528,16 @@ static void test_sha512_sg_ast2600(void)
>      test_sha512_sg("-machine ast2600-evb", 0x1e6d0000, 0x80000000);
>  }
>
> +static void test_sha256_accum_ast2600(void)
> +{
> +    test_sha256_accum("-machine ast2600-evb", 0x1e6d0000, 0x80000000);
> +}
> +
> +static void test_sha512_accum_ast2600(void)
> +{
> +    test_sha512_accum("-machine ast2600-evb", 0x1e6d0000, 0x80000000);
> +}
> +
>  static void test_addresses_ast2600(void)
>  {
>      test_addresses("-machine ast2600-evb", 0x1e6d0000, &ast2600_masks);
> @@ -455,6 +597,9 @@ int main(int argc, char **argv)
>      qtest_add_func("ast2600/hace/sha512_sg", test_sha512_sg_ast2600);
>      qtest_add_func("ast2600/hace/sha256_sg", test_sha256_sg_ast2600);
>
> +    qtest_add_func("ast2600/hace/sha512_accum", test_sha512_accum_ast2600);
> +    qtest_add_func("ast2600/hace/sha256_accum", test_sha256_accum_ast2600);
> +
>      qtest_add_func("ast2500/hace/addresses", test_addresses_ast2500);
>      qtest_add_func("ast2500/hace/sha512", test_sha512_ast2500);
>      qtest_add_func("ast2500/hace/sha256", test_sha256_ast2500);
> --
> 2.17.1
>



* Re: [PATCH v4 2/3] aspeed/hace: Support AST2600 HACE
  2022-04-21  2:07     ` Steven Lee
@ 2022-04-21  7:01       ` Cédric Le Goater
  0 siblings, 0 replies; 10+ messages in thread
From: Cédric Le Goater @ 2022-04-21  7:01 UTC (permalink / raw)
  To: Steven Lee
  Cc: Laurent Vivier, Peter Maydell, Thomas Huth, Jamin Lin,
	Andrew Jeffery, Troy Lee, open list:All patches CC here,
	open list:ASPEED BMCs, Joel Stanley, Paolo Bonzini

Hello Steven,

>>> +static void do_hash_operation(AspeedHACEState *s, int algo, bool sg_mode,
>>> +                              bool acc_mode)
>>>    {
>>>        struct iovec iov[ASPEED_HACE_MAX_SG];
>>>        g_autofree uint8_t *digest_buf;
>>>        size_t digest_len = 0;
>>> +    int niov = 0;
>>>        int i;
>>> +    static struct iovec iov_cache[ASPEED_HACE_MAX_SG];
>>> +    static int count;
>>> +    static uint32_t total_len;
>>
>> Why static ? Shouldn't these be AspeedHACEState attributes instead ?
>>
> 
> will add these static variables in AspeedHACEState.

When you do, please update the reset handler and the vmstate.
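
For illustration, a rough sketch of what that could look like (the field
and vmstate names here are assumptions, not the final patch):

    /* in struct AspeedHACEState (include/hw/misc/aspeed_hace.h) */
    struct iovec iov_cache[ASPEED_HACE_MAX_SG];
    uint32_t total_req_len;
    uint32_t iov_count;

    /* in the device reset handler: drop any partially accumulated request */
    memset(s->iov_cache, 0, sizeof(s->iov_cache));
    s->total_req_len = 0;
    s->iov_count = 0;

    /* in the vmstate field list: the scalars can be migrated directly */
    VMSTATE_UINT32(total_req_len, AspeedHACEState),
    VMSTATE_UINT32(iov_count, AspeedHACEState),
    /*
     * The cached iovecs hold host pointers from address_space_map(), so
     * they cannot be migrated as-is; the guest addresses and lengths
     * would need to be saved instead, or the cache flushed beforehand.
     */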

Thanks,

C.


