linux-crypto.vger.kernel.org archive mirror
* [PATCH v7 0/7] add ECDH and CURVE25519 algorithms support for Kunpeng 930
@ 2021-01-22  7:09 Meng Yu
  2021-01-22  7:09 ` [PATCH v7 1/7] crypto: hisilicon/hpre - add version adapt to new algorithms Meng Yu
                   ` (6 more replies)
  0 siblings, 7 replies; 25+ messages in thread
From: Meng Yu @ 2021-01-22  7:09 UTC (permalink / raw)
  To: herbert, davem; +Cc: linux-crypto, xuzaibo, wangzhou1, yumeng18, linux-kernel

1. Add definitions of some new elliptic curve parameters, and reorder
   the ECC 'Curve IDs';
2. Add an interface in "include/crypto/ecc_curve.h" to get an elliptic
   curve by its curve_id;
3. Add ECDH and CURVE25519 algorithms support for Kunpeng 930.

v6->v7:
- patch #4: add function interface to expose elliptic curve parameters
- patch #4: fix a warning reported by the 'kernel test robot'
- patch #5: add function interface to expose curve25519 parameters

v5->v6:
- patch #1: add a new patch (the first patch), which the following patches depend on

v4->v5:
- patch #4: delete the P-128 and P-320 curves, as they have few use cases in the kernel

v3 -> v4:
- patch #3: new patch; move the 'ecc_curve' parameters to "include/crypto"

v2 -> v3:
- patch #5: fix sparse warnings
- patch #5: add 'CRYPTO_LIB_CURVE25519_GENERIC' in 'Kconfig'

v1 -> v2:
- patch #5: delete `curve25519_null_point'


Hui Tang (1):
  crypto: hisilicon/hpre - add some updates to adapt to Kunpeng 930

Meng Yu (6):
  crypto: hisilicon/hpre - add version adapt to new algorithms
  crypto: hisilicon/hpre - add algorithm type
  crypto: add ecc curve and expose them
  crypto: add curve 25519 and expose them
  crypto: hisilicon/hpre - add 'ECDH' algorithm
  crypto: hisilicon/hpre - add 'CURVE25519' algorithm

 crypto/ecc.c                                |  22 +-
 crypto/ecc.h                                |  37 +-
 crypto/ecc_curve_defs.h                     | 163 +++++-
 crypto/testmgr.h                            |  12 +-
 drivers/crypto/hisilicon/Kconfig            |   1 +
 drivers/crypto/hisilicon/hpre/hpre.h        |  25 +-
 drivers/crypto/hisilicon/hpre/hpre_crypto.c | 861 +++++++++++++++++++++++++++-
 drivers/crypto/hisilicon/hpre/hpre_main.c   | 105 ++--
 drivers/crypto/hisilicon/qm.c               |   4 +-
 drivers/crypto/hisilicon/qm.h               |   4 +-
 drivers/crypto/hisilicon/sec2/sec.h         |   4 +-
 drivers/crypto/hisilicon/sec2/sec_crypto.c  |   4 +-
 drivers/crypto/hisilicon/sec2/sec_crypto.h  |   4 +-
 drivers/crypto/hisilicon/zip/zip.h          |   4 +-
 drivers/crypto/hisilicon/zip/zip_crypto.c   |   4 +-
 include/crypto/ecc_curve.h                  |  60 ++
 include/crypto/ecdh.h                       |   5 +-
 17 files changed, 1191 insertions(+), 128 deletions(-)
 create mode 100644 include/crypto/ecc_curve.h

-- 
2.8.1


^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v7 1/7] crypto: hisilicon/hpre - add version adapt to new algorithms
  2021-01-22  7:09 [PATCH v7 0/7] add ECDH and CURVE25519 algorithms support for Kunpeng 930 Meng Yu
@ 2021-01-22  7:09 ` Meng Yu
  2021-01-22  7:09 ` [PATCH v7 2/7] crypto: hisilicon/hpre - add some updates to adapt to Kunpeng 930 Meng Yu
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 25+ messages in thread
From: Meng Yu @ 2021-01-22  7:09 UTC (permalink / raw)
  To: herbert, davem; +Cc: linux-crypto, xuzaibo, wangzhou1, yumeng18, linux-kernel

A new generation of the accelerator, Kunpeng 930, has appeared, and the
corresponding driver needs to be updated to support its new algorithms.
To stay compatible with Kunpeng 920, add the parameter
'struct hisi_qm *qm' to the hpre/sec/zip algorithm registration and
unregistration interfaces, so that they can identify the chip's version.
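
For illustration only (not part of this patch): once 'qm' is threaded
through, a registration hook can gate version-specific algorithms on
'qm->ver'. The function name below is hypothetical, and 'rsa'/'ecdh'
stand in for the driver's algorithm definitions; later patches in this
series do essentially this in hpre_algs_register():

static int example_algs_register(struct hisi_qm *qm)
{
	int ret;

	ret = crypto_register_akcipher(&rsa);
	if (ret)
		return ret;

	/* Only Kunpeng 930 (HW v3) exposes the new ECC algorithms. */
	if (qm->ver >= QM_HW_V3) {
		ret = crypto_register_kpp(&ecdh);
		if (ret)
			crypto_unregister_akcipher(&rsa);
	}

	return ret;
}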

Signed-off-by: Meng Yu <yumeng18@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Reviewed-by: Longfang Liu <liulongfang@huawei.com>
---
 drivers/crypto/hisilicon/hpre/hpre.h        | 5 +++--
 drivers/crypto/hisilicon/hpre/hpre_crypto.c | 4 ++--
 drivers/crypto/hisilicon/qm.c               | 4 ++--
 drivers/crypto/hisilicon/qm.h               | 4 ++--
 drivers/crypto/hisilicon/sec2/sec.h         | 4 ++--
 drivers/crypto/hisilicon/sec2/sec_crypto.c  | 4 ++--
 drivers/crypto/hisilicon/sec2/sec_crypto.h  | 4 ++--
 drivers/crypto/hisilicon/zip/zip.h          | 4 ++--
 drivers/crypto/hisilicon/zip/zip_crypto.c   | 4 ++--
 9 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/drivers/crypto/hisilicon/hpre/hpre.h b/drivers/crypto/hisilicon/hpre/hpre.h
index f69252b..e784712 100644
--- a/drivers/crypto/hisilicon/hpre/hpre.h
+++ b/drivers/crypto/hisilicon/hpre/hpre.h
@@ -91,7 +91,8 @@ struct hpre_sqe {
 };
 
 struct hisi_qp *hpre_create_qp(void);
-int hpre_algs_register(void);
-void hpre_algs_unregister(void);
+int hpre_algs_register(struct hisi_qm *qm);
+void hpre_algs_unregister(struct hisi_qm *qm);
+
 
 #endif
diff --git a/drivers/crypto/hisilicon/hpre/hpre_crypto.c b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
index a87f990..d89b2f5 100644
--- a/drivers/crypto/hisilicon/hpre/hpre_crypto.c
+++ b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
@@ -1154,7 +1154,7 @@ static struct kpp_alg dh = {
 };
 #endif
 
-int hpre_algs_register(void)
+int hpre_algs_register(struct hisi_qm *qm)
 {
 	int ret;
 
@@ -1171,7 +1171,7 @@ int hpre_algs_register(void)
 	return ret;
 }
 
-void hpre_algs_unregister(void)
+void hpre_algs_unregister(struct hisi_qm *qm)
 {
 	crypto_unregister_akcipher(&rsa);
 #ifdef CONFIG_CRYPTO_DH
diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
index 904b99a..5e9d8d7 100644
--- a/drivers/crypto/hisilicon/qm.c
+++ b/drivers/crypto/hisilicon/qm.c
@@ -4015,7 +4015,7 @@ int hisi_qm_alg_register(struct hisi_qm *qm, struct hisi_qm_list *qm_list)
 	mutex_unlock(&qm_list->lock);
 
 	if (flag) {
-		ret = qm_list->register_to_crypto();
+		ret = qm_list->register_to_crypto(qm);
 		if (ret) {
 			mutex_lock(&qm_list->lock);
 			list_del(&qm->list);
@@ -4046,7 +4046,7 @@ void hisi_qm_alg_unregister(struct hisi_qm *qm, struct hisi_qm_list *qm_list)
 	mutex_unlock(&qm_list->lock);
 
 	if (list_empty(&qm_list->list))
-		qm_list->unregister_from_crypto();
+		qm_list->unregister_from_crypto(qm);
 }
 EXPORT_SYMBOL_GPL(hisi_qm_alg_unregister);
 
diff --git a/drivers/crypto/hisilicon/qm.h b/drivers/crypto/hisilicon/qm.h
index c1dd0fc..7d0d626 100644
--- a/drivers/crypto/hisilicon/qm.h
+++ b/drivers/crypto/hisilicon/qm.h
@@ -198,8 +198,8 @@ struct hisi_qm_err_ini {
 struct hisi_qm_list {
 	struct mutex lock;
 	struct list_head list;
-	int (*register_to_crypto)(void);
-	void (*unregister_from_crypto)(void);
+	int (*register_to_crypto)(struct hisi_qm *qm);
+	void (*unregister_from_crypto)(struct hisi_qm *qm);
 };
 
 struct hisi_qm {
diff --git a/drivers/crypto/hisilicon/sec2/sec.h b/drivers/crypto/hisilicon/sec2/sec.h
index 0849191..17ddb20 100644
--- a/drivers/crypto/hisilicon/sec2/sec.h
+++ b/drivers/crypto/hisilicon/sec2/sec.h
@@ -183,6 +183,6 @@ struct sec_dev {
 
 void sec_destroy_qps(struct hisi_qp **qps, int qp_num);
 struct hisi_qp **sec_create_qps(void);
-int sec_register_to_crypto(void);
-void sec_unregister_from_crypto(void);
+int sec_register_to_crypto(struct hisi_qm *qm);
+void sec_unregister_from_crypto(struct hisi_qm *qm);
 #endif
diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c
index 2eaa516..f835514 100644
--- a/drivers/crypto/hisilicon/sec2/sec_crypto.c
+++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c
@@ -1634,7 +1634,7 @@ static struct aead_alg sec_aeads[] = {
 		     AES_BLOCK_SIZE, AES_BLOCK_SIZE, SHA512_DIGEST_SIZE),
 };
 
-int sec_register_to_crypto(void)
+int sec_register_to_crypto(struct hisi_qm *qm)
 {
 	int ret;
 
@@ -1651,7 +1651,7 @@ int sec_register_to_crypto(void)
 	return ret;
 }
 
-void sec_unregister_from_crypto(void)
+void sec_unregister_from_crypto(struct hisi_qm *qm)
 {
 	crypto_unregister_skciphers(sec_skciphers,
 				    ARRAY_SIZE(sec_skciphers));
diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.h b/drivers/crypto/hisilicon/sec2/sec_crypto.h
index b2786e1..0e933e7 100644
--- a/drivers/crypto/hisilicon/sec2/sec_crypto.h
+++ b/drivers/crypto/hisilicon/sec2/sec_crypto.h
@@ -211,6 +211,6 @@ struct sec_sqe {
 	struct sec_sqe_type2 type2;
 };
 
-int sec_register_to_crypto(void);
-void sec_unregister_from_crypto(void);
+int sec_register_to_crypto(struct hisi_qm *qm);
+void sec_unregister_from_crypto(struct hisi_qm *qm);
 #endif
diff --git a/drivers/crypto/hisilicon/zip/zip.h b/drivers/crypto/hisilicon/zip/zip.h
index 92397f9..9ed7461 100644
--- a/drivers/crypto/hisilicon/zip/zip.h
+++ b/drivers/crypto/hisilicon/zip/zip.h
@@ -62,6 +62,6 @@ struct hisi_zip_sqe {
 };
 
 int zip_create_qps(struct hisi_qp **qps, int ctx_num, int node);
-int hisi_zip_register_to_crypto(void);
-void hisi_zip_unregister_from_crypto(void);
+int hisi_zip_register_to_crypto(struct hisi_qm *qm);
+void hisi_zip_unregister_from_crypto(struct hisi_qm *qm);
 #endif
diff --git a/drivers/crypto/hisilicon/zip/zip_crypto.c b/drivers/crypto/hisilicon/zip/zip_crypto.c
index 08b4660..41f6966 100644
--- a/drivers/crypto/hisilicon/zip/zip_crypto.c
+++ b/drivers/crypto/hisilicon/zip/zip_crypto.c
@@ -665,7 +665,7 @@ static struct acomp_alg hisi_zip_acomp_gzip = {
 	}
 };
 
-int hisi_zip_register_to_crypto(void)
+int hisi_zip_register_to_crypto(struct hisi_qm *qm)
 {
 	int ret;
 
@@ -684,7 +684,7 @@ int hisi_zip_register_to_crypto(void)
 	return ret;
 }
 
-void hisi_zip_unregister_from_crypto(void)
+void hisi_zip_unregister_from_crypto(struct hisi_qm *qm)
 {
 	crypto_unregister_acomp(&hisi_zip_acomp_gzip);
 	crypto_unregister_acomp(&hisi_zip_acomp_zlib);
-- 
2.8.1


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v7 2/7] crypto: hisilicon/hpre - add some updates to adapt to Kunpeng 930
  2021-01-22  7:09 [PATCH v7 0/7] add ECDH and CURVE25519 algorithms support for Kunpeng 930 Meng Yu
  2021-01-22  7:09 ` [PATCH v7 1/7] crypto: hisilicon/hpre - add version adapt to new algorithms Meng Yu
@ 2021-01-22  7:09 ` Meng Yu
  2021-01-22  7:09 ` [PATCH v7 3/7] crypto: hisilicon/hpre - add algorithm type Meng Yu
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 25+ messages in thread
From: Meng Yu @ 2021-01-22  7:09 UTC (permalink / raw)
  To: herbert, davem; +Cc: linux-crypto, xuzaibo, wangzhou1, yumeng18, linux-kernel

From: Hui Tang <tanghui20@huawei.com>

The HPRE of Kunpeng 930 differs from that of Kunpeng 920 in cluster
number and configuration, so update this driver so that it runs
correctly on both chips.

Signed-off-by: Hui Tang <tanghui20@huawei.com>
Signed-off-by: Meng Yu <yumeng18@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
---
 drivers/crypto/hisilicon/hpre/hpre.h      |  8 ++-
 drivers/crypto/hisilicon/hpre/hpre_main.c | 93 +++++++++++++++++++++----------
 2 files changed, 68 insertions(+), 33 deletions(-)

diff --git a/drivers/crypto/hisilicon/hpre/hpre.h b/drivers/crypto/hisilicon/hpre/hpre.h
index e784712..cc50f23 100644
--- a/drivers/crypto/hisilicon/hpre/hpre.h
+++ b/drivers/crypto/hisilicon/hpre/hpre.h
@@ -14,8 +14,7 @@ enum {
 	HPRE_CLUSTER0,
 	HPRE_CLUSTER1,
 	HPRE_CLUSTER2,
-	HPRE_CLUSTER3,
-	HPRE_CLUSTERS_NUM,
+	HPRE_CLUSTER3
 };
 
 enum hpre_ctrl_dbgfs_file {
@@ -36,7 +35,10 @@ enum hpre_dfx_dbgfs_file {
 	HPRE_DFX_FILE_NUM
 };
 
-#define HPRE_DEBUGFS_FILE_NUM    (HPRE_DEBUG_FILE_NUM + HPRE_CLUSTERS_NUM - 1)
+#define HPRE_CLUSTERS_NUM_V2		(HPRE_CLUSTER3 + 1)
+#define HPRE_CLUSTERS_NUM_V3		1
+#define HPRE_CLUSTERS_NUM_MAX		HPRE_CLUSTERS_NUM_V2
+#define HPRE_DEBUGFS_FILE_NUM (HPRE_DEBUG_FILE_NUM + HPRE_CLUSTERS_NUM_MAX - 1)
 
 struct hpre_debugfs_file {
 	int index;
diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c
index ad8b691..52827b0 100644
--- a/drivers/crypto/hisilicon/hpre/hpre_main.c
+++ b/drivers/crypto/hisilicon/hpre/hpre_main.c
@@ -30,6 +30,8 @@
 #define HPRE_BD_ARUSR_CFG		0x301030
 #define HPRE_BD_AWUSR_CFG		0x301034
 #define HPRE_TYPES_ENB			0x301038
+#define HPRE_RSA_ENB			BIT(0)
+#define HPRE_ECC_ENB			BIT(1)
 #define HPRE_DATA_RUSER_CFG		0x30103c
 #define HPRE_DATA_WUSER_CFG		0x301040
 #define HPRE_INT_MASK			0x301400
@@ -74,7 +76,8 @@
 #define HPRE_QM_AXI_CFG_MASK		0xffff
 #define HPRE_QM_VFG_AX_MASK		0xff
 #define HPRE_BD_USR_MASK		0x3
-#define HPRE_CLUSTER_CORE_MASK		0xf
+#define HPRE_CLUSTER_CORE_MASK_V2	0xf
+#define HPRE_CLUSTER_CORE_MASK_V3	0xff
 
 #define HPRE_AM_OOO_SHUTDOWN_ENB	0x301044
 #define HPRE_AM_OOO_SHUTDOWN_ENABLE	BIT(0)
@@ -87,6 +90,11 @@
 #define HPRE_QM_PM_FLR			BIT(11)
 #define HPRE_QM_SRIOV_FLR		BIT(12)
 
+#define HPRE_CLUSTERS_NUM(qm)		\
+	(((qm)->ver >= QM_HW_V3) ? HPRE_CLUSTERS_NUM_V3 : HPRE_CLUSTERS_NUM_V2)
+#define HPRE_CLUSTER_CORE_MASK(qm)	\
+	(((qm)->ver >= QM_HW_V3) ? HPRE_CLUSTER_CORE_MASK_V3 :\
+		HPRE_CLUSTER_CORE_MASK_V2)
 #define HPRE_VIA_MSI_DSM		1
 #define HPRE_SQE_MASK_OFFSET		8
 #define HPRE_SQE_MASK_LEN		24
@@ -276,8 +284,40 @@ static int hpre_cfg_by_dsm(struct hisi_qm *qm)
 	return 0;
 }
 
+static int hpre_set_cluster(struct hisi_qm *qm)
+{
+	u32 cluster_core_mask = HPRE_CLUSTER_CORE_MASK(qm);
+	u8 clusters_num = HPRE_CLUSTERS_NUM(qm);
+	struct device *dev = &qm->pdev->dev;
+	unsigned long offset;
+	u32 val = 0;
+	int ret, i;
+
+	for (i = 0; i < clusters_num; i++) {
+		offset = i * HPRE_CLSTR_ADDR_INTRVL;
+
+		/* clusters initiating */
+		writel(cluster_core_mask,
+		       HPRE_ADDR(qm, offset + HPRE_CORE_ENB));
+		writel(0x1, HPRE_ADDR(qm, offset + HPRE_CORE_INI_CFG));
+		ret = readl_relaxed_poll_timeout(HPRE_ADDR(qm, offset +
+					HPRE_CORE_INI_STATUS), val,
+					((val & cluster_core_mask) ==
+					cluster_core_mask),
+					HPRE_REG_RD_INTVRL_US,
+					HPRE_REG_RD_TMOUT_US);
+		if (ret) {
+			dev_err(dev,
+				"cluster %d int st status timeout!\n", i);
+			return -ETIMEDOUT;
+		}
+	}
+
+	return 0;
+}
+
 /*
- * For Hi1620, we shoul disable FLR triggered by hardware (BME/PM/SRIOV).
+ * For Kunpeng 920, we should disable FLR triggered by hardware (BME/PM/SRIOV).
  * Or it may stay in D3 state when we bind and unbind hpre quickly,
  * as it does FLR triggered by hardware.
  */
@@ -295,9 +335,8 @@ static void disable_flr_of_bme(struct hisi_qm *qm)
 static int hpre_set_user_domain_and_cache(struct hisi_qm *qm)
 {
 	struct device *dev = &qm->pdev->dev;
-	unsigned long offset;
-	int ret, i;
 	u32 val;
+	int ret;
 
 	writel(HPRE_QM_USR_CFG_MASK, HPRE_ADDR(qm, QM_ARUSER_M_CFG_ENABLE));
 	writel(HPRE_QM_USR_CFG_MASK, HPRE_ADDR(qm, QM_AWUSER_M_CFG_ENABLE));
@@ -308,7 +347,12 @@ static int hpre_set_user_domain_and_cache(struct hisi_qm *qm)
 	val |= BIT(HPRE_TIMEOUT_ABNML_BIT);
 	writel_relaxed(val, HPRE_ADDR(qm, HPRE_QM_ABNML_INT_MASK));
 
-	writel(0x1, HPRE_ADDR(qm, HPRE_TYPES_ENB));
+	if (qm->ver >= QM_HW_V3)
+		writel(HPRE_RSA_ENB | HPRE_ECC_ENB,
+			HPRE_ADDR(qm, HPRE_TYPES_ENB));
+	else
+		writel(HPRE_RSA_ENB, HPRE_ADDR(qm, HPRE_TYPES_ENB));
+
 	writel(HPRE_QM_VFG_AX_MASK, HPRE_ADDR(qm, HPRE_VFG_AXCACHE));
 	writel(0x0, HPRE_ADDR(qm, HPRE_BD_ENDIAN));
 	writel(0x0, HPRE_ADDR(qm, HPRE_INT_MASK));
@@ -333,37 +377,25 @@ static int hpre_set_user_domain_and_cache(struct hisi_qm *qm)
 		return -ETIMEDOUT;
 	}
 
-	for (i = 0; i < HPRE_CLUSTERS_NUM; i++) {
-		offset = i * HPRE_CLSTR_ADDR_INTRVL;
-
-		/* clusters initiating */
-		writel(HPRE_CLUSTER_CORE_MASK,
-		       HPRE_ADDR(qm, offset + HPRE_CORE_ENB));
-		writel(0x1, HPRE_ADDR(qm, offset + HPRE_CORE_INI_CFG));
-		ret = readl_relaxed_poll_timeout(HPRE_ADDR(qm, offset +
-					HPRE_CORE_INI_STATUS), val,
-					((val & HPRE_CLUSTER_CORE_MASK) ==
-					HPRE_CLUSTER_CORE_MASK),
-					HPRE_REG_RD_INTVRL_US,
-					HPRE_REG_RD_TMOUT_US);
-		if (ret) {
-			dev_err(dev,
-				"cluster %d int st status timeout!\n", i);
-			return -ETIMEDOUT;
-		}
-	}
-
-	ret = hpre_cfg_by_dsm(qm);
+	ret = hpre_set_cluster(qm);
 	if (ret)
-		dev_err(dev, "acpi_evaluate_dsm err.\n");
+		return -ETIMEDOUT;
 
-	disable_flr_of_bme(qm);
+	/* This setting is only needed by Kunpeng 920. */
+	if (qm->ver == QM_HW_V2) {
+		ret = hpre_cfg_by_dsm(qm);
+		if (ret)
+			dev_err(dev, "acpi_evaluate_dsm err.\n");
+
+		disable_flr_of_bme(qm);
+	}
 
 	return ret;
 }
 
 static void hpre_cnt_regs_clear(struct hisi_qm *qm)
 {
+	u8 clusters_num = HPRE_CLUSTERS_NUM(qm);
 	unsigned long offset;
 	int i;
 
@@ -372,7 +404,7 @@ static void hpre_cnt_regs_clear(struct hisi_qm *qm)
 	writel(0x0, qm->io_base + QM_DFX_DB_CNT_VF);
 
 	/* clear clusterX/cluster_ctrl */
-	for (i = 0; i < HPRE_CLUSTERS_NUM; i++) {
+	for (i = 0; i < clusters_num; i++) {
 		offset = HPRE_CLSTR_BASE + i * HPRE_CLSTR_ADDR_INTRVL;
 		writel(0x0, qm->io_base + offset + HPRE_CLUSTER_INQURY);
 	}
@@ -671,13 +703,14 @@ static int hpre_pf_comm_regs_debugfs_init(struct hisi_qm *qm)
 
 static int hpre_cluster_debugfs_init(struct hisi_qm *qm)
 {
+	u8 clusters_num = HPRE_CLUSTERS_NUM(qm);
 	struct device *dev = &qm->pdev->dev;
 	char buf[HPRE_DBGFS_VAL_MAX_LEN];
 	struct debugfs_regset32 *regset;
 	struct dentry *tmp_d;
 	int i, ret;
 
-	for (i = 0; i < HPRE_CLUSTERS_NUM; i++) {
+	for (i = 0; i < clusters_num; i++) {
 		ret = snprintf(buf, HPRE_DBGFS_VAL_MAX_LEN, "cluster%d", i);
 		if (ret < 0)
 			return -EINVAL;
-- 
2.8.1


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v7 3/7] crypto: hisilicon/hpre - add algorithm type
  2021-01-22  7:09 [PATCH v7 0/7] add ECDH and CURVE25519 algorithms support for Kunpeng 930 Meng Yu
  2021-01-22  7:09 ` [PATCH v7 1/7] crypto: hisilicon/hpre - add version adapt to new algorithms Meng Yu
  2021-01-22  7:09 ` [PATCH v7 2/7] crypto: hisilicon/hpre - add some updates to adapt to Kunpeng 930 Meng Yu
@ 2021-01-22  7:09 ` Meng Yu
  2021-01-22  7:09 ` [PATCH v7 4/7] crypto: add ecc curve and expose them Meng Yu
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 25+ messages in thread
From: Meng Yu @ 2021-01-22  7:09 UTC (permalink / raw)
  To: herbert, davem; +Cc: linux-crypto, xuzaibo, wangzhou1, yumeng18, linux-kernel

An algorithm type is introduced so that the proper hardware HPRE queue
can be selected for different algorithms.

Signed-off-by: Meng Yu <yumeng18@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
---
 drivers/crypto/hisilicon/hpre/hpre.h        | 10 +++++++++-
 drivers/crypto/hisilicon/hpre/hpre_crypto.c | 12 ++++++------
 drivers/crypto/hisilicon/hpre/hpre_main.c   | 11 +++++++++--
 3 files changed, 24 insertions(+), 9 deletions(-)

diff --git a/drivers/crypto/hisilicon/hpre/hpre.h b/drivers/crypto/hisilicon/hpre/hpre.h
index cc50f23..02193e1 100644
--- a/drivers/crypto/hisilicon/hpre/hpre.h
+++ b/drivers/crypto/hisilicon/hpre/hpre.h
@@ -10,6 +10,14 @@
 #define HPRE_PF_DEF_Q_NUM		64
 #define HPRE_PF_DEF_Q_BASE		0
 
+/*
+ * type used in qm sqc DW6.
+ * 0 - Algorithm which has been supported in V2, like RSA, DH and so on;
+ * 1 - ECC algorithm in V3.
+ */
+#define HPRE_V2_ALG_TYPE	0
+#define HPRE_V3_ECC_ALG_TYPE	1
+
 enum {
 	HPRE_CLUSTER0,
 	HPRE_CLUSTER1,
@@ -92,7 +100,7 @@ struct hpre_sqe {
 	__le32 rsvd1[_HPRE_SQE_ALIGN_EXT];
 };
 
-struct hisi_qp *hpre_create_qp(void);
+struct hisi_qp *hpre_create_qp(u8 type);
 int hpre_algs_register(struct hisi_qm *qm);
 void hpre_algs_unregister(struct hisi_qm *qm);
 
diff --git a/drivers/crypto/hisilicon/hpre/hpre_crypto.c b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
index d89b2f5..712bea9 100644
--- a/drivers/crypto/hisilicon/hpre/hpre_crypto.c
+++ b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
@@ -152,12 +152,12 @@ static void hpre_rm_req_from_ctx(struct hpre_asym_request *hpre_req)
 	}
 }
 
-static struct hisi_qp *hpre_get_qp_and_start(void)
+static struct hisi_qp *hpre_get_qp_and_start(u8 type)
 {
 	struct hisi_qp *qp;
 	int ret;
 
-	qp = hpre_create_qp();
+	qp = hpre_create_qp(type);
 	if (!qp) {
 		pr_err("Can not create hpre qp!\n");
 		return ERR_PTR(-ENODEV);
@@ -422,11 +422,11 @@ static void hpre_alg_cb(struct hisi_qp *qp, void *resp)
 	req->cb(ctx, resp);
 }
 
-static int hpre_ctx_init(struct hpre_ctx *ctx)
+static int hpre_ctx_init(struct hpre_ctx *ctx, u8 type)
 {
 	struct hisi_qp *qp;
 
-	qp = hpre_get_qp_and_start();
+	qp = hpre_get_qp_and_start(type);
 	if (IS_ERR(qp))
 		return PTR_ERR(qp);
 
@@ -674,7 +674,7 @@ static int hpre_dh_init_tfm(struct crypto_kpp *tfm)
 {
 	struct hpre_ctx *ctx = kpp_tfm_ctx(tfm);
 
-	return hpre_ctx_init(ctx);
+	return hpre_ctx_init(ctx, HPRE_V2_ALG_TYPE);
 }
 
 static void hpre_dh_exit_tfm(struct crypto_kpp *tfm)
@@ -1100,7 +1100,7 @@ static int hpre_rsa_init_tfm(struct crypto_akcipher *tfm)
 		return PTR_ERR(ctx->rsa.soft_tfm);
 	}
 
-	ret = hpre_ctx_init(ctx);
+	ret = hpre_ctx_init(ctx, HPRE_V2_ALG_TYPE);
 	if (ret)
 		crypto_free_akcipher(ctx->rsa.soft_tfm);
 
diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c
index 52827b0..d3ec3b4 100644
--- a/drivers/crypto/hisilicon/hpre/hpre_main.c
+++ b/drivers/crypto/hisilicon/hpre/hpre_main.c
@@ -223,13 +223,20 @@ static u32 vfs_num;
 module_param_cb(vfs_num, &vfs_num_ops, &vfs_num, 0444);
 MODULE_PARM_DESC(vfs_num, "Number of VFs to enable(1-63), 0(default)");
 
-struct hisi_qp *hpre_create_qp(void)
+struct hisi_qp *hpre_create_qp(u8 type)
 {
 	int node = cpu_to_node(smp_processor_id());
 	struct hisi_qp *qp = NULL;
 	int ret;
 
-	ret = hisi_qm_alloc_qps_node(&hpre_devices, 1, 0, node, &qp);
+	if (type != HPRE_V2_ALG_TYPE && type != HPRE_V3_ECC_ALG_TYPE)
+		return NULL;
+
+	/*
+	 * type: 0 - RSA/DH. algorithm supported in V2,
+	 *       1 - ECC algorithm in V3.
+	 */
+	ret = hisi_qm_alloc_qps_node(&hpre_devices, 1, type, node, &qp);
 	if (!ret)
 		return qp;
 
-- 
2.8.1


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v7 4/7] crypto: add ecc curve and expose them
  2021-01-22  7:09 [PATCH v7 0/7] add ECDH and CURVE25519 algorithms support for Kunpeng 930 Meng Yu
                   ` (2 preceding siblings ...)
  2021-01-22  7:09 ` [PATCH v7 3/7] crypto: hisilicon/hpre - add algorithm type Meng Yu
@ 2021-01-22  7:09 ` Meng Yu
  2021-01-28  5:03   ` Herbert Xu
  2021-01-22  7:09 ` [PATCH v7 5/7] crypto: add curve 25519 " Meng Yu
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 25+ messages in thread
From: Meng Yu @ 2021-01-22  7:09 UTC (permalink / raw)
  To: herbert, davem; +Cc: linux-crypto, xuzaibo, wangzhou1, yumeng18, linux-kernel

1. Add ecc curves (P224, P384, P521) for ECDH;
2. Reorder the ECC 'Curve IDs' in 'include/crypto/ecdh.h', and
   update the 'curve_id' values used in 'testmgr.h';
3. Add the function 'ecc_get_curve_by_id' in 'include/crypto/ecc_curve.h',
   so everyone in the kernel tree can easily get the ecc curve
   parameters (a usage sketch follows below).
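
As a usage sketch (not part of this patch; 'example_dump_p256' is a
hypothetical caller), fetching the parameters of a curve could look
like this:

#include <linux/kernel.h>
#include <crypto/ecc_curve.h>
#include <crypto/ecdh.h>	/* ECC_CURVE_NIST_* IDs */

static int example_dump_p256(void)
{
	const struct ecc_curve *curve = ecc_get_curve_by_id(ECC_CURVE_NIST_P256);

	if (!curve)
		return -EINVAL;

	/* Each parameter is 'curve->g.ndigits' little-endian u64 words. */
	pr_info("%s uses %u 64-bit digits per parameter\n",
		curve->name, (unsigned int)curve->g.ndigits);
	return 0;
}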

Signed-off-by: Meng Yu <yumeng18@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Reported-by: kernel test robot <lkp@intel.com>
---
 crypto/ecc.c               |  15 ++++-
 crypto/ecc.h               |  37 +----------
 crypto/ecc_curve_defs.h    | 152 ++++++++++++++++++++++++++++++++++++++-------
 crypto/testmgr.h           |  12 ++--
 include/crypto/ecc_curve.h |  53 ++++++++++++++++
 include/crypto/ecdh.h      |   5 +-
 6 files changed, 207 insertions(+), 67 deletions(-)
 create mode 100644 include/crypto/ecc_curve.h

diff --git a/crypto/ecc.c b/crypto/ecc.c
index c80aa25..cfa1dc3 100644
--- a/crypto/ecc.c
+++ b/crypto/ecc.c
@@ -24,6 +24,7 @@
  * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
+#include <crypto/ecc_curve.h>
 #include <linux/module.h>
 #include <linux/random.h>
 #include <linux/slab.h>
@@ -42,14 +43,24 @@ typedef struct {
 	u64 m_high;
 } uint128_t;
 
+/* Returns the requested curve on success, NULL otherwise */
+const struct ecc_curve *ecc_get_curve_by_id(unsigned int curve_id)
+{
+	if (curve_id >= ECC_CURVE_NIST_P192 && curve_id <= ECC_CURVE_NIST_P521)
+		return &ecc_curve_list[curve_id - 1];
+
+	return NULL;
+}
+EXPORT_SYMBOL(ecc_get_curve_by_id);
+
 static inline const struct ecc_curve *ecc_get_curve(unsigned int curve_id)
 {
 	switch (curve_id) {
 	/* In FIPS mode only allow P256 and higher */
 	case ECC_CURVE_NIST_P192:
-		return fips_enabled ? NULL : &nist_p192;
+		return fips_enabled ? NULL : ecc_get_curve_by_id(curve_id);
 	case ECC_CURVE_NIST_P256:
-		return &nist_p256;
+		return ecc_get_curve_by_id(curve_id);
 	default:
 		return NULL;
 	}
diff --git a/crypto/ecc.h b/crypto/ecc.h
index d4e546b..38a81d4 100644
--- a/crypto/ecc.h
+++ b/crypto/ecc.h
@@ -26,6 +26,8 @@
 #ifndef _CRYPTO_ECC_H
 #define _CRYPTO_ECC_H
 
+#include <crypto/ecc_curve.h>
+
 /* One digit is u64 qword. */
 #define ECC_CURVE_NIST_P192_DIGITS  3
 #define ECC_CURVE_NIST_P256_DIGITS  4
@@ -33,44 +35,9 @@
 
 #define ECC_DIGITS_TO_BYTES_SHIFT 3
 
-/**
- * struct ecc_point - elliptic curve point in affine coordinates
- *
- * @x:		X coordinate in vli form.
- * @y:		Y coordinate in vli form.
- * @ndigits:	Length of vlis in u64 qwords.
- */
-struct ecc_point {
-	u64 *x;
-	u64 *y;
-	u8 ndigits;
-};
-
 #define ECC_POINT_INIT(x, y, ndigits)	(struct ecc_point) { x, y, ndigits }
 
 /**
- * struct ecc_curve - definition of elliptic curve
- *
- * @name:	Short name of the curve.
- * @g:		Generator point of the curve.
- * @p:		Prime number, if Barrett's reduction is used for this curve
- *		pre-calculated value 'mu' is appended to the @p after ndigits.
- *		Use of Barrett's reduction is heuristically determined in
- *		vli_mmod_fast().
- * @n:		Order of the curve group.
- * @a:		Curve parameter a.
- * @b:		Curve parameter b.
- */
-struct ecc_curve {
-	char *name;
-	struct ecc_point g;
-	u64 *p;
-	u64 *n;
-	u64 *a;
-	u64 *b;
-};
-
-/**
  * ecc_is_key_valid() - Validate a given ECDH private key
  *
  * @curve_id:		id representing the curve to use
diff --git a/crypto/ecc_curve_defs.h b/crypto/ecc_curve_defs.h
index 69be6c7..b81e580 100644
--- a/crypto/ecc_curve_defs.h
+++ b/crypto/ecc_curve_defs.h
@@ -15,18 +15,20 @@ static u64 nist_p192_a[] = { 0xFFFFFFFFFFFFFFFCull, 0xFFFFFFFFFFFFFFFEull,
 				0xFFFFFFFFFFFFFFFFull };
 static u64 nist_p192_b[] = { 0xFEB8DEECC146B9B1ull, 0x0FA7E9AB72243049ull,
 				0x64210519E59C80E7ull };
-static struct ecc_curve nist_p192 = {
-	.name = "nist_192",
-	.g = {
-		.x = nist_p192_g_x,
-		.y = nist_p192_g_y,
-		.ndigits = 3,
-	},
-	.p = nist_p192_p,
-	.n = nist_p192_n,
-	.a = nist_p192_a,
-	.b = nist_p192_b
-};
+
+/* NIST P-224 */
+static u64 nist_p224_g_x[] = { 0x343280D6115C1D21ull, 0x4A03C1D356C21122ull,
+				0x6BB4BF7F321390B9ull, 0xB70E0CBDull };
+static u64 nist_p224_g_y[] = { 0x44d5819985007e34ull, 0xcd4375a05a074764ull,
+				0xb5f723fb4c22dfe6ull, 0xbd376388ull };
+static u64 nist_p224_p[] = { 0x0000000000000001ull, 0xFFFFFFFF00000000ull,
+				0xFFFFFFFFFFFFFFFFull, 0xFFFFFFFFull };
+static u64 nist_p224_n[] = { 0x13DD29455C5C2A3Dull, 0xFFFF16A2E0B8F03Eull,
+				0xFFFFFFFFFFFFFFFFull, 0xFFFFFFFFull };
+static u64 nist_p224_a[] = { 0xFFFFFFFFFFFFFFFEull, 0xFFFFFFFEFFFFFFFFull,
+				0xFFFFFFFFFFFFFFFFull, 0xFFFFFFFFull };
+static u64 nist_p224_b[] = { 0x270B39432355FFB4ull, 0x5044B0B7D7BFD8BAull,
+				0x0C04B3ABF5413256ull, 0xB4050A85ull };
 
 /* NIST P-256: a = p - 3 */
 static u64 nist_p256_g_x[] = { 0xF4A13945D898C296ull, 0x77037D812DEB33A0ull,
@@ -41,17 +43,121 @@ static u64 nist_p256_a[] = { 0xFFFFFFFFFFFFFFFCull, 0x00000000FFFFFFFFull,
 				0x0000000000000000ull, 0xFFFFFFFF00000001ull };
 static u64 nist_p256_b[] = { 0x3BCE3C3E27D2604Bull, 0x651D06B0CC53B0F6ull,
 				0xB3EBBD55769886BCull, 0x5AC635D8AA3A93E7ull };
-static struct ecc_curve nist_p256 = {
-	.name = "nist_256",
-	.g = {
-		.x = nist_p256_g_x,
-		.y = nist_p256_g_y,
-		.ndigits = 4,
-	},
-	.p = nist_p256_p,
-	.n = nist_p256_n,
-	.a = nist_p256_a,
-	.b = nist_p256_b
+
+
+/* NIST P-384: a = p - 3 */
+static u64 nist_p384_g_x[] = { 0x3A545E3872760AB7ull, 0x5502F25DBF55296Cull,
+				0x59F741E082542A38ull, 0x6E1D3B628BA79B98ull,
+				0x8EB1C71EF320AD74ull, 0xAA87CA22BE8B0537ull};
+static u64 nist_p384_g_y[] = { 0x7A431D7C90EA0E5Full, 0x0A60B1CE1D7E819Dull,
+				0xE9DA3113B5F0B8C0ull, 0xF8F41DBD289A147Cull,
+				0x5D9E98BF9292DC29ull, 0x3617DE4A96262C6Full};
+static u64 nist_p384_p[] = { 0x00000000FFFFFFFFull, 0xFFFFFFFF00000000ull,
+				0xFFFFFFFFFFFFFFFEull, 0xFFFFFFFFFFFFFFFFull,
+				0xFFFFFFFFFFFFFFFFull, 0xFFFFFFFFFFFFFFFFull};
+static u64 nist_p384_n[] = { 0xECEC196ACCC52973ull, 0x581A0DB248B0A77Aull,
+				0xC7634D81F4372DDFull, 0xFFFFFFFFFFFFFFFFull,
+				0xFFFFFFFFFFFFFFFFull, 0xFFFFFFFFFFFFFFFFull};
+static u64 nist_p384_a[] = { 0x00000000FFFFFFFCull, 0xFFFFFFFF00000000ull,
+				0xFFFFFFFFFFFFFFFEull, 0xFFFFFFFFFFFFFFFFull,
+				0xFFFFFFFFFFFFFFFFull, 0xFFFFFFFFFFFFFFFFull};
+static u64 nist_p384_b[] = { 0x2A85C8EDD3EC2AEFull, 0xC656398D8A2ED19Dull,
+				0x0314088F5013875Aull, 0x181D9C6EFE814112ull,
+				0x988E056BE3F82D19ull, 0xB3312FA7E23EE7E4ull};
+
+/* NIST P-521: a = p - 3 */
+static u64 nist_p521_g_x[] = { 0xF97E7E31C2E5BD66ull, 0x3348B3C1856A429Bull,
+				0xFE1DC127A2FFA8DEull, 0xA14B5E77EFE75928ull,
+				0xF828AF606B4D3DBAull, 0x9C648139053FB521ull,
+				0x9E3ECB662395B442ull, 0x858E06B70404E9CDull,
+				0x00C6ull };
+static u64 nist_p521_g_y[] = { 0x88be94769fd16650ull, 0x353c7086a272c240ull,
+				0xc550b9013fad0761ull, 0x97ee72995ef42640ull,
+				0x17afbd17273e662cull, 0x98f54449579b4468ull,
+				0x5c8a5fb42c7d1bd9ull, 0x39296a789a3bc004ull,
+				0x0118ull };
+static u64 nist_p521_p[] = {0xFFFFFFFFFFFFFFFFull, 0xFFFFFFFFFFFFFFFFull,
+				0xFFFFFFFFFFFFFFFFull, 0xFFFFFFFFFFFFFFFFull,
+				0xFFFFFFFFFFFFFFFFull, 0xFFFFFFFFFFFFFFFFull,
+				0xFFFFFFFFFFFFFFFFull, 0xFFFFFFFFFFFFFFFFull,
+				0x01FFull };
+static u64 nist_p521_n[] = { 0xBB6FB71E91386409ull, 0x3BB5C9B8899C47AEull,
+				0x7FCC0148F709A5D0ull, 0x51868783BF2F966Bull,
+				0xFFFFFFFFFFFFFFFAull, 0xFFFFFFFFFFFFFFFFull,
+				0xFFFFFFFFFFFFFFFFull, 0xFFFFFFFFFFFFFFFFull,
+				0x01FFull };
+static u64 nist_p521_a[] = { 0xFFFFFFFFFFFFFFFCull, 0xFFFFFFFFFFFFFFFFull,
+				0xFFFFFFFFFFFFFFFFull, 0xFFFFFFFFFFFFFFFFull,
+				0xFFFFFFFFFFFFFFFFull, 0xFFFFFFFFFFFFFFFFull,
+				0xFFFFFFFFFFFFFFFFull, 0xFFFFFFFFFFFFFFFFull,
+				0x01FFull };
+static u64 nist_p521_b[] = { 0xEF451FD46B503F00ull, 0x3573DF883D2C34F1ull,
+				0x1652C0BD3BB1BF07ull, 0x56193951EC7E937Bull,
+				0xB8B489918EF109E1ull, 0xA2DA725B99B315F3ull,
+				0x929A21A0B68540EEull, 0x953EB9618E1C9A1Full,
+				0x0051ull };
+
+/*
+ * 'ecc_curve_list' is ordered by the curve IDs
+ * defined in "include/crypto/ecdh.h".
+ */
+static const struct ecc_curve ecc_curve_list[] = {
+	{
+		.name = "nist_192",
+		.g = {
+			.x = nist_p192_g_x,
+			.y = nist_p192_g_y,
+			.ndigits = 3,
+		},
+		.p = nist_p192_p,
+		.n = nist_p192_n,
+		.a = nist_p192_a,
+		.b = nist_p192_b,
+	}, {
+		.name = "nist_224",
+		.g = {
+			.x = nist_p224_g_x,
+			.y = nist_p224_g_y,
+			.ndigits = 4,
+		},
+		.p = nist_p224_p,
+		.n = nist_p224_n,
+		.a = nist_p224_a,
+		.b = nist_p224_b,
+	}, {
+		.name = "nist_256",
+		.g = {
+			.x = nist_p256_g_x,
+			.y = nist_p256_g_y,
+			.ndigits = 4,
+		},
+		.p = nist_p256_p,
+		.n = nist_p256_n,
+		.a = nist_p256_a,
+		.b = nist_p256_b,
+	}, {
+		.name = "nist_384",
+		.g = {
+			.x = nist_p384_g_x,
+			.y = nist_p384_g_y,
+			.ndigits = 6,
+		},
+		.p = nist_p384_p,
+		.n = nist_p384_n,
+		.a = nist_p384_a,
+		.b = nist_p384_b,
+	}, {
+		.name = "nist_521",
+		.g = {
+			.x = nist_p521_g_x,
+			.y = nist_p521_g_y,
+			.ndigits = 9,
+		},
+		.p = nist_p521_p,
+		.n = nist_p521_n,
+		.a = nist_p521_a,
+		.b = nist_p521_b,
+	}
 };
 
 #endif
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index 8c83811..7fe0fb9 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -2307,12 +2307,12 @@ static const struct kpp_testvec ecdh_tv_template[] = {
 #ifdef __LITTLE_ENDIAN
 	"\x02\x00" /* type */
 	"\x28\x00" /* len */
-	"\x02\x00" /* curve_id */
+	"\x03\x00" /* curve_id */
 	"\x20\x00" /* key_size */
 #else
 	"\x00\x02" /* type */
 	"\x00\x28" /* len */
-	"\x00\x02" /* curve_id */
+	"\x00\x03" /* curve_id */
 	"\x00\x20" /* key_size */
 #endif
 	"\x24\xd1\x21\xeb\xe5\xcf\x2d\x83"
@@ -2351,24 +2351,24 @@ static const struct kpp_testvec ecdh_tv_template[] = {
 #ifdef __LITTLE_ENDIAN
 	"\x02\x00" /* type */
 	"\x08\x00" /* len */
-	"\x02\x00" /* curve_id */
+	"\x03\x00" /* curve_id */
 	"\x00\x00", /* key_size */
 #else
 	"\x00\x02" /* type */
 	"\x00\x08" /* len */
-	"\x00\x02" /* curve_id */
+	"\x00\x03" /* curve_id */
 	"\x00\x00", /* key_size */
 #endif
 	.b_secret =
 #ifdef __LITTLE_ENDIAN
 	"\x02\x00" /* type */
 	"\x28\x00" /* len */
-	"\x02\x00" /* curve_id */
+	"\x03\x00" /* curve_id */
 	"\x20\x00" /* key_size */
 #else
 	"\x00\x02" /* type */
 	"\x00\x28" /* len */
-	"\x00\x02" /* curve_id */
+	"\x00\x03" /* curve_id */
 	"\x00\x20" /* key_size */
 #endif
 	"\x24\xd1\x21\xeb\xe5\xcf\x2d\x83"
diff --git a/include/crypto/ecc_curve.h b/include/crypto/ecc_curve.h
new file mode 100644
index 0000000..a3adf1e
--- /dev/null
+++ b/include/crypto/ecc_curve.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2021 HiSilicon Limited. */
+
+#ifndef _CRYTO_ECC_CURVE_H
+#define _CRYTO_ECC_CURVE_H
+
+#include <linux/types.h>
+
+/**
+ * struct ecc_point - elliptic curve point in affine coordinates
+ *
+ * @x:		X coordinate in vli form.
+ * @y:		Y coordinate in vli form.
+ * @ndigits:	Length of vlis in u64 qwords.
+ */
+struct ecc_point {
+	u64 *x;
+	u64 *y;
+	u8 ndigits;
+};
+
+/**
+ * struct ecc_curve - definition of elliptic curve
+ *
+ * @name:	Short name of the curve.
+ * @g:		Generator point of the curve.
+ * @p:		Prime number, if Barrett's reduction is used for this curve
+ *		pre-calculated value 'mu' is appended to the @p after ndigits.
+ *		Use of Barrett's reduction is heuristically determined in
+ *		vli_mmod_fast().
+ * @n:		Order of the curve group.
+ * @a:		Curve parameter a.
+ * @b:		Curve parameter b.
+ */
+struct ecc_curve {
+	char *name;
+	struct ecc_point g;
+	u64 *p;
+	u64 *n;
+	u64 *a;
+	u64 *b;
+};
+
+/**
+ * ecc_get_curve_by_id() - get an elliptic curve by its ID
+ * @curve_id:           one of the curve IDs
+ *                      defined in "include/crypto/ecdh.h"
+ *
+ * Returns the requested curve on success, NULL otherwise.
+ */
+const struct ecc_curve *ecc_get_curve_by_id(unsigned int curve_id);
+
+#endif
\ No newline at end of file
diff --git a/include/crypto/ecdh.h b/include/crypto/ecdh.h
index a5b805b..741d18a 100644
--- a/include/crypto/ecdh.h
+++ b/include/crypto/ecdh.h
@@ -24,7 +24,10 @@
 
 /* Curves IDs */
 #define ECC_CURVE_NIST_P192	0x0001
-#define ECC_CURVE_NIST_P256	0x0002
+#define ECC_CURVE_NIST_P224	0x0002
+#define ECC_CURVE_NIST_P256	0x0003
+#define ECC_CURVE_NIST_P384	0x0004
+#define ECC_CURVE_NIST_P521	0x0005
 
 /**
  * struct ecdh - define an ECDH private key
-- 
2.8.1


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v7 5/7] crypto: add curve 25519 and expose them
  2021-01-22  7:09 [PATCH v7 0/7] add ECDH and CURVE25519 algorithms support for Kunpeng 930 Meng Yu
                   ` (3 preceding siblings ...)
  2021-01-22  7:09 ` [PATCH v7 4/7] crypto: add ecc curve and expose them Meng Yu
@ 2021-01-22  7:09 ` Meng Yu
  2021-01-22  7:09 ` [PATCH v7 6/7] crypto: hisilicon/hpre - add 'ECDH' algorithm Meng Yu
  2021-01-22  7:09 ` [PATCH v7 7/7] crypto: hisilicon/hpre - add 'CURVE25519' algorithm Meng Yu
  6 siblings, 0 replies; 25+ messages in thread
From: Meng Yu @ 2021-01-22  7:09 UTC (permalink / raw)
  To: herbert, davem; +Cc: linux-crypto, xuzaibo, wangzhou1, yumeng18, linux-kernel

1. Add the curve25519 curve parameters;
2. Add the function 'ecc_get_curve25519', exposing the curve25519
   parameters to everyone in the kernel tree (a usage sketch follows).
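
A minimal usage sketch (not part of this patch; the helper name is
hypothetical):

#include <crypto/ecc_curve.h>

static bool example_have_curve25519_params(void)
{
	const struct ecc_curve *curve = ecc_get_curve25519();

	/* For curve25519 only 'p', 'a' and 'g.x' are populated. */
	return curve && curve->p && curve->a && curve->g.x;
}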

Signed-off-by: Meng Yu <yumeng18@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
---
 crypto/ecc.c               |  7 +++++++
 crypto/ecc_curve_defs.h    | 17 +++++++++++++++++
 include/crypto/ecc_curve.h |  7 +++++++
 3 files changed, 31 insertions(+)

diff --git a/crypto/ecc.c b/crypto/ecc.c
index cfa1dc3..025b5e6e 100644
--- a/crypto/ecc.c
+++ b/crypto/ecc.c
@@ -53,6 +53,13 @@ const struct ecc_curve *ecc_get_curve_by_id(unsigned int curve_id)
 }
 EXPORT_SYMBOL(ecc_get_curve_by_id);
 
+/* Returns the curve25519 curve parameters */
+const struct ecc_curve *ecc_get_curve25519(void)
+{
+	return &ecc_25519;
+}
+EXPORT_SYMBOL(ecc_get_curve25519);
+
 static inline const struct ecc_curve *ecc_get_curve(unsigned int curve_id)
 {
 	switch (curve_id) {
diff --git a/crypto/ecc_curve_defs.h b/crypto/ecc_curve_defs.h
index b81e580..91b3d4b 100644
--- a/crypto/ecc_curve_defs.h
+++ b/crypto/ecc_curve_defs.h
@@ -160,4 +160,21 @@ static const struct ecc_curve ecc_curve_list[] = {
 	}
 };
 
+/* curve25519 */
+static u64 curve25519_g_x[] = { 0x0000000000000009, 0x0000000000000000,
+				0x0000000000000000, 0x0000000000000000 };
+static u64 curve25519_p[] = { 0xffffffffffffffed, 0xffffffffffffffff,
+				0xffffffffffffffff, 0x7fffffffffffffff };
+static u64 curve25519_a[] = { 0x000000000001DB41, 0x0000000000000000,
+				 0x0000000000000000, 0x0000000000000000 };
+static const struct ecc_curve ecc_25519 = {
+	.name = "curve25519",
+	.g = {
+		.x = curve25519_g_x,
+		.ndigits = 4,
+	},
+	.p = curve25519_p,
+	.a = curve25519_a,
+};
+
 #endif
diff --git a/include/crypto/ecc_curve.h b/include/crypto/ecc_curve.h
index a3adf1e..2d22647 100644
--- a/include/crypto/ecc_curve.h
+++ b/include/crypto/ecc_curve.h
@@ -50,4 +50,11 @@ struct ecc_curve {
  */
 const struct ecc_curve *ecc_get_curve_by_id(unsigned int curve_id);
 
+/**
+ * ecc_get_curve25519() - get the curve25519 curve
+ *
+ * Returns the curve25519 curve parameters.
+ */
+const struct ecc_curve *ecc_get_curve25519(void);
+
 #endif
\ No newline at end of file
-- 
2.8.1


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v7 6/7] crypto: hisilicon/hpre - add 'ECDH' algorithm
  2021-01-22  7:09 [PATCH v7 0/7] add ECDH and CURVE25519 algorithms support for Kunpeng 930 Meng Yu
                   ` (4 preceding siblings ...)
  2021-01-22  7:09 ` [PATCH v7 5/7] crypto: add curve 25519 " Meng Yu
@ 2021-01-22  7:09 ` Meng Yu
  2021-01-22  7:09 ` [PATCH v7 7/7] crypto: hisilicon/hpre - add 'CURVE25519' algorithm Meng Yu
  6 siblings, 0 replies; 25+ messages in thread
From: Meng Yu @ 2021-01-22  7:09 UTC (permalink / raw)
  To: herbert, davem; +Cc: linux-crypto, xuzaibo, wangzhou1, yumeng18, linux-kernel

1. Enable the 'ECDH' algorithm on Kunpeng 930;
2. HPRE ECDH supports the ECC curves P192, P224, P256, P384 and P521
   (a usage sketch follows below).
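
For context, a hedged sketch of how a kernel user reaches this
implementation through the generic kpp API. The helper name is
hypothetical, and the 'struct ecdh' layout is the one used by this
series (it still carries 'curve_id'):

#include <crypto/ecdh.h>
#include <crypto/kpp.h>
#include <linux/slab.h>

static int example_ecdh_set_p256_key(struct crypto_kpp *tfm,
				     const u8 *priv, unsigned short priv_len)
{
	struct ecdh params = {
		.curve_id = ECC_CURVE_NIST_P256,
		.key = (char *)priv,
		.key_size = priv_len,
	};
	unsigned int buf_len = crypto_ecdh_key_len(&params);
	char *buf;
	int ret;

	buf = kzalloc(buf_len, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	ret = crypto_ecdh_encode_key(buf, buf_len, &params);
	if (!ret)
		ret = crypto_kpp_set_secret(tfm, buf, buf_len);

	kfree_sensitive(buf);
	return ret;
}

The tfm itself would come from crypto_alloc_kpp("ecdh", 0, 0); on
Kunpeng 930 the crypto core can then pick "hpre-ecdh" based on its
priority.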

Signed-off-by: Meng Yu <yumeng18@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
---
 drivers/crypto/hisilicon/hpre/hpre.h        |   2 +-
 drivers/crypto/hisilicon/hpre/hpre_crypto.c | 493 +++++++++++++++++++++++++++-
 drivers/crypto/hisilicon/hpre/hpre_main.c   |   1 +
 3 files changed, 491 insertions(+), 5 deletions(-)

diff --git a/drivers/crypto/hisilicon/hpre/hpre.h b/drivers/crypto/hisilicon/hpre/hpre.h
index 02193e1..50e6b2e 100644
--- a/drivers/crypto/hisilicon/hpre/hpre.h
+++ b/drivers/crypto/hisilicon/hpre/hpre.h
@@ -83,6 +83,7 @@ enum hpre_alg_type {
 	HPRE_ALG_KG_CRT = 0x3,
 	HPRE_ALG_DH_G2 = 0x4,
 	HPRE_ALG_DH = 0x5,
+	HPRE_ALG_ECC_MUL = 0xD,
 };
 
 struct hpre_sqe {
@@ -104,5 +105,4 @@ struct hisi_qp *hpre_create_qp(u8 type);
 int hpre_algs_register(struct hisi_qm *qm);
 void hpre_algs_unregister(struct hisi_qm *qm);
 
-
 #endif
diff --git a/drivers/crypto/hisilicon/hpre/hpre_crypto.c b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
index 712bea9..778a0057 100644
--- a/drivers/crypto/hisilicon/hpre/hpre_crypto.c
+++ b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
@@ -2,6 +2,8 @@
 /* Copyright (c) 2019 HiSilicon Limited. */
 #include <crypto/akcipher.h>
 #include <crypto/dh.h>
+#include <crypto/ecc_curve.h>
+#include <crypto/ecdh.h>
 #include <crypto/internal/akcipher.h>
 #include <crypto/internal/kpp.h>
 #include <crypto/internal/rsa.h>
@@ -36,6 +38,20 @@ struct hpre_ctx;
 #define HPRE_DFX_SEC_TO_US	1000000
 #define HPRE_DFX_US_TO_NS	1000
 
+/* size in bytes of the n prime */
+#define HPRE_ECC_NIST_P192_N_SIZE	24
+#define HPRE_ECC_NIST_P224_N_SIZE	28
+#define HPRE_ECC_NIST_P256_N_SIZE	32
+#define HPRE_ECC_NIST_P384_N_SIZE	48
+#define HPRE_ECC_NIST_P521_N_SIZE	66
+
+/* size in bytes */
+#define HPRE_ECC_HW256_KSZ_B	32
+#define HPRE_ECC_HW384_KSZ_B	48
+#define HPRE_ECC_HW576_KSZ_B	72
+
+#define HPRE_ECDH_MAX_SZ	HPRE_ECC_HW576_KSZ_B
+
 typedef void (*hpre_cb)(struct hpre_ctx *ctx, void *sqe);
 
 struct hpre_rsa_ctx {
@@ -61,14 +77,25 @@ struct hpre_dh_ctx {
 	 * else if base if the counterpart public key we
 	 * compute the shared secret
 	 *	ZZ = yb^xa mod p; [RFC2631 sec 2.1.1]
+	 * low address: d--->n, please refer to Hisilicon HPRE UM
 	 */
-	char *xa_p; /* low address: d--->n, please refer to Hisilicon HPRE UM */
+	char *xa_p;
 	dma_addr_t dma_xa_p;
 
 	char *g; /* m */
 	dma_addr_t dma_g;
 };
 
+struct hpre_ecdh_ctx {
+	/* low address: p->a->k->b */
+	unsigned char *p;
+	dma_addr_t dma_p;
+
+	/* low address: x->y */
+	unsigned char *g;
+	dma_addr_t dma_g;
+};
+
 struct hpre_ctx {
 	struct hisi_qp *qp;
 	struct hpre_asym_request **req_list;
@@ -80,7 +107,10 @@ struct hpre_ctx {
 	union {
 		struct hpre_rsa_ctx rsa;
 		struct hpre_dh_ctx dh;
+		struct hpre_ecdh_ctx ecdh;
 	};
+	/* for ecc algorithms */
+	unsigned int curve_id;
 };
 
 struct hpre_asym_request {
@@ -91,6 +121,7 @@ struct hpre_asym_request {
 	union {
 		struct akcipher_request *rsa;
 		struct kpp_request *dh;
+		struct kpp_request *ecdh;
 	} areq;
 	int err;
 	int req_id;
@@ -1115,6 +1146,428 @@ static void hpre_rsa_exit_tfm(struct crypto_akcipher *tfm)
 	crypto_free_akcipher(ctx->rsa.soft_tfm);
 }
 
+static void hpre_key_to_big_end(u8 *data, int len)
+{
+	int i, j;
+	u8 tmp;
+
+	for (i = 0; i < len / 2; i++) {
+		j = len - i - 1;
+		tmp = data[j];
+		data[j] = data[i];
+		data[i] = tmp;
+	}
+}
+
+static void hpre_ecc_clear_ctx(struct hpre_ctx *ctx, bool is_clear_all,
+			       bool is_ecdh)
+{
+	struct device *dev = HPRE_DEV(ctx);
+	unsigned int sz = ctx->key_sz;
+	unsigned int shift = sz << 1;
+
+	if (is_clear_all)
+		hisi_qm_stop_qp(ctx->qp);
+
+	if (is_ecdh && ctx->ecdh.p) {
+		/* ecdh: p->a->k->b */
+		memzero_explicit(ctx->ecdh.p + shift, sz);
+		dma_free_coherent(dev, sz << 3, ctx->ecdh.p, ctx->ecdh.dma_p);
+		ctx->ecdh.p = NULL;
+	}
+
+	ctx->curve_id = 0;
+	hpre_ctx_clear(ctx, is_clear_all);
+}
+
+/*
+ * HPRE supports curve widths of 192/224/256/384/521 bits and rounds
+ * them up to a hardware key size:
+ * bits <= 256 -> 256; 256 < bits <= 384 -> 384; 384 < bits <= 576 -> 576.
+ * If the parameter bit width is smaller, the high-order bytes are
+ * zero-filled in software, so TASK_LENGTH1 is 0x3/0x5/0x8.
+ */
+static unsigned int hpre_ecdh_supported_curve(unsigned short id)
+{
+	switch (id) {
+	case ECC_CURVE_NIST_P192:
+	case ECC_CURVE_NIST_P224:
+	case ECC_CURVE_NIST_P256:
+		return HPRE_ECC_HW256_KSZ_B;
+	case ECC_CURVE_NIST_P384:
+		return HPRE_ECC_HW384_KSZ_B;
+	case ECC_CURVE_NIST_P521:
+		return HPRE_ECC_HW576_KSZ_B;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+static void fill_curve_param(void *addr, u64 *param, unsigned int cur_sz, u8 ndigits)
+{
+	unsigned int sz = cur_sz - (ndigits - 1) * sizeof(u64);
+	u8 i = 0;
+
+	while (i < ndigits - 1) {
+		memcpy(addr + sizeof(u64) * i, &param[i], sizeof(u64));
+		i++;
+	}
+
+	memcpy(addr + sizeof(u64) * i, &param[ndigits - 1], sz);
+	hpre_key_to_big_end((u8 *)addr, cur_sz);
+}
+
+static int hpre_ecdh_fill_curve(struct hpre_ctx *ctx, struct ecdh *params,
+				unsigned int cur_sz)
+{
+	unsigned int shifta = ctx->key_sz << 1;
+	unsigned int shiftb = ctx->key_sz << 2;
+	void *p = ctx->ecdh.p + ctx->key_sz - cur_sz;
+	void *a = ctx->ecdh.p + shifta - cur_sz;
+	void *b = ctx->ecdh.p + shiftb - cur_sz;
+	void *x = ctx->ecdh.g + ctx->key_sz - cur_sz;
+	void *y = ctx->ecdh.g + shifta - cur_sz;
+	const struct ecc_curve *curve;
+	char *n;
+
+	n = kzalloc(ctx->key_sz, GFP_KERNEL);
+	if (unlikely(!n))
+		return -ENOMEM;
+
+	curve = ecc_get_curve_by_id(params->curve_id);
+	if (unlikely(!curve))
+		goto free;
+
+	fill_curve_param(p, curve->p, cur_sz, curve->g.ndigits);
+	fill_curve_param(a, curve->a, cur_sz, curve->g.ndigits);
+	fill_curve_param(b, curve->b, cur_sz, curve->g.ndigits);
+	fill_curve_param(x, curve->g.x, cur_sz, curve->g.ndigits);
+	fill_curve_param(y, curve->g.y, cur_sz, curve->g.ndigits);
+	fill_curve_param(n, curve->n, cur_sz, curve->g.ndigits);
+
+	if (params->key_size == cur_sz && strcmp(params->key, n) >= 0)
+		goto free;
+
+	kfree(n);
+	return 0;
+
+free:
+	kfree(n);
+	return -EINVAL;
+}
+
+static unsigned int hpre_ecdh_get_curvesz(unsigned short id)
+{
+	switch (id) {
+	case ECC_CURVE_NIST_P192:
+		return HPRE_ECC_NIST_P192_N_SIZE;
+	case ECC_CURVE_NIST_P224:
+		return HPRE_ECC_NIST_P224_N_SIZE;
+	case ECC_CURVE_NIST_P256:
+		return HPRE_ECC_NIST_P256_N_SIZE;
+	case ECC_CURVE_NIST_P384:
+		return HPRE_ECC_NIST_P384_N_SIZE;
+	case ECC_CURVE_NIST_P521:
+		return HPRE_ECC_NIST_P521_N_SIZE;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+static int hpre_ecdh_set_param(struct hpre_ctx *ctx, struct ecdh *params)
+{
+	struct device *dev = HPRE_DEV(ctx);
+	unsigned int sz, shift, curve_sz;
+	int ret;
+
+	ctx->key_sz = hpre_ecdh_supported_curve(params->curve_id);
+	if (!ctx->key_sz)
+		return -EINVAL;
+
+	curve_sz = hpre_ecdh_get_curvesz(params->curve_id);
+	if (!curve_sz || params->key_size > curve_sz)
+		return -EINVAL;
+
+	sz = ctx->key_sz;
+	ctx->curve_id = params->curve_id;
+
+	if (!ctx->ecdh.p) {
+		ctx->ecdh.p = dma_alloc_coherent(dev, sz << 3, &ctx->ecdh.dma_p,
+						 GFP_KERNEL);
+		if (!ctx->ecdh.p)
+			return -ENOMEM;
+	}
+
+	shift = sz << 2;
+	ctx->ecdh.g = ctx->ecdh.p + shift;
+	ctx->ecdh.dma_g = ctx->ecdh.dma_p + shift;
+
+	ret = hpre_ecdh_fill_curve(ctx, params, curve_sz);
+	if (ret) {
+		dev_err(dev, "failed to fill curve_param, ret = %d!\n", ret);
+		dma_free_coherent(dev, sz << 3, ctx->ecdh.p, ctx->ecdh.dma_p);
+		ctx->ecdh.p = NULL;
+		return ret;
+	}
+
+	return 0;
+}
+
+static bool hpre_key_is_valid(char *key, unsigned short key_sz)
+{
+	int i;
+
+	for (i = 0; i < key_sz; i++)
+		if (key[i])
+			return true;
+
+	return false;
+}
+
+static int hpre_ecdh_set_secret(struct crypto_kpp *tfm, const void *buf,
+				unsigned int len)
+{
+	struct hpre_ctx *ctx = kpp_tfm_ctx(tfm);
+	struct device *dev = HPRE_DEV(ctx);
+	unsigned int sz, sz_shift;
+	struct ecdh params;
+	int ret;
+
+	if (crypto_ecdh_decode_key(buf, len, &params) < 0) {
+		dev_err(dev, "failed to decode ecdh key!\n");
+		return -EINVAL;
+	}
+
+	if (!hpre_key_is_valid(params.key, params.key_size)) {
+		dev_err(dev, "Invalid hpre key!\n");
+		return -EINVAL;
+	}
+
+	hpre_ecc_clear_ctx(ctx, false, true);
+
+	ret = hpre_ecdh_set_param(ctx, &params);
+	if (ret < 0) {
+		dev_err(dev, "failed to set hpre param, ret = %d!\n", ret);
+		return ret;
+	}
+
+	sz = ctx->key_sz;
+	sz_shift = (sz << 1) + sz - params.key_size;
+	memcpy(ctx->ecdh.p + sz_shift, params.key, params.key_size);
+
+	return 0;
+}
+
+static void hpre_ecdh_hw_data_clr_all(struct hpre_ctx *ctx,
+				      struct hpre_asym_request *req,
+				      struct scatterlist *dst,
+				      struct scatterlist *src)
+{
+	struct device *dev = HPRE_DEV(ctx);
+	struct hpre_sqe *sqe = &req->req;
+	dma_addr_t dma;
+
+	dma = le64_to_cpu(sqe->in);
+	if (unlikely(!dma))
+		return;
+
+	if (src && req->src)
+		dma_free_coherent(dev, ctx->key_sz << 2, req->src, dma);
+
+	dma = le64_to_cpu(sqe->out);
+	if (unlikely(!dma))
+		return;
+
+	if (req->dst)
+		dma_free_coherent(dev, ctx->key_sz << 1, req->dst, dma);
+	if (dst)
+		dma_unmap_single(dev, dma, ctx->key_sz << 1, DMA_FROM_DEVICE);
+}
+
+static void hpre_ecdh_cb(struct hpre_ctx *ctx, void *resp)
+{
+	unsigned int curve_sz = hpre_ecdh_get_curvesz(ctx->curve_id);
+	struct hpre_dfx *dfx = ctx->hpre->debug.dfx;
+	struct hpre_asym_request *req = NULL;
+	struct kpp_request *areq;
+	u64 overtime_thrhld;
+	char *p;
+	int ret;
+
+	ret = hpre_alg_res_post_hf(ctx, resp, (void **)&req);
+	areq = req->areq.ecdh;
+	areq->dst_len = ctx->key_sz << 1;
+
+	overtime_thrhld = atomic64_read(&dfx[HPRE_OVERTIME_THRHLD].value);
+	if (overtime_thrhld && hpre_is_bd_timeout(req, overtime_thrhld))
+		atomic64_inc(&dfx[HPRE_OVER_THRHLD_CNT].value);
+
+	p = sg_virt(areq->dst);
+	memmove(p, p + ctx->key_sz - curve_sz, curve_sz);
+	memmove(p + curve_sz, p + areq->dst_len - curve_sz, curve_sz);
+
+	hpre_ecdh_hw_data_clr_all(ctx, req, areq->dst, areq->src);
+	kpp_request_complete(areq, ret);
+
+	atomic64_inc(&dfx[HPRE_RECV_CNT].value);
+}
+
+static int hpre_ecdh_msg_request_set(struct hpre_ctx *ctx,
+				     struct kpp_request *req)
+{
+	struct hpre_asym_request *h_req;
+	struct hpre_sqe *msg;
+	int req_id;
+	void *tmp;
+
+	if (req->dst_len < ctx->key_sz << 1) {
+		req->dst_len = ctx->key_sz << 1;
+		return -EINVAL;
+	}
+
+	tmp = kpp_request_ctx(req);
+	h_req = PTR_ALIGN(tmp, HPRE_ALIGN_SZ);
+	h_req->cb = hpre_ecdh_cb;
+	h_req->areq.ecdh = req;
+	msg = &h_req->req;
+	memset(msg, 0, sizeof(*msg));
+	msg->key = cpu_to_le64(ctx->ecdh.dma_p);
+
+	msg->dw0 |= cpu_to_le32(0x1U << HPRE_SQE_DONE_SHIFT);
+	msg->task_len1 = (ctx->key_sz >> HPRE_BITS_2_BYTES_SHIFT) - 1;
+	h_req->ctx = ctx;
+
+	req_id = hpre_add_req_to_ctx(h_req);
+	if (req_id < 0)
+		return -EBUSY;
+
+	msg->tag = cpu_to_le16((u16)req_id);
+	return 0;
+}
+
+static int hpre_ecdh_src_data_init(struct hpre_asym_request *hpre_req,
+				   struct scatterlist *data, unsigned int len)
+{
+	struct hpre_sqe *msg = &hpre_req->req;
+	struct hpre_ctx *ctx = hpre_req->ctx;
+	struct device *dev = HPRE_DEV(ctx);
+	unsigned int tmpshift;
+	dma_addr_t dma = 0;
+	void *ptr;
+	int shift;
+
+	/* Src_data include gx and gy. */
+	shift = ctx->key_sz - (len >> 1);
+	if (unlikely(shift < 0))
+		return -EINVAL;
+
+	ptr = dma_alloc_coherent(dev, ctx->key_sz << 2, &dma, GFP_KERNEL);
+	if (unlikely(!ptr))
+		return -ENOMEM;
+
+	tmpshift = ctx->key_sz << 1;
+	scatterwalk_map_and_copy(ptr + tmpshift, data, 0, len, 0);
+	memcpy(ptr + shift, ptr + tmpshift, len >> 1);
+	memcpy(ptr + ctx->key_sz + shift, ptr + tmpshift + (len >> 1), len >> 1);
+
+	hpre_req->src = ptr;
+	msg->in = cpu_to_le64(dma);
+	return 0;
+}
+
+static int hpre_ecdh_dst_data_init(struct hpre_asym_request *hpre_req,
+				   struct scatterlist *data, unsigned int len)
+{
+	struct hpre_sqe *msg = &hpre_req->req;
+	struct hpre_ctx *ctx = hpre_req->ctx;
+	struct device *dev = HPRE_DEV(ctx);
+	dma_addr_t dma = 0;
+
+	if (unlikely(!data || !sg_is_last(data) || len != ctx->key_sz << 1)) {
+		dev_err(dev, "data or data length is illegal!\n");
+		return -EINVAL;
+	}
+
+	hpre_req->dst = NULL;
+	dma = dma_map_single(dev, sg_virt(data), len, DMA_FROM_DEVICE);
+	if (unlikely(dma_mapping_error(dev, dma))) {
+		dev_err(dev, "dma map data err!\n");
+		return -ENOMEM;
+	}
+
+	msg->out = cpu_to_le64(dma);
+	return 0;
+}
+
+static int hpre_ecdh_compute_value(struct kpp_request *req)
+{
+	struct crypto_kpp *tfm = crypto_kpp_reqtfm(req);
+	struct hpre_ctx *ctx = kpp_tfm_ctx(tfm);
+	struct device *dev = HPRE_DEV(ctx);
+	void *tmp = kpp_request_ctx(req);
+	struct hpre_asym_request *hpre_req = PTR_ALIGN(tmp, HPRE_ALIGN_SZ);
+	struct hpre_sqe *msg = &hpre_req->req;
+	int ret;
+
+	ret = hpre_ecdh_msg_request_set(ctx, req);
+	if (unlikely(ret)) {
+		dev_err(dev, "failed to set ecdh request, ret = %d!\n", ret);
+		return ret;
+	}
+
+	if (req->src) {
+		ret = hpre_ecdh_src_data_init(hpre_req, req->src, req->src_len);
+		if (unlikely(ret)) {
+			dev_err(dev, "failed to init src data, ret = %d!\n", ret);
+			goto clear_all;
+		}
+	} else {
+		msg->in = cpu_to_le64(ctx->ecdh.dma_g);
+	}
+
+	ret = hpre_ecdh_dst_data_init(hpre_req, req->dst, req->dst_len);
+	if (unlikely(ret)) {
+		dev_err(dev, "failed to init dst data, ret = %d!\n", ret);
+		goto clear_all;
+	}
+
+	msg->dw0 = cpu_to_le32(le32_to_cpu(msg->dw0) | HPRE_ALG_ECC_MUL);
+	ret = hpre_send(ctx, msg);
+	if (likely(!ret))
+		return -EINPROGRESS;
+
+clear_all:
+	hpre_rm_req_from_ctx(hpre_req);
+	hpre_ecdh_hw_data_clr_all(ctx, hpre_req, req->dst, req->src);
+	return ret;
+}
+
+static unsigned int hpre_ecdh_max_size(struct crypto_kpp *tfm)
+{
+	struct hpre_ctx *ctx = kpp_tfm_ctx(tfm);
+
+	/* max size is the pub_key_size, include x and y */
+	return ctx->key_sz << 1;
+}
+
+static int hpre_ecdh_init_tfm(struct crypto_kpp *tfm)
+{
+	struct hpre_ctx *ctx = kpp_tfm_ctx(tfm);
+
+	return hpre_ctx_init(ctx, HPRE_V3_ECC_ALG_TYPE);
+}
+
+static void hpre_ecdh_exit_tfm(struct crypto_kpp *tfm)
+{
+	struct hpre_ctx *ctx = kpp_tfm_ctx(tfm);
+
+	hpre_ecc_clear_ctx(ctx, true, true);
+}
+
 static struct akcipher_alg rsa = {
 	.sign = hpre_rsa_dec,
 	.verify = hpre_rsa_enc,
@@ -1154,6 +1607,22 @@ static struct kpp_alg dh = {
 };
 #endif
 
+static struct kpp_alg ecdh = {
+	.set_secret = hpre_ecdh_set_secret,
+	.generate_public_key = hpre_ecdh_compute_value,
+	.compute_shared_secret = hpre_ecdh_compute_value,
+	.max_size = hpre_ecdh_max_size,
+	.init = hpre_ecdh_init_tfm,
+	.exit = hpre_ecdh_exit_tfm,
+	.reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ,
+	.base = {
+		.cra_ctxsize = sizeof(struct hpre_ctx),
+		.cra_priority = HPRE_CRYPTO_ALG_PRI,
+		.cra_name = "ecdh",
+		.cra_driver_name = "hpre-ecdh",
+		.cra_module = THIS_MODULE,
+	},
+};
 int hpre_algs_register(struct hisi_qm *qm)
 {
 	int ret;
@@ -1164,17 +1633,33 @@ int hpre_algs_register(struct hisi_qm *qm)
 		return ret;
 #ifdef CONFIG_CRYPTO_DH
 	ret = crypto_register_kpp(&dh);
-	if (ret)
+	if (ret) {
 		crypto_unregister_akcipher(&rsa);
+		return ret;
+	}
 #endif
 
-	return ret;
+	if (qm->ver >= QM_HW_V3) {
+		ret = crypto_register_kpp(&ecdh);
+		if (ret) {
+#ifdef CONFIG_CRYPTO_DH
+			crypto_unregister_kpp(&dh);
+#endif
+			crypto_unregister_akcipher(&rsa);
+			return ret;
+		}
+	}
+
+	return 0;
 }
 
 void hpre_algs_unregister(struct hisi_qm *qm)
 {
-	crypto_unregister_akcipher(&rsa);
+	if (qm->ver >= QM_HW_V3)
+		crypto_unregister_kpp(&ecdh);
+
 #ifdef CONFIG_CRYPTO_DH
 	crypto_unregister_kpp(&dh);
 #endif
+	crypto_unregister_akcipher(&rsa);
 }
diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c
index d3ec3b4..a6c8dd2 100644
--- a/drivers/crypto/hisilicon/hpre/hpre_main.c
+++ b/drivers/crypto/hisilicon/hpre/hpre_main.c
@@ -1074,4 +1074,5 @@ module_exit(hpre_exit);
 
 MODULE_LICENSE("GPL v2");
 MODULE_AUTHOR("Zaibo Xu <xuzaibo@huawei.com>");
+MODULE_AUTHOR("Meng Yu <yumeng18@huawei.com>");
 MODULE_DESCRIPTION("Driver for HiSilicon HPRE accelerator");
-- 
2.8.1


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v7 7/7] crypto: hisilicon/hpre - add 'CURVE25519' algorithm
  2021-01-22  7:09 [PATCH v7 0/7] add ECDH and CURVE25519 algorithms support for Kunpeng 930 Meng Yu
                   ` (5 preceding siblings ...)
  2021-01-22  7:09 ` [PATCH v7 6/7] crypto: hisilicon/hpre - add 'ECDH' algorithm Meng Yu
@ 2021-01-22  7:09 ` Meng Yu
  6 siblings, 0 replies; 25+ messages in thread
From: Meng Yu @ 2021-01-22  7:09 UTC (permalink / raw)
  To: herbert, davem; +Cc: linux-crypto, xuzaibo, wangzhou1, yumeng18, linux-kernel

Enable the 'CURVE25519' algorithm on Kunpeng 930.
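
For background (illustration only, not part of the patch): before the
private scalar reaches the hardware, the driver clamps it as required
by RFC 7748 and then byte-swaps it to big endian. The clamping step on
its own is just:

/* RFC 7748 clamping of a little-endian X25519 private scalar. */
static void example_clamp_scalar(u8 key[32])
{
	key[0] &= 248;
	key[31] &= 127;
	key[31] |= 64;
}

The in-kernel curve25519_clamp_secret() helper does the same thing and
is what the driver uses.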

Signed-off-by: Meng Yu <yumeng18@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
---
 drivers/crypto/hisilicon/Kconfig            |   1 +
 drivers/crypto/hisilicon/hpre/hpre.h        |   2 +
 drivers/crypto/hisilicon/hpre/hpre_crypto.c | 370 +++++++++++++++++++++++++++-
 3 files changed, 364 insertions(+), 9 deletions(-)

diff --git a/drivers/crypto/hisilicon/Kconfig b/drivers/crypto/hisilicon/Kconfig
index 8431926..c45adb1 100644
--- a/drivers/crypto/hisilicon/Kconfig
+++ b/drivers/crypto/hisilicon/Kconfig
@@ -65,6 +65,7 @@ config CRYPTO_DEV_HISI_HPRE
 	depends on UACCE || UACCE=n
 	depends on ARM64 || (COMPILE_TEST && 64BIT)
 	depends on ACPI
+	select CRYPTO_LIB_CURVE25519_GENERIC
 	select CRYPTO_DEV_HISI_QM
 	select CRYPTO_DH
 	select CRYPTO_RSA
diff --git a/drivers/crypto/hisilicon/hpre/hpre.h b/drivers/crypto/hisilicon/hpre/hpre.h
index 50e6b2e..92892e3 100644
--- a/drivers/crypto/hisilicon/hpre/hpre.h
+++ b/drivers/crypto/hisilicon/hpre/hpre.h
@@ -84,6 +84,8 @@ enum hpre_alg_type {
 	HPRE_ALG_DH_G2 = 0x4,
 	HPRE_ALG_DH = 0x5,
 	HPRE_ALG_ECC_MUL = 0xD,
+	/* shared by x25519 and x448, but x448 is not supported now */
+	HPRE_ALG_CURVE25519_MUL = 0x10,
 };
 
 struct hpre_sqe {
diff --git a/drivers/crypto/hisilicon/hpre/hpre_crypto.c b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
index 778a0057..5b67f45 100644
--- a/drivers/crypto/hisilicon/hpre/hpre_crypto.c
+++ b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 /* Copyright (c) 2019 HiSilicon Limited. */
 #include <crypto/akcipher.h>
+#include <crypto/curve25519.h>
 #include <crypto/dh.h>
 #include <crypto/ecc_curve.h>
 #include <crypto/ecdh.h>
@@ -96,6 +97,16 @@ struct hpre_ecdh_ctx {
 	dma_addr_t dma_g;
 };
 
+struct hpre_curve25519_ctx {
+	/* low address: p->a->k */
+	unsigned char *p;
+	dma_addr_t dma_p;
+
+	/* gx coordinate */
+	unsigned char *g;
+	dma_addr_t dma_g;
+};
+
 struct hpre_ctx {
 	struct hisi_qp *qp;
 	struct hpre_asym_request **req_list;
@@ -108,6 +119,7 @@ struct hpre_ctx {
 		struct hpre_rsa_ctx rsa;
 		struct hpre_dh_ctx dh;
 		struct hpre_ecdh_ctx ecdh;
+		struct hpre_curve25519_ctx curve25519;
 	};
 	/* for ecc algorithms */
 	unsigned int curve_id;
@@ -122,6 +134,7 @@ struct hpre_asym_request {
 		struct akcipher_request *rsa;
 		struct kpp_request *dh;
 		struct kpp_request *ecdh;
+		struct kpp_request *curve25519;
 	} areq;
 	int err;
 	int req_id;
@@ -444,7 +457,6 @@ static void hpre_alg_cb(struct hisi_qp *qp, void *resp)
 	struct hpre_sqe *sqe = resp;
 	struct hpre_asym_request *req = ctx->req_list[le16_to_cpu(sqe->tag)];
 
-
 	if (unlikely(!req)) {
 		atomic64_inc(&dfx[HPRE_INVALID_REQ_CNT].value);
 		return;
@@ -1174,6 +1186,12 @@ static void hpre_ecc_clear_ctx(struct hpre_ctx *ctx, bool is_clear_all,
 		memzero_explicit(ctx->ecdh.p + shift, sz);
 		dma_free_coherent(dev, sz << 3, ctx->ecdh.p, ctx->ecdh.dma_p);
 		ctx->ecdh.p = NULL;
+	} else if (!is_ecdh && ctx->curve25519.p) {
+		/* curve25519: p->a->k */
+		memzero_explicit(ctx->curve25519.p + shift, sz);
+		dma_free_coherent(dev, sz << 2, ctx->curve25519.p,
+				  ctx->curve25519.dma_p);
+		ctx->curve25519.p = NULL;
 	}
 
 	ctx->curve_id = 0;
@@ -1568,6 +1586,313 @@ static void hpre_ecdh_exit_tfm(struct crypto_kpp *tfm)
 	hpre_ecc_clear_ctx(ctx, true, true);
 }
 
+static void hpre_curve25519_fill_curve(struct hpre_ctx *ctx, const void *buf,
+				       unsigned int len)
+{
+	u8 secret[CURVE25519_KEY_SIZE] = { 0 };
+	unsigned int sz = ctx->key_sz;
+	const struct ecc_curve *curve;
+	unsigned int shift = sz << 1;
+	void *p;
+
+	/*
+	 * The key from 'buf' is in little-endian byte order; preprocess it as
+	 * described in RFC 7748 ("k[0] &= 248, k[31] &= 127, k[31] |= 64"),
+	 * then convert it to big-endian form. Only then does the result match
+	 * the software curve25519 implementation in crypto.
+	 */
+	memcpy(secret, buf, len);
+	curve25519_clamp_secret(secret);
+	hpre_key_to_big_end(secret, CURVE25519_KEY_SIZE);
+
+	p = ctx->curve25519.p + sz - len;
+
+	curve = ecc_get_curve25519();
+
+	/* fill curve parameters */
+	fill_curve_param(p, curve->p, len, curve->g.ndigits);
+	fill_curve_param(p + sz, curve->a, len, curve->g.ndigits);
+	memcpy(p + shift, secret, len);
+	fill_curve_param(p + shift + sz, curve->g.x, len, curve->g.ndigits);
+	memzero_explicit(secret, CURVE25519_KEY_SIZE);
+}
+
+static int hpre_curve25519_set_param(struct hpre_ctx *ctx, const void *buf,
+				     unsigned int len)
+{
+	struct device *dev = HPRE_DEV(ctx);
+	unsigned int sz = ctx->key_sz;
+	unsigned int shift = sz << 1;
+
+	/* p->a->k->gx */
+	if (!ctx->curve25519.p) {
+		ctx->curve25519.p = dma_alloc_coherent(dev, sz << 2,
+						       &ctx->curve25519.dma_p,
+						       GFP_KERNEL);
+		if (!ctx->curve25519.p)
+			return -ENOMEM;
+	}
+
+	ctx->curve25519.g = ctx->curve25519.p + shift + sz;
+	ctx->curve25519.dma_g = ctx->curve25519.dma_p + shift + sz;
+
+	hpre_curve25519_fill_curve(ctx, buf, len);
+
+	return 0;
+}
+
+static int hpre_curve25519_set_secret(struct crypto_kpp *tfm, const void *buf,
+				      unsigned int len)
+{
+	struct hpre_ctx *ctx = kpp_tfm_ctx(tfm);
+	struct device *dev = HPRE_DEV(ctx);
+	int ret = -EINVAL;
+
+	if (len != CURVE25519_KEY_SIZE ||
+	    !crypto_memneq(buf, curve25519_null_point, CURVE25519_KEY_SIZE)) {
+		dev_err(dev, "key is null or key len is not 32 bytes!\n");
+		return ret;
+	}
+
+	/* Free old secret if any */
+	hpre_ecc_clear_ctx(ctx, false, false);
+
+	ctx->key_sz = CURVE25519_KEY_SIZE;
+	ret = hpre_curve25519_set_param(ctx, buf, CURVE25519_KEY_SIZE);
+	if (ret) {
+		dev_err(dev, "failed to set curve25519 param, ret = %d!\n", ret);
+		hpre_ecc_clear_ctx(ctx, false, false);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void hpre_curve25519_hw_data_clr_all(struct hpre_ctx *ctx,
+					    struct hpre_asym_request *req,
+					    struct scatterlist *dst,
+					    struct scatterlist *src)
+{
+	struct device *dev = HPRE_DEV(ctx);
+	struct hpre_sqe *sqe = &req->req;
+	dma_addr_t dma;
+
+	dma = le64_to_cpu(sqe->in);
+	if (unlikely(!dma))
+		return;
+
+	if (src && req->src)
+		dma_free_coherent(dev, ctx->key_sz, req->src, dma);
+
+	dma = le64_to_cpu(sqe->out);
+	if (unlikely(!dma))
+		return;
+
+	if (req->dst)
+		dma_free_coherent(dev, ctx->key_sz, req->dst, dma);
+	if (dst)
+		dma_unmap_single(dev, dma, ctx->key_sz, DMA_FROM_DEVICE);
+}
+
+static void hpre_curve25519_cb(struct hpre_ctx *ctx, void *resp)
+{
+	struct hpre_dfx *dfx = ctx->hpre->debug.dfx;
+	struct hpre_asym_request *req = NULL;
+	struct kpp_request *areq;
+	u64 overtime_thrhld;
+	int ret;
+
+	ret = hpre_alg_res_post_hf(ctx, resp, (void **)&req);
+	areq = req->areq.curve25519;
+	areq->dst_len = ctx->key_sz;
+
+	overtime_thrhld = atomic64_read(&dfx[HPRE_OVERTIME_THRHLD].value);
+	if (overtime_thrhld && hpre_is_bd_timeout(req, overtime_thrhld))
+		atomic64_inc(&dfx[HPRE_OVER_THRHLD_CNT].value);
+
+	hpre_key_to_big_end(sg_virt(areq->dst), CURVE25519_KEY_SIZE);
+
+	hpre_curve25519_hw_data_clr_all(ctx, req, areq->dst, areq->src);
+	kpp_request_complete(areq, ret);
+
+	atomic64_inc(&dfx[HPRE_RECV_CNT].value);
+}
+
+static int hpre_curve25519_msg_request_set(struct hpre_ctx *ctx,
+					   struct kpp_request *req)
+{
+	struct hpre_asym_request *h_req;
+	struct hpre_sqe *msg;
+	int req_id;
+	void *tmp;
+
+	if (unlikely(req->dst_len < ctx->key_sz)) {
+		req->dst_len = ctx->key_sz;
+		return -EINVAL;
+	}
+
+	tmp = kpp_request_ctx(req);
+	h_req = PTR_ALIGN(tmp, HPRE_ALIGN_SZ);
+	h_req->cb = hpre_curve25519_cb;
+	h_req->areq.curve25519 = req;
+	msg = &h_req->req;
+	memset(msg, 0, sizeof(*msg));
+	msg->key = cpu_to_le64(ctx->curve25519.dma_p);
+
+	msg->dw0 |= cpu_to_le32(0x1U << HPRE_SQE_DONE_SHIFT);
+	msg->task_len1 = (ctx->key_sz >> HPRE_BITS_2_BYTES_SHIFT) - 1;
+	h_req->ctx = ctx;
+
+	req_id = hpre_add_req_to_ctx(h_req);
+	if (req_id < 0)
+		return -EBUSY;
+
+	msg->tag = cpu_to_le16((u16)req_id);
+	return 0;
+}
+
+static int hpre_curve25519_src_init(struct hpre_asym_request *hpre_req,
+				    struct scatterlist *data, unsigned int len)
+{
+	struct hpre_sqe *msg = &hpre_req->req;
+	struct hpre_ctx *ctx = hpre_req->ctx;
+	struct device *dev = HPRE_DEV(ctx);
+	u8 p[CURVE25519_KEY_SIZE] = { 0 };
+	const struct ecc_curve *curve;
+	dma_addr_t dma = 0;
+	u8 *ptr;
+
+	if (len != CURVE25519_KEY_SIZE) {
+		dev_err(dev, "src_data len is not 32 bytes, len = %u!\n", len);
+		return -EINVAL;
+	}
+
+	ptr = dma_alloc_coherent(dev, ctx->key_sz, &dma, GFP_KERNEL);
+	if (unlikely(!ptr))
+		return -ENOMEM;
+
+	scatterwalk_map_and_copy(ptr, data, 0, len, 0);
+
+	if (!crypto_memneq(ptr, curve25519_null_point, CURVE25519_KEY_SIZE)) {
+		dev_err(dev, "gx is null!\n");
+		goto err;
+	}
+
+	/*
+	 * Src_data (gx) is in little-endian order; the MSB in the final byte
+	 * must be masked as described in RFC 7748, then the value is converted
+	 * to big-endian form so the HPRE hardware can use it.
+	 */
+	ptr[31] &= 0x7f;
+	hpre_key_to_big_end(ptr, CURVE25519_KEY_SIZE);
+
+	curve = ecc_get_curve25519();
+
+	fill_curve_param(p, curve->p, CURVE25519_KEY_SIZE,
+			 curve->g.ndigits);
+	if (memcmp(ptr, p, CURVE25519_KEY_SIZE) >= 0) {
+		dev_err(dev, "gx is out of p!\n");
+		goto err;
+	}
+
+	hpre_req->src = ptr;
+	msg->in = cpu_to_le64(dma);
+	return 0;
+
+err:
+	dma_free_coherent(dev, ctx->key_sz, ptr, dma);
+	return -EINVAL;
+}
+
+static int hpre_curve25519_dst_init(struct hpre_asym_request *hpre_req,
+				    struct scatterlist *data, unsigned int len)
+{
+	struct hpre_sqe *msg = &hpre_req->req;
+	struct hpre_ctx *ctx = hpre_req->ctx;
+	struct device *dev = HPRE_DEV(ctx);
+	dma_addr_t dma = 0;
+
+	if (!data || !sg_is_last(data) || len != ctx->key_sz) {
+		dev_err(dev, "data or data length is illegal!\n");
+		return -EINVAL;
+	}
+
+	hpre_req->dst = NULL;
+	dma = dma_map_single(dev, sg_virt(data), len, DMA_FROM_DEVICE);
+	if (unlikely(dma_mapping_error(dev, dma))) {
+		dev_err(dev, "dma map data err!\n");
+		return -ENOMEM;
+	}
+
+	msg->out = cpu_to_le64(dma);
+	return 0;
+}
+
+static int hpre_curve25519_compute_value(struct kpp_request *req)
+{
+	struct crypto_kpp *tfm = crypto_kpp_reqtfm(req);
+	struct hpre_ctx *ctx = kpp_tfm_ctx(tfm);
+	struct device *dev = HPRE_DEV(ctx);
+	void *tmp = kpp_request_ctx(req);
+	struct hpre_asym_request *hpre_req = PTR_ALIGN(tmp, HPRE_ALIGN_SZ);
+	struct hpre_sqe *msg = &hpre_req->req;
+	int ret;
+
+	ret = hpre_curve25519_msg_request_set(ctx, req);
+	if (unlikely(ret)) {
+		dev_err(dev, "failed to set curve25519 request, ret = %d!\n", ret);
+		return ret;
+	}
+
+	if (req->src) {
+		ret = hpre_curve25519_src_init(hpre_req, req->src, req->src_len);
+		if (unlikely(ret)) {
+			dev_err(dev, "failed to init src data, ret = %d!\n",
+				ret);
+			goto clear_all;
+		}
+	} else {
+		msg->in = cpu_to_le64(ctx->curve25519.dma_g);
+	}
+
+	ret = hpre_curve25519_dst_init(hpre_req, req->dst, req->dst_len);
+	if (unlikely(ret)) {
+		dev_err(dev, "failed to init dst data, ret = %d!\n", ret);
+		goto clear_all;
+	}
+
+	msg->dw0 = cpu_to_le32(le32_to_cpu(msg->dw0) | HPRE_ALG_CURVE25519_MUL);
+	ret = hpre_send(ctx, msg);
+	if (likely(!ret))
+		return -EINPROGRESS;
+
+clear_all:
+	hpre_rm_req_from_ctx(hpre_req);
+	hpre_curve25519_hw_data_clr_all(ctx, hpre_req, req->dst, req->src);
+	return ret;
+}
+
+static unsigned int hpre_curve25519_max_size(struct crypto_kpp *tfm)
+{
+	struct hpre_ctx *ctx = kpp_tfm_ctx(tfm);
+
+	return ctx->key_sz;
+}
+
+static int hpre_curve25519_init_tfm(struct crypto_kpp *tfm)
+{
+	struct hpre_ctx *ctx = kpp_tfm_ctx(tfm);
+
+	return hpre_ctx_init(ctx, HPRE_V3_ECC_ALG_TYPE);
+}
+
+static void hpre_curve25519_exit_tfm(struct crypto_kpp *tfm)
+{
+	struct hpre_ctx *ctx = kpp_tfm_ctx(tfm);
+
+	hpre_ecc_clear_ctx(ctx, true, false);
+}
+
 static struct akcipher_alg rsa = {
 	.sign = hpre_rsa_dec,
 	.verify = hpre_rsa_enc,
@@ -1623,6 +1948,24 @@ static struct kpp_alg ecdh = {
 		.cra_module = THIS_MODULE,
 	},
 };
+
+static struct kpp_alg curve25519_alg = {
+	.set_secret = hpre_curve25519_set_secret,
+	.generate_public_key = hpre_curve25519_compute_value,
+	.compute_shared_secret = hpre_curve25519_compute_value,
+	.max_size = hpre_curve25519_max_size,
+	.init = hpre_curve25519_init_tfm,
+	.exit = hpre_curve25519_exit_tfm,
+	.reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ,
+	.base = {
+		.cra_ctxsize = sizeof(struct hpre_ctx),
+		.cra_priority = HPRE_CRYPTO_ALG_PRI,
+		.cra_name = "curve25519",
+		.cra_driver_name = "hpre-curve25519",
+		.cra_module = THIS_MODULE,
+	},
+};
+
 int hpre_algs_register(struct hisi_qm *qm)
 {
 	int ret;
@@ -1637,26 +1980,35 @@ int hpre_algs_register(struct hisi_qm *qm)
 		crypto_unregister_akcipher(&rsa);
 		return ret;
 	}
-#endif
 
+#endif
 	if (qm->ver >= QM_HW_V3) {
 		ret = crypto_register_kpp(&ecdh);
+		if (ret)
+			goto reg_err;
+
+		ret = crypto_register_kpp(&curve25519_alg);
 		if (ret) {
-#ifdef CONFIG_CRYPTO_DH
-			crypto_unregister_kpp(&dh);
-#endif
-			crypto_unregister_akcipher(&rsa);
-			return ret;
+			crypto_unregister_kpp(&ecdh);
+			goto reg_err;
 		}
 	}
-
 	return 0;
+
+reg_err:
+#ifdef CONFIG_CRYPTO_DH
+	crypto_unregister_kpp(&dh);
+#endif
+	crypto_unregister_akcipher(&rsa);
+	return ret;
 }
 
 void hpre_algs_unregister(struct hisi_qm *qm)
 {
-	if (qm->ver >= QM_HW_V3)
+	if (qm->ver >= QM_HW_V3) {
+		crypto_unregister_kpp(&curve25519_alg);
 		crypto_unregister_kpp(&ecdh);
+	}
 
 #ifdef CONFIG_CRYPTO_DH
 	crypto_unregister_kpp(&dh);
-- 
2.8.1


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [PATCH v7 4/7] crypto: add ecc curve and expose them
  2021-01-22  7:09 ` [PATCH v7 4/7] crypto: add ecc curve and expose them Meng Yu
@ 2021-01-28  5:03   ` Herbert Xu
  2021-01-28 10:30     ` Ard Biesheuvel
                       ` (2 more replies)
  0 siblings, 3 replies; 25+ messages in thread
From: Herbert Xu @ 2021-01-28  5:03 UTC (permalink / raw)
  To: Meng Yu
  Cc: davem, linux-crypto, xuzaibo, wangzhou1, linux-kernel,
	Ard Biesheuvel, Daniele Alessandrelli, Mark Gross, Khurana,
	Prabhjot, Reshetova, Elena

On Fri, Jan 22, 2021 at 03:09:52PM +0800, Meng Yu wrote:
> 1. Add ecc curves(P224, P384, P521) for ECDH;

OK I think this is getting unwieldy.

In light of the fact that we already have hardware that supports
a specific subset of curves, I think perhaps it would be better
to move the curve ID from the key into the algorithm name instead.

IOW, instead of allocating ecdh, you would allocate ecdh-nist-pXXX.
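
E.g. something like this on the caller side (rough sketch only; the
per-curve algorithm name and the curve-less ecdh parameters are
assumptions about this proposal, not existing API):

#include <crypto/ecdh.h>
#include <crypto/kpp.h>
#include <linux/err.h>
#include <linux/slab.h>

static int example_ecdh_p256(void)
{
        struct ecdh params = {};  /* curve implied by the name */
        unsigned int len = crypto_ecdh_key_len(&params);
        struct crypto_kpp *tfm;
        char *key;
        int ret;

        tfm = crypto_alloc_kpp("ecdh-nist-p256", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        key = kmalloc(len, GFP_KERNEL);
        if (!key) {
                crypto_free_kpp(tfm);
                return -ENOMEM;
        }

        /* an empty key asks the software ecdh to generate a private key */
        ret = crypto_ecdh_encode_key(key, len, &params);
        if (!ret)
                ret = crypto_kpp_set_secret(tfm, key, len);

        kfree(key);
        crypto_free_kpp(tfm);
        return ret;
}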

Any comments?

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v7 4/7] crypto: add ecc curve and expose them
  2021-01-28  5:03   ` Herbert Xu
@ 2021-01-28 10:30     ` Ard Biesheuvel
  2021-01-28 10:39       ` Herbert Xu
  2021-01-29  2:49       ` Stefan Berger
  2021-02-01  3:45     ` yumeng
  2021-02-03 18:03     ` Saulo Alessandre
  2 siblings, 2 replies; 25+ messages in thread
From: Ard Biesheuvel @ 2021-01-28 10:30 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Meng Yu, David S. Miller, Linux Crypto Mailing List, Zaibo Xu,
	wangzhou1, Linux Kernel Mailing List, Daniele Alessandrelli,
	Mark Gross, Khurana, Prabhjot, Reshetova, Elena

On Thu, 28 Jan 2021 at 06:04, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Fri, Jan 22, 2021 at 03:09:52PM +0800, Meng Yu wrote:
> > 1. Add ecc curves(P224, P384, P521) for ECDH;
>
> OK I think this is getting unwieldy.
>
> In light of the fact that we already have hardware that supports
> a specific subset of curves, I think perhaps it would be better
> to move the curve ID from the key into the algorithm name instead.
>
> IOW, instead of allocating ecdh, you would allocate ecdh-nist-pXXX.
>
> Any comments?
>

Agreed. Bluetooth appears to be the only in-kernel user at the moment,
and it is hard coded to use p256, so it can be easily updated.

But this also raises the question of which ecdh-nist-pXXX implementations
we actually need. Why are we exposing these curves in the first place?

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v7 4/7] crypto: add ecc curve and expose them
  2021-01-28 10:30     ` Ard Biesheuvel
@ 2021-01-28 10:39       ` Herbert Xu
  2021-02-01 17:09         ` Daniele Alessandrelli
  2021-01-29  2:49       ` Stefan Berger
  1 sibling, 1 reply; 25+ messages in thread
From: Herbert Xu @ 2021-01-28 10:39 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Meng Yu, David S. Miller, Linux Crypto Mailing List, Zaibo Xu,
	wangzhou1, Linux Kernel Mailing List, Daniele Alessandrelli,
	Mark Gross, Khurana, Prabhjot, Reshetova, Elena

On Thu, Jan 28, 2021 at 11:30:23AM +0100, Ard Biesheuvel wrote:
>
> But this also raises the question of which ecdh-nist-pXXX implementations
> we actually need. Why are we exposing these curves in the first place?

Once they're distinct algorithms, we can then make sure that only
the ones that are used in the kernel are added, even if some hardware
may support more curves.

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v7 4/7] crypto: add ecc curve and expose them
  2021-01-28 10:30     ` Ard Biesheuvel
  2021-01-28 10:39       ` Herbert Xu
@ 2021-01-29  2:49       ` Stefan Berger
  2021-01-29  3:00         ` Herbert Xu
  1 sibling, 1 reply; 25+ messages in thread
From: Stefan Berger @ 2021-01-29  2:49 UTC (permalink / raw)
  To: Ard Biesheuvel, Herbert Xu
  Cc: Meng Yu, David S. Miller, Linux Crypto Mailing List, Zaibo Xu,
	wangzhou1, Linux Kernel Mailing List, Daniele Alessandrelli,
	Mark Gross, Khurana, Prabhjot, Reshetova, Elena,
	Patrick Uiterwijk

On 1/28/21 5:30 AM, Ard Biesheuvel wrote:
> On Thu, 28 Jan 2021 at 06:04, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>> On Fri, Jan 22, 2021 at 03:09:52PM +0800, Meng Yu wrote:
>>> 1. Add ecc curves(P224, P384, P521) for ECDH;
>> OK I think this is getting unwieldy.
>>
>> In light of the fact that we already have hardware that supports
>> a specific subset of curves, I think perhaps it would be better
>> to move the curve ID from the key into the algorithm name instead.
>>
>> IOW, instead of allocating ecdh, you would allocate ecdh-nist-pXXX.
>>
>> Any comments?
>>
> Agreed. Bluetooth appears to be the only in-kernel user at the moment,
> and it is hard coded to use p256, so it can be easily updated.
>
> But this also raises the question of which ecdh-nist-pXXX implementations
> we actually need. Why are we exposing these curves in the first place?

In the patch series that I just submitted I would like to expose at
least NIST P256 to users. Fedora 34 is working to add signatures for
files to rpms so that the Integrity Measurement Architecture (IMA) can
enforce signatures on executables, libraries etc. The signatures are
written out into the security.ima extended attribute upon rpm
installation. IMA accesses keys on a keyring to verify these file
signatures. RSA signatures are much larger, so elliptic curve signatures
are much better in terms of the storage needed for them in the rpms;
they add only about 1% to the package size. Fedora is using a NIST P256
key.

Besides that, Fedora and RHEL support only these curves in openssl
(Ubuntu supports a lot more):

$ openssl ecparam -list_curves
   secp224r1 : NIST/SECG curve over a 224 bit prime field
   secp256k1 : SECG curve over a 256 bit prime field
   secp384r1 : NIST/SECG curve over a 384 bit prime field
   secp521r1 : NIST/SECG curve over a 521 bit prime field
   prime256v1: X9.62/SECG curve over a 256 bit prime field

So from that perspective it makes a lot of sense to support some of 
these and allow users to load them into the kernel.


In my patch series I had initially registered the akciphers under the
names ecc-nist-p192 and ecc-nist-p256, but now, in V4, have joined them
together as 'ecdsa'. This may be too generic a name. Maybe it should
be called ecdsa-nist for the NIST family.

    Stefan


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v7 4/7] crypto: add ecc curve and expose them
  2021-01-29  2:49       ` Stefan Berger
@ 2021-01-29  3:00         ` Herbert Xu
  2021-02-08  6:35           ` Vitaly Chikunov
  0 siblings, 1 reply; 25+ messages in thread
From: Herbert Xu @ 2021-01-29  3:00 UTC (permalink / raw)
  To: Stefan Berger
  Cc: Ard Biesheuvel, Meng Yu, David S. Miller,
	Linux Crypto Mailing List, Zaibo Xu, wangzhou1,
	Linux Kernel Mailing List, Daniele Alessandrelli, Mark Gross,
	Khurana, Prabhjot, Reshetova, Elena, Patrick Uiterwijk

On Thu, Jan 28, 2021 at 09:49:41PM -0500, Stefan Berger wrote:
>
> In my patch series I initially had registered the akciphers under the names
> ecc-nist-p192 and ecc-nist-p256 but now, in V4, joined them together as
> 'ecdsa'. This may be too generic for a name. Maybe it should be called
> ecdsa-nist for the NIST family.

What I'm proposing is specifying the curve in the name as well, i.e.,
ecdsa-nist-p192 instead of just ecdsa or ecdsa-nist.

This simplifies the task of handling hardware that only supports a
subset of curves.
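
E.g. something like this on the driver side (rough sketch only; the
per-curve init helper and the other hpre_ecdh_* callbacks named below
are placeholders for whatever the driver actually provides). Hardware
that only implements a couple of curves would register just those
instances:

static struct kpp_alg hpre_ecdh_curves[] = {
        {
                .init = hpre_ecdh_nist_p192_init_tfm,   /* records the curve */
                .exit = hpre_ecdh_exit_tfm,
                .set_secret = hpre_ecdh_set_secret,
                .generate_public_key = hpre_ecdh_compute_value,
                .compute_shared_secret = hpre_ecdh_compute_value,
                .max_size = hpre_ecdh_max_size,
                .reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ,
                .base.cra_name = "ecdh-nist-p192",
                .base.cra_driver_name = "hpre-ecdh-nist-p192",
                .base.cra_ctxsize = sizeof(struct hpre_ctx),
                .base.cra_priority = HPRE_CRYPTO_ALG_PRI,
                .base.cra_module = THIS_MODULE,
        },
        /* ... one more entry per curve the hardware supports ... */
};

static int hpre_ecdh_register_all(void)
{
        int i, ret;

        for (i = 0; i < ARRAY_SIZE(hpre_ecdh_curves); i++) {
                ret = crypto_register_kpp(&hpre_ecdh_curves[i]);
                if (ret)
                        goto unwind;
        }
        return 0;

unwind:
        while (--i >= 0)
                crypto_unregister_kpp(&hpre_ecdh_curves[i]);
        return ret;
}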

There is a parallel discussion of exactly what curves we should
support in the kernel.  Personally if there is a user in the kernel
for it then I'm happy to see it added.  In your specific case, as
long as your use of the algorithm in x509 is accepted then I don't
have any problems with adding support in the Crypto API.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v7 4/7] crypto: add ecc curve and expose them
  2021-01-28  5:03   ` Herbert Xu
  2021-01-28 10:30     ` Ard Biesheuvel
@ 2021-02-01  3:45     ` yumeng
  2021-02-03 18:03     ` Saulo Alessandre
  2 siblings, 0 replies; 25+ messages in thread
From: yumeng @ 2021-02-01  3:45 UTC (permalink / raw)
  To: Herbert Xu
  Cc: davem, linux-crypto, xuzaibo, wangzhou1, linux-kernel,
	Ard Biesheuvel, Daniele Alessandrelli, Mark Gross, Khurana,
	Prabhjot, Reshetova, Elena



On 2021/1/28 13:03, Herbert Xu wrote:
> On Fri, Jan 22, 2021 at 03:09:52PM +0800, Meng Yu wrote:
>> 1. Add ecc curves(P224, P384, P521) for ECDH;
> 
> OK I think this is getting unwieldy.
> 
> In light of the fact that we already have hardware that supports
> a specific subset of curves, I think perhaps it would be better
> to move the curve ID from the key into the algorithm name instead.
> 
> IOW, instead of allocating ecdh, you would allocate ecdh-nist-pXXX.
> 
> Any comments?
> 
> Thanks,
> 

Yes, it is a good idea, thank you!

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v7 4/7] crypto: add ecc curve and expose them
  2021-01-28 10:39       ` Herbert Xu
@ 2021-02-01 17:09         ` Daniele Alessandrelli
  2021-02-02  5:13           ` Herbert Xu
  0 siblings, 1 reply; 25+ messages in thread
From: Daniele Alessandrelli @ 2021-02-01 17:09 UTC (permalink / raw)
  To: Herbert Xu, Ard Biesheuvel
  Cc: Meng Yu, David S. Miller, Linux Crypto Mailing List, Zaibo Xu,
	wangzhou1, Linux Kernel Mailing List, Mark Gross, Khurana,
	Prabhjot, Reshetova, Elena, Daniele Alessandrelli

On Thu, 2021-01-28 at 21:39 +1100, Herbert Xu wrote:
> Once they're distinct algorithms, we can then make sure that only
> the ones that are used in the kernel are added, even if some hardware
> may support more curves.

I like the idea of having different algorithm names (ecdh-nist-
pXXX) for different curves, but I'm not fully convinced by the above
statement.

What's the downside of letting device drivers enable all the curves
supported by the HW (with the exception of obsolete curves /
algorithms), even if there is (currently) no user of such curves in the
kernel? Code size and maintainability?

I think that once there is support for certain curves, it's more likely
that drivers / modules using them will appear.

Also, even if there are no in-tree users, there might be a few out-of-
tree ones.


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v7 4/7] crypto: add ecc curve and expose them
  2021-02-01 17:09         ` Daniele Alessandrelli
@ 2021-02-02  5:13           ` Herbert Xu
  2021-02-02  9:27             ` Alessandrelli, Daniele
  0 siblings, 1 reply; 25+ messages in thread
From: Herbert Xu @ 2021-02-02  5:13 UTC (permalink / raw)
  To: Daniele Alessandrelli
  Cc: Ard Biesheuvel, Meng Yu, David S. Miller,
	Linux Crypto Mailing List, Zaibo Xu, wangzhou1,
	Linux Kernel Mailing List, Mark Gross, Khurana, Prabhjot,
	Reshetova, Elena, Daniele Alessandrelli

On Mon, Feb 01, 2021 at 05:09:41PM +0000, Daniele Alessandrelli wrote:
> What's the downside of letting device drivers enable all the curves
> supported by the HW (with the exception of obsolete curves /
> algorithms), even if there is (currently) no user of such curves in the
> kernel? Code size and maintainability?

The issue is that we always require a software implementation for
any given hardware algorithm, as otherwise kernel users cannot
rely on the algorithm to work.

Of course we don't want to add every single algorithm out there
to the kernel, so that's why we require there to be an actual in-kernel
user before adding a given algorithm.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v7 4/7] crypto: add ecc curve and expose them
  2021-02-02  5:13           ` Herbert Xu
@ 2021-02-02  9:27             ` Alessandrelli, Daniele
  2021-02-02  9:42               ` Herbert Xu
  0 siblings, 1 reply; 25+ messages in thread
From: Alessandrelli, Daniele @ 2021-02-02  9:27 UTC (permalink / raw)
  To: herbert
  Cc: Khurana, Prabhjot, davem, Reshetova, Elena, mgross, linux-kernel,
	ardb, xuzaibo, wangzhou1, linux-crypto, yumeng18

On Tue, 2021-02-02 at 16:13 +1100, Herbert Xu wrote:
> The issue is that we always require a software implementation for
> any given hardware algorithm, as otherwise kernel users cannot
> rely on the algorithm to work.

I understand. This sounds very reasonable to me.

> Of course we don't want to add every single algorithm out there
> to the kernel, so that's why we require there to be an actual in-kernel
> user before adding a given algorithm.

I see. Just to clarify: does the in-kernel user requirement also apply
to the case when the author of a device driver also provides the
software implementation for the new algorithms supported by device
driver / HW?

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v7 4/7] crypto: add ecc curve and expose them
  2021-02-02  9:27             ` Alessandrelli, Daniele
@ 2021-02-02  9:42               ` Herbert Xu
  2021-02-02 12:35                 ` Alessandrelli, Daniele
  0 siblings, 1 reply; 25+ messages in thread
From: Herbert Xu @ 2021-02-02  9:42 UTC (permalink / raw)
  To: Alessandrelli, Daniele
  Cc: Khurana, Prabhjot, davem, Reshetova, Elena, mgross, linux-kernel,
	ardb, xuzaibo, wangzhou1, linux-crypto, yumeng18

On Tue, Feb 02, 2021 at 09:27:33AM +0000, Alessandrelli, Daniele wrote:
>
> I see. Just to clarify: does the in-kernel user requirement also apply
> to the case when the author of a device driver also provides the
> software implementation for the new algorithms supported by device
> driver / HW?

Yes we need an actual user.  For example, if your algorithm is used
by the Security Subsystem (IMA) that would be sufficient.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v7 4/7] crypto: add ecc curve and expose them
  2021-02-02  9:42               ` Herbert Xu
@ 2021-02-02 12:35                 ` Alessandrelli, Daniele
  2021-02-04  5:31                   ` Herbert Xu
  0 siblings, 1 reply; 25+ messages in thread
From: Alessandrelli, Daniele @ 2021-02-02 12:35 UTC (permalink / raw)
  To: herbert
  Cc: Khurana, Prabhjot, Reshetova, Elena, davem, mgross, linux-kernel,
	ardb, wangzhou1, xuzaibo, linux-crypto, yumeng18

On Tue, 2021-02-02 at 20:42 +1100, Herbert Xu wrote:
> On Tue, Feb 02, 2021 at 09:27:33AM +0000, Alessandrelli, Daniele
> wrote:
> > I see. Just to clarify: does the in-kernel user requirement also
> > apply
> > to the case when the author of a device driver also provides the
> > software implementation for the new algorithms supported by device
> > driver / HW?
> 
> Yes we need an actual user.  For example, if your algorithm is used
> by the Security Subsystem (IMA) that would be sufficient.

Understood, thanks!

Unrelated question: I have my Keem Bay OCS ECC patchset [1] almost
ready for re-submission. Should I go ahead or should I wait for the
final decision about using 'ecdh-nist-pXXX' in place of 'ecdh'?

Also, I guess I'll have to drop P-384 support from the driver, since
there are no in-kernel users AFAIK.

[1] https://lore.kernel.org/linux-crypto/20201217172101.381772-1-daniele.alessandrelli@linux.intel.com/

> 
> Cheers,

^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v7 4/7] crypto: add ecc curve and expose them
  2021-01-28  5:03   ` Herbert Xu
  2021-01-28 10:30     ` Ard Biesheuvel
  2021-02-01  3:45     ` yumeng
@ 2021-02-03 18:03     ` Saulo Alessandre
  2021-02-04  5:41       ` Herbert Xu
  2 siblings, 1 reply; 25+ messages in thread
From: Saulo Alessandre @ 2021-02-03 18:03 UTC (permalink / raw)
  To: herbert
  Cc: ardb, daniele.alessandrelli, davem, elena.reshetova,
	linux-crypto, linux-kernel, mgross, prabhjot.khurana, wangzhou1,
	xuzaibo, yumeng18, saulo.alessandre

On 28/01/2021 02:03, Herbert Xu wrote:
> On Fri, Jan 22, 2021 at 03:09:52PM +0800, Meng Yu wrote:
>> 1. Add ecc curves(P224, P384, P521) for ECDH;
>
> OK I think this is getting unwieldy.
>
> In light of the fact that we already have hardware that supports
> a specific subset of curves, I think perhaps it would be better
> to move the curve ID from the key into the algorithm name instead.
>
I think I understand you. I'm not using ECDH at the moment, but IMHO maybe
we could use the enum OID from oid_registry.h as the curve ID and eliminate
the duplicate ECC_CURVE_NIST_{...} definitions from ecdh.h. Or perhaps add
another OID parameter to struct ecc_curve, since the name already exists.
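
A sketch of that second option (the existing fields mirror struct
ecc_curve as it stands today; the new oid member and the example
enumerator are assumptions, and oid_registry.h would need entries for
the NIST curves):

#include <linux/oid_registry.h>

struct ecc_curve {
        char *name;
        enum OID oid;           /* new: e.g. an OID_id_prime256v1 entry */
        struct ecc_point g;
        u64 *p;
        u64 *n;
        u64 *a;
        u64 *b;
};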

> IOW, instead of allocating ecdh, you would allocate ecdh-nist-pXXX.
>
> Any comments?

I recently sent a patch for ECDSA signature verification that uses
the NIST P curves to check elf32 binary modules and signatures in
about 450k T-DRE voting machines in the Brazilian elections across
the country. I added the other curves (P256, P384) because we started
testing them for speed measurement, but we have been using P521 in our
production version since 2017, first on the 4.9.xxx kernel and now on 5.4.xxx.

In this patch I'm allocating the akcipher as ecdsa(sha1,sha256,...),
because the ecdsa algo is generic, and using the curve name and ndigits
inside vli_mmod_fast to discover the curve. But I agree the correct way
would be to allocate ecdsa-nist-p521(sha1,...) and keep all params for the
curve inside struct ecc_curve, remembering that we have other curves
incoming, like Edwards.

regards,
--
Email: Saulo Alessandre <saulo.alessandre@gmail.com>


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v7 4/7] crypto: add ecc curve and expose them
  2021-02-02 12:35                 ` Alessandrelli, Daniele
@ 2021-02-04  5:31                   ` Herbert Xu
  0 siblings, 0 replies; 25+ messages in thread
From: Herbert Xu @ 2021-02-04  5:31 UTC (permalink / raw)
  To: Alessandrelli, Daniele, Tudor Ambarus
  Cc: Khurana, Prabhjot, Reshetova, Elena, davem, mgross, linux-kernel,
	ardb, wangzhou1, xuzaibo, linux-crypto, yumeng18

On Tue, Feb 02, 2021 at 12:35:26PM +0000, Alessandrelli, Daniele wrote:
>
> Unrelated question: I have my Keem Bay OCS ECC patchset [1] almost
> ready for re-submission. Should I go ahead or should I wait for the
> final decision about using 'ecdh-nist-pXXX' in place of 'ecdh'?

If we agree on going down this route, then the first step is to
convert the existing ecdh generic algorithm and its users to this
scheme to ensure no regressions.

After that then you can add your driver.

PS I just noticed that we already have one driver implementing
ecdh, atmel so it too would need to be converted before we take
on any new drivers for ecdh.

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v7 4/7] crypto: add ecc curve and expose them
  2021-02-03 18:03     ` Saulo Alessandre
@ 2021-02-04  5:41       ` Herbert Xu
  0 siblings, 0 replies; 25+ messages in thread
From: Herbert Xu @ 2021-02-04  5:41 UTC (permalink / raw)
  To: Saulo Alessandre, Stefan Berger
  Cc: ardb, daniele.alessandrelli, davem, elena.reshetova,
	linux-crypto, linux-kernel, mgross, prabhjot.khurana, wangzhou1,
	xuzaibo, yumeng18

On Wed, Feb 03, 2021 at 03:03:44PM -0300, Saulo Alessandre wrote:
>
> In this patch I'm using akcipher allocate like ecdsa(sha1,sha256,...), 
> because the ecdsa algo is generic, and using the curve name and ndigits
> inside vli_mmod_fast to discover the curve, but I agree the correct way
> would be allocate ecdsa-nist-p521(sha1,...) and have all params for the
> curve inside struct ecc_curve, remembering that we have other curves
> incoming, like Edwards.

I'm not sure whether we really should encode the hash into the algorithm
name.  This may be something that we can move into setkey instead.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v7 4/7] crypto: add ecc curve and expose them
  2021-01-29  3:00         ` Herbert Xu
@ 2021-02-08  6:35           ` Vitaly Chikunov
  2021-02-08  6:47             ` Ard Biesheuvel
  0 siblings, 1 reply; 25+ messages in thread
From: Vitaly Chikunov @ 2021-02-08  6:35 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Stefan Berger, Ard Biesheuvel, Meng Yu, David S. Miller,
	Linux Crypto Mailing List, Zaibo Xu, wangzhou1,
	Linux Kernel Mailing List, Daniele Alessandrelli, Mark Gross,
	Khurana, Prabhjot, Reshetova, Elena, Patrick Uiterwijk

Herbert,

On Fri, Jan 29, 2021 at 02:00:04PM +1100, Herbert Xu wrote:
> On Thu, Jan 28, 2021 at 09:49:41PM -0500, Stefan Berger wrote:
> >
> > In my patch series I initially had registered the akciphers under the names
> > ecc-nist-p192 and ecc-nist-p256 but now, in V4, joined them together as
> > 'ecdsa'. This may be too generic for a name. Maybe it should be called
> > ecdsa-nist for the NIST family.
> 
> What I'm proposing is specifying the curve in the name as well, i.e.,
> ecdsa-nist-p192 instead of just ecdsa or ecdsa-nist.
> 
> This simplifies the task of handling hardware that only supports a
> subset of curves.

So, if some implementation supports multiple curves (like EC-RDSA
currently supports 5 curves), it should add 5 ecrdsa-{a,b,c,..}
algorithms with actually the same top level implementation?
Right?


> There is a parallel discussion of exactly what curves we should
> support in the kernel.  Personally if there is a user in the kernel
> for it then I'm happy to see it added.  In your specific case, as
> long as your use of the algorithm in x509 is accepted then I don't
> have any problems with adding support in the Crypto API.
> 
> Cheers,
> -- 
> Email: Herbert Xu <herbert@gondor.apana.org.au>
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v7 4/7] crypto: add ecc curve and expose them
  2021-02-08  6:35           ` Vitaly Chikunov
@ 2021-02-08  6:47             ` Ard Biesheuvel
  2021-02-08 21:27               ` Vitaly Chikunov
  0 siblings, 1 reply; 25+ messages in thread
From: Ard Biesheuvel @ 2021-02-08  6:47 UTC (permalink / raw)
  To: Vitaly Chikunov
  Cc: Herbert Xu, Stefan Berger, Meng Yu, David S. Miller,
	Linux Crypto Mailing List, Zaibo Xu, wangzhou1,
	Linux Kernel Mailing List, Daniele Alessandrelli, Mark Gross,
	Khurana, Prabhjot, Reshetova, Elena, Patrick Uiterwijk

On Mon, 8 Feb 2021 at 07:37, Vitaly Chikunov <vt@altlinux.org> wrote:
>
> Herbert,
>
> On Fri, Jan 29, 2021 at 02:00:04PM +1100, Herbert Xu wrote:
> > On Thu, Jan 28, 2021 at 09:49:41PM -0500, Stefan Berger wrote:
> > >
> > > In my patch series I initially had registered the akciphers under the names
> > > ecc-nist-p192 and ecc-nist-p256 but now, in V4, joined them together as
> > > 'ecdsa'. This may be too generic for a name. Maybe it should be called
> > > ecdsa-nist for the NIST family.
> >
> > What I'm proposing is specifying the curve in the name as well, i.e.,
> > ecdsa-nist-p192 instead of just ecdsa or ecdsa-nist.
> >
> > This simplifies the task of handling hardware that only supports a
> > subset of curves.
>
> So, if some implementation supports multiple curves (like EC-RDSA
> currently supports 5 curves), it should add 5 ecrdsa-{a,b,c,..}
> algorithms with actually the same top level implementation?
> Right?
>

Yes. The only difference will be the init() function, which can be
used to set the TFM properties that define which curve is being used.
The other routines can be generic, and refer to those properties if
the behavior is curve-specific.
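
E.g. roughly like this (names are illustrative; ecdh_ctx stands for
whatever per-tfm context the implementation keeps):

/* Per-curve instances differ only in init(); everything else is shared. */
static int ecdh_nist_p192_init_tfm(struct crypto_kpp *tfm)
{
        struct ecdh_ctx *ctx = kpp_tfm_ctx(tfm);

        ctx->curve_id = ECC_CURVE_NIST_P192;
        ctx->ndigits = 3;       /* 192 bits in 64-bit digits */
        return 0;
}

static int ecdh_nist_p256_init_tfm(struct crypto_kpp *tfm)
{
        struct ecdh_ctx *ctx = kpp_tfm_ctx(tfm);

        ctx->curve_id = ECC_CURVE_NIST_P256;
        ctx->ndigits = 4;       /* 256 bits in 64-bit digits */
        return 0;
}

/* set_secret()/generate_public_key() stay generic and read ctx->curve_id. */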


>
> > There is a parallel discussion of exactly what curves we should
> > support in the kernel.  Personally if there is a user in the kernel
> > for it then I'm happy to see it added.  In your specific case, as
> > long as your use of the algorithm in x509 is accepted then I don't
> > have any problems with adding support in the Crypto API.
> >
> > Cheers,
> > --
> > Email: Herbert Xu <herbert@gondor.apana.org.au>
> > Home Page: http://gondor.apana.org.au/~herbert/
> > PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v7 4/7] crypto: add ecc curve and expose them
  2021-02-08  6:47             ` Ard Biesheuvel
@ 2021-02-08 21:27               ` Vitaly Chikunov
  0 siblings, 0 replies; 25+ messages in thread
From: Vitaly Chikunov @ 2021-02-08 21:27 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Herbert Xu, Stefan Berger, Meng Yu, David S. Miller,
	Linux Crypto Mailing List, Zaibo Xu, wangzhou1,
	Linux Kernel Mailing List, Daniele Alessandrelli, Mark Gross,
	Khurana, Prabhjot, Reshetova, Elena, Patrick Uiterwijk

Ard,

On Mon, Feb 08, 2021 at 07:47:44AM +0100, Ard Biesheuvel wrote:
> On Mon, 8 Feb 2021 at 07:37, Vitaly Chikunov <vt@altlinux.org> wrote:
> >
> > Herbert,
> >
> > On Fri, Jan 29, 2021 at 02:00:04PM +1100, Herbert Xu wrote:
> > > On Thu, Jan 28, 2021 at 09:49:41PM -0500, Stefan Berger wrote:
> > > >
> > > > In my patch series I initially had registered the akciphers under the names
> > > > ecc-nist-p192 and ecc-nist-p256 but now, in V4, joined them together as
> > > > 'ecdsa'. This may be too generic for a name. Maybe it should be called
> > > > ecdsa-nist for the NIST family.
> > >
> > > What I'm proposing is specifying the curve in the name as well, i.e.,
> > > ecdsa-nist-p192 instead of just ecdsa or ecdsa-nist.
> > >
> > > This simplifies the task of handling hardware that only supports a
> > > subset of curves.
> >
> > So, if some implementation supports multiple curves (like EC-RDSA
> > currently supports 5 curves), it should add 5 ecrdsa-{a,b,c,..}
> > algorithms with actually the same top level implementation?
> > Right?
> >
> 
> Yes. The only difference will be the init() function, which can be
> used to set the TFM properties that define which curve is being used.
> The other routines can be generic, and refer to those properties if
> the behavior is curve-specific.

Thanks. This may be good!

JFYI, there are possible non-hardware-accelerated implementations
of ECC algorithms which (perhaps) may never go into the kernel source,
because they are generated code. For example:
  https://gitlab.com/nisec/ecckiila


^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread

Thread overview: 25+ messages
2021-01-22  7:09 [PATCH v7 0/7] add ECDH and CURVE25519 algorithms support for Kunpeng 930 Meng Yu
2021-01-22  7:09 ` [PATCH v7 1/7] crypto: hisilicon/hpre - add version adapt to new algorithms Meng Yu
2021-01-22  7:09 ` [PATCH v7 2/7] crypto: hisilicon/hpre - add some updates to adapt to Kunpeng 930 Meng Yu
2021-01-22  7:09 ` [PATCH v7 3/7] crypto: hisilicon/hpre - add algorithm type Meng Yu
2021-01-22  7:09 ` [PATCH v7 4/7] crypto: add ecc curve and expose them Meng Yu
2021-01-28  5:03   ` Herbert Xu
2021-01-28 10:30     ` Ard Biesheuvel
2021-01-28 10:39       ` Herbert Xu
2021-02-01 17:09         ` Daniele Alessandrelli
2021-02-02  5:13           ` Herbert Xu
2021-02-02  9:27             ` Alessandrelli, Daniele
2021-02-02  9:42               ` Herbert Xu
2021-02-02 12:35                 ` Alessandrelli, Daniele
2021-02-04  5:31                   ` Herbert Xu
2021-01-29  2:49       ` Stefan Berger
2021-01-29  3:00         ` Herbert Xu
2021-02-08  6:35           ` Vitaly Chikunov
2021-02-08  6:47             ` Ard Biesheuvel
2021-02-08 21:27               ` Vitaly Chikunov
2021-02-01  3:45     ` yumeng
2021-02-03 18:03     ` Saulo Alessandre
2021-02-04  5:41       ` Herbert Xu
2021-01-22  7:09 ` [PATCH v7 5/7] crypto: add curve 25519 " Meng Yu
2021-01-22  7:09 ` [PATCH v7 6/7] crypto: hisilicon/hpre - add 'ECDH' algorithm Meng Yu
2021-01-22  7:09 ` [PATCH v7 7/7] crypto: hisilicon/hpre - add 'CURVE25519' algorithm Meng Yu
