From: Pablo de Lara <pablo.de.lara.guarch@intel.com>
To: declan.doherty@intel.com
Cc: dev@dpdk.org, Pablo de Lara <pablo.de.lara.guarch@intel.com>
Subject: [PATCH 1/2] crypto/aesni_gcm: migrate to Multi-buffer library
Date: Fri, 26 May 2017 11:16:05 +0100	[thread overview]
Message-ID: <1495793766-200890-2-git-send-email-pablo.de.lara.guarch@intel.com> (raw)
In-Reply-To: <1495793766-200890-1-git-send-email-pablo.de.lara.guarch@intel.com>

Since the Intel Multi-Buffer library for IPsec has been updated to
support Scatter-Gather Lists, the AESNI GCM PMD can now link
against this library instead of the ISA-L library.

This move eases the maintenance of the driver, as it will
use the same library as the AESNI MB PMD.
It also adds support for 192-bit keys.
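
As a rough illustration of the new key size (a hypothetical sketch, not
taken from this patch; exact struct field names depend on the DPDK
version in use), an application would request AES-GCM with a 24-byte key
through the usual cipher/auth transform chain:

#include <string.h>
#include <rte_cryptodev.h>

/* Hypothetical sketch: build a cipher+auth xform chain for AES-GCM with
 * the newly supported 192-bit (24-byte) key. Field names follow the
 * pre-17.08 symmetric crypto API and may differ in other releases. */
static void
build_gcm_192_xforms(struct rte_crypto_sym_xform *cipher_xform,
		struct rte_crypto_sym_xform *auth_xform, uint8_t *key_192)
{
	memset(cipher_xform, 0, sizeof(*cipher_xform));
	memset(auth_xform, 0, sizeof(*auth_xform));

	/* GMAC/tag generation leg of the chain */
	auth_xform->type = RTE_CRYPTO_SYM_XFORM_AUTH;
	auth_xform->auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
	auth_xform->auth.algo = RTE_CRYPTO_AUTH_AES_GCM;
	auth_xform->auth.digest_length = 16;

	/* Cipher leg, chained to the auth xform */
	cipher_xform->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
	cipher_xform->next = auth_xform;
	cipher_xform->cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
	cipher_xform->cipher.algo = RTE_CRYPTO_CIPHER_AES_GCM;
	cipher_xform->cipher.key.data = key_192;
	cipher_xform->cipher.key.length = 24;	/* 192-bit key */
}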

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
 devtools/test-build.sh                           |   4 +-
 doc/guides/cryptodevs/aesni_gcm.rst              |  18 ++--
 doc/guides/cryptodevs/features/aesni_gcm.ini     |   4 +-
 doc/guides/rel_notes/release_17_08.rst           |   8 ++
 drivers/crypto/aesni_gcm/Makefile                |  10 +-
 drivers/crypto/aesni_gcm/aesni_gcm_ops.h         | 130 ++++++++++++++++++++++-
 drivers/crypto/aesni_gcm/aesni_gcm_pmd.c         |  90 ++++++++--------
 drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c     |  18 ++--
 drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h |  15 +--
 mk/rte.app.mk                                    |   3 +-
 10 files changed, 223 insertions(+), 77 deletions(-)

diff --git a/devtools/test-build.sh b/devtools/test-build.sh
index 61bdce7..81b57a4 100755
--- a/devtools/test-build.sh
+++ b/devtools/test-build.sh
@@ -38,7 +38,6 @@ default_path=$PATH
 # - DPDK_BUILD_TEST_CONFIGS (defconfig1+option1+option2 defconfig2)
 # - DPDK_DEP_ARCHIVE
 # - DPDK_DEP_CFLAGS
-# - DPDK_DEP_ISAL_CRYPTO (y/[n])
 # - DPDK_DEP_LDFLAGS
 # - DPDK_DEP_MOFED (y/[n])
 # - DPDK_DEP_NUMA (y/[n])
@@ -121,7 +120,6 @@ reset_env ()
 	unset CROSS
 	unset DPDK_DEP_ARCHIVE
 	unset DPDK_DEP_CFLAGS
-	unset DPDK_DEP_ISAL_CRYPTO
 	unset DPDK_DEP_LDFLAGS
 	unset DPDK_DEP_MOFED
 	unset DPDK_DEP_NUMA
@@ -182,7 +180,7 @@ config () # <directory> <target> <options>
 		sed -ri   's,(PMD_ARMV8_CRYPTO=)n,\1y,' $1/.config
 		test -z "$AESNI_MULTI_BUFFER_LIB_PATH" || \
 		sed -ri       's,(PMD_AESNI_MB=)n,\1y,' $1/.config
-		test "$DPDK_DEP_ISAL_CRYPTO" != y || \
+		test -z "$AESNI_MULTI_BUFFER_LIB_PATH" || \
 		sed -ri      's,(PMD_AESNI_GCM=)n,\1y,' $1/.config
 		test -z "$LIBSSO_SNOW3G_PATH" || \
 		sed -ri         's,(PMD_SNOW3G=)n,\1y,' $1/.config
diff --git a/doc/guides/cryptodevs/aesni_gcm.rst b/doc/guides/cryptodevs/aesni_gcm.rst
index 84cdc52..10282a1 100644
--- a/doc/guides/cryptodevs/aesni_gcm.rst
+++ b/doc/guides/cryptodevs/aesni_gcm.rst
@@ -1,5 +1,5 @@
 ..  BSD LICENSE
-    Copyright(c) 2016 Intel Corporation. All rights reserved.
+    Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
 
     Redistribution and use in source and binary forms, with or without
     modification, are permitted provided that the following conditions
@@ -32,8 +32,8 @@ AES-NI GCM Crypto Poll Mode Driver
 
 
 The AES-NI GCM PMD (**librte_pmd_aesni_gcm**) provides poll mode crypto driver
-support for utilizing Intel ISA-L crypto library, which provides operation acceleration
-through the AES-NI instruction sets for AES-GCM authenticated cipher algorithm.
+support for utilizing the Intel Multi-Buffer library (see the AES-NI Multi-buffer PMD
+documentation to learn more about it, including installation).
 
 Features
 --------
@@ -49,19 +49,15 @@ Authentication algorithms:
 * RTE_CRYPTO_AUTH_AES_GCM
 * RTE_CRYPTO_AUTH_AES_GMAC
 
-Installation
-------------
-
-To build DPDK with the AESNI_GCM_PMD the user is required to install
-the ``libisal_crypto`` library in the build environment.
-For download and more details please visit `<https://github.com/01org/isa-l_crypto>`_.
-
 Initialization
 --------------
 
 In order to enable this virtual crypto PMD, user must:
 
-* Install the ISA-L crypto library (explained in Installation section).
+* Export the environment variable AESNI_MULTI_BUFFER_LIB_PATH with the path where
+  the library was extracted.
+
+* Build the Multi-Buffer library (see the Installation section in the AES-NI MB PMD documentation).
 
 * Set CONFIG_RTE_LIBRTE_PMD_AESNI_GCM=y in config/common_base.
 
diff --git a/doc/guides/cryptodevs/features/aesni_gcm.ini b/doc/guides/cryptodevs/features/aesni_gcm.ini
index 5d9e119..0f11fef 100644
--- a/doc/guides/cryptodevs/features/aesni_gcm.ini
+++ b/doc/guides/cryptodevs/features/aesni_gcm.ini
@@ -7,7 +7,9 @@
 Symmetric crypto       = Y
 Sym operation chaining = Y
 CPU AESNI              = Y
-
+CPU SSE                = Y
+CPU AVX                = Y
+CPU AVX2               = Y
 ;
 ; Supported crypto algorithms of the 'aesni_gcm' crypto driver.
 ;
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 74aae10..724a880 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -41,6 +41,14 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+**Updated the AES-NI GCM PMD.**
+
+  The AES-NI GCM PMD was migrated from the ISA-L library to the Multi-Buffer
+  library, as the latter library now supports Scatter-Gather Lists.
+  The migration also added support for:
+
+  * 192-bit key.
+
 
 Resolved Issues
 ---------------
diff --git a/drivers/crypto/aesni_gcm/Makefile b/drivers/crypto/aesni_gcm/Makefile
index 59a7c6a..4412894 100644
--- a/drivers/crypto/aesni_gcm/Makefile
+++ b/drivers/crypto/aesni_gcm/Makefile
@@ -1,6 +1,6 @@
 #   BSD LICENSE
 #
-#   Copyright(c) 2016 Intel Corporation. All rights reserved.
+#   Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
 #
 #   Redistribution and use in source and binary forms, with or without
 #   modification, are permitted provided that the following conditions
@@ -31,6 +31,9 @@
 include $(RTE_SDK)/mk/rte.vars.mk
 
 ifneq ($(MAKECMDGOALS),clean)
+ifeq ($(AESNI_MULTI_BUFFER_LIB_PATH),)
+$(error "Please define AESNI_MULTI_BUFFER_LIB_PATH environment variable")
+endif
 endif
 
 # library name
@@ -47,7 +50,10 @@ LIBABIVER := 1
 EXPORT_MAP := rte_pmd_aesni_gcm_version.map
 
 # external library dependencies
-LDLIBS += -lisal_crypto
+CFLAGS += -I$(AESNI_MULTI_BUFFER_LIB_PATH)
+CFLAGS += -I$(AESNI_MULTI_BUFFER_LIB_PATH)/include
+LDLIBS += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB
+LDLIBS += -lcrypto
 
 # library source files
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_GCM) += aesni_gcm_pmd.c
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_ops.h b/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
index e9de654..c6e647a 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_ops.h
@@ -1,7 +1,7 @@
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
  *   modification, are permitted provided that the following conditions
@@ -37,7 +37,32 @@
 #define LINUX
 #endif
 
-#include <isa-l_crypto/aes_gcm.h>
+#include <gcm_defines.h>
+#include <aux_funcs.h>
+
+/** Supported vector modes */
+enum aesni_gcm_vector_mode {
+	RTE_AESNI_GCM_NOT_SUPPORTED = 0,
+	RTE_AESNI_GCM_SSE,
+	RTE_AESNI_GCM_AVX,
+	RTE_AESNI_GCM_AVX2,
+	RTE_AESNI_GCM_VECTOR_NUM
+};
+
+enum aesni_gcm_key {
+	AESNI_GCM_KEY_128,
+	AESNI_GCM_KEY_192,
+	AESNI_GCM_KEY_256,
+	AESNI_GCM_KEY_NUM
+};
+
+
+typedef void (*aesni_gcm_t)(struct gcm_data *my_ctx_data, uint8_t *out,
+		const uint8_t *in, uint64_t plaintext_len, uint8_t *iv,
+		const uint8_t *aad, uint64_t aad_len,
+		uint8_t *auth_tag, uint64_t auth_tag_len);
+
+typedef void (*aesni_gcm_precomp_t)(const void *key, struct gcm_data *my_ctx_data);
 
 typedef void (*aesni_gcm_init_t)(struct gcm_data *my_ctx_data,
 		uint8_t *iv,
@@ -53,10 +78,109 @@ typedef void (*aesni_gcm_finalize_t)(struct gcm_data *my_ctx_data,
 		uint8_t *auth_tag,
 		uint64_t auth_tag_len);
 
+/** GCM library function pointer table */
 struct aesni_gcm_ops {
+	aesni_gcm_t enc;        /**< GCM encode function pointer */
+	aesni_gcm_t dec;        /**< GCM decode function pointer */
+	aesni_gcm_precomp_t precomp;    /**< GCM pre-compute */
 	aesni_gcm_init_t init;
-	aesni_gcm_update_t update;
+	aesni_gcm_update_t update_enc;
+	aesni_gcm_update_t update_dec;
 	aesni_gcm_finalize_t finalize;
 };
 
+static const struct aesni_gcm_ops gcm_ops[RTE_AESNI_GCM_VECTOR_NUM][AESNI_GCM_KEY_NUM] = {
+	[RTE_AESNI_GCM_NOT_SUPPORTED] = {
+		[AESNI_GCM_KEY_128] = {NULL},
+		[AESNI_GCM_KEY_192] = {NULL},
+		[AESNI_GCM_KEY_256] = {NULL}
+	},
+	[RTE_AESNI_GCM_SSE] = {
+		[AESNI_GCM_KEY_128] = {
+			aesni_gcm128_enc_sse,
+			aesni_gcm128_dec_sse,
+			aesni_gcm128_pre_sse,
+			aesni_gcm128_init_sse,
+			aesni_gcm128_enc_update_sse,
+			aesni_gcm128_dec_update_sse,
+			aesni_gcm128_enc_finalize_sse
+		},
+		[AESNI_GCM_KEY_192] = {
+			aesni_gcm192_enc_sse,
+			aesni_gcm192_dec_sse,
+			aesni_gcm192_pre_sse,
+			aesni_gcm192_init_sse,
+			aesni_gcm192_enc_update_sse,
+			aesni_gcm192_dec_update_sse,
+			aesni_gcm192_enc_finalize_sse
+		},
+		[AESNI_GCM_KEY_256] = {
+			aesni_gcm256_enc_sse,
+			aesni_gcm256_dec_sse,
+			aesni_gcm256_pre_sse,
+			aesni_gcm256_init_sse,
+			aesni_gcm256_enc_update_sse,
+			aesni_gcm256_dec_update_sse,
+			aesni_gcm256_enc_finalize_sse
+		}
+	},
+	[RTE_AESNI_GCM_AVX] = {
+		[AESNI_GCM_KEY_128] = {
+			aesni_gcm128_enc_avx_gen2,
+			aesni_gcm128_dec_avx_gen2,
+			aesni_gcm128_pre_avx_gen2,
+			aesni_gcm128_init_avx_gen2,
+			aesni_gcm128_enc_update_avx_gen2,
+			aesni_gcm128_dec_update_avx_gen2,
+			aesni_gcm128_enc_finalize_avx_gen2
+		},
+		[AESNI_GCM_KEY_192] = {
+			aesni_gcm192_enc_avx_gen2,
+			aesni_gcm192_dec_avx_gen2,
+			aesni_gcm192_pre_avx_gen2,
+			aesni_gcm192_init_avx_gen2,
+			aesni_gcm192_enc_update_avx_gen2,
+			aesni_gcm192_dec_update_avx_gen2,
+			aesni_gcm192_enc_finalize_avx_gen2
+		},
+		[AESNI_GCM_KEY_256] = {
+			aesni_gcm256_enc_avx_gen2,
+			aesni_gcm256_dec_avx_gen2,
+			aesni_gcm256_pre_avx_gen2,
+			aesni_gcm256_init_avx_gen2,
+			aesni_gcm256_enc_update_avx_gen2,
+			aesni_gcm256_dec_update_avx_gen2,
+			aesni_gcm256_enc_finalize_avx_gen2
+		}
+	},
+	[RTE_AESNI_GCM_AVX2] = {
+		[AESNI_GCM_KEY_128] = {
+			aesni_gcm128_enc_avx_gen4,
+			aesni_gcm128_dec_avx_gen4,
+			aesni_gcm128_pre_avx_gen4,
+			aesni_gcm128_init_avx_gen4,
+			aesni_gcm128_enc_update_avx_gen4,
+			aesni_gcm128_dec_update_avx_gen4,
+			aesni_gcm128_enc_finalize_avx_gen4
+		},
+		[AESNI_GCM_KEY_192] = {
+			aesni_gcm192_enc_avx_gen4,
+			aesni_gcm192_dec_avx_gen4,
+			aesni_gcm192_pre_avx_gen4,
+			aesni_gcm192_init_avx_gen4,
+			aesni_gcm192_enc_update_avx_gen4,
+			aesni_gcm192_dec_update_avx_gen4,
+			aesni_gcm192_enc_finalize_avx_gen4
+		},
+		[AESNI_GCM_KEY_256] = {
+			aesni_gcm256_enc_avx_gen4,
+			aesni_gcm256_dec_avx_gen4,
+			aesni_gcm256_pre_avx_gen4,
+			aesni_gcm256_init_avx_gen4,
+			aesni_gcm256_enc_update_avx_gen4,
+			aesni_gcm256_dec_update_avx_gen4,
+			aesni_gcm256_enc_finalize_avx_gen4
+		}
+	}
+};
 #endif /* _AESNI_GCM_OPS_H_ */
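
For reference, a conceptual sketch (not part of the patch; the helper name
and the flat-buffer handling are assumptions) of how the gcm_ops[][] table
above is consumed: the PMD selects one row per device according to the CPU
vector mode and one column per session according to the key size, while the
real driver calls update_enc()/update_dec() once per mbuf segment to
support Scatter-Gather Lists:

#include "aesni_gcm_ops.h"

/* Conceptual sketch only: single-segment authenticated encryption using one
 * row of gcm_ops[], i.e. ops == gcm_ops[vector_mode]. Assumes gdata was
 * already prepared at session setup via ops[key].precomp(key_data, gdata). */
static void
gcm_encrypt_sketch(const struct aesni_gcm_ops *ops, enum aesni_gcm_key key,
		struct gcm_data *gdata, uint8_t *dst, uint8_t *src,
		uint64_t len, uint8_t *iv, uint8_t *aad, uint64_t aad_len,
		uint8_t *tag, uint64_t tag_len)
{
	const struct aesni_gcm_ops *op = &ops[key];

	op->init(gdata, iv, aad, aad_len);	/* set up IV and AAD */
	op->update_enc(gdata, dst, src, len);	/* one call per contiguous segment */
	op->finalize(gdata, tag, tag_len);	/* write the authentication tag */
}
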
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 101ef98..e90c5dc 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -42,37 +42,11 @@
 
 #include "aesni_gcm_pmd_private.h"
 
-/** GCM encode functions pointer table */
-static const struct aesni_gcm_ops aesni_gcm_enc[] = {
-		[AESNI_GCM_KEY_128] = {
-				aesni_gcm128_init,
-				aesni_gcm128_enc_update,
-				aesni_gcm128_enc_finalize
-		},
-		[AESNI_GCM_KEY_256] = {
-				aesni_gcm256_init,
-				aesni_gcm256_enc_update,
-				aesni_gcm256_enc_finalize
-		}
-};
-
-/** GCM decode functions pointer table */
-static const struct aesni_gcm_ops aesni_gcm_dec[] = {
-		[AESNI_GCM_KEY_128] = {
-				aesni_gcm128_init,
-				aesni_gcm128_dec_update,
-				aesni_gcm128_dec_finalize
-		},
-		[AESNI_GCM_KEY_256] = {
-				aesni_gcm256_init,
-				aesni_gcm256_dec_update,
-				aesni_gcm256_dec_finalize
-		}
-};
 
 /** Parse crypto xform chain and set private session parameters */
 int
-aesni_gcm_set_session_parameters(struct aesni_gcm_session *sess,
+aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops,
+		struct aesni_gcm_session *sess,
 		const struct rte_crypto_sym_xform *xform)
 {
 	const struct rte_crypto_sym_xform *auth_xform;
@@ -119,20 +93,21 @@ aesni_gcm_set_session_parameters(struct aesni_gcm_session *sess,
 	/* Check key length and calculate GCM pre-compute. */
 	switch (cipher_xform->cipher.key.length) {
 	case 16:
-		aesni_gcm128_pre(cipher_xform->cipher.key.data, &sess->gdata);
 		sess->key = AESNI_GCM_KEY_128;
-
+		break;
+	case 24:
+		sess->key = AESNI_GCM_KEY_192;
 		break;
 	case 32:
-		aesni_gcm256_pre(cipher_xform->cipher.key.data, &sess->gdata);
 		sess->key = AESNI_GCM_KEY_256;
-
 		break;
 	default:
 		GCM_LOG_ERR("Unsupported cipher key length");
 		return -EINVAL;
 	}
 
+	gcm_ops[sess->key].precomp(cipher_xform->cipher.key.data,
+					   &sess->gdata);
 	return 0;
 }
 
@@ -157,7 +132,7 @@ aesni_gcm_get_session(struct aesni_gcm_qp *qp, struct rte_crypto_sym_op *op)
 		sess = (struct aesni_gcm_session *)
 			((struct rte_cryptodev_sym_session *)_sess)->_private;
 
-		if (unlikely(aesni_gcm_set_session_parameters(sess,
+		if (unlikely(aesni_gcm_set_session_parameters(qp->ops, sess,
 				op->xform) != 0)) {
 			rte_mempool_put(qp->sess_mp, _sess);
 			sess = NULL;
@@ -178,7 +153,7 @@ aesni_gcm_get_session(struct aesni_gcm_qp *qp, struct rte_crypto_sym_op *op)
  *
  */
 static int
-process_gcm_crypto_op(struct rte_crypto_sym_op *op,
+process_gcm_crypto_op(struct aesni_gcm_qp *qp, struct rte_crypto_sym_op *op,
 		struct aesni_gcm_session *session)
 {
 	uint8_t *src, *dst;
@@ -242,12 +217,12 @@ process_gcm_crypto_op(struct rte_crypto_sym_op *op,
 
 	if (session->op == AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION) {
 
-		aesni_gcm_enc[session->key].init(&session->gdata,
+		qp->ops[session->key].init(&session->gdata,
 				op->cipher.iv.data,
 				op->auth.aad.data,
 				(uint64_t)op->auth.aad.length);
 
-		aesni_gcm_enc[session->key].update(&session->gdata, dst, src,
+		qp->ops[session->key].update_enc(&session->gdata, dst, src,
 				(uint64_t)part_len);
 		total_len = op->cipher.data.length - part_len;
 
@@ -261,13 +236,13 @@ process_gcm_crypto_op(struct rte_crypto_sym_op *op,
 			part_len = (m_src->data_len < total_len) ?
 					m_src->data_len : total_len;
 
-			aesni_gcm_enc[session->key].update(&session->gdata,
+			qp->ops[session->key].update_enc(&session->gdata,
 					dst, src,
 					(uint64_t)part_len);
 			total_len -= part_len;
 		}
 
-		aesni_gcm_enc[session->key].finalize(&session->gdata,
+		qp->ops[session->key].finalize(&session->gdata,
 				op->auth.digest.data,
 				(uint64_t)op->auth.digest.length);
 	} else { /* session->op == AESNI_GCM_OP_AUTHENTICATED_DECRYPTION */
@@ -280,12 +255,12 @@ process_gcm_crypto_op(struct rte_crypto_sym_op *op,
 			return -1;
 		}
 
-		aesni_gcm_dec[session->key].init(&session->gdata,
+		qp->ops[session->key].init(&session->gdata,
 				op->cipher.iv.data,
 				op->auth.aad.data,
 				(uint64_t)op->auth.aad.length);
 
-		aesni_gcm_dec[session->key].update(&session->gdata, dst, src,
+		qp->ops[session->key].update_dec(&session->gdata, dst, src,
 				(uint64_t)part_len);
 		total_len = op->cipher.data.length - part_len;
 
@@ -299,13 +274,13 @@ process_gcm_crypto_op(struct rte_crypto_sym_op *op,
 			part_len = (m_src->data_len < total_len) ?
 					m_src->data_len : total_len;
 
-			aesni_gcm_dec[session->key].update(&session->gdata,
+			qp->ops[session->key].update_dec(&session->gdata,
 					dst, src,
 					(uint64_t)part_len);
 			total_len -= part_len;
 		}
 
-		aesni_gcm_dec[session->key].finalize(&session->gdata,
+		qp->ops[session->key].finalize(&session->gdata,
 				auth_tag,
 				(uint64_t)op->auth.digest.length);
 	}
@@ -399,7 +374,7 @@ aesni_gcm_pmd_dequeue_burst(void *queue_pair,
 			break;
 		}
 
-		retval = process_gcm_crypto_op(ops[i]->sym, sess);
+		retval = process_gcm_crypto_op(qp, ops[i]->sym, sess);
 		if (retval < 0) {
 			ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
 			qp->qp_stats.dequeue_err_count++;
@@ -438,6 +413,7 @@ aesni_gcm_create(const char *name,
 {
 	struct rte_cryptodev *dev;
 	struct aesni_gcm_private *internals;
+	enum aesni_gcm_vector_mode vector_mode;
 
 	if (init_params->name[0] == '\0')
 		snprintf(init_params->name, sizeof(init_params->name),
@@ -449,6 +425,18 @@ aesni_gcm_create(const char *name,
 		return -EFAULT;
 	}
 
+	/* Check CPU for supported vector instruction set */
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2))
+		vector_mode = RTE_AESNI_GCM_AVX2;
+	else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX))
+		vector_mode = RTE_AESNI_GCM_AVX;
+	else if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_1))
+		vector_mode = RTE_AESNI_GCM_SSE;
+	else {
+		GCM_LOG_ERR("Vector instructions are not supported by CPU");
+		return -EFAULT;
+	}
+
 	dev = rte_cryptodev_pmd_virtual_dev_init(init_params->name,
 			sizeof(struct aesni_gcm_private), init_params->socket_id);
 	if (dev == NULL) {
@@ -468,8 +456,24 @@ aesni_gcm_create(const char *name,
 			RTE_CRYPTODEV_FF_CPU_AESNI |
 			RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER;
 
+	switch (vector_mode) {
+	case RTE_AESNI_GCM_SSE:
+		dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_SSE;
+		break;
+	case RTE_AESNI_GCM_AVX:
+		dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_AVX;
+		break;
+	case RTE_AESNI_GCM_AVX2:
+		dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_AVX2;
+		break;
+	default:
+		break;
+	}
+
 	internals = dev->data->dev_private;
 
+	internals->vector_mode = vector_mode;
+
 	internals->max_nb_queue_pairs = init_params->max_nb_queue_pairs;
 	internals->max_nb_sessions = init_params->max_nb_sessions;
 
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
index 1fc047b..4e9129f 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
@@ -1,7 +1,7 @@
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
  *   modification, are permitted provided that the following conditions
@@ -49,7 +49,7 @@ static const struct rte_cryptodev_capabilities aesni_gcm_pmd_capabilities[] = {
 				.key_size = {
 					.min = 16,
 					.max = 32,
-					.increment = 16
+					.increment = 8
 				},
 				.digest_size = {
 					.min = 8,
@@ -74,7 +74,7 @@ static const struct rte_cryptodev_capabilities aesni_gcm_pmd_capabilities[] = {
 				.key_size = {
 					.min = 16,
 					.max = 32,
-					.increment = 16
+					.increment = 8
 				},
 				.digest_size = {
 					.min = 8,
@@ -99,7 +99,7 @@ static const struct rte_cryptodev_capabilities aesni_gcm_pmd_capabilities[] = {
 				.key_size = {
 					.min = 16,
 					.max = 32,
-					.increment = 16
+					.increment = 8
 				},
 				.iv_size = {
 					.min = 12,
@@ -247,6 +247,7 @@ aesni_gcm_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 		 int socket_id)
 {
 	struct aesni_gcm_qp *qp = NULL;
+	struct aesni_gcm_private *internals = dev->data->dev_private;
 
 	/* Free memory prior to re-allocation if needed. */
 	if (dev->data->queue_pairs[qp_id] != NULL)
@@ -264,6 +265,8 @@ aesni_gcm_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 	if (aesni_gcm_pmd_qp_set_unique_name(dev, qp))
 		goto qp_setup_cleanup;
 
+	qp->ops = (const struct aesni_gcm_ops *)gcm_ops[internals->vector_mode];
+
 	qp->processed_pkts = aesni_gcm_pmd_qp_create_processed_pkts_ring(qp,
 			qp_conf->nb_descriptors, socket_id);
 	if (qp->processed_pkts == NULL)
@@ -314,15 +317,18 @@ aesni_gcm_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
 
 /** Configure a aesni gcm session from a crypto xform chain */
 static void *
-aesni_gcm_pmd_session_configure(struct rte_cryptodev *dev __rte_unused,
+aesni_gcm_pmd_session_configure(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,	void *sess)
 {
+	struct aesni_gcm_private *internals = dev->data->dev_private;
+
 	if (unlikely(sess == NULL)) {
 		GCM_LOG_ERR("invalid session struct");
 		return NULL;
 	}
 
-	if (aesni_gcm_set_session_parameters(sess, xform) != 0) {
+	if (aesni_gcm_set_session_parameters(gcm_ops[internals->vector_mode],
+			sess, xform) != 0) {
 		GCM_LOG_ERR("failed configure session parameters");
 		return NULL;
 	}
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
index 0496b44..9a81c88 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
@@ -1,7 +1,7 @@
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
  *   modification, are permitted provided that the following conditions
@@ -58,6 +58,8 @@
 
 /** private data structure for each virtual AESNI GCM device */
 struct aesni_gcm_private {
+	enum aesni_gcm_vector_mode vector_mode;
+	/**< Vector mode */
 	unsigned max_nb_queue_pairs;
 	/**< Max number of queue pairs supported by device */
 	unsigned max_nb_sessions;
@@ -69,6 +71,8 @@ struct aesni_gcm_qp {
 	/**< Queue Pair Identifier */
 	char name[RTE_CRYPTODEV_NAME_LEN];
 	/**< Unique Queue Pair Name */
+	const struct aesni_gcm_ops *ops;
+	/**< Architecture dependent function pointer table of the gcm APIs */
 	struct rte_ring *processed_pkts;
 	/**< Ring for placing process packets */
 	struct rte_mempool *sess_mp;
@@ -83,11 +87,6 @@ enum aesni_gcm_operation {
 	AESNI_GCM_OP_AUTHENTICATED_DECRYPTION
 };
 
-enum aesni_gcm_key {
-	AESNI_GCM_KEY_128,
-	AESNI_GCM_KEY_256
-};
-
 /** AESNI GCM private session structure */
 struct aesni_gcm_session {
 	enum aesni_gcm_operation op;
@@ -101,6 +100,7 @@ struct aesni_gcm_session {
 
 /**
  * Setup GCM session parameters
+ * @param	ops	gcm ops function pointer table
  * @param	sess	aesni gcm session structure
  * @param	xform	crypto transform chain
  *
@@ -109,7 +109,8 @@ struct aesni_gcm_session {
  * - On failure returns error code < 0
  */
 extern int
-aesni_gcm_set_session_parameters(struct aesni_gcm_session *sess,
+aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *ops,
+		struct aesni_gcm_session *sess,
 		const struct rte_crypto_sym_xform *xform);
 
 
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index bcaf1b3..29a4abb 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -147,7 +147,8 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT)    += -lrte_pmd_xenvirt -lxenstore
 ifeq ($(CONFIG_RTE_LIBRTE_CRYPTODEV),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)    += -lrte_pmd_aesni_mb
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)    += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB
-_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_GCM)   += -lrte_pmd_aesni_gcm -lisal_crypto
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_GCM)   += -lrte_pmd_aesni_gcm -lcrypto
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_GCM)   += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_OPENSSL)     += -lrte_pmd_openssl -lcrypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += -lrte_pmd_null_crypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)         += -lrte_pmd_qat -lcrypto
-- 
2.7.4


Thread overview: 17+ messages
2017-05-26 10:16 [PATCH 0/2] AESNI GCM PMD: Migration from ISA-L to Multi-buffer library Pablo de Lara
2017-05-26 10:16 ` Pablo de Lara [this message]
2017-05-26 10:16 ` [PATCH 2/2] test/crypto: add AES GCM 192 tests Pablo de Lara
2017-06-28 11:32 ` [PATCH v2 0/4] AESNI GCM PMD: Migration from ISA-L to Multi-buffer library Pablo de Lara
2017-06-28 11:32   ` [PATCH v2 1/4] crypto/aesni_gcm: migrate " Pablo de Lara
2017-06-28 11:32   ` [PATCH v2 2/4] test/crypto: rename some tests Pablo de Lara
2017-06-28 11:32   ` [PATCH v2 3/4] test/crypto: add AES GCM 192 tests Pablo de Lara
2017-06-28 11:32   ` [PATCH v2 4/4] test/crypto: extend AES-GCM 192/256 to other PMDs Pablo de Lara
2017-06-30 14:16   ` [PATCH v2 0/4] AESNI GCM PMD: Migration from ISA-L to Multi-buffer library Sergio Gonzalez Monroy
2017-07-04 10:11     ` De Lara Guarch, Pablo
2017-07-04  0:12   ` [PATCH v3 " Pablo de Lara
2017-07-04  0:12     ` [PATCH v3 1/4] crypto/aesni_gcm: migrate " Pablo de Lara
2017-07-04  9:43       ` Declan Doherty
2017-07-04 10:17         ` De Lara Guarch, Pablo
2017-07-04  0:12     ` [PATCH v3 2/4] test/crypto: rename some tests Pablo de Lara
2017-07-04  0:12     ` [PATCH v3 3/4] test/crypto: add AES GCM 192 tests Pablo de Lara
2017-07-04  0:12     ` [PATCH v3 4/4] test/crypto: extend AES-GCM 192/256 to other PMDs Pablo de Lara
