From: Michal Jastrzebski <michalx.k.jastrzebski@intel.com>
To: dev@dpdk.org
Cc: pablo.de.lara.guarch@intel.com,
	Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>,
	Michal Kobylinski <michalx.kobylinski@intel.com>,
	Tomasz Kulasek <tomaszx.kulasek@intel.com>,
	Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
Subject: [PATCH v2 1/4] libcrypto_pmd: initial implementation of SW crypto device
Date: Mon, 19 Sep 2016 10:59:39 +0200
Message-ID: <1474275582-6108-2-git-send-email-michalx.k.jastrzebski@intel.com>
In-Reply-To: <1474275582-6108-1-git-send-email-michalx.k.jastrzebski@intel.com>

From: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>

This code provides the initial implementation of the libcrypto
poll mode driver. All cryptography operations use the OpenSSL
library crypto API. Each algorithm uses the EVP_ interface from
the OpenSSL API, which is recommended by the OpenSSL maintainers.
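
As an illustration only, a minimal sketch of the EVP_ cipher flow that this
PMD wraps (not the driver code itself; error handling omitted, AES-128-CBC
chosen arbitrarily):

    #include <stdint.h>
    #include <openssl/evp.h>

    /* Encrypt srclen bytes from src into dst (dst needs room for padding). */
    static int
    evp_cipher_sketch(const uint8_t *key, const uint8_t *iv,
            const uint8_t *src, int srclen, uint8_t *dst)
    {
            EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
            int outlen = 0, finlen = 0;

            EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv);
            EVP_EncryptUpdate(ctx, dst, &outlen, src, srclen);
            EVP_EncryptFinal_ex(ctx, dst + outlen, &finlen);
            EVP_CIPHER_CTX_free(ctx);

            return outlen + finlen;
    }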

LibCrypto PMD has support for:

Supported cipher algorithms:
RTE_CRYPTO_CIPHER_3DES_CBC
RTE_CRYPTO_CIPHER_AES_CBC
RTE_CRYPTO_CIPHER_AES_CTR
RTE_CRYPTO_CIPHER_3DES_CTR
RTE_CRYPTO_CIPHER_AES_GCM

Supported authentication algorithms:
RTE_CRYPTO_AUTH_AES_GMAC
RTE_CRYPTO_AUTH_MD5
RTE_CRYPTO_AUTH_SHA1
RTE_CRYPTO_AUTH_SHA224
RTE_CRYPTO_AUTH_SHA256
RTE_CRYPTO_AUTH_SHA384
RTE_CRYPTO_AUTH_SHA512
RTE_CRYPTO_AUTH_MD5_HMAC
RTE_CRYPTO_AUTH_SHA1_HMAC
RTE_CRYPTO_AUTH_SHA224_HMAC
RTE_CRYPTO_AUTH_SHA256_HMAC
RTE_CRYPTO_AUTH_SHA384_HMAC
RTE_CRYPTO_AUTH_SHA512_HMAC

Installation
------------
To compile the libcrypto PMD, it has to be enabled in the config/common_base
file and the appropriate OpenSSL packages have to be installed in the build
environment.

Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Signed-off-by: Michal Kobylinski <michalx.kobylinski@intel.com>
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
---
v2:
- add GCM crypto cipher and authentication algorithm
- rework GMAC crypto authentication algorithm
---
 MAINTAINERS                                        |    4 +
 config/common_base                                 |    6 +
 doc/guides/cryptodevs/index.rst                    |    1 +
 doc/guides/cryptodevs/libcrypto.rst                |  113 +++
 doc/guides/rel_notes/release_16_11.rst             |    5 +-
 drivers/crypto/Makefile                            |    1 +
 drivers/crypto/libcrypto/Makefile                  |   60 ++
 drivers/crypto/libcrypto/rte_libcrypto_pmd.c       | 1045 ++++++++++++++++++++
 drivers/crypto/libcrypto/rte_libcrypto_pmd_ops.c   |  708 +++++++++++++
 .../crypto/libcrypto/rte_libcrypto_pmd_private.h   |  174 ++++
 .../crypto/libcrypto/rte_pmd_libcrypto_version.map |    3 +
 mk/rte.app.mk                                      |    3 +-
 12 files changed, 2121 insertions(+), 2 deletions(-)
 create mode 100644 doc/guides/cryptodevs/libcrypto.rst
 create mode 100644 drivers/crypto/libcrypto/Makefile
 create mode 100644 drivers/crypto/libcrypto/rte_libcrypto_pmd.c
 create mode 100644 drivers/crypto/libcrypto/rte_libcrypto_pmd_ops.c
 create mode 100644 drivers/crypto/libcrypto/rte_libcrypto_pmd_private.h
 create mode 100644 drivers/crypto/libcrypto/rte_pmd_libcrypto_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 0e78941..6bd0889 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -432,6 +432,10 @@ M: Declan Doherty <declan.doherty@intel.com>
 F: drivers/crypto/null/
 F: doc/guides/cryptodevs/null.rst
 
+LibCrypto Crypto PMD
+M: Declan Doherty <declan.doherty@intel.com>
+F: drivers/crypto/libcrypto/
+F: doc/guides/cryptodevs/libcrypto.rst
 
 Packet processing
 -----------------
diff --git a/config/common_base b/config/common_base
index 7830535..42d28dd 100644
--- a/config/common_base
+++ b/config/common_base
@@ -376,6 +376,12 @@ CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n
 CONFIG_RTE_LIBRTE_PMD_AESNI_MB_DEBUG=n
 
 #
+# Compile PMD for software backed (OpenSSL libcrypto) crypto device
+#
+CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO=n
+CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO_DEBUG=n
+
+#
 # Compile PMD for AESNI GCM device
 #
 CONFIG_RTE_LIBRTE_PMD_AESNI_GCM=n
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 9616de1..adb6e98c 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -39,6 +39,7 @@ Crypto Device Drivers
     aesni_mb
     aesni_gcm
     kasumi
+    libcrypto
     null
     snow3g
     qat
diff --git a/doc/guides/cryptodevs/libcrypto.rst b/doc/guides/cryptodevs/libcrypto.rst
new file mode 100644
index 0000000..f9daa05
--- /dev/null
+++ b/doc/guides/cryptodevs/libcrypto.rst
@@ -0,0 +1,113 @@
+..  BSD LICENSE
+    Copyright(c) 2016 Intel Corporation. All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+LibCrypto Crypto Poll Mode Driver
+============================================
+
+
+This code provides the initial implementation of the libcrypto poll mode
+driver. All cryptography operations use the OpenSSL library crypto API.
+Each algorithm uses the EVP_ interface from the OpenSSL API, which is
+recommended by the OpenSSL maintainers.
+
+For more details about the OpenSSL library, please visit the OpenSSL webpage:
+https://www.openssl.org/
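+
+As an illustration only, a minimal sketch of the EVP_ HMAC flow used for the
+authentication algorithms (this is not the driver code itself; error handling
+is omitted and HMAC-SHA1 is chosen arbitrarily):
+
+.. code-block:: c
+
+    #include <stdint.h>
+    #include <openssl/evp.h>
+
+    /* Compute an HMAC-SHA1 digest over src using the EVP_ interface. */
+    static int
+    evp_hmac_sketch(const uint8_t *key, int keylen,
+            const uint8_t *src, size_t srclen, uint8_t *digest)
+    {
+        EVP_PKEY *pkey = EVP_PKEY_new_mac_key(EVP_PKEY_HMAC, NULL, key, keylen);
+        EVP_MD_CTX *ctx = EVP_MD_CTX_create();
+        size_t dlen = 0;
+
+        EVP_DigestSignInit(ctx, NULL, EVP_sha1(), NULL, pkey);
+        EVP_DigestSignUpdate(ctx, src, srclen);
+        EVP_DigestSignFinal(ctx, digest, &dlen);
+
+        EVP_MD_CTX_destroy(ctx);
+        EVP_PKEY_free(pkey);
+        return (int)dlen;
+    }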
+
+Features
+--------
+
+The LibCrypto PMD has support for:
+
+Supported cipher algorithms:
+
+* ``RTE_CRYPTO_CIPHER_3DES_CBC``
+* ``RTE_CRYPTO_CIPHER_AES_CBC``
+* ``RTE_CRYPTO_CIPHER_AES_CTR``
+* ``RTE_CRYPTO_CIPHER_3DES_CTR``
+* ``RTE_CRYPTO_CIPHER_AES_GCM``
+
+Supported authentication algorithms:
+
+* ``RTE_CRYPTO_AUTH_AES_GMAC``
+* ``RTE_CRYPTO_AUTH_MD5``
+* ``RTE_CRYPTO_AUTH_SHA1``
+* ``RTE_CRYPTO_AUTH_SHA224``
+* ``RTE_CRYPTO_AUTH_SHA256``
+* ``RTE_CRYPTO_AUTH_SHA384``
+* ``RTE_CRYPTO_AUTH_SHA512``
+* ``RTE_CRYPTO_AUTH_MD5_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
+
+
+Installation
+------------
+To compile the libcrypto PMD, it has to be enabled in the config/common_base
+file and the appropriate OpenSSL packages have to be installed in the build
+environment.
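+
+A minimal sketch of the required change, using the options that this patch
+adds to config/common_base::
+
+    CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO=y
+    CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO_DEBUG=n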
+
+The newest supported OpenSSL library version is:
+
+* 1.0.2h-fips  3 May 2016
+
+Older versions that were also verified:
+
+* 1.0.1f 6 Jan 2014
+* 1.0.1 14 Mar 2012
+
+For Ubuntu 14.04 LTS these packages have to be installed in the build system::
+
+    sudo apt-get install openssl
+    sudo apt-get install libc6-dev-i386   # for the i686-native-linuxapp-gcc target
+
+This code was also verified on Fedora 24.
+This code was NOT yet verified on FreeBSD.
+
+Initialization
+--------------
+
+The app/test application can be used to check how to use this PMD and to
+verify crypto processing.
+
+The functional test name is cryptodev_libcrypto_autotest.
+For performance testing, cryptodev_libcrypto_perftest can be used.
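+
+For example, the functional test can be run as follows (a sketch that assumes
+the default ``build`` output directory; the test name is entered at the test
+application prompt)::
+
+    sudo ./build/app/test -c 0x3 -n 4
+    RTE>> cryptodev_libcrypto_autotest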
+
+To verify real traffic, the l2fwd-crypto example can be used with this
+command::
+
+    sudo ./build/l2fwd-crypto -c 0x3 -n 4 --vdev "cryptodev_libcrypto_pmd"
+    --vdev "cryptodev_libcrypto_pmd" -- -p 0x3 --chain CIPHER_HASH
+    --cipher_op ENCRYPT --cipher_algo AES_CBC
+    --cipher_key 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f
+    --iv 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:ff
+    --auth_op GENERATE --auth_algo SHA1_HMAC
+    --auth_key 11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11
+    :11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11
+    :11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11
+
+Limitations
+-----------
+* Maximum number of sessions is 2048.
+* Chained mbufs are not supported.
+* Hash only is not supported for GCM and GMAC.
+* Cipher only is not supported for GCM and GMAC.
diff --git a/doc/guides/rel_notes/release_16_11.rst b/doc/guides/rel_notes/release_16_11.rst
index 1dd0e6a..fdd3e4f 100644
--- a/doc/guides/rel_notes/release_16_11.rst
+++ b/doc/guides/rel_notes/release_16_11.rst
@@ -34,7 +34,10 @@ New Features
 
      Refer to the previous release notes for examples.
 
-     This section is a comment. Make sure to start the actual text at the margin.
+* **Added libcrypto PMD.**
+
+  A new crypto PMD has been added, which provides several ciphering and
+  hashing algorithms. All cryptography operations use the OpenSSL library
+  crypto API.
 
 * ** Added support of C3xxx Device in QAT PMD.**
   Support for Device c3xxx has been enabled in QAT PMD.
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index dc4ef7f..11b0863 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -33,6 +33,7 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_GCM) += aesni_gcm
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB) += aesni_mb
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO) += libcrypto
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_QAT) += qat
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G) += snow3g
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI) += kasumi
diff --git a/drivers/crypto/libcrypto/Makefile b/drivers/crypto/libcrypto/Makefile
new file mode 100644
index 0000000..c5f8cf2
--- /dev/null
+++ b/drivers/crypto/libcrypto/Makefile
@@ -0,0 +1,60 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2016 Intel Corporation. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_pmd_libcrypto.a
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library version
+LIBABIVER := 1
+
+# versioning export map
+EXPORT_MAP := rte_pmd_libcrypto_version.map
+
+# external library dependencies
+LDLIBS += -lcrypto
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO) += rte_libcrypto_pmd.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO) += rte_libcrypto_pmd_ops.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO) += lib/librte_mempool
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO) += lib/librte_ring
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/libcrypto/rte_libcrypto_pmd.c b/drivers/crypto/libcrypto/rte_libcrypto_pmd.c
new file mode 100644
index 0000000..0c7a8bd
--- /dev/null
+++ b/drivers/crypto/libcrypto/rte_libcrypto_pmd.c
@@ -0,0 +1,1045 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_cpuflags.h>
+
+#include <openssl/evp.h>
+
+#include "rte_libcrypto_pmd_private.h"
+
+static int cryptodev_libcrypto_uninit(const char *name);
+
+/*----------------------------------------------------------------------------*/
+
+/**
+ * Global static parameter used to create a unique name for each
+ * LIBCRYPTO crypto device.
+ */
+static unsigned int unique_name_id;
+
+static inline int
+create_unique_device_name(char *name, size_t size)
+{
+	int ret;
+
+	if (name == NULL)
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%s_%u", RTE_STR(CRYPTODEV_NAME_LIBCRYPTO_PMD),
+			unique_name_id++);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+/**
+ * Increment counter by 1.
+ * The counter is treated as a 64-bit big-endian value.
+ */
+static void
+ctr_inc(uint8_t *ctr)
+{
+	uint64_t *ctr64 = (uint64_t *)ctr;
+
+	*ctr64 = __builtin_bswap64(*ctr64);
+	(*ctr64)++;
+	*ctr64 = __builtin_bswap64(*ctr64);
+}
+
+/*
+ *------------------------------------------------------------------------------
+ * Session Prepare
+ *------------------------------------------------------------------------------
+ */
+
+/** Get xform chain order */
+static enum libcrypto_chain_order
+libcrypto_get_chain_order(const struct rte_crypto_sym_xform *xform)
+{
+	enum libcrypto_chain_order res = LIBCRYPTO_CHAIN_NOT_SUPPORTED;
+
+	if (xform != NULL) {
+		if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+			if (xform->next == NULL)
+				res =  LIBCRYPTO_CHAIN_ONLY_AUTH;
+			else if (xform->next->type ==
+					RTE_CRYPTO_SYM_XFORM_CIPHER)
+				res =  LIBCRYPTO_CHAIN_AUTH_CIPHER;
+		}
+		if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+			if (xform->next == NULL)
+				res =  LIBCRYPTO_CHAIN_ONLY_CIPHER;
+			else if (xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH)
+				res =  LIBCRYPTO_CHAIN_CIPHER_AUTH;
+		}
+	}
+
+	return res;
+}
+
+/** Get session cipher key from input cipher key */
+static void
+get_cipher_key(uint8_t *input_key, int keylen, uint8_t *session_key)
+{
+	memcpy(session_key, input_key, keylen);
+}
+
+/** Derive the standard 24-byte EDE key from the input key */
+static int
+get_cipher_key_ede(uint8_t *key, int keylen, uint8_t *key_ede)
+{
+	int res = 0;
+
+	/* Initialize keys - 24 bytes: [key1-key2-key3] */
+	switch (keylen) {
+	case 24:
+		memcpy(key_ede, key, 24);
+		break;
+	case 16:
+		/* K3 = K1 */
+		memcpy(key_ede, key, 16);
+		memcpy(key_ede + 16, key, 8);
+		break;
+	case 8:
+		/* K1 = K2 = K3 (DES compatibility) */
+		memcpy(key_ede, key, 8);
+		memcpy(key_ede + 8, key, 8);
+		memcpy(key_ede + 16, key, 8);
+		break;
+	default:
+		LIBCRYPTO_LOG_ERR("Unsupported key size");
+		res = -EINVAL;
+	}
+
+	return res;
+}
+
+/** Get the appropriate libcrypto function for the input cipher algorithm */
+static int
+get_cipher_algo(enum rte_crypto_cipher_algorithm sess_algo, size_t keylen,
+		const EVP_CIPHER **algo)
+{
+	int res = 0;
+
+	if (algo != NULL) {
+		switch (sess_algo) {
+		case RTE_CRYPTO_CIPHER_3DES_CBC:
+			switch (keylen) {
+			case 16:
+				*algo = EVP_des_ede_cbc();
+				break;
+			case 24:
+				*algo = EVP_des_ede3_cbc();
+				break;
+			default:
+				res = -EINVAL;
+			}
+			break;
+		case RTE_CRYPTO_CIPHER_3DES_CTR:
+			break;
+		case RTE_CRYPTO_CIPHER_AES_CBC:
+			switch (keylen) {
+			case 16:
+				*algo = EVP_aes_128_cbc();
+				break;
+			case 24:
+				*algo = EVP_aes_192_cbc();
+				break;
+			case 32:
+				*algo = EVP_aes_256_cbc();
+				break;
+			default:
+				res = -EINVAL;
+			}
+			break;
+		case RTE_CRYPTO_CIPHER_AES_CTR:
+			switch (keylen) {
+			case 16:
+				*algo = EVP_aes_128_ctr();
+				break;
+			case 24:
+				*algo = EVP_aes_192_ctr();
+				break;
+			case 32:
+				*algo = EVP_aes_256_ctr();
+				break;
+			default:
+				res = -EINVAL;
+			}
+			break;
+		case RTE_CRYPTO_CIPHER_AES_GCM:
+			switch (keylen) {
+			case 16:
+				*algo = EVP_aes_128_gcm();
+				break;
+			case 24:
+				*algo = EVP_aes_192_gcm();
+				break;
+			case 32:
+				*algo = EVP_aes_256_gcm();
+				break;
+			default:
+				res = -EINVAL;
+			}
+			break;
+		default:
+			res = -EINVAL;
+			break;
+		}
+	} else {
+		res = -EINVAL;
+	}
+
+	return res;
+}
+
+/** Get the appropriate libcrypto function for the input auth algorithm */
+static int
+get_auth_algo(enum rte_crypto_auth_algorithm sessalgo,
+		const EVP_MD **algo)
+{
+	int res = 0;
+
+	if (algo != NULL) {
+		switch (sessalgo) {
+		case RTE_CRYPTO_AUTH_MD5:
+		case RTE_CRYPTO_AUTH_MD5_HMAC:
+			*algo = EVP_md5();
+			break;
+		case RTE_CRYPTO_AUTH_SHA1:
+		case RTE_CRYPTO_AUTH_SHA1_HMAC:
+			*algo = EVP_sha1();
+			break;
+		case RTE_CRYPTO_AUTH_SHA224:
+		case RTE_CRYPTO_AUTH_SHA224_HMAC:
+			*algo = EVP_sha224();
+			break;
+		case RTE_CRYPTO_AUTH_SHA256:
+		case RTE_CRYPTO_AUTH_SHA256_HMAC:
+			*algo = EVP_sha256();
+			break;
+		case RTE_CRYPTO_AUTH_SHA384:
+		case RTE_CRYPTO_AUTH_SHA384_HMAC:
+			*algo = EVP_sha384();
+			break;
+		case RTE_CRYPTO_AUTH_SHA512:
+		case RTE_CRYPTO_AUTH_SHA512_HMAC:
+			*algo = EVP_sha512();
+			break;
+		default:
+			res = -EINVAL;
+			break;
+		}
+	} else {
+		res = -EINVAL;
+	}
+
+	return res;
+}
+
+/** Set session cipher parameters */
+static int
+libcrypto_set_session_cipher_parameters(struct libcrypto_session *sess,
+		const struct rte_crypto_sym_xform *xform)
+{
+	/* Select cipher direction */
+	sess->cipher.direction = xform->cipher.op;
+	/* Select cipher key */
+	sess->cipher.key.length = xform->cipher.key.length;
+
+	/* Select cipher algo */
+	switch (xform->cipher.algo) {
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+	case RTE_CRYPTO_CIPHER_AES_GCM:
+		sess->cipher.mode = LIBCRYPTO_CIPHER_LIB;
+		sess->cipher.algo = xform->cipher.algo;
+		sess->cipher.ctx = EVP_CIPHER_CTX_new();
+
+		if (get_cipher_algo(sess->cipher.algo, sess->cipher.key.length,
+				&sess->cipher.evp_algo) != 0)
+			return -EINVAL;
+
+		get_cipher_key(xform->cipher.key.data, sess->cipher.key.length,
+			sess->cipher.key.data);
+
+		break;
+
+	case RTE_CRYPTO_CIPHER_3DES_CTR:
+		sess->cipher.mode = LIBCRYPTO_CIPHER_DES3CTR;
+		sess->cipher.ctx = EVP_CIPHER_CTX_new();
+
+		if (get_cipher_key_ede(xform->cipher.key.data,
+				sess->cipher.key.length, sess->cipher.key.data) != 0)
+			return -EINVAL;
+		break;
+
+	default:
+		sess->cipher.algo = RTE_CRYPTO_CIPHER_NULL;
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+/* Set session auth parameters */
+static int
+libcrypto_set_session_auth_parameters(struct libcrypto_session *sess,
+		const struct rte_crypto_sym_xform *xform)
+{
+	/* Select auth generate/verify */
+	sess->auth.operation = xform->auth.op;
+	sess->auth.algo = xform->auth.algo;
+
+	/* Select auth algo */
+	switch (xform->auth.algo) {
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+	case RTE_CRYPTO_AUTH_AES_GCM:
+		/* Check additional condition for AES_GMAC/GCM */
+		if (sess->cipher.algo != RTE_CRYPTO_CIPHER_AES_GCM)
+			return -EINVAL;
+		sess->chain_order = LIBCRYPTO_CHAIN_COMBINED;
+		break;
+
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_SHA512:
+		sess->auth.mode = LIBCRYPTO_AUTH_AS_AUTH;
+		if (get_auth_algo(xform->auth.algo, &sess->auth.auth.evp_algo) != 0)
+			return -EINVAL;
+		sess->auth.auth.ctx = EVP_MD_CTX_create();
+		break;
+
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		sess->auth.mode = LIBCRYPTO_AUTH_AS_HMAC;
+		sess->auth.hmac.ctx = EVP_MD_CTX_create();
+		if (get_auth_algo(xform->auth.algo, &sess->auth.hmac.evp_algo) != 0)
+			return -EINVAL;
+		sess->auth.hmac.pkey = EVP_PKEY_new_mac_key(EVP_PKEY_HMAC, NULL,
+				xform->auth.key.data, xform->auth.key.length);
+		break;
+
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+/** Parse crypto xform chain and set private session parameters */
+int
+libcrypto_set_session_parameters(struct libcrypto_session *sess,
+		const struct rte_crypto_sym_xform *xform)
+{
+	const struct rte_crypto_sym_xform *cipher_xform = NULL;
+	const struct rte_crypto_sym_xform *auth_xform = NULL;
+
+	sess->chain_order = libcrypto_get_chain_order(xform);
+	switch (sess->chain_order) {
+	case LIBCRYPTO_CHAIN_ONLY_CIPHER:
+		cipher_xform = xform;
+		break;
+	case LIBCRYPTO_CHAIN_ONLY_AUTH:
+		auth_xform = xform;
+		break;
+	case LIBCRYPTO_CHAIN_CIPHER_AUTH:
+		cipher_xform = xform;
+		auth_xform = xform->next;
+		break;
+	case LIBCRYPTO_CHAIN_AUTH_CIPHER:
+		auth_xform = xform;
+		cipher_xform = xform->next;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	/* cipher_xform must be checked before auth_xform */
+	if (cipher_xform) {
+		if (libcrypto_set_session_cipher_parameters(sess, cipher_xform)) {
+			LIBCRYPTO_LOG_ERR(
+				"Invalid/unsupported cipher parameters");
+			return -EINVAL;
+		}
+	}
+
+	if (auth_xform) {
+		if (libcrypto_set_session_auth_parameters(sess, auth_xform)) {
+			LIBCRYPTO_LOG_ERR(
+				"Invalid/unsupported auth parameters");
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
+/** Reset private session parameters */
+void
+libcrypto_reset_session(struct libcrypto_session *sess)
+{
+	EVP_CIPHER_CTX_free(sess->cipher.ctx);
+
+	switch (sess->auth.mode) {
+	case LIBCRYPTO_AUTH_AS_AUTH:
+		EVP_MD_CTX_destroy(sess->auth.auth.ctx);
+		break;
+	case LIBCRYPTO_AUTH_AS_HMAC:
+		EVP_PKEY_free(sess->auth.hmac.pkey);
+		EVP_MD_CTX_destroy(sess->auth.hmac.ctx);
+		break;
+	default:
+		break;
+	}
+}
+
+/** Provide session for operation */
+static struct libcrypto_session *
+get_session(struct libcrypto_qp *qp, struct rte_crypto_op *op)
+{
+	struct libcrypto_session *sess = NULL;
+
+	if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+		/* get existing session */
+		if (likely(op->sym->session != NULL &&
+				op->sym->session->dev_type ==
+				RTE_CRYPTODEV_LIBCRYPTO_PMD))
+			sess = (struct libcrypto_session *)
+				op->sym->session->_private;
+	} else  {
+		/* provide internal session */
+		void *_sess = NULL;
+
+		if (!rte_mempool_get(qp->sess_mp, (void **)&_sess)) {
+			sess = (struct libcrypto_session *)
+				((struct rte_cryptodev_sym_session *)_sess)
+				->_private;
+
+			if (unlikely(libcrypto_set_session_parameters(
+					sess, op->sym->xform) != 0)) {
+				rte_mempool_put(qp->sess_mp, _sess);
+				sess = NULL;
+			} else
+				op->sym->session = _sess;
+		}
+	}
+
+	if (sess == NULL)
+		op->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
+
+	return sess;
+}
+
+/*
+ *------------------------------------------------------------------------------
+ * Process Operations
+ *------------------------------------------------------------------------------
+ */
+
+/** Process standard libcrypto cipher encryption */
+static int
+process_libcrypto_cipher_encrypt(uint8_t *src, uint8_t *dst,
+		uint8_t *iv, uint8_t *key, int srclen,
+		EVP_CIPHER_CTX *ctx, const EVP_CIPHER *algo)
+{
+	int dstlen, totlen;
+
+	if (EVP_EncryptInit_ex(ctx, algo, NULL, key, iv) <= 0)
+		goto process_cipher_encrypt_err;
+
+	if (EVP_EncryptUpdate(ctx, dst, &dstlen, src, srclen) <= 0)
+		goto process_cipher_encrypt_err;
+
+	if (EVP_EncryptFinal_ex(ctx, dst + dstlen, &totlen) <= 0)
+		goto process_cipher_encrypt_err;
+
+	return 0;
+
+process_cipher_encrypt_err:
+	LIBCRYPTO_LOG_ERR("Process libcrypto cipher encrypt failed");
+	return -EINVAL;
+}
+
+/** Process standard libcrypto cipher decryption */
+static int
+process_libcrypto_cipher_decrypt(uint8_t *src, uint8_t *dst,
+		uint8_t *iv, uint8_t *key, int srclen,
+		EVP_CIPHER_CTX *ctx, const EVP_CIPHER *algo)
+{
+	int dstlen, totlen;
+
+	if (EVP_DecryptInit_ex(ctx, algo, NULL, key, iv) <= 0)
+		goto process_cipher_decrypt_err;
+
+	if (EVP_CIPHER_CTX_set_padding(ctx, 0) <= 0)
+		goto process_cipher_decrypt_err;
+
+	if (EVP_DecryptUpdate(ctx, dst, &dstlen, src, srclen) <= 0)
+		goto process_cipher_decrypt_err;
+
+	if (EVP_DecryptFinal_ex(ctx, dst + dstlen, &totlen) <= 0)
+		goto process_cipher_decrypt_err;
+
+	return 0;
+
+process_cipher_decrypt_err:
+	LIBCRYPTO_LOG_ERR("Process libcrypto cipher decrypt failed");
+	return -EINVAL;
+}
+
+/** Process 3DES-CTR encryption/decryption */
+static int
+process_libcrypto_cipher_des3ctr(uint8_t *src, uint8_t *dst,
+		uint8_t *iv, uint8_t *key, int srclen, EVP_CIPHER_CTX *ctx)
+{
+	uint8_t ebuf[8], ctr[8];
+	int unused, n;
+
+	/* 3DES encryption is used for decryption as well.
+	 * The IV is not relevant for 3DES ECB.
+	 */
+	if (EVP_EncryptInit_ex(ctx, EVP_des_ede3_ecb(), NULL, key, NULL) <= 0)
+		goto process_cipher_des3ctr_err;
+
+	memcpy(ctr, iv, 8);
+	n = 0;
+
+	while (n < srclen) {
+		if (n % 8 == 0) {
+			if (EVP_EncryptUpdate(ctx, (unsigned char *)&ebuf, &unused,
+					(const unsigned char *)&ctr, 8) <= 0)
+				goto process_cipher_des3ctr_err;
+			ctr_inc(ctr);
+		}
+		dst[n] = src[n] ^ ebuf[n % 8];
+		n++;
+	}
+
+	return 0;
+
+process_cipher_des3ctr_err:
+	LIBCRYPTO_LOG_ERR("Process libcrypto cipher des 3 ede ctr failed");
+	return -EINVAL;
+}
+
+/** Process AES-GCM authenticated encryption */
+static int
+process_libcrypto_auth_encryption_gcm(uint8_t *src, int srclen,
+		uint8_t *aad, int aadlen, uint8_t *iv, int ivlen,
+		uint8_t *key, uint8_t *dst,	uint8_t *tag,
+		EVP_CIPHER_CTX *ctx, const EVP_CIPHER *algo)
+{
+	int len = 0, unused = 0;
+	uint8_t empty[] = {};
+
+	if (EVP_EncryptInit_ex(ctx, algo, NULL, NULL, NULL) <= 0)
+		goto process_auth_encryption_gcm_err;
+
+	if (EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_SET_IVLEN, ivlen, NULL) <= 0)
+		goto process_auth_encryption_gcm_err;
+
+	if (EVP_EncryptInit_ex(ctx, NULL, NULL, key, iv) <= 0)
+		goto process_auth_encryption_gcm_err;
+
+	if (aadlen > 0) {
+		if (EVP_EncryptUpdate(ctx, NULL, &len, aad, aadlen) <= 0)
+			goto process_auth_encryption_gcm_err;
+
+		/* Workaround for an OpenSSL bug in versions older than 1.0.1f */
+		if (EVP_EncryptUpdate(ctx, empty, &unused, empty, 0) <= 0)
+			goto process_auth_encryption_gcm_err;
+	}
+
+	if (srclen > 0)
+		if (EVP_EncryptUpdate(ctx, dst, &len, src, srclen) <= 0)
+			goto process_auth_encryption_gcm_err;
+
+	if (EVP_EncryptFinal_ex(ctx, dst + len, &len) <= 0)
+		goto process_auth_encryption_gcm_err;
+
+	if (EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag) <= 0)
+		goto process_auth_encryption_gcm_err;
+
+	return 0;
+
+process_auth_encryption_gcm_err:
+	LIBCRYPTO_LOG_ERR("Process libcrypto auth encryption gcm failed");
+	return -EINVAL;
+}
+
+static int
+process_libcrypto_auth_decryption_gcm(uint8_t *src, int srclen,
+		uint8_t *aad, int aadlen, uint8_t *iv, int ivlen,
+		uint8_t *key, uint8_t *dst, uint8_t *tag,
+		EVP_CIPHER_CTX *ctx, const EVP_CIPHER *algo)
+{
+	int len = 0, unused = 0;
+	uint8_t empty[] = {};
+
+	if (EVP_DecryptInit_ex(ctx, algo, NULL, NULL, NULL) <= 0)
+		goto process_auth_decryption_gcm_err;
+
+	if (EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_SET_IVLEN, ivlen, NULL) <= 0)
+		goto process_auth_decryption_gcm_err;
+
+	if (EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_SET_TAG, 16, tag) <= 0)
+		goto process_auth_decryption_gcm_err;
+
+	if (EVP_DecryptInit_ex(ctx, NULL, NULL, key, iv) <= 0)
+		goto process_auth_decryption_gcm_err;
+
+	if (aadlen > 0) {
+		if (EVP_DecryptUpdate(ctx, NULL, &len, aad, aadlen) <= 0)
+			goto process_auth_decryption_gcm_err;
+
+		/* Workaround for an OpenSSL bug in versions older than 1.0.1f */
+		if (EVP_DecryptUpdate(ctx, empty, &unused, empty, 0) <= 0)
+			goto process_auth_decryption_gcm_err;
+	}
+
+	if (srclen > 0)
+		if (EVP_DecryptUpdate(ctx, dst, &len, src, srclen) <= 0)
+			goto process_auth_decryption_gcm_err;
+
+	if (EVP_DecryptFinal_ex(ctx, dst + len, &len) <= 0)
+		goto process_auth_decryption_gcm_err;
+
+	return 0;
+
+process_auth_decryption_gcm_err:
+	LIBCRYPTO_LOG_ERR("Process libcrypto auth decription gcm failed");
+	return -EINVAL;
+}
+
+/** Process standard libcrypto auth algorithms */
+static int
+process_libcrypto_auth(uint8_t *src, uint8_t *dst,
+		__rte_unused uint8_t *iv, __rte_unused EVP_PKEY * pkey,
+		int srclen, EVP_MD_CTX *ctx, const EVP_MD *algo)
+{
+	size_t dstlen;
+
+	if (EVP_DigestInit_ex(ctx, algo, NULL) <= 0)
+		goto process_auth_err;
+
+	if (EVP_DigestUpdate(ctx, (char *)src, srclen) <= 0)
+		goto process_auth_err;
+
+	if (EVP_DigestFinal_ex(ctx, dst, (unsigned int *)&dstlen) <= 0)
+		goto process_auth_err;
+
+	return 0;
+
+process_auth_err:
+	LIBCRYPTO_LOG_ERR("Process libcrypto auth failed");
+	return -EINVAL;
+}
+
+/** Process standard libcrypto auth algorithms with hmac */
+static int
+process_libcrypto_auth_hmac(uint8_t *src, uint8_t *dst,
+		__rte_unused uint8_t *iv, EVP_PKEY *pkey,
+		int srclen,	EVP_MD_CTX *ctx, const EVP_MD *algo)
+{
+	size_t dstlen;
+
+	if (EVP_DigestSignInit(ctx, NULL, algo, NULL, pkey) <= 0)
+		goto process_auth_err;
+
+	if (EVP_DigestSignUpdate(ctx, (char *)src, srclen) <= 0)
+		goto process_auth_err;
+
+	if (EVP_DigestSignFinal(ctx, dst, &dstlen) <= 0)
+		goto process_auth_err;
+
+	return 0;
+
+process_auth_err:
+	LIBCRYPTO_LOG_ERR("Process libcrypto auth failed");
+	return -EINVAL;
+}
+
+/*----------------------------------------------------------------------------*/
+
+/** Process auth/cipher operation */
+static int
+process_libcrypto_combined_op
+		(struct rte_crypto_op *op, struct libcrypto_session *sess,
+		struct rte_mbuf *mbuf_src, struct rte_mbuf *mbuf_dst)
+{
+	/* cipher */
+	uint8_t *src = NULL, *dst = NULL, *iv, *tag, *aad;
+	int srclen, ivlen, aadlen, status = -1;
+
+	iv = op->sym->cipher.iv.data;
+	ivlen = op->sym->cipher.iv.length;
+	aad = op->sym->auth.aad.data;
+	aadlen = op->sym->auth.aad.length;
+
+	tag = op->sym->auth.digest.data;
+	if (tag == NULL)
+		tag = rte_pktmbuf_mtod_offset(mbuf_dst, uint8_t *,
+				op->sym->cipher.data.offset +
+				op->sym->cipher.data.length);
+
+	if (sess->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC)
+		srclen = 0;
+	else {
+		srclen = op->sym->cipher.data.length;
+		src = rte_pktmbuf_mtod_offset(mbuf_src, uint8_t *,
+				op->sym->cipher.data.offset);
+		dst = rte_pktmbuf_mtod_offset(mbuf_dst, uint8_t *,
+				op->sym->cipher.data.offset);
+	}
+
+	if (sess->cipher.direction == RTE_CRYPTO_CIPHER_OP_ENCRYPT)
+		status = process_libcrypto_auth_encryption_gcm(
+				src, srclen, aad, aadlen, iv, ivlen,
+				sess->cipher.key.data, dst, tag,
+				sess->cipher.ctx, sess->cipher.evp_algo);
+	else
+		status = process_libcrypto_auth_decryption_gcm(
+				src, srclen, aad, aadlen, iv, ivlen,
+				sess->cipher.key.data, dst, tag,
+				sess->cipher.ctx, sess->cipher.evp_algo);
+
+	if (status == 0)
+		op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+	else
+		op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+	return status;
+}
+
+/** Process cipher operation */
+static int
+process_libcrypto_cipher_op
+		(struct rte_crypto_op *op, struct libcrypto_session *sess,
+		struct rte_mbuf *mbuf_src, struct rte_mbuf *mbuf_dst)
+{
+	uint8_t *src, *dst, *iv;
+	int srclen, status;
+
+	srclen = op->sym->cipher.data.length;
+	src = rte_pktmbuf_mtod_offset(mbuf_src, uint8_t *,
+			op->sym->cipher.data.offset);
+	dst = rte_pktmbuf_mtod_offset(mbuf_dst, uint8_t *,
+			op->sym->cipher.data.offset);
+
+	iv = op->sym->cipher.iv.data;
+
+	if (sess->cipher.mode == LIBCRYPTO_CIPHER_LIB)
+		if (sess->cipher.direction == RTE_CRYPTO_CIPHER_OP_ENCRYPT)
+			status = process_libcrypto_cipher_encrypt(src, dst, iv,
+					sess->cipher.key.data, srclen,
+					sess->cipher.ctx, sess->cipher.evp_algo);
+		else
+			status = process_libcrypto_cipher_decrypt(src, dst, iv,
+					sess->cipher.key.data, srclen,
+					sess->cipher.ctx, sess->cipher.evp_algo);
+	else
+		status = process_libcrypto_cipher_des3ctr(src, dst, iv,
+				sess->cipher.key.data, srclen, sess->cipher.ctx);
+
+	if (status == 0)
+		op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+	else
+		op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+	return status;
+}
+
+/** Process auth operation */
+static int
+process_libcrypto_auth_op
+		(struct rte_crypto_op *op, struct libcrypto_session *sess,
+		struct rte_mbuf *mbuf_src, struct rte_mbuf *mbuf_dst)
+{
+	uint8_t *src, *dst;
+	int srclen, status = -1;
+
+	srclen = op->sym->auth.data.length;
+	src = rte_pktmbuf_mtod_offset(mbuf_src, uint8_t *,
+			op->sym->auth.data.offset);
+	dst = op->sym->auth.digest.data;
+	if (dst == NULL) {
+		if (sess->auth.operation == RTE_CRYPTO_AUTH_OP_GENERATE)
+			dst = rte_pktmbuf_mtod_offset(mbuf_dst, uint8_t *,
+					op->sym->auth.data.offset +
+					op->sym->auth.data.length);
+		else
+			dst = (uint8_t *)rte_pktmbuf_append(mbuf_src,
+					op->sym->auth.digest.length);
+	}
+
+	switch (sess->auth.mode) {
+	case LIBCRYPTO_AUTH_AS_AUTH:
+		status = process_libcrypto_auth(src, dst,
+				NULL, NULL,	srclen,
+				sess->auth.auth.ctx, sess->auth.auth.evp_algo);
+		break;
+	case LIBCRYPTO_AUTH_AS_HMAC:
+		status = process_libcrypto_auth_hmac(src, dst,
+				NULL, sess->auth.hmac.pkey, srclen,
+				sess->auth.hmac.ctx, sess->auth.hmac.evp_algo);
+		break;
+	default:
+		break;
+	}
+
+	if (status == 0) {
+		op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+		if (sess->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) {
+			if (memcmp(dst, op->sym->auth.digest.data,
+					op->sym->auth.digest.length) != 0) {
+				op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+				status = -EINVAL;
+			}
+		}
+	} else
+		op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+	return status;
+}
+
+/** Process crypto operation for mbuf */
+static int
+process_op(const struct libcrypto_qp *qp, struct rte_crypto_op *op,
+		struct libcrypto_session *sess)
+{
+	struct rte_mbuf *msrc, *mdst;
+	int status;
+
+	msrc = op->sym->m_src;
+	mdst = op->sym->m_dst ? op->sym->m_dst : op->sym->m_src;
+
+	switch (sess->chain_order) {
+	case LIBCRYPTO_CHAIN_ONLY_CIPHER:
+		status = process_libcrypto_cipher_op(op, sess, msrc, mdst);
+		break;
+	case LIBCRYPTO_CHAIN_ONLY_AUTH:
+		status = process_libcrypto_auth_op(op, sess, msrc, mdst);
+		break;
+	case LIBCRYPTO_CHAIN_CIPHER_AUTH:
+		status = process_libcrypto_cipher_op(op, sess, msrc, mdst);
+		if (status == 0)
+			status = process_libcrypto_auth_op(op, sess, mdst, mdst);
+		break;
+	case LIBCRYPTO_CHAIN_AUTH_CIPHER:
+		status = process_libcrypto_auth_op(op, sess, msrc, mdst);
+		if (status == 0)
+			status = process_libcrypto_cipher_op(op, sess, msrc, mdst);
+		break;
+	case LIBCRYPTO_CHAIN_COMBINED:
+		status = process_libcrypto_combined_op(op, sess, msrc, mdst);
+		break;
+	default:
+		status = -1;
+		break;
+	}
+
+	/* Free session if a session-less crypto op */
+	if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+		libcrypto_reset_session(sess);
+		memset(sess, 0, sizeof(struct libcrypto_session));
+		rte_mempool_put(qp->sess_mp, op->sym->session);
+		op->sym->session = NULL;
+	}
+
+	if (status != 0)
+		return -1;
+
+	return rte_ring_enqueue(qp->processed_ops, (void *)op);
+}
+
+/*
+ *------------------------------------------------------------------------------
+ * PMD Framework
+ *------------------------------------------------------------------------------
+ */
+
+/** Enqueue burst */
+static uint16_t
+libcrypto_pmd_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct libcrypto_session *sess;
+	struct libcrypto_qp *qp = queue_pair;
+	int i, retval;
+
+	for (i = 0; i < nb_ops; i++) {
+		sess = get_session(qp, ops[i]);
+		if (unlikely(sess == NULL))
+			goto enqueue_err;
+
+		retval = process_op(qp, ops[i], sess);
+		if (unlikely(retval < 0))
+			goto enqueue_err;
+	}
+
+	qp->stats.enqueued_count += i;
+	return i;
+
+enqueue_err:
+	qp->stats.enqueue_err_count++;
+	return i;
+}
+
+/** Dequeue burst */
+static uint16_t
+libcrypto_pmd_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct libcrypto_qp *qp = queue_pair;
+
+	unsigned int nb_dequeued = 0;
+
+	nb_dequeued = rte_ring_dequeue_burst(qp->processed_ops,
+			(void **)ops, nb_ops);
+	qp->stats.dequeued_count += nb_dequeued;
+
+	return nb_dequeued;
+}
+
+/** Create LIBCRYPTO crypto device */
+static int
+cryptodev_libcrypto_create(const char *name,
+		struct rte_crypto_vdev_init_params *init_params)
+{
+	struct rte_cryptodev *dev;
+	char crypto_dev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	struct libcrypto_private *internals;
+
+	/* create a unique device name */
+	if (create_unique_device_name(crypto_dev_name,
+			RTE_CRYPTODEV_NAME_MAX_LEN) != 0) {
+		LIBCRYPTO_LOG_ERR("failed to create unique cryptodev name");
+		return -EINVAL;
+	}
+
+	dev = rte_cryptodev_pmd_virtual_dev_init(crypto_dev_name,
+			sizeof(struct libcrypto_private), init_params->socket_id);
+	if (dev == NULL) {
+		LIBCRYPTO_LOG_ERR("failed to create cryptodev vdev");
+		goto init_error;
+	}
+
+	dev->dev_type = RTE_CRYPTODEV_LIBCRYPTO_PMD;
+	dev->dev_ops = rte_libcrypto_pmd_ops;
+
+	/* register rx/tx burst functions for data path */
+	dev->dequeue_burst = libcrypto_pmd_dequeue_burst;
+	dev->enqueue_burst = libcrypto_pmd_enqueue_burst;
+
+	dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
+			RTE_CRYPTODEV_FF_CPU_AESNI;
+
+	/* Store the configured limits in the device private data */
+	internals = dev->data->dev_private;
+
+	internals->max_nb_qpairs = init_params->max_nb_queue_pairs;
+	internals->max_nb_sessions = init_params->max_nb_sessions;
+
+	return 0;
+
+init_error:
+	LIBCRYPTO_LOG_ERR("driver %s: cryptodev_libcrypto_create failed", name);
+
+	cryptodev_libcrypto_uninit(crypto_dev_name);
+	return -EFAULT;
+}
+
+/** Initialise LIBCRYPTO crypto device */
+static int
+cryptodev_libcrypto_init(const char *name,
+		const char *input_args)
+{
+	struct rte_crypto_vdev_init_params init_params = {
+		RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_QUEUE_PAIRS,
+		RTE_CRYPTODEV_VDEV_DEFAULT_MAX_NB_SESSIONS,
+		rte_socket_id()
+	};
+
+	rte_cryptodev_parse_vdev_init_params(&init_params, input_args);
+
+	RTE_LOG(INFO, PMD, "Initialising %s on NUMA node %d\n", name,
+			init_params.socket_id);
+	RTE_LOG(INFO, PMD, "  Max number of queue pairs = %d\n",
+			init_params.max_nb_queue_pairs);
+	RTE_LOG(INFO, PMD, "  Max number of sessions = %d\n",
+			init_params.max_nb_sessions);
+
+	return cryptodev_libcrypto_create(name, &init_params);
+}
+
+/** Uninitialise LIBCRYPTO crypto device */
+static int
+cryptodev_libcrypto_uninit(const char *name)
+{
+	if (name == NULL)
+		return -EINVAL;
+
+	RTE_LOG(INFO, PMD,
+		"Closing LIBCRYPTO crypto device %s on numa socket %u\n",
+		name, rte_socket_id());
+
+	return 0;
+}
+
+static struct rte_driver cryptodev_libcrypto_pmd_drv = {
+	.type = PMD_VDEV,
+	.init = cryptodev_libcrypto_init,
+	.uninit = cryptodev_libcrypto_uninit
+};
+
+PMD_REGISTER_DRIVER(cryptodev_libcrypto_pmd_drv, CRYPTODEV_NAME_LIBCRYPTO_PMD);
+DRIVER_REGISTER_PARAM_STRING(CRYPTODEV_NAME_LIBCRYPTO_PMD,
+	"max_nb_queue_pairs=<int> "
+	"max_nb_sessions=<int> "
+	"socket_id=<int>");
diff --git a/drivers/crypto/libcrypto/rte_libcrypto_pmd_ops.c b/drivers/crypto/libcrypto/rte_libcrypto_pmd_ops.c
new file mode 100644
index 0000000..ae27359
--- /dev/null
+++ b/drivers/crypto/libcrypto/rte_libcrypto_pmd_ops.c
@@ -0,0 +1,708 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "rte_libcrypto_pmd_private.h"
+
+
+static const struct rte_cryptodev_capabilities libcrypto_pmd_capabilities[] = {
+	{	/* MD5 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_MD5_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* MD5 */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_MD5,
+				.block_size = 64,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA1 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 20,
+					.max = 20,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA1 */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA1,
+				.block_size = 64,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 20,
+					.max = 20,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA224 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 20,
+					.max = 20,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA224 */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA224,
+				.block_size = 64,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 28,
+					.max = 28,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA256 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 32,
+					.max = 32,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA256 */
+			.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+			{.sym = {
+				.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+				{.auth = {
+					.algo = RTE_CRYPTO_AUTH_SHA256,
+					.block_size = 64,
+					.key_size = {
+						.min = 0,
+						.max = 0,
+						.increment = 0
+					},
+					.digest_size = {
+						.min = 32,
+						.max = 32,
+						.increment = 0
+					},
+					.aad_size = { 0 }
+				}, }
+			}, }
+		},
+	{	/* SHA384 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 128,
+					.max = 128,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 48,
+					.max = 48,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA384 */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA384,
+				.block_size = 128,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 48,
+					.max = 48,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA512 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 128,
+					.max = 128,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA512  */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA512,
+				.block_size = 128,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.aad_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* AES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* AES CTR */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_CTR,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* AES GCM (AUTH) */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_AES_GCM,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.aad_size = {
+					.min = 8,
+					.max = 12,
+					.increment = 4
+				}
+			}, }
+		}, }
+	},
+	{	/* AES GCM (CIPHER) */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_GCM,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.iv_size = {
+					.min = 12,
+					.max = 16,
+					.increment = 4
+				}
+			}, }
+		}, }
+	},
+	{	/* AES GMAC (AUTH) */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_AES_GMAC,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.aad_size = {
+					.min = 8,
+					.max = 12,
+					.increment = 4
+				}
+			}, }
+		}, }
+	},
+	{	/* 3DES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_3DES_CBC,
+				.block_size = 8,
+				.key_size = {
+					.min = 16,
+					.max = 24,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 8,
+					.max = 8,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* 3DES CTR */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_3DES_CTR,
+				.block_size = 8,
+				.key_size = {
+					.min = 16,
+					.max = 24,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 8,
+					.max = 8,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+
+/** Configure device */
+static int
+libcrypto_pmd_config(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Start device */
+static int
+libcrypto_pmd_start(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+/** Stop device */
+static void
+libcrypto_pmd_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+/** Close device */
+static int
+libcrypto_pmd_close(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+
+/** Get device statistics */
+static void
+libcrypto_pmd_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct libcrypto_qp *qp = dev->data->queue_pairs[qp_id];
+
+		stats->enqueued_count += qp->stats.enqueued_count;
+		stats->dequeued_count += qp->stats.dequeued_count;
+
+		stats->enqueue_err_count += qp->stats.enqueue_err_count;
+		stats->dequeue_err_count += qp->stats.dequeue_err_count;
+	}
+}
+
+/** Reset device statistics */
+static void
+libcrypto_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct libcrypto_qp *qp = dev->data->queue_pairs[qp_id];
+
+		memset(&qp->stats, 0, sizeof(qp->stats));
+	}
+}
+
+
+/** Get device info */
+static void
+libcrypto_pmd_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *dev_info)
+{
+	struct libcrypto_private *internals = dev->data->dev_private;
+
+	if (dev_info != NULL) {
+		dev_info->dev_type = dev->dev_type;
+		dev_info->feature_flags = dev->feature_flags;
+		dev_info->capabilities = libcrypto_pmd_capabilities;
+		dev_info->max_nb_queue_pairs = internals->max_nb_qpairs;
+		dev_info->sym.max_nb_sessions = internals->max_nb_sessions;
+	}
+}
+
+/** Release queue pair */
+static int
+libcrypto_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	if (dev->data->queue_pairs[qp_id] != NULL) {
+		rte_free(dev->data->queue_pairs[qp_id]);
+		dev->data->queue_pairs[qp_id] = NULL;
+	}
+	return 0;
+}
+
+/** Set a unique name for the queue pair, based on the dev_id and qp_id */
+static int
+libcrypto_pmd_qp_set_unique_name(struct rte_cryptodev *dev,
+		struct libcrypto_qp *qp)
+{
+	unsigned int n = snprintf(qp->name, sizeof(qp->name),
+			"libcrypto_pmd_%u_qp_%u",
+			dev->data->dev_id, qp->id);
+
+	if (n > sizeof(qp->name))
+		return -1;
+
+	return 0;
+}
+
+
+/** Create a ring to place processed operations on */
+static struct rte_ring *
+libcrypto_pmd_qp_create_processed_ops_ring(struct libcrypto_qp *qp,
+		unsigned int ring_size, int socket_id)
+{
+	struct rte_ring *r;
+
+	r = rte_ring_lookup(qp->name);
+	if (r) {
+		if (r->prod.size >= ring_size) {
+			LIBCRYPTO_LOG_INFO(
+				"Reusing existing ring %s for processed ops",
+				 qp->name);
+			return r;
+		}
+
+		LIBCRYPTO_LOG_ERR(
+			"Unable to reuse existing ring %s for processed ops",
+			 qp->name);
+		return NULL;
+	}
+
+	return rte_ring_create(qp->name, ring_size, socket_id,
+			RING_F_SP_ENQ | RING_F_SC_DEQ);
+}
+
+
+/** Setup a queue pair */
+static int
+libcrypto_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+		const struct rte_cryptodev_qp_conf *qp_conf,
+		 int socket_id)
+{
+	struct libcrypto_qp *qp = NULL;
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		libcrypto_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc_socket("LIBCRYPTO PMD Queue Pair", sizeof(*qp),
+					RTE_CACHE_LINE_SIZE, socket_id);
+	if (qp == NULL)
+		return -ENOMEM;
+
+	qp->id = qp_id;
+	dev->data->queue_pairs[qp_id] = qp;
+
+	if (libcrypto_pmd_qp_set_unique_name(dev, qp))
+		goto qp_setup_cleanup;
+
+	qp->processed_ops = libcrypto_pmd_qp_create_processed_ops_ring(qp,
+			qp_conf->nb_descriptors, socket_id);
+	if (qp->processed_ops == NULL)
+		goto qp_setup_cleanup;
+
+	qp->sess_mp = dev->data->session_pool;
+
+	memset(&qp->stats, 0, sizeof(qp->stats));
+
+	return 0;
+
+qp_setup_cleanup:
+	if (qp)
+		rte_free(qp);
+
+	return -1;
+}
+
+/** Start queue pair */
+static int
+libcrypto_pmd_qp_start(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Stop queue pair */
+static int
+libcrypto_pmd_qp_stop(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused uint16_t queue_pair_id)
+{
+	return -ENOTSUP;
+}
+
+/** Return the number of allocated queue pairs */
+static uint32_t
+libcrypto_pmd_qp_count(struct rte_cryptodev *dev)
+{
+	return dev->data->nb_queue_pairs;
+}
+
+/** Returns the size of the session structure */
+static unsigned
+libcrypto_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	return sizeof(struct libcrypto_session);
+}
+
+/** Configure the session from a crypto xform chain */
+static void *
+libcrypto_pmd_session_configure(struct rte_cryptodev *dev __rte_unused,
+		struct rte_crypto_sym_xform *xform,	void *sess)
+{
+	if (unlikely(sess == NULL)) {
+		LIBCRYPTO_LOG_ERR("invalid session struct");
+		return NULL;
+	}
+
+	if (libcrypto_set_session_parameters(
+			sess, xform) != 0) {
+		LIBCRYPTO_LOG_ERR("failed configure session parameters");
+		return NULL;
+	}
+
+	return sess;
+}
+
+
+/** Clear the memory of session so it doesn't leave key material behind */
+static void
+libcrypto_pmd_session_clear(struct rte_cryptodev *dev __rte_unused, void *sess)
+{
+	/*
+	 * Currently we are resetting the whole data structure; we should
+	 * investigate whether a more selective reset of the key material
+	 * would be more performant.
+	 */
+	if (sess) {
+		libcrypto_reset_session(sess);
+		memset(sess, 0, sizeof(struct libcrypto_session));
+	}
+}
+
+struct rte_cryptodev_ops libcrypto_pmd_ops = {
+		.dev_configure		= libcrypto_pmd_config,
+		.dev_start		= libcrypto_pmd_start,
+		.dev_stop		= libcrypto_pmd_stop,
+		.dev_close		= libcrypto_pmd_close,
+
+		.stats_get		= libcrypto_pmd_stats_get,
+		.stats_reset		= libcrypto_pmd_stats_reset,
+
+		.dev_infos_get		= libcrypto_pmd_info_get,
+
+		.queue_pair_setup	= libcrypto_pmd_qp_setup,
+		.queue_pair_release	= libcrypto_pmd_qp_release,
+		.queue_pair_start	= libcrypto_pmd_qp_start,
+		.queue_pair_stop	= libcrypto_pmd_qp_stop,
+		.queue_pair_count	= libcrypto_pmd_qp_count,
+
+		.session_get_size	= libcrypto_pmd_session_get_size,
+		.session_configure	= libcrypto_pmd_session_configure,
+		.session_clear		= libcrypto_pmd_session_clear
+};
+
+struct rte_cryptodev_ops *rte_libcrypto_pmd_ops = &libcrypto_pmd_ops;
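For illustration only (not part of this patch): the exported
rte_libcrypto_pmd_ops pointer is what the device creation path in
rte_libcrypto_pmd.c (not quoted in this excerpt) is expected to hook into the
cryptodev. A rough sketch, assuming the rte_cryptodev_pmd_virtual_dev_init()
helper used by the other software crypto PMDs; the function name is
hypothetical and error handling is trimmed:

static int
example_libcrypto_create(const char *name, int socket_id)
{
	struct rte_cryptodev *dev;

	dev = rte_cryptodev_pmd_virtual_dev_init(name,
			sizeof(struct libcrypto_private), socket_id);
	if (dev == NULL)
		return -EFAULT;

	dev->dev_ops = rte_libcrypto_pmd_ops;
	/* The enqueue/dequeue burst handlers would also be set here. */

	return 0;
}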
diff --git a/drivers/crypto/libcrypto/rte_libcrypto_pmd_private.h b/drivers/crypto/libcrypto/rte_libcrypto_pmd_private.h
new file mode 100644
index 0000000..dbef57f
--- /dev/null
+++ b/drivers/crypto/libcrypto/rte_libcrypto_pmd_private.h
@@ -0,0 +1,174 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _LIBCRYPTO_PMD_PRIVATE_H_
+#define _LIBCRYPTO_PMD_PRIVATE_H_
+
+#include <openssl/evp.h>
+#include <openssl/des.h>
+
+
+#define LIBCRYPTO_LOG_ERR(fmt, args...) \
+	RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n",  \
+			RTE_STR(CRYPTODEV_NAME_LIBCRYPTO_PMD), \
+			__func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_LIBCRYPTO_DEBUG
+#define LIBCRYPTO_LOG_INFO(fmt, args...) \
+	RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			RTE_STR(CRYPTODEV_NAME_LIBCRYPTO_PMD), \
+			__func__, __LINE__, ## args)
+
+#define LIBCRYPTO_LOG_DBG(fmt, args...) \
+	RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
+			RTE_STR(CRYPTODEV_NAME_LIBCRYPTO_PMD), \
+			__func__, __LINE__, ## args)
+#else
+#define LIBCRYPTO_LOG_INFO(fmt, args...)
+#define LIBCRYPTO_LOG_DBG(fmt, args...)
+#endif
+
+
+/** LIBCRYPTO operation order mode enumerator */
+enum libcrypto_chain_order {
+	LIBCRYPTO_CHAIN_ONLY_CIPHER,
+	LIBCRYPTO_CHAIN_ONLY_AUTH,
+	LIBCRYPTO_CHAIN_CIPHER_AUTH,
+	LIBCRYPTO_CHAIN_AUTH_CIPHER,
+	LIBCRYPTO_CHAIN_COMBINED,
+	LIBCRYPTO_CHAIN_NOT_SUPPORTED
+};
+
+/** LIBCRYPTO cipher mode enumerator */
+enum libcrypto_cipher_mode {
+	LIBCRYPTO_CIPHER_LIB,
+	LIBCRYPTO_CIPHER_DES3CTR,
+};
+
+/** LIBCRYPTO auth mode enumerator */
+enum libcrypto_auth_mode {
+	LIBCRYPTO_AUTH_AS_AUTH,
+	LIBCRYPTO_AUTH_AS_HMAC,
+};
+
+/** private data structure for each LIBCRYPTO crypto device */
+struct libcrypto_private {
+	unsigned int max_nb_qpairs;
+	/**< Max number of queue pairs */
+	unsigned int max_nb_sessions;
+	/**< Max number of sessions */
+};
+
+/** LIBCRYPTO crypto queue pair */
+struct libcrypto_qp {
+	uint16_t id;
+	/**< Queue Pair Identifier */
+	char name[RTE_CRYPTODEV_NAME_LEN];
+	/**< Unique Queue Pair Name */
+	struct rte_ring *processed_ops;
+	/**< Ring for placing processed operations */
+	struct rte_mempool *sess_mp;
+	/**< Session Mempool */
+	struct rte_cryptodev_stats stats;
+	/**< Queue pair statistics */
+} __rte_cache_aligned;
+
+/** LIBCRYPTO crypto private session structure */
+struct libcrypto_session {
+	enum libcrypto_chain_order chain_order;
+	/**< chain order mode */
+
+	/** Cipher Parameters */
+	struct {
+		enum rte_crypto_cipher_operation direction;
+		/**< cipher operation direction */
+		enum libcrypto_cipher_mode mode;
+		/**< cipher operation mode */
+		enum rte_crypto_cipher_algorithm algo;
+		/**< cipher algorithm */
+
+		struct {
+			uint8_t data[32];
+			/**< key data */
+			size_t length;
+			/**< key length in bytes */
+		} key;
+
+		const EVP_CIPHER *evp_algo;
+		/**< pointer to EVP algorithm function */
+		EVP_CIPHER_CTX *ctx;
+		/**< pointer to EVP context structure */
+	} cipher;
+
+	/** Authentication Parameters */
+	struct {
+		enum rte_crypto_auth_operation operation;
+		/**< auth operation generate or verify */
+		enum libcrypto_auth_mode mode;
+		/**< auth operation mode */
+		enum rte_crypto_auth_algorithm algo;
+		/**< auth algorithm */
+
+		union {
+			struct {
+				const EVP_MD *evp_algo;
+				/**< pointer to EVP algorithm function */
+				EVP_MD_CTX *ctx;
+				/**< pointer to EVP context structure */
+			} auth;
+
+			struct {
+				EVP_PKEY *pkey;
+				/**< pointer to EVP key */
+				const EVP_MD *evp_algo;
+				/**< pointer to EVP algorithm function */
+				EVP_MD_CTX *ctx;
+				/**< pointer to EVP context structure */
+			} hmac;
+		};
+	} auth;
+
+} __rte_cache_aligned;
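For illustration only (not part of this patch): the cipher.evp_algo and
cipher.ctx members above map onto the OpenSSL EVP_ interface roughly as in
the sketch below (AES-128-CBC picked arbitrarily, error handling trimmed,
function name hypothetical):

static int
example_cipher_init(struct libcrypto_session *sess,
		const uint8_t *key, const uint8_t *iv)
{
	sess->cipher.evp_algo = EVP_aes_128_cbc();

	sess->cipher.ctx = EVP_CIPHER_CTX_new();
	if (sess->cipher.ctx == NULL)
		return -1;

	/*
	 * In the datapath the IV normally comes from each crypto op, so
	 * this init may instead be (re)done per operation.
	 */
	if (EVP_EncryptInit_ex(sess->cipher.ctx, sess->cipher.evp_algo,
			NULL, key, iv) != 1)
		return -1;

	return 0;
}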
+
+/** Set and validate LIBCRYPTO crypto session parameters */
+extern int
+libcrypto_set_session_parameters(struct libcrypto_session *sess,
+		const struct rte_crypto_sym_xform *xform);
+
+/** Reset LIBCRYPTO crypto session parameters */
+extern void
+libcrypto_reset_session(struct libcrypto_session *sess);
+
+/** device specific operations function pointer structure */
+extern struct rte_cryptodev_ops *rte_libcrypto_pmd_ops;
+
+#endif /* _LIBCRYPTO_PMD_PRIVATE_H_ */
diff --git a/drivers/crypto/libcrypto/rte_pmd_libcrypto_version.map b/drivers/crypto/libcrypto/rte_pmd_libcrypto_version.map
new file mode 100644
index 0000000..cc5829e
--- /dev/null
+++ b/drivers/crypto/libcrypto/rte_pmd_libcrypto_version.map
@@ -0,0 +1,3 @@
+DPDK_16.11 {
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 1a0095b..67c0aa9 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -135,7 +135,8 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)   += -lrte_pmd_aesni_mb
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_MB)   += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_GCM)  += -lrte_pmd_aesni_gcm -lcrypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_GCM)  += -L$(AESNI_MULTI_BUFFER_LIB_PATH) -lIPSec_MB
-_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += -lrte_pmd_null_crypto
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_LIBCRYPTO)    += -lrte_pmd_libcrypto -lcrypto
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO)+= -lrte_pmd_null_crypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)        += -lrte_pmd_qat -lcrypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G)     += -lrte_pmd_snow3g
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G)     += -L$(LIBSSO_SNOW3G_PATH)/build -lsso_snow3g
-- 
1.9.1
