All of lore.kernel.org
 help / color / mirror / Atom feed
* [RFC PATCH 0/5] Enable IPSec Inline for IXGBE PMD
@ 2017-08-25 14:57 Radu Nicolau
  2017-08-25 14:57 ` [RFC PATCH 1/5] mbuff: added security offload flags Radu Nicolau
                   ` (5 more replies)
  0 siblings, 6 replies; 13+ messages in thread
From: Radu Nicolau @ 2017-08-25 14:57 UTC (permalink / raw)
  To: dev; +Cc: Radu Nicolau

This RFC is an update for the existing RFC
 
IPSec Inline and look aside crypto offload
http://dpdk.org/ml/archives/dev/2017-August/072900.html

It provides a few updates on the general rte_security APIs, adds security ops 
struct pointer to the net devices, implements the support on the IXGBE PMD and 
support in the IPsec sample application.

Radu Nicolau (5):
  mbuff: added security offload flags
  ethdev: added security ops struct pointer
  rte_security: updates and enabled security operations for ethdev
  ixgbe: enable inline ipsec
  examples/ipsec-secgw: enabled inline ipsec

 config/common_base                             |   1 +
 drivers/net/ixgbe/Makefile                     |   4 +-
 drivers/net/ixgbe/ixgbe_ethdev.c               |   3 +
 drivers/net/ixgbe/ixgbe_ethdev.h               |  10 +-
 drivers/net/ixgbe/ixgbe_ipsec.c                | 617 +++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_ipsec.h                | 142 ++++++
 drivers/net/ixgbe/ixgbe_rxtx.c                 |  33 +-
 drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c         |  44 ++
 examples/ipsec-secgw/esp.c                     |  26 +-
 examples/ipsec-secgw/ipsec.c                   |  61 ++-
 examples/ipsec-secgw/ipsec.h                   |   2 +
 examples/ipsec-secgw/sa.c                      |  65 ++-
 lib/Makefile                                   |   1 +
 lib/librte_cryptodev/rte_cryptodev_pmd.h       |   4 +-
 lib/librte_cryptodev/rte_cryptodev_version.map |  10 +
 lib/librte_cryptodev/rte_security.c            |  34 +-
 lib/librte_cryptodev/rte_security.h            |  12 +-
 lib/librte_ether/rte_ethdev.h                  |   1 +
 lib/librte_mbuf/rte_mbuf.h                     |  19 +-
 19 files changed, 1036 insertions(+), 53 deletions(-)
 create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.c
 create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.h

-- 
2.7.5

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [RFC PATCH 1/5] mbuff: added security offload flags
  2017-08-25 14:57 [RFC PATCH 0/5] Enable IPSec Inline for IXGBE PMD Radu Nicolau
@ 2017-08-25 14:57 ` Radu Nicolau
  2017-08-25 14:57 ` [RFC PATCH 2/5] ethdev: added security ops struct pointer Radu Nicolau
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Radu Nicolau @ 2017-08-25 14:57 UTC (permalink / raw)
  To: dev; +Cc: Radu Nicolau

Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
---
 lib/librte_mbuf/rte_mbuf.h | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index eaed7ee..6a77270 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -189,11 +189,27 @@ extern "C" {
  */
 #define PKT_RX_TIMESTAMP     (1ULL << 17)
 
+/**
+ * Indicate that security offload processing was applied on the RX packet.
+ */
+#define PKT_RX_SECURITY_OFFLOAD		(1ULL << 18)
+
+/**
+ * Indicate that security offload processing failed on the RX packet.
+ */
+#define PKT_RX_SECURITY_OFFLOAD_FAILED  (1ULL << 19)
+
+
 /* add new RX flags here */
 
 /* add new TX flags here */
 
 /**
+ * Request security offload processing on the TX packet.
+ */
+#define PKT_TX_SECURITY_OFFLOAD (1ULL << 43)
+
+/**
  * Offload the MACsec. This flag must be set by the application to enable
  * this offload feature for a packet to be transmitted.
  */
@@ -316,7 +332,8 @@ extern "C" {
 		PKT_TX_QINQ_PKT |        \
 		PKT_TX_VLAN_PKT |        \
 		PKT_TX_TUNNEL_MASK |	 \
-		PKT_TX_MACSEC)
+		PKT_TX_MACSEC |		 \
+		PKT_TX_SECURITY_OFFLOAD)
 
 #define __RESERVED           (1ULL << 61) /**< reserved for future mbuf use */
 
-- 
2.7.5

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [RFC PATCH 2/5] ethdev: added security ops struct pointer
  2017-08-25 14:57 [RFC PATCH 0/5] Enable IPSec Inline for IXGBE PMD Radu Nicolau
  2017-08-25 14:57 ` [RFC PATCH 1/5] mbuff: added security offload flags Radu Nicolau
@ 2017-08-25 14:57 ` Radu Nicolau
  2017-08-25 14:57 ` [RFC PATCH 3/5] rte_security: updates and enabled security operations for ethdev Radu Nicolau
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Radu Nicolau @ 2017-08-25 14:57 UTC (permalink / raw)
  To: dev; +Cc: Radu Nicolau

Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
---
 lib/librte_ether/rte_ethdev.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 0adf327..b4a02d7 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -1649,6 +1649,7 @@ struct rte_eth_dev {
 	 */
 	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
 	enum rte_eth_dev_state state; /**< Flag indicating the port state */
+	struct rte_security_ops *sec_ops;  /**< Security ops */
 } __rte_cache_aligned;
 
 struct rte_eth_dev_sriov {
-- 
2.7.5

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [RFC PATCH 3/5] rte_security: updates and enabled security operations for ethdev
  2017-08-25 14:57 [RFC PATCH 0/5] Enable IPSec Inline for IXGBE PMD Radu Nicolau
  2017-08-25 14:57 ` [RFC PATCH 1/5] mbuff: added security offload flags Radu Nicolau
  2017-08-25 14:57 ` [RFC PATCH 2/5] ethdev: added security ops struct pointer Radu Nicolau
@ 2017-08-25 14:57 ` Radu Nicolau
  2017-08-29 12:14   ` Akhil Goyal
  2017-08-25 14:57 ` [RFC PATCH 4/5] ixgbe: enable inline ipsec Radu Nicolau
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 13+ messages in thread
From: Radu Nicolau @ 2017-08-25 14:57 UTC (permalink / raw)
  To: dev; +Cc: Radu Nicolau

Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
---
 lib/Makefile                                   |  1 +
 lib/librte_cryptodev/rte_cryptodev_pmd.h       |  4 +--
 lib/librte_cryptodev/rte_cryptodev_version.map | 10 ++++++++
 lib/librte_cryptodev/rte_security.c            | 34 +++++++++++++++++---------
 lib/librte_cryptodev/rte_security.h            | 12 ++++++---
 5 files changed, 44 insertions(+), 17 deletions(-)

diff --git a/lib/Makefile b/lib/Makefile
index 86caba1..08a1767 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -51,6 +51,7 @@ DEPDIRS-librte_ether += librte_mbuf
 DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
 DEPDIRS-librte_cryptodev := librte_eal librte_mempool librte_ring librte_mbuf
 DEPDIRS-librte_cryptodev += librte_kvargs
+DEPDIRS-librte_cryptodev += librte_ether
 DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
 DEPDIRS-librte_eventdev := librte_eal librte_ring
 DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
index 219fba6..ab3ecf7 100644
--- a/lib/librte_cryptodev/rte_cryptodev_pmd.h
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -371,7 +371,7 @@ struct rte_cryptodev_ops {
  *  - Returns -ENOTSUP if crypto device does not support the crypto transform.
  *  - Returns -ENOMEM if the private session could not be allocated.
  */
-typedef int (*security_configure_session_t)(struct rte_cryptodev *dev,
+typedef int (*security_configure_session_t)(void *dev,
 		struct rte_security_sess_conf *conf,
 		struct rte_security_session *sess,
 		struct rte_mempool *mp);
@@ -382,7 +382,7 @@ typedef int (*security_configure_session_t)(struct rte_cryptodev *dev,
  * @param	dev		Crypto device pointer
  * @param	sess		Security session structure
  */
-typedef void (*security_free_session_t)(struct rte_cryptodev *dev,
+typedef void (*security_free_session_t)(void *dev,
 		struct rte_security_session *sess);
 
 /** Security operations function pointer table */
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index e9ba88a..20b553e 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -79,3 +79,13 @@ DPDK_17.08 {
 	rte_crypto_aead_operation_strings;
 
 } DPDK_17.05;
+
+DPDK_17.11 {
+	global:
+
+	rte_security_session_create;
+	rte_security_session_init;
+	rte_security_attach_session;
+	rte_security_session_free;
+
+} DPDK_17.08;
diff --git a/lib/librte_cryptodev/rte_security.c b/lib/librte_cryptodev/rte_security.c
index 7c73c93..5f35355 100644
--- a/lib/librte_cryptodev/rte_security.c
+++ b/lib/librte_cryptodev/rte_security.c
@@ -86,8 +86,12 @@ rte_security_session_init(uint16_t dev_id,
 			return -EINVAL;
 		cdev = rte_cryptodev_pmd_get_dev(dev_id);
 		index = cdev->driver_id;
+		if (cdev == NULL || sess == NULL || cdev->sec_ops == NULL
+				|| cdev->sec_ops->session_configure == NULL)
+			return -EINVAL;
 		if (sess->sess_private_data[index] == NULL) {
-			ret = cdev->sec_ops->session_configure(cdev, conf, sess, mp);
+			ret = cdev->sec_ops->session_configure((void *)cdev,
+					conf, sess, mp);
 			if (ret < 0) {
 				CDEV_LOG_ERR(
 					"cdev_id %d failed to configure session details",
@@ -100,14 +104,18 @@ rte_security_session_init(uint16_t dev_id,
 	case RTE_SECURITY_SESS_ETH_PROTO_OFFLOAD:
 		dev = &rte_eth_devices[dev_id];
 		index = dev->data->port_id;
+		if (dev == NULL || sess == NULL || dev->sec_ops == NULL
+				|| dev->sec_ops->session_configure == NULL)
+			return -EINVAL;
 		if (sess->sess_private_data[index] == NULL) {
-//			ret = dev->sec_ops->session_configure(dev, conf, sess, mp);
-//			if (ret < 0) {
-//				CDEV_LOG_ERR(
-//					"dev_id %d failed to configure session details",
-//					dev_id);
-//				return ret;
-//			}
+			ret = dev->sec_ops->session_configure((void *)dev,
+					conf, sess, mp);
+			if (ret < 0) {
+				CDEV_LOG_ERR(
+					"dev_id %d failed to configure session details",
+					dev_id);
+				return ret;
+			}
 		}
 		break;
 	default:
@@ -152,16 +160,18 @@ rte_security_session_clear(uint8_t dev_id,
 	switch (action_type) {
 	case RTE_SECURITY_SESS_CRYPTO_PROTO_OFFLOAD:
 		cdev =  rte_cryptodev_pmd_get_dev(dev_id);
-		if (cdev == NULL || sess == NULL)
+		if (cdev == NULL || sess == NULL || cdev->sec_ops == NULL
+				|| cdev->sec_ops->session_clear == NULL)
 			return -EINVAL;
-		cdev->sec_ops->session_clear(cdev, sess);
+		cdev->sec_ops->session_clear((void *)cdev, sess);
 		break;
 	case RTE_SECURITY_SESS_ETH_INLINE_CRYPTO:
 	case RTE_SECURITY_SESS_ETH_PROTO_OFFLOAD:
 		dev = &rte_eth_devices[dev_id];
-		if (dev == NULL || sess == NULL)
+		if (dev == NULL || sess == NULL || dev->sec_ops == NULL
+				|| dev->sec_ops->session_clear == NULL)
 			return -EINVAL;
-//		dev->dev_ops->session_clear(dev, sess);
+		dev->sec_ops->session_clear((void *)dev, sess);
 		break;
 	default:
 		return -EINVAL;
diff --git a/lib/librte_cryptodev/rte_security.h b/lib/librte_cryptodev/rte_security.h
index 9747d5e..0c8b358 100644
--- a/lib/librte_cryptodev/rte_security.h
+++ b/lib/librte_cryptodev/rte_security.h
@@ -20,7 +20,7 @@ extern "C" {
 #include <rte_memory.h>
 #include <rte_mempool.h>
 #include <rte_common.h>
-#include <rte_crypto.h>
+#include "rte_crypto.h"
 
 /** IPSec protocol mode */
 enum rte_security_conf_ipsec_sa_mode {
@@ -70,9 +70,9 @@ struct rte_security_ipsec_tunnel_param {
 		} ipv4; /**< IPv4 header parameters */
 
 		struct {
-			struct in6_addr *src_addr;
+			struct in6_addr src_addr;
 			/**< IPv6 source address */
-			struct in6_addr *dst_addr;
+			struct in6_addr dst_addr;
 			/**< IPv6 destination address */
 			uint8_t dscp;
 			/**< IPv6 Differentiated Services Code Point */
@@ -171,6 +171,12 @@ struct rte_security_ipsec_xform {
 		uint8_t *data;  /**< pointer to key data */
 		size_t length;   /**< key length in bytes */
 	} auth_key;
+	enum rte_crypto_aead_algorithm aead_alg;
+	/**< AEAD Algorithm */
+	struct {
+		uint8_t *data;  /**< pointer to key data */
+		size_t length;   /**< key length in bytes */
+	} aead_key;
 	uint32_t salt;	/**< salt for this SA */
 	enum rte_security_conf_ipsec_sa_mode mode;
 	/**< IPsec SA Mode - transport/tunnel */
-- 
2.7.5

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [RFC PATCH 4/5] ixgbe: enable inline ipsec
  2017-08-25 14:57 [RFC PATCH 0/5] Enable IPSec Inline for IXGBE PMD Radu Nicolau
                   ` (2 preceding siblings ...)
  2017-08-25 14:57 ` [RFC PATCH 3/5] rte_security: updates and enabled security operations for ethdev Radu Nicolau
@ 2017-08-25 14:57 ` Radu Nicolau
  2017-08-28 17:47   ` Ananyev, Konstantin
  2017-08-25 14:57 ` [RFC PATCH 5/5] examples/ipsec-secgw: enabled " Radu Nicolau
  2017-08-29 13:00 ` [RFC PATCH 0/5] Enable IPSec Inline for IXGBE PMD Boris Pismenny
  5 siblings, 1 reply; 13+ messages in thread
From: Radu Nicolau @ 2017-08-25 14:57 UTC (permalink / raw)
  To: dev; +Cc: Radu Nicolau

Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
---
 config/common_base                     |   1 +
 drivers/net/ixgbe/Makefile             |   4 +-
 drivers/net/ixgbe/ixgbe_ethdev.c       |   3 +
 drivers/net/ixgbe/ixgbe_ethdev.h       |  10 +-
 drivers/net/ixgbe/ixgbe_ipsec.c        | 617 +++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_ipsec.h        | 142 ++++++++
 drivers/net/ixgbe/ixgbe_rxtx.c         |  33 +-
 drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c |  44 +++
 8 files changed, 850 insertions(+), 4 deletions(-)
 create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.c
 create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.h

diff --git a/config/common_base b/config/common_base
index 5e97a08..2084609 100644
--- a/config/common_base
+++ b/config/common_base
@@ -179,6 +179,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
 CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
 CONFIG_RTE_IXGBE_INC_VECTOR=y
 CONFIG_RTE_LIBRTE_IXGBE_BYPASS=n
+CONFIG_RTE_LIBRTE_IXGBE_IPSEC=y
 
 #
 # Compile burst-oriented I40E PMD driver
diff --git a/drivers/net/ixgbe/Makefile b/drivers/net/ixgbe/Makefile
index 5e57cb3..1180900 100644
--- a/drivers/net/ixgbe/Makefile
+++ b/drivers/net/ixgbe/Makefile
@@ -118,11 +118,13 @@ SRCS-$(CONFIG_RTE_IXGBE_INC_VECTOR) += ixgbe_rxtx_vec_neon.c
 else
 SRCS-$(CONFIG_RTE_IXGBE_INC_VECTOR) += ixgbe_rxtx_vec_sse.c
 endif
-
 ifeq ($(CONFIG_RTE_LIBRTE_IXGBE_BYPASS),y)
 SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_bypass.c
 SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_82599_bypass.c
 endif
+ifeq ($(CONFIG_RTE_LIBRTE_IXGBE_IPSEC),y)
+SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_ipsec.c
+endif
 SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += rte_pmd_ixgbe.c
 SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_tm.c
 
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 22171d8..73de5e6 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1135,6 +1135,9 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
 	PMD_INIT_FUNC_TRACE();
 
 	eth_dev->dev_ops = &ixgbe_eth_dev_ops;
+#ifdef RTE_LIBRTE_IXGBE_IPSEC
+	eth_dev->sec_ops = &ixgbe_security_ops;
+#endif /* RTE_LIBRTE_IXGBE_IPSEC */
 	eth_dev->rx_pkt_burst = &ixgbe_recv_pkts;
 	eth_dev->tx_pkt_burst = &ixgbe_xmit_pkts;
 	eth_dev->tx_pkt_prepare = &ixgbe_prep_pkts;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index caa50c8..d1a84e2 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -38,6 +38,9 @@
 #include "base/ixgbe_dcb_82599.h"
 #include "base/ixgbe_dcb_82598.h"
 #include "ixgbe_bypass.h"
+#ifdef RTE_LIBRTE_IXGBE_IPSEC
+#include "ixgbe_ipsec.h"
+#endif /* RTE_LIBRTE_IXGBE_IPSEC */
 #include <rte_time.h>
 #include <rte_hash.h>
 #include <rte_pci.h>
@@ -529,7 +532,9 @@ struct ixgbe_adapter {
 	struct ixgbe_filter_info    filter;
 	struct ixgbe_l2_tn_info     l2_tn;
 	struct ixgbe_bw_conf        bw_conf;
-
+#ifdef RTE_LIBRTE_IXGBE_IPSEC
+	struct ixgbe_ipsec          ipsec;
+#endif /* RTE_LIBRTE_IXGBE_IPSEC */
 	bool rx_bulk_alloc_allowed;
 	bool rx_vec_allowed;
 	struct rte_timecounter      systime_tc;
@@ -586,6 +591,9 @@ struct ixgbe_adapter {
 #define IXGBE_DEV_PRIVATE_TO_TM_CONF(adapter) \
 	(&((struct ixgbe_adapter *)adapter)->tm_conf)
 
+#define IXGBE_DEV_PRIVATE_TO_IPSEC(adapter)\
+	(&((struct ixgbe_adapter *)adapter)->ipsec)
+
 /*
  * RX/TX function prototypes
  */
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
new file mode 100644
index 0000000..d866cd8
--- /dev/null
+++ b/drivers/net/ixgbe/ixgbe_ipsec.c
@@ -0,0 +1,617 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	 notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	 notice, this list of conditions and the following disclaimer in
+ *	 the documentation and/or other materials provided with the
+ *	 distribution.
+ *	 * Neither the name of Intel Corporation nor the names of its
+ *	 contributors may be used to endorse or promote products derived
+ *	 from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_ethdev.h>
+#include <rte_ethdev_pci.h>
+#include <rte_security.h>
+#include <rte_ip.h>
+#include <rte_jhash.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "base/ixgbe_type.h"
+#include "base/ixgbe_api.h"
+#include "ixgbe_ethdev.h"
+#include "ixgbe_ipsec.h"
+
+
+#define IXGBE_WAIT_RW(__reg, __rw)			\
+{							\
+	IXGBE_WRITE_REG(hw, (__reg), reg);		\
+	while ((IXGBE_READ_REG(hw, (__reg))) & (__rw))	\
+	;						\
+}
+#define IXGBE_WAIT_RREAD  IXGBE_WAIT_RW(IXGBE_IPSRXIDX, IPSRXIDX_READ)
+#define IXGBE_WAIT_RWRITE IXGBE_WAIT_RW(IXGBE_IPSRXIDX, IPSRXIDX_WRITE)
+#define IXGBE_WAIT_TREAD  IXGBE_WAIT_RW(IXGBE_IPSTXIDX, IPSRXIDX_READ)
+#define IXGBE_WAIT_TWRITE IXGBE_WAIT_RW(IXGBE_IPSTXIDX, IPSRXIDX_WRITE)
+
+#define CMP_IP(a, b)	\
+		((a).ipv6[0] == (b).ipv6[0] && (a).ipv6[1] == (b).ipv6[1] && \
+		(a).ipv6[2] == (b).ipv6[2] && (a).ipv6[3] == (b).ipv6[3])
+
+
+static void
+ixgbe_crypto_clear_ipsec_tables(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int i = 0;
+
+	/* clear Rx IP table*/
+	for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
+		uint16_t index = i << 3;
+		uint32_t reg = IPSRXIDX_WRITE | IPSRXIDX_TABLE_IP | index;
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3), 0);
+		IXGBE_WAIT_RWRITE;
+	}
+
+	/* clear Rx SPI and Rx/Tx SA tables*/
+	for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+		uint32_t index = i << 3;
+		uint32_t reg = IPSRXIDX_WRITE | IPSRXIDX_TABLE_SPI | index;
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI, 0);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX, 0);
+		IXGBE_WAIT_RWRITE;
+		reg = IPSRXIDX_WRITE | IPSRXIDX_TABLE_KEY | index;
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0), 0);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1), 0);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2), 0);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3), 0);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT, 0);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD, 0);
+		IXGBE_WAIT_RWRITE;
+		reg = IPSRXIDX_WRITE | index;
+		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0), 0);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1), 0);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2), 0);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3), 0);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT, 0);
+		IXGBE_WAIT_TWRITE;
+	}
+}
+
+static int
+ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_link link;
+	uint32_t reg;
+
+	/* Halt the data paths */
+	reg = IXGBE_SECTXCTRL_TX_DIS;
+	IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL, reg);
+	reg = IXGBE_SECRXCTRL_RX_DIS;
+	IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, reg);
+
+	/* Wait for Tx path to empty */
+	do {
+		rte_eth_link_get_nowait(dev->data->port_id, &link);
+		if (link.link_status != ETH_LINK_UP) {
+			/* Fix for HSD:4426139
+			 * If the Tx FIFO has data but no link,
+			 * we can't clear the Tx Sec block. So set MAC
+			 * loopback before block clear
+			 */
+			reg = IXGBE_READ_REG(hw, IXGBE_MACC);
+			reg |= IXGBE_MACC_FLU;
+			IXGBE_WRITE_REG(hw, IXGBE_MACC, reg);
+
+			reg = IXGBE_READ_REG(hw, IXGBE_HLREG0);
+			reg |= IXGBE_HLREG0_LPBK;
+			IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
+			struct timespec time;
+			time.tv_sec = 0;
+			time.tv_nsec = 1000000 * 3;
+			nanosleep(&time, NULL);
+		}
+
+		reg = IXGBE_READ_REG(hw, IXGBE_SECTXSTAT);
+
+		rte_eth_link_get_nowait(dev->data->port_id, &link);
+		if (link.link_status != ETH_LINK_UP) {
+			reg = IXGBE_READ_REG(hw, IXGBE_MACC);
+			reg &= ~(IXGBE_MACC_FLU);
+			IXGBE_WRITE_REG(hw, IXGBE_MACC, reg);
+
+			reg = IXGBE_READ_REG(hw, IXGBE_HLREG0);
+			reg &= ~(IXGBE_HLREG0_LPBK);
+			IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
+		}
+	} while (!(reg & IXGBE_SECTXSTAT_SECTX_RDY));
+
+	/* Wait for Rx path to empty*/
+	do {
+		reg = IXGBE_READ_REG(hw, IXGBE_SECRXSTAT);
+	} while (!(reg & IXGBE_SECRXSTAT_SECRX_RDY));
+
+	/* Set IXGBE_SECTXBUFFAF to 0x15 as required in the datasheet*/
+	IXGBE_WRITE_REG(hw, IXGBE_SECTXBUFFAF, 0x15);
+
+	/* IFG needs to be set to 3 when we are using security. Otherwise a Tx
+	 * hang will occur with heavy traffic.
+	 */
+	reg = IXGBE_READ_REG(hw, IXGBE_SECTXMINIFG);
+	reg = (reg & 0xFFFFFFF0) | 0x3;
+	IXGBE_WRITE_REG(hw, IXGBE_SECTXMINIFG, reg);
+
+	reg = IXGBE_READ_REG(hw, IXGBE_HLREG0);
+	reg |= IXGBE_HLREG0_TXCRCEN | IXGBE_HLREG0_RXCRCSTRP;
+	IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
+
+	/* Enable the Tx crypto engine and restart the Tx data path;
+	 * set the STORE_FORWARD bit for IPSec.
+	 */
+	IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL, IXGBE_SECTXCTRL_STORE_FORWARD);
+
+	/* Enable the Rx crypto engine and restart the Rx data path*/
+	IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, 0);
+
+	/* Test if crypto was enabled */
+	reg = IXGBE_READ_REG(hw, IXGBE_SECTXCTRL);
+	if (reg != IXGBE_SECTXCTRL_STORE_FORWARD) {
+		PMD_DRV_LOG(ERR, "Error enabling Tx Crypto");
+		return -1;
+	}
+	reg = IXGBE_READ_REG(hw, IXGBE_SECRXCTRL);
+	if (reg != 0) {
+		PMD_DRV_LOG(ERR, "Error enabling Rx Crypto");
+		return -1;
+	}
+
+	ixgbe_crypto_clear_ipsec_tables(dev);
+
+	/* create hash table*/
+	{
+		struct ixgbe_ipsec *internals = IXGBE_DEV_PRIVATE_TO_IPSEC(
+				dev->data->dev_private);
+		struct rte_hash_parameters params = { 0 };
+		params.entries = IPSEC_MAX_SA_COUNT;
+		params.key_len = sizeof(uint32_t);
+		params.hash_func = rte_jhash;
+		params.hash_func_init_val = 0;
+		params.socket_id = rte_socket_id();
+		params.name = "tx_spi_sai_hash";
+		internals->tx_spi_sai_hash = rte_hash_create(&params);
+	}
+
+	return 0;
+}
+
+
+static int
+ixgbe_crypto_add_sa(struct ixgbe_crypto_session *sess)
+{
+	struct rte_eth_dev *dev = sess->dev;
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_ipsec *priv = IXGBE_DEV_PRIVATE_TO_IPSEC(
+			dev->data->dev_private);
+	uint32_t reg;
+	int sa_index = -1;
+
+	if (!(priv->flags & IS_INITIALIZED)) {
+		if (ixgbe_crypto_enable_ipsec(dev) == 0)
+			priv->flags |= IS_INITIALIZED;
+	}
+
+	if (sess->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
+		int i, ip_index = -1;
+
+		/* Find a match in the IP table*/
+		for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
+			if (CMP_IP(priv->rx_ip_table[i].ip,
+				 sess->dst_ip)) {
+				ip_index = i;
+				break;
+			}
+		}
+		/* If no match, find a free entry in the IP table*/
+		if (ip_index < 0) {
+			for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
+				if (priv->rx_ip_table[i].ref_count == 0) {
+					ip_index = i;
+					break;
+				}
+			}
+		}
+
+		/* Fail if no match and no free entries*/
+		if (ip_index < 0) {
+			PMD_DRV_LOG(ERR, "No free entry left "
+					"in the Rx IP table\n");
+			return -1;
+		}
+
+		/* Find a free entry in the SA table*/
+		for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+			if (priv->rx_sa_table[i].used == 0) {
+				sa_index = i;
+				break;
+			}
+		}
+		/* Fail if no free entries*/
+		if (sa_index < 0) {
+			PMD_DRV_LOG(ERR, "No free entry left in "
+					"the Rx SA table\n");
+			return -1;
+		}
+
+		priv->rx_ip_table[ip_index].ip.ipv6[0] =
+				rte_cpu_to_be_32(sess->dst_ip.ipv6[0]);
+		priv->rx_ip_table[ip_index].ip.ipv6[1] =
+				rte_cpu_to_be_32(sess->dst_ip.ipv6[1]);
+		priv->rx_ip_table[ip_index].ip.ipv6[2] =
+				rte_cpu_to_be_32(sess->dst_ip.ipv6[2]);
+		priv->rx_ip_table[ip_index].ip.ipv6[3] =
+				rte_cpu_to_be_32(sess->dst_ip.ipv6[3]);
+		priv->rx_ip_table[ip_index].ref_count++;
+
+		priv->rx_sa_table[sa_index].spi =
+				rte_cpu_to_be_32(sess->spi);
+		priv->rx_sa_table[sa_index].ip_index = ip_index;
+		priv->rx_sa_table[sa_index].key[3] =
+				rte_cpu_to_be_32(*(uint32_t *)&sess->key[0]);
+		priv->rx_sa_table[sa_index].key[2] =
+				rte_cpu_to_be_32(*(uint32_t *)&sess->key[4]);
+		priv->rx_sa_table[sa_index].key[1] =
+				rte_cpu_to_be_32(*(uint32_t *)&sess->key[8]);
+		priv->rx_sa_table[sa_index].key[0] =
+				rte_cpu_to_be_32(*(uint32_t *)&sess->key[12]);
+		priv->rx_sa_table[sa_index].salt =
+				rte_cpu_to_be_32(sess->salt);
+		priv->rx_sa_table[sa_index].mode = IPSRXMOD_VALID;
+		if (sess->op == IXGBE_OP_AUTHENTICATED_DECRYPTION)
+			priv->rx_sa_table[sa_index].mode |=
+					(IPSRXMOD_PROTO | IPSRXMOD_DECRYPT);
+		if (sess->dst_ip.type == IPv6)
+			priv->rx_sa_table[sa_index].mode |= IPSRXMOD_IPV6;
+		priv->rx_sa_table[sa_index].used = 1;
+
+		/* write IP table entry*/
+		reg = IPSRXIDX_RX_EN | IPSRXIDX_WRITE
+				| IPSRXIDX_TABLE_IP | (ip_index << 3);
+		if (priv->rx_ip_table[ip_index].ip.type == IPv4) {
+			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
+			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
+			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
+			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3),
+					priv->rx_ip_table[ip_index].ip.ipv4);
+		} else {
+			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0),
+					priv->rx_ip_table[ip_index].ip.ipv6[0]);
+			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1),
+					priv->rx_ip_table[ip_index].ip.ipv6[1]);
+			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2),
+					priv->rx_ip_table[ip_index].ip.ipv6[2]);
+			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3),
+					priv->rx_ip_table[ip_index].ip.ipv6[3]);
+		}
+		IXGBE_WAIT_RWRITE;
+
+		/* write SPI table entry*/
+		reg = IPSRXIDX_RX_EN | IPSRXIDX_WRITE
+				| IPSRXIDX_TABLE_SPI | (sa_index << 3);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI,
+				priv->rx_sa_table[sa_index].spi);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX,
+				priv->rx_sa_table[sa_index].ip_index);
+		IXGBE_WAIT_RWRITE;
+
+		/* write Key table entry*/
+		reg = IPSRXIDX_RX_EN | IPSRXIDX_WRITE
+				| IPSRXIDX_TABLE_KEY | (sa_index << 3);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0),
+				priv->rx_sa_table[sa_index].key[0]);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1),
+				priv->rx_sa_table[sa_index].key[1]);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2),
+				priv->rx_sa_table[sa_index].key[2]);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3),
+				priv->rx_sa_table[sa_index].key[3]);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT,
+				priv->rx_sa_table[sa_index].salt);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD,
+				priv->rx_sa_table[sa_index].mode);
+		IXGBE_WAIT_RWRITE;
+
+	} else { /* sess->dir == RTE_CRYPTO_OUTBOUND */
+		int i;
+
+		/* Find a free entry in the SA table*/
+		for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+			if (priv->tx_sa_table[i].used == 0) {
+				sa_index = i;
+				break;
+			}
+		}
+		/* Fail if no free entries*/
+		if (sa_index < 0) {
+			PMD_DRV_LOG(ERR, "No free entry left in "
+					"the Tx SA table\n");
+			return -1;
+		}
+
+		priv->tx_sa_table[sa_index].spi =
+				rte_cpu_to_be_32(sess->spi);
+		priv->tx_sa_table[sa_index].key[3] =
+				rte_cpu_to_be_32(*(uint32_t *)&sess->key[0]);
+		priv->tx_sa_table[sa_index].key[2] =
+				rte_cpu_to_be_32(*(uint32_t *)&sess->key[4]);
+		priv->tx_sa_table[sa_index].key[1] =
+				rte_cpu_to_be_32(*(uint32_t *)&sess->key[8]);
+		priv->tx_sa_table[sa_index].key[0] =
+				rte_cpu_to_be_32(*(uint32_t *)&sess->key[12]);
+		priv->tx_sa_table[sa_index].salt =
+				rte_cpu_to_be_32(sess->salt);
+
+		reg = IPSRXIDX_RX_EN | IPSRXIDX_WRITE | (sa_index << 3);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0),
+				priv->tx_sa_table[sa_index].key[0]);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1),
+				priv->tx_sa_table[sa_index].key[1]);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2),
+				priv->tx_sa_table[sa_index].key[2]);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3),
+				priv->tx_sa_table[sa_index].key[3]);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT,
+				priv->tx_sa_table[sa_index].salt);
+		IXGBE_WAIT_TWRITE;
+
+		rte_hash_add_key_data(priv->tx_spi_sai_hash,
+				&priv->tx_sa_table[sa_index].spi,
+				(void *)(uint64_t)sa_index);
+		priv->tx_sa_table[i].used = 1;
+		sess->sa_index = sa_index;
+	}
+
+	return sa_index;
+}
+
+static int
+ixgbe_crypto_remove_sa(struct rte_eth_dev *dev,
+		     struct ixgbe_crypto_session *sess)
+{
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_ipsec *priv = IXGBE_DEV_PRIVATE_TO_IPSEC(
+			dev->data->dev_private);
+	uint32_t reg;
+	int sa_index = -1;
+
+	if (sess->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
+		int i, ip_index = -1;
+
+		/* Find a match in the IP table*/
+		for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
+			if (CMP_IP(priv->rx_ip_table[i].ip, sess->dst_ip)) {
+				ip_index = i;
+				break;
+			}
+		}
+
+		/* Fail if no match*/
+		if (ip_index < 0) {
+			PMD_DRV_LOG(ERR, "Entry not found in the Rx IP table\n");
+			return -1;
+		}
+
+		/* Find a free entry in the SA table*/
+		for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+			if (priv->rx_sa_table[i].spi ==
+					rte_cpu_to_be_32(sess->spi)) {
+				sa_index = i;
+				break;
+			}
+		}
+		/* Fail if no match*/
+		if (sa_index < 0) {
+			PMD_DRV_LOG(ERR, "Entry not found in the Rx SA table\n");
+			return -1;
+		}
+
+		/* Disable and clear Rx SPI and key table table entryes*/
+		reg = IPSRXIDX_WRITE | IPSRXIDX_TABLE_SPI | (sa_index << 3);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI, 0);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX, 0);
+		IXGBE_WAIT_RWRITE;
+		reg = IPSRXIDX_WRITE | IPSRXIDX_TABLE_KEY | (sa_index << 3);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0), 0);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1), 0);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2), 0);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3), 0);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT, 0);
+		IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD, 0);
+		IXGBE_WAIT_RWRITE;
+		priv->rx_sa_table[sa_index].used = 0;
+
+		/* If last used then clear the IP table entry*/
+		priv->rx_ip_table[ip_index].ref_count--;
+		if (priv->rx_ip_table[ip_index].ref_count == 0) {
+			reg = IPSRXIDX_WRITE | IPSRXIDX_TABLE_IP
+					| (ip_index << 3);
+			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
+			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
+			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
+			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3), 0);
+		}
+		} else { /* sess->dir == RTE_CRYPTO_OUTBOUND */
+			int i;
+
+			/* Find a match in the SA table*/
+			for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+				if (priv->tx_sa_table[i].spi ==
+						rte_cpu_to_be_32(sess->spi)) {
+					sa_index = i;
+					break;
+				}
+			}
+			/* Fail if no match entries*/
+			if (sa_index < 0) {
+				PMD_DRV_LOG(ERR, "Entry not found in the "
+						"Tx SA table\n");
+				return -1;
+			}
+			reg = IPSRXIDX_WRITE | (sa_index << 3);
+			IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0), 0);
+			IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1), 0);
+			IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2), 0);
+			IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3), 0);
+			IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT, 0);
+			IXGBE_WAIT_TWRITE;
+
+			priv->tx_sa_table[sa_index].used = 0;
+			rte_hash_del_key(priv->tx_spi_sai_hash,
+					&priv->tx_sa_table[sa_index].spi);
+		}
+
+	return 0;
+}
+
+static int
+ixgbe_crypto_create_session(void *dev,
+		struct rte_security_sess_conf *sess_conf,
+		struct rte_security_session *sess,
+		struct rte_mempool *mempool)
+{
+	struct ixgbe_crypto_session *session = NULL;
+	struct rte_security_ipsec_xform *ipsec_xform = sess_conf->ipsec_xform;
+
+	if (rte_mempool_get(mempool, (void **)&session)) {
+		PMD_DRV_LOG(ERR, "Cannot get object from session mempool");
+		return -ENOMEM;
+	}
+	if (ipsec_xform->aead_alg != RTE_CRYPTO_AEAD_AES_GCM) {
+		PMD_DRV_LOG(ERR, "Unsupported IPsec mode\n");
+		return -ENOTSUP;
+	}
+
+	session->op = (ipsec_xform->op == RTE_SECURITY_IPSEC_OP_DECAP) ?
+			IXGBE_OP_AUTHENTICATED_DECRYPTION :
+			IXGBE_OP_AUTHENTICATED_ENCRYPTION;
+	session->key = ipsec_xform->aead_key.data;
+	memcpy(&session->salt,
+	     &ipsec_xform->aead_key.data[ipsec_xform->aead_key.length], 4);
+	session->spi = ipsec_xform->spi;
+
+	if (ipsec_xform->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
+		uint32_t sip = ipsec_xform->tunnel.ipv4.src_ip.s_addr;
+		uint32_t dip = ipsec_xform->tunnel.ipv4.dst_ip.s_addr;
+		session->src_ip.type = IPv4;
+		session->dst_ip.type = IPv4;
+		session->src_ip.ipv4 = rte_cpu_to_be_32(sip);
+		session->dst_ip.ipv4 = rte_cpu_to_be_32(dip);
+
+	} else {
+		uint32_t *sip = (uint32_t *)&ipsec_xform->tunnel.ipv6.src_addr;
+		uint32_t *dip = (uint32_t *)&ipsec_xform->tunnel.ipv6.dst_addr;
+		session->src_ip.type = IPv6;
+		session->dst_ip.type = IPv6;
+		session->src_ip.ipv6[0] = rte_cpu_to_be_32(sip[0]);
+		session->src_ip.ipv6[1] = rte_cpu_to_be_32(sip[1]);
+		session->src_ip.ipv6[2] = rte_cpu_to_be_32(sip[2]);
+		session->src_ip.ipv6[3] = rte_cpu_to_be_32(sip[3]);
+		session->dst_ip.ipv6[0] = rte_cpu_to_be_32(dip[0]);
+		session->dst_ip.ipv6[1] = rte_cpu_to_be_32(dip[1]);
+		session->dst_ip.ipv6[2] = rte_cpu_to_be_32(dip[2]);
+		session->dst_ip.ipv6[3] = rte_cpu_to_be_32(dip[3]);
+	}
+
+	session->dev = (struct rte_eth_dev *)dev;
+	set_sec_session_private_data(sess, 0, session);
+
+	if (ixgbe_crypto_add_sa(session)) {
+		PMD_DRV_LOG(ERR, "Failed to add SA\n");
+		return -EPERM;
+	}
+
+	return 0;
+}
+
+static void
+ixgbe_crypto_remove_session(void *dev,
+		struct rte_security_session *session)
+{
+	struct ixgbe_crypto_session *sess =
+		(struct ixgbe_crypto_session *)
+		get_sec_session_private_data(session, 0);
+	if (dev != sess->dev) {
+		PMD_DRV_LOG(ERR, "Session not bound to this device\n");
+		return;
+	}
+
+	if (ixgbe_crypto_remove_sa(dev, sess)) {
+		PMD_DRV_LOG(ERR, "Failed to remove session\n");
+		return;
+	}
+
+	rte_free(session);
+}
+
+uint64_t
+ixgbe_crypto_get_txdesc_flags(uint16_t port_id, struct rte_mbuf *mb) {
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	struct ixgbe_ipsec *priv =
+			IXGBE_DEV_PRIVATE_TO_IPSEC(dev->data->dev_private);
+	struct ipv4_hdr *ip4 =
+			rte_pktmbuf_mtod_offset(mb, struct ipv4_hdr*,
+						sizeof(struct ether_hdr));
+	uint32_t spi = 0;
+	uintptr_t sa_index;
+	struct ixgbe_crypto_tx_desc_metadata mdata = {0};
+
+	if (ip4->version_ihl == 0x45)
+		spi = *rte_pktmbuf_mtod_offset(mb, uint32_t*,
+					sizeof(struct ether_hdr) +
+					sizeof(struct ipv4_hdr));
+	else
+		spi = *rte_pktmbuf_mtod_offset(mb, uint32_t*,
+					sizeof(struct ether_hdr) +
+					sizeof(struct ipv6_hdr));
+
+	if (priv->tx_spi_sai_hash &&
+			rte_hash_lookup_data(priv->tx_spi_sai_hash, &spi,
+					(void **)&sa_index) == 0) {
+		mdata.enc = 1;
+		mdata.sa_idx = (uint32_t)sa_index;
+		mdata.pad_len = *rte_pktmbuf_mtod_offset(mb, uint8_t *,
+					rte_pktmbuf_pkt_len(mb) - 18);
+	}
+
+	return mdata.data;
+}
+
+
+struct rte_security_ops ixgbe_security_ops = {
+		.session_configure = ixgbe_crypto_create_session,
+		.session_clear = ixgbe_crypto_remove_session,
+};
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.h b/drivers/net/ixgbe/ixgbe_ipsec.h
new file mode 100644
index 0000000..fd479eb
--- /dev/null
+++ b/drivers/net/ixgbe/ixgbe_ipsec.h
@@ -0,0 +1,142 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef IXGBE_IPSEC_H_
+#define IXGBE_IPSEC_H_
+
+#include <rte_security.h>
+
+#define IPSRXIDX_RX_EN                                    0x00000001
+#define IPSRXIDX_TABLE_IP                                 0x00000002
+#define IPSRXIDX_TABLE_SPI                                0x00000004
+#define IPSRXIDX_TABLE_KEY                                0x00000006
+#define IPSRXIDX_WRITE                                    0x80000000
+#define IPSRXIDX_READ                                     0x40000000
+#define IPSRXMOD_VALID                                    0x00000001
+#define IPSRXMOD_PROTO                                    0x00000004
+#define IPSRXMOD_DECRYPT                                  0x00000008
+#define IPSRXMOD_IPV6                                     0x00000010
+#define IXGBE_ADVTXD_POPTS_IPSEC                          0x00000400
+#define IXGBE_ADVTXD_TUCMD_IPSEC_TYPE_ESP                 0x00002000
+#define IXGBE_ADVTXD_TUCMD_IPSEC_ENCRYPT_EN               0x00004000
+#define IXGBE_RXDADV_IPSEC_STATUS_SECP                    0x00020000
+#define IXGBE_RXDADV_IPSEC_ERROR_BIT_MASK                 0x18000000
+#define IXGBE_RXDADV_IPSEC_ERROR_INVALID_PROTOCOL         0x08000000
+#define IXGBE_RXDADV_IPSEC_ERROR_INVALID_LENGTH           0x10000000
+#define IXGBE_RXDADV_IPSEC_ERROR_AUTHENTICATION_FAILED    0x18000000
+
+#define IPSEC_MAX_RX_IP_COUNT           128
+#define IPSEC_MAX_SA_COUNT              1024
+
+enum ixgbe_operation {
+	IXGBE_OP_AUTHENTICATED_ENCRYPTION, IXGBE_OP_AUTHENTICATED_DECRYPTION
+};
+
+enum ixgbe_gcm_key {
+	IXGBE_GCM_KEY_128, IXGBE_GCM_KEY_256
+};
+
+/**
+ * Generic IP address structure
+ * TODO: Find better location for this rte_net.h possibly.
+ **/
+struct ipaddr {
+	enum ipaddr_type {
+		IPv4, IPv6
+	} type;
+	/**< IP Address Type - IPv4/IPv6 */
+
+	union {
+		uint32_t ipv4;
+		uint32_t ipv6[4];
+	};
+};
+
+/** inline crypto crypto private session structure */
+struct ixgbe_crypto_session {
+	enum ixgbe_operation op;
+	uint8_t *key;
+	uint32_t salt;
+	uint32_t sa_index;
+	uint32_t spi;
+	struct ipaddr src_ip;
+	struct ipaddr dst_ip;
+	struct rte_eth_dev *dev;
+} __rte_cache_aligned;
+
+struct ixgbe_crypto_rx_ip_table {
+	struct ipaddr ip;
+	uint16_t ref_count;
+};
+struct ixgbe_crypto_rx_sa_table {
+	uint32_t spi;
+	uint32_t ip_index;
+	uint32_t key[4];
+	uint32_t salt;
+	uint8_t mode;
+	uint8_t used;
+};
+
+struct ixgbe_crypto_tx_sa_table {
+	uint32_t spi;
+	uint32_t key[4];
+	uint32_t salt;
+	uint8_t used;
+};
+
+struct ixgbe_crypto_tx_desc_metadata {
+	union {
+		uint64_t data;
+		struct {
+			uint32_t sa_idx;
+			uint8_t pad_len;
+			uint8_t enc;
+		};
+	};
+};
+
+struct ixgbe_ipsec {
+#define IS_INITIALIZED (1 << 0)
+	uint8_t flags;
+	struct ixgbe_crypto_rx_ip_table rx_ip_table[IPSEC_MAX_RX_IP_COUNT];
+	struct ixgbe_crypto_rx_sa_table rx_sa_table[IPSEC_MAX_SA_COUNT];
+	struct ixgbe_crypto_tx_sa_table tx_sa_table[IPSEC_MAX_SA_COUNT];
+	struct rte_hash *tx_spi_sai_hash;
+};
+
+extern struct rte_security_ops ixgbe_security_ops;
+
+uint64_t ixgbe_crypto_get_txdesc_flags(uint16_t port_id, struct rte_mbuf *mb);
+
+
+#endif /*IXGBE_IPSEC_H_*/
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 64bff25..76be27a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -395,7 +395,8 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 static inline void
 ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 		volatile struct ixgbe_adv_tx_context_desc *ctx_txd,
-		uint64_t ol_flags, union ixgbe_tx_offload tx_offload)
+		uint64_t ol_flags, union ixgbe_tx_offload tx_offload,
+		__rte_unused struct rte_mbuf *mb)
 {
 	uint32_t type_tucmd_mlhl;
 	uint32_t mss_l4len_idx = 0;
@@ -479,6 +480,20 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 		seqnum_seed |= tx_offload.l2_len
 			       << IXGBE_ADVTXD_TUNNEL_LEN;
 	}
+#ifdef RTE_LIBRTE_IXGBE_IPSEC
+	if (mb->ol_flags & PKT_TX_SECURITY_OFFLOAD) {
+		struct ixgbe_crypto_tx_desc_metadata mdata = {
+			.data = ixgbe_crypto_get_txdesc_flags(txq->port_id, mb),
+		};
+		seqnum_seed |=
+			(IXGBE_ADVTXD_IPSEC_SA_INDEX_MASK & mdata.sa_idx);
+		type_tucmd_mlhl |= mdata.enc ?
+			(IXGBE_ADVTXD_TUCMD_IPSEC_TYPE_ESP |
+				IXGBE_ADVTXD_TUCMD_IPSEC_ENCRYPT_EN) : 0;
+		type_tucmd_mlhl |=
+			(mdata.pad_len & IXGBE_ADVTXD_IPSEC_ESP_LEN_MASK);
+	}
+#endif /* RTE_LIBRTE_IXGBE_IPSEC */
 
 	txq->ctx_cache[ctx_idx].flags = ol_flags;
 	txq->ctx_cache[ctx_idx].tx_offload.data[0]  =
@@ -855,7 +870,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 				}
 
 				ixgbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
-					tx_offload);
+					tx_offload, tx_pkt);
 
 				txe->last_id = tx_last;
 				tx_id = txe->next_id;
@@ -872,7 +887,13 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			olinfo_status |= ctx << IXGBE_ADVTXD_IDX_SHIFT;
 		}
 
+#ifdef RTE_LIBRTE_IXGBE_IPSEC
+		olinfo_status |= ((pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT) |
+				(((ol_flags & PKT_TX_SECURITY_OFFLOAD) != 0)
+					* IXGBE_ADVTXD_POPTS_IPSEC));
+#else /* RTE_LIBRTE_IXGBE_IPSEC */
 		olinfo_status |= (pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
+#endif /* RTE_LIBRTE_IXGBE_IPSEC */
 
 		m_seg = tx_pkt;
 		do {
@@ -1447,6 +1468,14 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status)
 		pkt_flags |= PKT_RX_EIP_CKSUM_BAD;
 	}
 
+#ifdef RTE_LIBRTE_IXGBE_IPSEC
+	if (rx_status & IXGBE_RXD_STAT_SECP) {
+		pkt_flags |= PKT_RX_SECURITY_OFFLOAD;
+		if (rx_status & IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG)
+			pkt_flags |= PKT_RX_SECURITY_OFFLOAD_FAILED;
+	}
+#endif /* RTE_LIBRTE_IXGBE_IPSEC */
+
 	return pkt_flags;
 }
 
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index e704a7f..8673a01 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -128,6 +128,9 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
 {
 	__m128i ptype0, ptype1, vtag0, vtag1, csum;
 	__m128i rearm0, rearm1, rearm2, rearm3;
+#ifdef RTE_LIBRTE_IXGBE_IPSEC
+	__m128i sterr0, sterr1, sterr2, sterr3, tmp1, tmp2;
+#endif /* RTE_LIBRTE_IXGBE_IPSEC */
 
 	/* mask everything except rss type */
 	const __m128i rsstype_msk = _mm_set_epi16(
@@ -174,6 +177,19 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
 		0, PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
 		PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t));
 
+#ifdef RTE_LIBRTE_IXGBE_IPSEC
+	const __m128i ipsec_sterr_msk = _mm_set_epi32(
+		0, IXGBE_RXD_STAT_SECP | IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG,
+		0, 0);
+	const __m128i ipsec_proc_msk  = _mm_set_epi32(
+		0, IXGBE_RXD_STAT_SECP, 0, 0);
+	const __m128i ipsec_err_flag  = _mm_set_epi32(
+		0, PKT_RX_SECURITY_OFFLOAD_FAILED | PKT_RX_SECURITY_OFFLOAD,
+		0, 0);
+	const __m128i ipsec_proc_flag = _mm_set_epi32(
+		0, PKT_RX_SECURITY_OFFLOAD, 0, 0);
+#endif /* RTE_LIBRTE_IXGBE_IPSEC */
+
 	ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
 	ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
 	vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
@@ -221,6 +237,34 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
 	rearm2 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vtag1, 4), 0x10);
 	rearm3 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vtag1, 2), 0x10);
 
+#ifdef RTE_LIBRTE_IXGBE_IPSEC
+	/*inline ipsec, extract the flags from the descriptors*/
+	sterr0 = _mm_and_si128(descs[0], ipsec_sterr_msk);
+	sterr1 = _mm_and_si128(descs[1], ipsec_sterr_msk);
+	sterr2 = _mm_and_si128(descs[2], ipsec_sterr_msk);
+	sterr3 = _mm_and_si128(descs[3], ipsec_sterr_msk);
+	tmp1 = _mm_cmpeq_epi32(sterr0, ipsec_sterr_msk);
+	tmp2 = _mm_cmpeq_epi32(sterr0, ipsec_proc_msk);
+	sterr0 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
+				_mm_and_si128(tmp2, ipsec_proc_flag));
+	tmp1 = _mm_cmpeq_epi32(sterr1, ipsec_sterr_msk);
+	tmp2 = _mm_cmpeq_epi32(sterr1, ipsec_proc_msk);
+	sterr1 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
+				_mm_and_si128(tmp2, ipsec_proc_flag));
+	tmp1 = _mm_cmpeq_epi32(sterr2, ipsec_sterr_msk);
+	tmp2 = _mm_cmpeq_epi32(sterr2, ipsec_proc_msk);
+	sterr2 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
+				_mm_and_si128(tmp2, ipsec_proc_flag));
+	tmp1 = _mm_cmpeq_epi32(sterr3, ipsec_sterr_msk);
+	tmp2 = _mm_cmpeq_epi32(sterr3, ipsec_proc_msk);
+	sterr3 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
+				_mm_and_si128(tmp2, ipsec_proc_flag));
+	rearm0 = _mm_or_si128(rearm0, sterr0);
+	rearm1 = _mm_or_si128(rearm1, sterr1);
+	rearm2 = _mm_or_si128(rearm2, sterr2);
+	rearm3 = _mm_or_si128(rearm3, sterr3);
+#endif /* RTE_LIBRTE_IXGBE_IPSEC */
+
 	/* write the rearm data and the olflags in one write */
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
 			offsetof(struct rte_mbuf, rearm_data) + 8);
-- 
2.7.5

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [RFC PATCH 5/5] examples/ipsec-secgw: enabled inline ipsec
  2017-08-25 14:57 [RFC PATCH 0/5] Enable IPSec Inline for IXGBE PMD Radu Nicolau
                   ` (3 preceding siblings ...)
  2017-08-25 14:57 ` [RFC PATCH 4/5] ixgbe: enable inline ipsec Radu Nicolau
@ 2017-08-25 14:57 ` Radu Nicolau
  2017-08-29 12:04   ` Akhil Goyal
  2017-08-29 13:00 ` [RFC PATCH 0/5] Enable IPSec Inline for IXGBE PMD Boris Pismenny
  5 siblings, 1 reply; 13+ messages in thread
From: Radu Nicolau @ 2017-08-25 14:57 UTC (permalink / raw)
  To: dev; +Cc: Radu Nicolau

Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
---
 examples/ipsec-secgw/esp.c   | 26 ++++++++++++++++--
 examples/ipsec-secgw/ipsec.c | 61 +++++++++++++++++++++++++++++++++++------
 examples/ipsec-secgw/ipsec.h |  2 ++
 examples/ipsec-secgw/sa.c    | 65 +++++++++++++++++++++++++++++++-------------
 4 files changed, 123 insertions(+), 31 deletions(-)

diff --git a/examples/ipsec-secgw/esp.c b/examples/ipsec-secgw/esp.c
index 70bb81f..77ab232 100644
--- a/examples/ipsec-secgw/esp.c
+++ b/examples/ipsec-secgw/esp.c
@@ -58,6 +58,9 @@ esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
 	struct rte_crypto_sym_op *sym_cop;
 	int32_t payload_len, ip_hdr_len;
 
+	if (sa->type == RTE_SECURITY_SESS_ETH_INLINE_CRYPTO)
+		return 0;
+
 	RTE_ASSERT(m != NULL);
 	RTE_ASSERT(sa != NULL);
 	RTE_ASSERT(cop != NULL);
@@ -175,6 +178,16 @@ esp_inbound_post(struct rte_mbuf *m, struct ipsec_sa *sa,
 	RTE_ASSERT(sa != NULL);
 	RTE_ASSERT(cop != NULL);
 
+
+	if (sa->type == RTE_SECURITY_SESS_ETH_INLINE_CRYPTO) {
+		if (m->ol_flags & PKT_RX_SECURITY_OFFLOAD
+				&& m->ol_flags & PKT_RX_SECURITY_OFFLOAD_FAILED)
+			cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+		else
+			cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+	}
+
+
 	if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
 		RTE_LOG(ERR, IPSEC_ESP, "failed crypto op\n");
 		return -1;
@@ -321,6 +334,9 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
 	esp->spi = rte_cpu_to_be_32(sa->spi);
 	esp->seq = rte_cpu_to_be_32((uint32_t)sa->seq);
 
+	if (sa->type == RTE_SECURITY_SESS_ETH_INLINE_CRYPTO)
+		return 0;
+
 	uint64_t *iv = (uint64_t *)(esp + 1);
 
 	sym_cop = get_sym_cop(cop);
@@ -419,9 +435,13 @@ esp_outbound_post(struct rte_mbuf *m __rte_unused,
 	RTE_ASSERT(sa != NULL);
 	RTE_ASSERT(cop != NULL);
 
-	if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
-		RTE_LOG(ERR, IPSEC_ESP, "Failed crypto op\n");
-		return -1;
+	if (sa->type == RTE_SECURITY_SESS_ETH_INLINE_CRYPTO) {
+		m->ol_flags |= PKT_TX_SECURITY_OFFLOAD;
+	} else {
+		if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
+			RTE_LOG(ERR, IPSEC_ESP, "Failed crypto op\n");
+			return -1;
+		}
 	}
 
 	return 0;
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index c8fde1c..b14b23d 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -58,13 +58,17 @@ create_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa)
 	key.cipher_algo = (uint8_t)sa->cipher_algo;
 	key.auth_algo = (uint8_t)sa->auth_algo;
 
-	ret = rte_hash_lookup_data(ipsec_ctx->cdev_map, &key,
-			(void **)&cdev_id_qp);
-	if (ret < 0) {
-		RTE_LOG(ERR, IPSEC, "No cryptodev: core %u, cipher_algo %u, "
-				"auth_algo %u\n", key.lcore_id, key.cipher_algo,
-				key.auth_algo);
-		return -1;
+	if (sa->type == RTE_SECURITY_SESS_NONE) {
+		ret = rte_hash_lookup_data(ipsec_ctx->cdev_map, &key,
+				(void **)&cdev_id_qp);
+		if (ret < 0) {
+			RTE_LOG(ERR, IPSEC, "No cryptodev: core %u, "
+					"cipher_algo %u, "
+					"auth_algo %u\n",
+					key.lcore_id, key.cipher_algo,
+					key.auth_algo);
+			return -1;
+		}
 	}
 
 	RTE_LOG_DP(DEBUG, IPSEC, "Create session for SA spi %u on cryptodev "
@@ -79,7 +83,8 @@ create_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa)
 				sa->crypto_session, sa->xforms,
 				ipsec_ctx->session_pool);
 
-		rte_cryptodev_info_get(ipsec_ctx->tbl[cdev_id_qp].id, &cdev_info);
+		rte_cryptodev_info_get(ipsec_ctx->tbl[cdev_id_qp].id,
+				&cdev_info);
 		if (cdev_info.sym.max_nb_sessions_per_qp > 0) {
 			ret = rte_cryptodev_queue_pair_attach_sym_session(
 					ipsec_ctx->tbl[cdev_id_qp].id,
@@ -146,6 +151,7 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
 	struct ipsec_mbuf_metadata *priv;
 	struct rte_crypto_sym_op *sym_cop;
 	struct ipsec_sa *sa;
+	struct cdev_qp *cqp;
 
 	for (i = 0; i < nb_pkts; i++) {
 		if (unlikely(sas[i] == NULL)) {
@@ -202,8 +208,31 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
 			}
 			break;
 		case RTE_SECURITY_SESS_ETH_PROTO_OFFLOAD:
-		case RTE_SECURITY_SESS_ETH_INLINE_CRYPTO:
 			break;
+		case RTE_SECURITY_SESS_ETH_INLINE_CRYPTO:
+			priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+			priv->cop.status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+
+			rte_prefetch0(&priv->sym_cop);
+
+			if ((unlikely(sa->sec_session == NULL)) &&
+					create_session(ipsec_ctx, sa)) {
+				rte_pktmbuf_free(pkts[i]);
+				continue;
+			}
+
+			rte_security_attach_session(&priv->cop,
+					sa->sec_session);
+
+			ret = xform_func(pkts[i], sa, &priv->cop);
+			if (unlikely(ret)) {
+				rte_pktmbuf_free(pkts[i]);
+				continue;
+			}
+
+			cqp = &ipsec_ctx->tbl[sa->cdev_id_qp];
+			cqp->ol_pkts[cqp->ol_pkts_cnt++] = pkts[i];
+			continue;
 		}
 
 		RTE_ASSERT(sa->cdev_id_qp < ipsec_ctx->nb_qps);
@@ -228,6 +257,20 @@ ipsec_dequeue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
 		if (ipsec_ctx->last_qp == ipsec_ctx->nb_qps)
 			ipsec_ctx->last_qp %= ipsec_ctx->nb_qps;
 
+
+		while (cqp->ol_pkts_cnt > 0 && nb_pkts < max_pkts) {
+			pkt = cqp->ol_pkts[--cqp->ol_pkts_cnt];
+			rte_prefetch0(pkt);
+			priv = get_priv(pkt);
+			sa = priv->sa;
+			ret = xform_func(pkt, sa, &priv->cop);
+			if (unlikely(ret)) {
+				rte_pktmbuf_free(pkt);
+				continue;
+			}
+			pkts[nb_pkts++] = pkt;
+		}
+
 		if (cqp->in_flight == 0)
 			continue;
 
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index 6291d86..685304b 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -142,6 +142,8 @@ struct cdev_qp {
 	uint16_t in_flight;
 	uint16_t len;
 	struct rte_crypto_op *buf[MAX_PKT_BURST] __rte_aligned(sizeof(void *));
+	struct rte_mbuf *ol_pkts[MAX_PKT_BURST] __rte_aligned(sizeof(void *));
+	uint16_t ol_pkts_cnt;
 };
 
 struct ipsec_ctx {
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 851262b..11b31d0 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -613,11 +613,13 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
 		if (status->status < 0)
 			return;
 	} else {
-		APP_CHECK(cipher_algo_p == 1, status, "missing cipher or AEAD options");
+		APP_CHECK(cipher_algo_p == 1, status,
+			  "missing cipher or AEAD options");
 		if (status->status < 0)
 			return;
 
-		APP_CHECK(auth_algo_p == 1, status, "missing auth or AEAD options");
+		APP_CHECK(auth_algo_p == 1, status,
+			"missing auth or AEAD options");
 		if (status->status < 0)
 			return;
 	}
@@ -763,14 +765,31 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 			sa->dst.ip.ip4 = rte_cpu_to_be_32(sa->dst.ip.ip4);
 		}
 
-		if (sa->type == RTE_SECURITY_SESS_CRYPTO_PROTO_OFFLOAD) {
-			sa_ctx->xf[idx].c.cipher_alg = sa->cipher_algo;
-			sa_ctx->xf[idx].c.auth_alg = sa->auth_algo;
-			sa_ctx->xf[idx].c.cipher_key.data = sa->cipher_key;
-			sa_ctx->xf[idx].c.auth_key.data = sa->auth_key;
-			sa_ctx->xf[idx].c.cipher_key.length =
+		if (sa->type == RTE_SECURITY_SESS_CRYPTO_PROTO_OFFLOAD ||
+			sa->type == RTE_SECURITY_SESS_ETH_INLINE_CRYPTO) {
+
+			if (sa->aead_algo == RTE_CRYPTO_AEAD_AES_GCM) {
+				sa_ctx->xf[idx].c.aead_alg =
+						sa->aead_algo;
+				sa_ctx->xf[idx].c.aead_key.data =
+						sa->cipher_key;
+				sa_ctx->xf[idx].c.aead_key.length =
+						sa->cipher_key_len;
+
+			} else {
+				sa_ctx->xf[idx].c.cipher_alg = sa->cipher_algo;
+				sa_ctx->xf[idx].c.auth_alg = sa->auth_algo;
+				sa_ctx->xf[idx].c.cipher_key.data =
+						sa->cipher_key;
+				sa_ctx->xf[idx].c.auth_key.data =
+						sa->auth_key;
+				sa_ctx->xf[idx].c.cipher_key.length =
 						sa->cipher_key_len;
-			sa_ctx->xf[idx].c.auth_key.length = sa->auth_key_len;
+				sa_ctx->xf[idx].c.auth_key.length =
+						sa->auth_key_len;
+				sa_ctx->xf[idx].c.salt = sa->salt;
+			}
+
 			sa_ctx->xf[idx].c.op = (inbound == 1)?
 						RTE_SECURITY_IPSEC_OP_DECAP :
 						RTE_SECURITY_IPSEC_OP_ENCAP;
@@ -835,9 +854,11 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 			}
 
 			if (inbound) {
-				sa_ctx->xf[idx].b.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+				sa_ctx->xf[idx].b.type =
+						RTE_CRYPTO_SYM_XFORM_CIPHER;
 				sa_ctx->xf[idx].b.cipher.algo = sa->cipher_algo;
-				sa_ctx->xf[idx].b.cipher.key.data = sa->cipher_key;
+				sa_ctx->xf[idx].b.cipher.key.data =
+						sa->cipher_key;
 				sa_ctx->xf[idx].b.cipher.key.length =
 					sa->cipher_key_len;
 				sa_ctx->xf[idx].b.cipher.op =
@@ -846,7 +867,8 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 				sa_ctx->xf[idx].b.cipher.iv.offset = IV_OFFSET;
 				sa_ctx->xf[idx].b.cipher.iv.length = iv_length;
 
-				sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+				sa_ctx->xf[idx].a.type =
+						RTE_CRYPTO_SYM_XFORM_AUTH;
 				sa_ctx->xf[idx].a.auth.algo = sa->auth_algo;
 				sa_ctx->xf[idx].a.auth.key.data = sa->auth_key;
 				sa_ctx->xf[idx].a.auth.key.length =
@@ -856,9 +878,11 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 				sa_ctx->xf[idx].a.auth.op =
 					RTE_CRYPTO_AUTH_OP_VERIFY;
 			} else { /* outbound */
-				sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+				sa_ctx->xf[idx].a.type =
+					RTE_CRYPTO_SYM_XFORM_CIPHER;
 				sa_ctx->xf[idx].a.cipher.algo = sa->cipher_algo;
-				sa_ctx->xf[idx].a.cipher.key.data = sa->cipher_key;
+				sa_ctx->xf[idx].a.cipher.key.data =
+					sa->cipher_key;
 				sa_ctx->xf[idx].a.cipher.key.length =
 					sa->cipher_key_len;
 				sa_ctx->xf[idx].a.cipher.op =
@@ -867,9 +891,12 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 				sa_ctx->xf[idx].a.cipher.iv.offset = IV_OFFSET;
 				sa_ctx->xf[idx].a.cipher.iv.length = iv_length;
 
-				sa_ctx->xf[idx].b.type = RTE_CRYPTO_SYM_XFORM_AUTH;
-				sa_ctx->xf[idx].b.auth.algo = sa->auth_algo;
-				sa_ctx->xf[idx].b.auth.key.data = sa->auth_key;
+				sa_ctx->xf[idx].b.type =
+					RTE_CRYPTO_SYM_XFORM_AUTH;
+				sa_ctx->xf[idx].b.auth.algo =
+					sa->auth_algo;
+				sa_ctx->xf[idx].b.auth.key.data =
+						sa->auth_key;
 				sa_ctx->xf[idx].b.auth.key.length =
 					sa->auth_key_len;
 				sa_ctx->xf[idx].b.auth.digest_length =
@@ -991,8 +1018,8 @@ single_inbound_lookup(struct ipsec_sa *sadb, struct rte_mbuf *pkt,
 	case IP6_TUNNEL:
 		src6_addr = RTE_PTR_ADD(ip, offsetof(struct ip6_hdr, ip6_src));
 		if ((ip->ip_v == IP6_VERSION) &&
-				!memcmp(&sa->src.ip.ip6.ip6, src6_addr, 16) &&
-				!memcmp(&sa->dst.ip.ip6.ip6, src6_addr + 16, 16))
+			!memcmp(&sa->src.ip.ip6.ip6, src6_addr, 16) &&
+			!memcmp(&sa->dst.ip.ip6.ip6, src6_addr + 16, 16))
 			*sa_ret = sa;
 		break;
 	case TRANSPORT:
-- 
2.7.5

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [RFC PATCH 4/5] ixgbe: enable inline ipsec
  2017-08-25 14:57 ` [RFC PATCH 4/5] ixgbe: enable inline ipsec Radu Nicolau
@ 2017-08-28 17:47   ` Ananyev, Konstantin
  2017-08-29 13:06     ` Radu Nicolau
  0 siblings, 1 reply; 13+ messages in thread
From: Ananyev, Konstantin @ 2017-08-28 17:47 UTC (permalink / raw)
  To: Nicolau, Radu, dev; +Cc: Nicolau, Radu


Hi Radu,
Few questions comments from me below.
Thanks
Konstantin

> 
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> ---
>  config/common_base                     |   1 +
>  drivers/net/ixgbe/Makefile             |   4 +-
>  drivers/net/ixgbe/ixgbe_ethdev.c       |   3 +
>  drivers/net/ixgbe/ixgbe_ethdev.h       |  10 +-
>  drivers/net/ixgbe/ixgbe_ipsec.c        | 617 +++++++++++++++++++++++++++++++++
>  drivers/net/ixgbe/ixgbe_ipsec.h        | 142 ++++++++
>  drivers/net/ixgbe/ixgbe_rxtx.c         |  33 +-
>  drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c |  44 +++
>  8 files changed, 850 insertions(+), 4 deletions(-)
>  create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.c
>  create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.h
> 
> diff --git a/config/common_base b/config/common_base
> index 5e97a08..2084609 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -179,6 +179,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
>  CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
>  CONFIG_RTE_IXGBE_INC_VECTOR=y
>  CONFIG_RTE_LIBRTE_IXGBE_BYPASS=n
> +CONFIG_RTE_LIBRTE_IXGBE_IPSEC=y
> 
>  #
>  # Compile burst-oriented I40E PMD driver
> diff --git a/drivers/net/ixgbe/Makefile b/drivers/net/ixgbe/Makefile
> index 5e57cb3..1180900 100644
> --- a/drivers/net/ixgbe/Makefile
> +++ b/drivers/net/ixgbe/Makefile
> @@ -118,11 +118,13 @@ SRCS-$(CONFIG_RTE_IXGBE_INC_VECTOR) += ixgbe_rxtx_vec_neon.c
>  else
>  SRCS-$(CONFIG_RTE_IXGBE_INC_VECTOR) += ixgbe_rxtx_vec_sse.c
>  endif
> -
>  ifeq ($(CONFIG_RTE_LIBRTE_IXGBE_BYPASS),y)
>  SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_bypass.c
>  SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_82599_bypass.c
>  endif
> +ifeq ($(CONFIG_RTE_LIBRTE_IXGBE_IPSEC),y)
> +SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_ipsec.c
> +endif
>  SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += rte_pmd_ixgbe.c
>  SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_tm.c
> 
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index 22171d8..73de5e6 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -1135,6 +1135,9 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
>  	PMD_INIT_FUNC_TRACE();
> 
>  	eth_dev->dev_ops = &ixgbe_eth_dev_ops;
> +#ifdef RTE_LIBRTE_IXGBE_IPSEC
> +	eth_dev->sec_ops = &ixgbe_security_ops;
> +#endif /* RTE_LIBRTE_IXGBE_IPSEC */

Do we  really need a new config macro here?
Can't we make enable/disable IPsec for ixgbe configurable at device startup stage
(as we doing for other RX/TX offloads)?

>  	eth_dev->rx_pkt_burst = &ixgbe_recv_pkts;
>  	eth_dev->tx_pkt_burst = &ixgbe_xmit_pkts;
>  	eth_dev->tx_pkt_prepare = &ixgbe_prep_pkts;
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
> index caa50c8..d1a84e2 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.h
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.h
> @@ -38,6 +38,9 @@
>  #include "base/ixgbe_dcb_82599.h"
>  #include "base/ixgbe_dcb_82598.h"
>  #include "ixgbe_bypass.h"
> +#ifdef RTE_LIBRTE_IXGBE_IPSEC
> +#include "ixgbe_ipsec.h"
> +#endif /* RTE_LIBRTE_IXGBE_IPSEC */
>  #include <rte_time.h>
>  #include <rte_hash.h>
>  #include <rte_pci.h>
> @@ -529,7 +532,9 @@ struct ixgbe_adapter {
>  	struct ixgbe_filter_info    filter;
>  	struct ixgbe_l2_tn_info     l2_tn;
>  	struct ixgbe_bw_conf        bw_conf;
> -
> +#ifdef RTE_LIBRTE_IXGBE_IPSEC
> +	struct ixgbe_ipsec          ipsec;
> +#endif /* RTE_LIBRTE_IXGBE_IPSEC */
>  	bool rx_bulk_alloc_allowed;
>  	bool rx_vec_allowed;
>  	struct rte_timecounter      systime_tc;
> @@ -586,6 +591,9 @@ struct ixgbe_adapter {
>  #define IXGBE_DEV_PRIVATE_TO_TM_CONF(adapter) \
>  	(&((struct ixgbe_adapter *)adapter)->tm_conf)
> 
> +#define IXGBE_DEV_PRIVATE_TO_IPSEC(adapter)\
> +	(&((struct ixgbe_adapter *)adapter)->ipsec)
> +
>  /*
>   * RX/TX function prototypes
>   */
> diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
> new file mode 100644
> index 0000000..d866cd8
> --- /dev/null
> +++ b/drivers/net/ixgbe/ixgbe_ipsec.c
> @@ -0,0 +1,617 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *	 * Redistributions of source code must retain the above copyright
> + *	 notice, this list of conditions and the following disclaimer.
> + *	 * Redistributions in binary form must reproduce the above copyright
> + *	 notice, this list of conditions and the following disclaimer in
> + *	 the documentation and/or other materials provided with the
> + *	 distribution.
> + *	 * Neither the name of Intel Corporation nor the names of its
> + *	 contributors may be used to endorse or promote products derived
> + *	 from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <rte_ethdev.h>
> +#include <rte_ethdev_pci.h>
> +#include <rte_security.h>
> +#include <rte_ip.h>
> +#include <rte_jhash.h>
> +#include <rte_cryptodev_pmd.h>
> +
> +#include "base/ixgbe_type.h"
> +#include "base/ixgbe_api.h"
> +#include "ixgbe_ethdev.h"
> +#include "ixgbe_ipsec.h"
> +
> +
> +#define IXGBE_WAIT_RW(__reg, __rw)			\
> +{							\
> +	IXGBE_WRITE_REG(hw, (__reg), reg);		\
> +	while ((IXGBE_READ_REG(hw, (__reg))) & (__rw))	\
> +	;						\
> +}
> +#define IXGBE_WAIT_RREAD  IXGBE_WAIT_RW(IXGBE_IPSRXIDX, IPSRXIDX_READ)
> +#define IXGBE_WAIT_RWRITE IXGBE_WAIT_RW(IXGBE_IPSRXIDX, IPSRXIDX_WRITE)
> +#define IXGBE_WAIT_TREAD  IXGBE_WAIT_RW(IXGBE_IPSTXIDX, IPSRXIDX_READ)
> +#define IXGBE_WAIT_TWRITE IXGBE_WAIT_RW(IXGBE_IPSTXIDX, IPSRXIDX_WRITE)
> +
> +#define CMP_IP(a, b)	\
> +		((a).ipv6[0] == (b).ipv6[0] && (a).ipv6[1] == (b).ipv6[1] && \
> +		(a).ipv6[2] == (b).ipv6[2] && (a).ipv6[3] == (b).ipv6[3])
> +
> +
> +static void
> +ixgbe_crypto_clear_ipsec_tables(struct rte_eth_dev *dev)
> +{
> +	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +	int i = 0;
> +
> +	/* clear Rx IP table*/
> +	for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
> +		uint16_t index = i << 3;
> +		uint32_t reg = IPSRXIDX_WRITE | IPSRXIDX_TABLE_IP | index;
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3), 0);
> +		IXGBE_WAIT_RWRITE;
> +	}
> +
> +	/* clear Rx SPI and Rx/Tx SA tables*/
> +	for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
> +		uint32_t index = i << 3;
> +		uint32_t reg = IPSRXIDX_WRITE | IPSRXIDX_TABLE_SPI | index;
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI, 0);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX, 0);
> +		IXGBE_WAIT_RWRITE;
> +		reg = IPSRXIDX_WRITE | IPSRXIDX_TABLE_KEY | index;
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0), 0);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1), 0);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2), 0);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3), 0);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT, 0);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD, 0);
> +		IXGBE_WAIT_RWRITE;
> +		reg = IPSRXIDX_WRITE | index;
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0), 0);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1), 0);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2), 0);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3), 0);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT, 0);
> +		IXGBE_WAIT_TWRITE;
> +	}
> +}
> +
> +static int
> +ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
> +{
> +	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +	struct rte_eth_link link;
> +	uint32_t reg;
> +
> +	/* Halt the data paths */
> +	reg = IXGBE_SECTXCTRL_TX_DIS;
> +	IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL, reg);
> +	reg = IXGBE_SECRXCTRL_RX_DIS;
> +	IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, reg);
> +
> +	/* Wait for Tx path to empty */
> +	do {
> +		rte_eth_link_get_nowait(dev->data->port_id, &link);
> +		if (link.link_status != ETH_LINK_UP) {
> +			/* Fix for HSD:4426139
> +			 * If the Tx FIFO has data but no link,
> +			 * we can't clear the Tx Sec block. So set MAC
> +			 * loopback before block clear
> +			 */
> +			reg = IXGBE_READ_REG(hw, IXGBE_MACC);
> +			reg |= IXGBE_MACC_FLU;
> +			IXGBE_WRITE_REG(hw, IXGBE_MACC, reg);
> +
> +			reg = IXGBE_READ_REG(hw, IXGBE_HLREG0);
> +			reg |= IXGBE_HLREG0_LPBK;
> +			IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
> +			struct timespec time;
> +			time.tv_sec = 0;
> +			time.tv_nsec = 1000000 * 3;
> +			nanosleep(&time, NULL);
> +		}
> +
> +		reg = IXGBE_READ_REG(hw, IXGBE_SECTXSTAT);
> +
> +		rte_eth_link_get_nowait(dev->data->port_id, &link);
> +		if (link.link_status != ETH_LINK_UP) {
> +			reg = IXGBE_READ_REG(hw, IXGBE_MACC);
> +			reg &= ~(IXGBE_MACC_FLU);
> +			IXGBE_WRITE_REG(hw, IXGBE_MACC, reg);
> +
> +			reg = IXGBE_READ_REG(hw, IXGBE_HLREG0);
> +			reg &= ~(IXGBE_HLREG0_LPBK);
> +			IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
> +		}
> +	} while (!(reg & IXGBE_SECTXSTAT_SECTX_RDY));
> +
> +	/* Wait for Rx path to empty*/
> +	do {
> +		reg = IXGBE_READ_REG(hw, IXGBE_SECRXSTAT);
> +	} while (!(reg & IXGBE_SECRXSTAT_SECRX_RDY));
> +
> +	/* Set IXGBE_SECTXBUFFAF to 0x15 as required in the datasheet*/
> +	IXGBE_WRITE_REG(hw, IXGBE_SECTXBUFFAF, 0x15);
> +
> +	/* IFG needs to be set to 3 when we are using security. Otherwise a Tx
> +	 * hang will occur with heavy traffic.
> +	 */
> +	reg = IXGBE_READ_REG(hw, IXGBE_SECTXMINIFG);
> +	reg = (reg & 0xFFFFFFF0) | 0x3;
> +	IXGBE_WRITE_REG(hw, IXGBE_SECTXMINIFG, reg);
> +
> +	reg = IXGBE_READ_REG(hw, IXGBE_HLREG0);
> +	reg |= IXGBE_HLREG0_TXCRCEN | IXGBE_HLREG0_RXCRCSTRP;
> +	IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
> +
> +	/* Enable the Tx crypto engine and restart the Tx data path;
> +	 * set the STORE_FORWARD bit for IPSec.
> +	 */
> +	IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL, IXGBE_SECTXCTRL_STORE_FORWARD);
> +
> +	/* Enable the Rx crypto engine and restart the Rx data path*/
> +	IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, 0);
> +
> +	/* Test if crypto was enabled */
> +	reg = IXGBE_READ_REG(hw, IXGBE_SECTXCTRL);
> +	if (reg != IXGBE_SECTXCTRL_STORE_FORWARD) {
> +		PMD_DRV_LOG(ERR, "Error enabling Tx Crypto");
> +		return -1;
> +	}
> +	reg = IXGBE_READ_REG(hw, IXGBE_SECRXCTRL);
> +	if (reg != 0) {
> +		PMD_DRV_LOG(ERR, "Error enabling Rx Crypto");
> +		return -1;
> +	}
> +
> +	ixgbe_crypto_clear_ipsec_tables(dev);
> +
> +	/* create hash table*/
> +	{
> +		struct ixgbe_ipsec *internals = IXGBE_DEV_PRIVATE_TO_IPSEC(
> +				dev->data->dev_private);
> +		struct rte_hash_parameters params = { 0 };
> +		params.entries = IPSEC_MAX_SA_COUNT;
> +		params.key_len = sizeof(uint32_t);
> +		params.hash_func = rte_jhash;
> +		params.hash_func_init_val = 0;
> +		params.socket_id = rte_socket_id();
> +		params.name = "tx_spi_sai_hash";
> +		internals->tx_spi_sai_hash = rte_hash_create(&params);
> +	}
> +
> +	return 0;
> +}
> +
> +
> +static int
> +ixgbe_crypto_add_sa(struct ixgbe_crypto_session *sess)
> +{
> +	struct rte_eth_dev *dev = sess->dev;
> +	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +	struct ixgbe_ipsec *priv = IXGBE_DEV_PRIVATE_TO_IPSEC(
> +			dev->data->dev_private);
> +	uint32_t reg;
> +	int sa_index = -1;
> +
> +	if (!(priv->flags & IS_INITIALIZED)) {
> +		if (ixgbe_crypto_enable_ipsec(dev) == 0)
> +			priv->flags |= IS_INITIALIZED;
> +	}
> +
> +	if (sess->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
> +		int i, ip_index = -1;
> +
> +		/* Find a match in the IP table*/
> +		for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
> +			if (CMP_IP(priv->rx_ip_table[i].ip,
> +				 sess->dst_ip)) {
> +				ip_index = i;
> +				break;
> +			}
> +		}
> +		/* If no match, find a free entry in the IP table*/
> +		if (ip_index < 0) {
> +			for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
> +				if (priv->rx_ip_table[i].ref_count == 0) {
> +					ip_index = i;
> +					break;
> +				}
> +			}
> +		}
> +
> +		/* Fail if no match and no free entries*/
> +		if (ip_index < 0) {
> +			PMD_DRV_LOG(ERR, "No free entry left "
> +					"in the Rx IP table\n");
> +			return -1;
> +		}
> +
> +		/* Find a free entry in the SA table*/
> +		for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
> +			if (priv->rx_sa_table[i].used == 0) {
> +				sa_index = i;
> +				break;
> +			}
> +		}
> +		/* Fail if no free entries*/
> +		if (sa_index < 0) {
> +			PMD_DRV_LOG(ERR, "No free entry left in "
> +					"the Rx SA table\n");
> +			return -1;
> +		}
> +
> +		priv->rx_ip_table[ip_index].ip.ipv6[0] =
> +				rte_cpu_to_be_32(sess->dst_ip.ipv6[0]);
> +		priv->rx_ip_table[ip_index].ip.ipv6[1] =
> +				rte_cpu_to_be_32(sess->dst_ip.ipv6[1]);
> +		priv->rx_ip_table[ip_index].ip.ipv6[2] =
> +				rte_cpu_to_be_32(sess->dst_ip.ipv6[2]);
> +		priv->rx_ip_table[ip_index].ip.ipv6[3] =
> +				rte_cpu_to_be_32(sess->dst_ip.ipv6[3]);
> +		priv->rx_ip_table[ip_index].ref_count++;
> +
> +		priv->rx_sa_table[sa_index].spi =
> +				rte_cpu_to_be_32(sess->spi);
> +		priv->rx_sa_table[sa_index].ip_index = ip_index;
> +		priv->rx_sa_table[sa_index].key[3] =
> +				rte_cpu_to_be_32(*(uint32_t *)&sess->key[0]);
> +		priv->rx_sa_table[sa_index].key[2] =
> +				rte_cpu_to_be_32(*(uint32_t *)&sess->key[4]);
> +		priv->rx_sa_table[sa_index].key[1] =
> +				rte_cpu_to_be_32(*(uint32_t *)&sess->key[8]);
> +		priv->rx_sa_table[sa_index].key[0] =
> +				rte_cpu_to_be_32(*(uint32_t *)&sess->key[12]);
> +		priv->rx_sa_table[sa_index].salt =
> +				rte_cpu_to_be_32(sess->salt);
> +		priv->rx_sa_table[sa_index].mode = IPSRXMOD_VALID;
> +		if (sess->op == IXGBE_OP_AUTHENTICATED_DECRYPTION)
> +			priv->rx_sa_table[sa_index].mode |=
> +					(IPSRXMOD_PROTO | IPSRXMOD_DECRYPT);
> +		if (sess->dst_ip.type == IPv6)
> +			priv->rx_sa_table[sa_index].mode |= IPSRXMOD_IPV6;
> +		priv->rx_sa_table[sa_index].used = 1;
> +
> +		/* write IP table entry*/
> +		reg = IPSRXIDX_RX_EN | IPSRXIDX_WRITE
> +				| IPSRXIDX_TABLE_IP | (ip_index << 3);
> +		if (priv->rx_ip_table[ip_index].ip.type == IPv4) {
> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3),
> +					priv->rx_ip_table[ip_index].ip.ipv4);
> +		} else {
> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0),
> +					priv->rx_ip_table[ip_index].ip.ipv6[0]);
> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1),
> +					priv->rx_ip_table[ip_index].ip.ipv6[1]);
> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2),
> +					priv->rx_ip_table[ip_index].ip.ipv6[2]);
> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3),
> +					priv->rx_ip_table[ip_index].ip.ipv6[3]);
> +		}
> +		IXGBE_WAIT_RWRITE;
> +
> +		/* write SPI table entry*/
> +		reg = IPSRXIDX_RX_EN | IPSRXIDX_WRITE
> +				| IPSRXIDX_TABLE_SPI | (sa_index << 3);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI,
> +				priv->rx_sa_table[sa_index].spi);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX,
> +				priv->rx_sa_table[sa_index].ip_index);
> +		IXGBE_WAIT_RWRITE;
> +
> +		/* write Key table entry*/
> +		reg = IPSRXIDX_RX_EN | IPSRXIDX_WRITE
> +				| IPSRXIDX_TABLE_KEY | (sa_index << 3);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0),
> +				priv->rx_sa_table[sa_index].key[0]);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1),
> +				priv->rx_sa_table[sa_index].key[1]);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2),
> +				priv->rx_sa_table[sa_index].key[2]);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3),
> +				priv->rx_sa_table[sa_index].key[3]);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT,
> +				priv->rx_sa_table[sa_index].salt);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD,
> +				priv->rx_sa_table[sa_index].mode);
> +		IXGBE_WAIT_RWRITE;
> +
> +	} else { /* sess->dir == RTE_CRYPTO_OUTBOUND */
> +		int i;
> +
> +		/* Find a free entry in the SA table*/
> +		for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
> +			if (priv->tx_sa_table[i].used == 0) {
> +				sa_index = i;
> +				break;
> +			}
> +		}
> +		/* Fail if no free entries*/
> +		if (sa_index < 0) {
> +			PMD_DRV_LOG(ERR, "No free entry left in "
> +					"the Tx SA table\n");
> +			return -1;
> +		}
> +
> +		priv->tx_sa_table[sa_index].spi =
> +				rte_cpu_to_be_32(sess->spi);
> +		priv->tx_sa_table[sa_index].key[3] =
> +				rte_cpu_to_be_32(*(uint32_t *)&sess->key[0]);
> +		priv->tx_sa_table[sa_index].key[2] =
> +				rte_cpu_to_be_32(*(uint32_t *)&sess->key[4]);
> +		priv->tx_sa_table[sa_index].key[1] =
> +				rte_cpu_to_be_32(*(uint32_t *)&sess->key[8]);
> +		priv->tx_sa_table[sa_index].key[0] =
> +				rte_cpu_to_be_32(*(uint32_t *)&sess->key[12]);
> +		priv->tx_sa_table[sa_index].salt =
> +				rte_cpu_to_be_32(sess->salt);
> +
> +		reg = IPSRXIDX_RX_EN | IPSRXIDX_WRITE | (sa_index << 3);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0),
> +				priv->tx_sa_table[sa_index].key[0]);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1),
> +				priv->tx_sa_table[sa_index].key[1]);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2),
> +				priv->tx_sa_table[sa_index].key[2]);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3),
> +				priv->tx_sa_table[sa_index].key[3]);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT,
> +				priv->tx_sa_table[sa_index].salt);
> +		IXGBE_WAIT_TWRITE;
> +
> +		rte_hash_add_key_data(priv->tx_spi_sai_hash,
> +				&priv->tx_sa_table[sa_index].spi,
> +				(void *)(uint64_t)sa_index);
> +		priv->tx_sa_table[i].used = 1;
> +		sess->sa_index = sa_index;
> +	}
> +
> +	return sa_index;
> +}
> +
> +static int
> +ixgbe_crypto_remove_sa(struct rte_eth_dev *dev,
> +		     struct ixgbe_crypto_session *sess)
> +{
> +	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +	struct ixgbe_ipsec *priv = IXGBE_DEV_PRIVATE_TO_IPSEC(
> +			dev->data->dev_private);
> +	uint32_t reg;
> +	int sa_index = -1;
> +
> +	if (sess->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
> +		int i, ip_index = -1;
> +
> +		/* Find a match in the IP table*/
> +		for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
> +			if (CMP_IP(priv->rx_ip_table[i].ip, sess->dst_ip)) {
> +				ip_index = i;
> +				break;
> +			}
> +		}
> +
> +		/* Fail if no match*/
> +		if (ip_index < 0) {
> +			PMD_DRV_LOG(ERR, "Entry not found in the Rx IP table\n");
> +			return -1;
> +		}
> +
> +		/* Find a free entry in the SA table*/
> +		for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
> +			if (priv->rx_sa_table[i].spi ==
> +					rte_cpu_to_be_32(sess->spi)) {
> +				sa_index = i;
> +				break;
> +			}
> +		}
> +		/* Fail if no match*/
> +		if (sa_index < 0) {
> +			PMD_DRV_LOG(ERR, "Entry not found in the Rx SA table\n");
> +			return -1;
> +		}
> +
> +		/* Disable and clear Rx SPI and key table table entryes*/
> +		reg = IPSRXIDX_WRITE | IPSRXIDX_TABLE_SPI | (sa_index << 3);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI, 0);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX, 0);
> +		IXGBE_WAIT_RWRITE;
> +		reg = IPSRXIDX_WRITE | IPSRXIDX_TABLE_KEY | (sa_index << 3);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0), 0);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1), 0);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2), 0);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3), 0);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT, 0);
> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD, 0);
> +		IXGBE_WAIT_RWRITE;
> +		priv->rx_sa_table[sa_index].used = 0;
> +
> +		/* If last used then clear the IP table entry*/
> +		priv->rx_ip_table[ip_index].ref_count--;
> +		if (priv->rx_ip_table[ip_index].ref_count == 0) {
> +			reg = IPSRXIDX_WRITE | IPSRXIDX_TABLE_IP
> +					| (ip_index << 3);
> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3), 0);
> +		}
> +		} else { /* sess->dir == RTE_CRYPTO_OUTBOUND */
> +			int i;
> +
> +			/* Find a match in the SA table*/
> +			for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
> +				if (priv->tx_sa_table[i].spi ==
> +						rte_cpu_to_be_32(sess->spi)) {
> +					sa_index = i;
> +					break;
> +				}
> +			}
> +			/* Fail if no match entries*/
> +			if (sa_index < 0) {
> +				PMD_DRV_LOG(ERR, "Entry not found in the "
> +						"Tx SA table\n");
> +				return -1;
> +			}
> +			reg = IPSRXIDX_WRITE | (sa_index << 3);
> +			IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0), 0);
> +			IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1), 0);
> +			IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2), 0);
> +			IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3), 0);
> +			IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT, 0);
> +			IXGBE_WAIT_TWRITE;
> +
> +			priv->tx_sa_table[sa_index].used = 0;
> +			rte_hash_del_key(priv->tx_spi_sai_hash,
> +					&priv->tx_sa_table[sa_index].spi);
> +		}
> +
> +	return 0;
> +}
> +
> +static int
> +ixgbe_crypto_create_session(void *dev,
> +		struct rte_security_sess_conf *sess_conf,
> +		struct rte_security_session *sess,
> +		struct rte_mempool *mempool)
> +{
> +	struct ixgbe_crypto_session *session = NULL;
> +	struct rte_security_ipsec_xform *ipsec_xform = sess_conf->ipsec_xform;
> +
> +	if (rte_mempool_get(mempool, (void **)&session)) {
> +		PMD_DRV_LOG(ERR, "Cannot get object from session mempool");
> +		return -ENOMEM;
> +	}
> +	if (ipsec_xform->aead_alg != RTE_CRYPTO_AEAD_AES_GCM) {
> +		PMD_DRV_LOG(ERR, "Unsupported IPsec mode\n");
> +		return -ENOTSUP;
> +	}
> +
> +	session->op = (ipsec_xform->op == RTE_SECURITY_IPSEC_OP_DECAP) ?
> +			IXGBE_OP_AUTHENTICATED_DECRYPTION :
> +			IXGBE_OP_AUTHENTICATED_ENCRYPTION;
> +	session->key = ipsec_xform->aead_key.data;
> +	memcpy(&session->salt,
> +	     &ipsec_xform->aead_key.data[ipsec_xform->aead_key.length], 4);
> +	session->spi = ipsec_xform->spi;
> +
> +	if (ipsec_xform->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
> +		uint32_t sip = ipsec_xform->tunnel.ipv4.src_ip.s_addr;
> +		uint32_t dip = ipsec_xform->tunnel.ipv4.dst_ip.s_addr;
> +		session->src_ip.type = IPv4;
> +		session->dst_ip.type = IPv4;
> +		session->src_ip.ipv4 = rte_cpu_to_be_32(sip);
> +		session->dst_ip.ipv4 = rte_cpu_to_be_32(dip);
> +
> +	} else {
> +		uint32_t *sip = (uint32_t *)&ipsec_xform->tunnel.ipv6.src_addr;
> +		uint32_t *dip = (uint32_t *)&ipsec_xform->tunnel.ipv6.dst_addr;
> +		session->src_ip.type = IPv6;
> +		session->dst_ip.type = IPv6;
> +		session->src_ip.ipv6[0] = rte_cpu_to_be_32(sip[0]);
> +		session->src_ip.ipv6[1] = rte_cpu_to_be_32(sip[1]);
> +		session->src_ip.ipv6[2] = rte_cpu_to_be_32(sip[2]);
> +		session->src_ip.ipv6[3] = rte_cpu_to_be_32(sip[3]);
> +		session->dst_ip.ipv6[0] = rte_cpu_to_be_32(dip[0]);
> +		session->dst_ip.ipv6[1] = rte_cpu_to_be_32(dip[1]);
> +		session->dst_ip.ipv6[2] = rte_cpu_to_be_32(dip[2]);
> +		session->dst_ip.ipv6[3] = rte_cpu_to_be_32(dip[3]);
> +	}
> +
> +	session->dev = (struct rte_eth_dev *)dev;
> +	set_sec_session_private_data(sess, 0, session);
> +
> +	if (ixgbe_crypto_add_sa(session)) {
> +		PMD_DRV_LOG(ERR, "Failed to add SA\n");
> +		return -EPERM;
> +	}
> +
> +	return 0;
> +}
> +
> +static void
> +ixgbe_crypto_remove_session(void *dev,
> +		struct rte_security_session *session)
> +{
> +	struct ixgbe_crypto_session *sess =
> +		(struct ixgbe_crypto_session *)
> +		get_sec_session_private_data(session, 0);
> +	if (dev != sess->dev) {
> +		PMD_DRV_LOG(ERR, "Session not bound to this device\n");
> +		return;
> +	}
> +
> +	if (ixgbe_crypto_remove_sa(dev, sess)) {
> +		PMD_DRV_LOG(ERR, "Failed to remove session\n");
> +		return;
> +	}
> +
> +	rte_free(session);
> +}
> +
> +uint64_t
> +ixgbe_crypto_get_txdesc_flags(uint16_t port_id, struct rte_mbuf *mb) {
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	struct ixgbe_ipsec *priv =
> +			IXGBE_DEV_PRIVATE_TO_IPSEC(dev->data->dev_private);
> +	struct ipv4_hdr *ip4 =
> +			rte_pktmbuf_mtod_offset(mb, struct ipv4_hdr*,
> +						sizeof(struct ether_hdr));
> +	uint32_t spi = 0;
> +	uintptr_t sa_index;
> +	struct ixgbe_crypto_tx_desc_metadata mdata = {0};
> +
> +	if (ip4->version_ihl == 0x45)
> +		spi = *rte_pktmbuf_mtod_offset(mb, uint32_t*,
> +					sizeof(struct ether_hdr) +
> +					sizeof(struct ipv4_hdr));
> +	else
> +		spi = *rte_pktmbuf_mtod_offset(mb, uint32_t*,
> +					sizeof(struct ether_hdr) +
> +					sizeof(struct ipv6_hdr));

Instead of using hardcoded values for L2/L3 len, why not to require user to setup mbuf's
l2_len/l3_len fields properly (as we do for other TX offloads)?

> +
> +	if (priv->tx_spi_sai_hash &&
> +			rte_hash_lookup_data(priv->tx_spi_sai_hash, &spi,
> +					(void **)&sa_index) == 0) {
> +		mdata.enc = 1;
> +		mdata.sa_idx = (uint32_t)sa_index;
> +		mdata.pad_len = *rte_pktmbuf_mtod_offset(mb, uint8_t *,
> +					rte_pktmbuf_pkt_len(mb) - 18);
> +	}

Might be worse to introduce some uint64_t security_id inside mbuf's second cache line.
Then you can move whole functionality into tx_prepare().
I suppose current implementation will hit TX performance quite badly.

> +
> +	return mdata.data;
> +}
> +
> +
> +struct rte_security_ops ixgbe_security_ops = {
> +		.session_configure = ixgbe_crypto_create_session,
> +		.session_clear = ixgbe_crypto_remove_session,
> +};
> diff --git a/drivers/net/ixgbe/ixgbe_ipsec.h b/drivers/net/ixgbe/ixgbe_ipsec.h
> new file mode 100644
> index 0000000..fd479eb
> --- /dev/null
> +++ b/drivers/net/ixgbe/ixgbe_ipsec.h
> @@ -0,0 +1,142 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef IXGBE_IPSEC_H_
> +#define IXGBE_IPSEC_H_
> +
> +#include <rte_security.h>
> +
> +#define IPSRXIDX_RX_EN                                    0x00000001
> +#define IPSRXIDX_TABLE_IP                                 0x00000002
> +#define IPSRXIDX_TABLE_SPI                                0x00000004
> +#define IPSRXIDX_TABLE_KEY                                0x00000006
> +#define IPSRXIDX_WRITE                                    0x80000000
> +#define IPSRXIDX_READ                                     0x40000000
> +#define IPSRXMOD_VALID                                    0x00000001
> +#define IPSRXMOD_PROTO                                    0x00000004
> +#define IPSRXMOD_DECRYPT                                  0x00000008
> +#define IPSRXMOD_IPV6                                     0x00000010
> +#define IXGBE_ADVTXD_POPTS_IPSEC                          0x00000400
> +#define IXGBE_ADVTXD_TUCMD_IPSEC_TYPE_ESP                 0x00002000
> +#define IXGBE_ADVTXD_TUCMD_IPSEC_ENCRYPT_EN               0x00004000
> +#define IXGBE_RXDADV_IPSEC_STATUS_SECP                    0x00020000
> +#define IXGBE_RXDADV_IPSEC_ERROR_BIT_MASK                 0x18000000
> +#define IXGBE_RXDADV_IPSEC_ERROR_INVALID_PROTOCOL         0x08000000
> +#define IXGBE_RXDADV_IPSEC_ERROR_INVALID_LENGTH           0x10000000
> +#define IXGBE_RXDADV_IPSEC_ERROR_AUTHENTICATION_FAILED    0x18000000
> +
> +#define IPSEC_MAX_RX_IP_COUNT           128
> +#define IPSEC_MAX_SA_COUNT              1024
> +
> +enum ixgbe_operation {
> +	IXGBE_OP_AUTHENTICATED_ENCRYPTION, IXGBE_OP_AUTHENTICATED_DECRYPTION
> +};
> +
> +enum ixgbe_gcm_key {
> +	IXGBE_GCM_KEY_128, IXGBE_GCM_KEY_256
> +};
> +
> +/**
> + * Generic IP address structure
> + * TODO: Find better location for this rte_net.h possibly.
> + **/
> +struct ipaddr {
> +	enum ipaddr_type {
> +		IPv4, IPv6
> +	} type;
> +	/**< IP Address Type - IPv4/IPv6 */
> +
> +	union {
> +		uint32_t ipv4;
> +		uint32_t ipv6[4];
> +	};
> +};
> +
> +/** inline crypto crypto private session structure */
> +struct ixgbe_crypto_session {
> +	enum ixgbe_operation op;
> +	uint8_t *key;
> +	uint32_t salt;
> +	uint32_t sa_index;
> +	uint32_t spi;
> +	struct ipaddr src_ip;
> +	struct ipaddr dst_ip;
> +	struct rte_eth_dev *dev;
> +} __rte_cache_aligned;
> +
> +struct ixgbe_crypto_rx_ip_table {
> +	struct ipaddr ip;
> +	uint16_t ref_count;
> +};
> +struct ixgbe_crypto_rx_sa_table {
> +	uint32_t spi;
> +	uint32_t ip_index;
> +	uint32_t key[4];
> +	uint32_t salt;
> +	uint8_t mode;
> +	uint8_t used;
> +};
> +
> +struct ixgbe_crypto_tx_sa_table {
> +	uint32_t spi;
> +	uint32_t key[4];
> +	uint32_t salt;
> +	uint8_t used;
> +};
> +
> +struct ixgbe_crypto_tx_desc_metadata {
> +	union {
> +		uint64_t data;
> +		struct {
> +			uint32_t sa_idx;
> +			uint8_t pad_len;
> +			uint8_t enc;
> +		};
> +	};
> +};
> +
> +struct ixgbe_ipsec {
> +#define IS_INITIALIZED (1 << 0)
> +	uint8_t flags;
> +	struct ixgbe_crypto_rx_ip_table rx_ip_table[IPSEC_MAX_RX_IP_COUNT];
> +	struct ixgbe_crypto_rx_sa_table rx_sa_table[IPSEC_MAX_SA_COUNT];
> +	struct ixgbe_crypto_tx_sa_table tx_sa_table[IPSEC_MAX_SA_COUNT];
> +	struct rte_hash *tx_spi_sai_hash;
> +};
> +
> +extern struct rte_security_ops ixgbe_security_ops;
> +
> +uint64_t ixgbe_crypto_get_txdesc_flags(uint16_t port_id, struct rte_mbuf *mb);
> +
> +
> +#endif /*IXGBE_IPSEC_H_*/
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> index 64bff25..76be27a 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c

The patch seems not complete...
I think you need new DEV_RX_OFFLOAD_*/ DEV_TX_OFFLOAD_* flags to advertisize new
offload capabilities.
Plus changes in dev_configure() and/or rx(/tx)_queue_setup() to allow user to enable/disable
these offloads at setup stage and select proper rx/tx function.
Also, I think you'll need new fields in ixgbe_tx_offload and realted mask, plus changes
in the code that manages txq->ctx_cache.

> @@ -395,7 +395,8 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
>  static inline void
>  ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
>  		volatile struct ixgbe_adv_tx_context_desc *ctx_txd,
> -		uint64_t ol_flags, union ixgbe_tx_offload tx_offload)
> +		uint64_t ol_flags, union ixgbe_tx_offload tx_offload,
> +		__rte_unused struct rte_mbuf *mb)
>  {
>  	uint32_t type_tucmd_mlhl;
>  	uint32_t mss_l4len_idx = 0;
> @@ -479,6 +480,20 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
>  		seqnum_seed |= tx_offload.l2_len
>  			       << IXGBE_ADVTXD_TUNNEL_LEN;
>  	}
> +#ifdef RTE_LIBRTE_IXGBE_IPSEC
> +	if (mb->ol_flags & PKT_TX_SECURITY_OFFLOAD) {
> +		struct ixgbe_crypto_tx_desc_metadata mdata = {
> +			.data = ixgbe_crypto_get_txdesc_flags(txq->port_id, mb),
> +		};
> +		seqnum_seed |=
> +			(IXGBE_ADVTXD_IPSEC_SA_INDEX_MASK & mdata.sa_idx);
> +		type_tucmd_mlhl |= mdata.enc ?
> +			(IXGBE_ADVTXD_TUCMD_IPSEC_TYPE_ESP |
> +				IXGBE_ADVTXD_TUCMD_IPSEC_ENCRYPT_EN) : 0;
> +		type_tucmd_mlhl |=
> +			(mdata.pad_len & IXGBE_ADVTXD_IPSEC_ESP_LEN_MASK);
> +	}
> +#endif /* RTE_LIBRTE_IXGBE_IPSEC */
> 
>  	txq->ctx_cache[ctx_idx].flags = ol_flags;
>  	txq->ctx_cache[ctx_idx].tx_offload.data[0]  =
> @@ -855,7 +870,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
>  				}
> 
>  				ixgbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
> -					tx_offload);
> +					tx_offload, tx_pkt);
> 
>  				txe->last_id = tx_last;
>  				tx_id = txe->next_id;
> @@ -872,7 +887,13 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
>  			olinfo_status |= ctx << IXGBE_ADVTXD_IDX_SHIFT;
>  		}
> 
> +#ifdef RTE_LIBRTE_IXGBE_IPSEC
> +		olinfo_status |= ((pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT) |
> +				(((ol_flags & PKT_TX_SECURITY_OFFLOAD) != 0)
> +					* IXGBE_ADVTXD_POPTS_IPSEC));
> +#else /* RTE_LIBRTE_IXGBE_IPSEC */
>  		olinfo_status |= (pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
> +#endif /* RTE_LIBRTE_IXGBE_IPSEC */
> 
>  		m_seg = tx_pkt;
>  		do {
> @@ -1447,6 +1468,14 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status)
>  		pkt_flags |= PKT_RX_EIP_CKSUM_BAD;
>  	}
> 
> +#ifdef RTE_LIBRTE_IXGBE_IPSEC
> +	if (rx_status & IXGBE_RXD_STAT_SECP) {
> +		pkt_flags |= PKT_RX_SECURITY_OFFLOAD;
> +		if (rx_status & IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG)
> +			pkt_flags |= PKT_RX_SECURITY_OFFLOAD_FAILED;
> +	}
> +#endif /* RTE_LIBRTE_IXGBE_IPSEC */
> +
>  	return pkt_flags;
>  }
> 
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
> index e704a7f..8673a01 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
> @@ -128,6 +128,9 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
>  {
>  	__m128i ptype0, ptype1, vtag0, vtag1, csum;
>  	__m128i rearm0, rearm1, rearm2, rearm3;
> +#ifdef RTE_LIBRTE_IXGBE_IPSEC
> +	__m128i sterr0, sterr1, sterr2, sterr3, tmp1, tmp2;
> +#endif /* RTE_LIBRTE_IXGBE_IPSEC */
> 
>  	/* mask everything except rss type */
>  	const __m128i rsstype_msk = _mm_set_epi16(
> @@ -174,6 +177,19 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
>  		0, PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
>  		PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t));
> 
> +#ifdef RTE_LIBRTE_IXGBE_IPSEC
> +	const __m128i ipsec_sterr_msk = _mm_set_epi32(
> +		0, IXGBE_RXD_STAT_SECP | IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG,
> +		0, 0);
> +	const __m128i ipsec_proc_msk  = _mm_set_epi32(
> +		0, IXGBE_RXD_STAT_SECP, 0, 0);
> +	const __m128i ipsec_err_flag  = _mm_set_epi32(
> +		0, PKT_RX_SECURITY_OFFLOAD_FAILED | PKT_RX_SECURITY_OFFLOAD,
> +		0, 0);
> +	const __m128i ipsec_proc_flag = _mm_set_epi32(
> +		0, PKT_RX_SECURITY_OFFLOAD, 0, 0);
> +#endif /* RTE_LIBRTE_IXGBE_IPSEC */
> +
>  	ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
>  	ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
>  	vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
> @@ -221,6 +237,34 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
>  	rearm2 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vtag1, 4), 0x10);
>  	rearm3 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vtag1, 2), 0x10);
> 
> +#ifdef RTE_LIBRTE_IXGBE_IPSEC
> +	/*inline ipsec, extract the flags from the descriptors*/
> +	sterr0 = _mm_and_si128(descs[0], ipsec_sterr_msk);
> +	sterr1 = _mm_and_si128(descs[1], ipsec_sterr_msk);
> +	sterr2 = _mm_and_si128(descs[2], ipsec_sterr_msk);
> +	sterr3 = _mm_and_si128(descs[3], ipsec_sterr_msk);
> +	tmp1 = _mm_cmpeq_epi32(sterr0, ipsec_sterr_msk);
> +	tmp2 = _mm_cmpeq_epi32(sterr0, ipsec_proc_msk);
> +	sterr0 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
> +				_mm_and_si128(tmp2, ipsec_proc_flag));
> +	tmp1 = _mm_cmpeq_epi32(sterr1, ipsec_sterr_msk);
> +	tmp2 = _mm_cmpeq_epi32(sterr1, ipsec_proc_msk);
> +	sterr1 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
> +				_mm_and_si128(tmp2, ipsec_proc_flag));
> +	tmp1 = _mm_cmpeq_epi32(sterr2, ipsec_sterr_msk);
> +	tmp2 = _mm_cmpeq_epi32(sterr2, ipsec_proc_msk);
> +	sterr2 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
> +				_mm_and_si128(tmp2, ipsec_proc_flag));
> +	tmp1 = _mm_cmpeq_epi32(sterr3, ipsec_sterr_msk);
> +	tmp2 = _mm_cmpeq_epi32(sterr3, ipsec_proc_msk);
> +	sterr3 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
> +				_mm_and_si128(tmp2, ipsec_proc_flag));
> +	rearm0 = _mm_or_si128(rearm0, sterr0);
> +	rearm1 = _mm_or_si128(rearm1, sterr1);
> +	rearm2 = _mm_or_si128(rearm2, sterr2);
> +	rearm3 = _mm_or_si128(rearm3, sterr3);
> +#endif /* RTE_LIBRTE_IXGBE_IPSEC */
> +

Wonder what is the performance drop (if any) when ipsec RX is on?
Would it make sense to introduce a new RX function for that case?

>  	/* write the rearm data and the olflags in one write */
>  	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
>  			offsetof(struct rte_mbuf, rearm_data) + 8);
> --
> 2.7.5

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [RFC PATCH 5/5] examples/ipsec-secgw: enabled inline ipsec
  2017-08-25 14:57 ` [RFC PATCH 5/5] examples/ipsec-secgw: enabled " Radu Nicolau
@ 2017-08-29 12:04   ` Akhil Goyal
  0 siblings, 0 replies; 13+ messages in thread
From: Akhil Goyal @ 2017-08-29 12:04 UTC (permalink / raw)
  To: Radu Nicolau, dev; +Cc: hemant.agrawal, Boris Pismenny, Declan Doherty

Hi Radu,
On 8/25/2017 8:27 PM, Radu Nicolau wrote:
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> ---
>   examples/ipsec-secgw/esp.c   | 26 ++++++++++++++++--
>   examples/ipsec-secgw/ipsec.c | 61 +++++++++++++++++++++++++++++++++++------
>   examples/ipsec-secgw/ipsec.h |  2 ++
>   examples/ipsec-secgw/sa.c    | 65 +++++++++++++++++++++++++++++++-------------
>   4 files changed, 123 insertions(+), 31 deletions(-)
> 
> diff --git a/examples/ipsec-secgw/esp.c b/examples/ipsec-secgw/esp.c
> index 70bb81f..77ab232 100644
> --- a/examples/ipsec-secgw/esp.c
> +++ b/examples/ipsec-secgw/esp.c
> @@ -58,6 +58,9 @@ esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
>   	struct rte_crypto_sym_op *sym_cop;
>   	int32_t payload_len, ip_hdr_len;
>   
> +	if (sa->type == RTE_SECURITY_SESS_ETH_INLINE_CRYPTO)
> +		return 0;
> +
>   	RTE_ASSERT(m != NULL);
>   	RTE_ASSERT(sa != NULL);
>   	RTE_ASSERT(cop != NULL);
> @@ -175,6 +178,16 @@ esp_inbound_post(struct rte_mbuf *m, struct ipsec_sa *sa,
>   	RTE_ASSERT(sa != NULL);
>   	RTE_ASSERT(cop != NULL);
>   
> +
> +	if (sa->type == RTE_SECURITY_SESS_ETH_INLINE_CRYPTO) {
> +		if (m->ol_flags & PKT_RX_SECURITY_OFFLOAD
> +				&& m->ol_flags & PKT_RX_SECURITY_OFFLOAD_FAILED)
> +			cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
> +		else
> +			cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
> +	}
> +
> +
>   	if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
>   		RTE_LOG(ERR, IPSEC_ESP, "failed crypto op\n");
>   		return -1;
> @@ -321,6 +334,9 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
>   	esp->spi = rte_cpu_to_be_32(sa->spi);
>   	esp->seq = rte_cpu_to_be_32((uint32_t)sa->seq);
>   
> +	if (sa->type == RTE_SECURITY_SESS_ETH_INLINE_CRYPTO)
> +		return 0;
> +
>   	uint64_t *iv = (uint64_t *)(esp + 1);
>   
>   	sym_cop = get_sym_cop(cop);
> @@ -419,9 +435,13 @@ esp_outbound_post(struct rte_mbuf *m __rte_unused,
>   	RTE_ASSERT(sa != NULL);
>   	RTE_ASSERT(cop != NULL);
>   
> -	if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
> -		RTE_LOG(ERR, IPSEC_ESP, "Failed crypto op\n");
> -		return -1;
> +	if (sa->type == RTE_SECURITY_SESS_ETH_INLINE_CRYPTO) {
> +		m->ol_flags |= PKT_TX_SECURITY_OFFLOAD;
> +	} else {
> +		if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
> +			RTE_LOG(ERR, IPSEC_ESP, "Failed crypto op\n");
> +			return -1;
> +		}
>   	}
>   
>   	return 0;
> diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
> index c8fde1c..b14b23d 100644
> --- a/examples/ipsec-secgw/ipsec.c
> +++ b/examples/ipsec-secgw/ipsec.c
> @@ -58,13 +58,17 @@ create_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa)
>   	key.cipher_algo = (uint8_t)sa->cipher_algo;
>   	key.auth_algo = (uint8_t)sa->auth_algo;
>   
> -	ret = rte_hash_lookup_data(ipsec_ctx->cdev_map, &key,
> -			(void **)&cdev_id_qp);
> -	if (ret < 0) {
> -		RTE_LOG(ERR, IPSEC, "No cryptodev: core %u, cipher_algo %u, "
> -				"auth_algo %u\n", key.lcore_id, key.cipher_algo,
> -				key.auth_algo);
> -		return -1;
> +	if (sa->type == RTE_SECURITY_SESS_NONE) {
I believe cdev_id_qp will be needed by all the sa->type.
I  can see that it is being used by ipsec inline case, but it is not 
getting set anywhere.

> +		ret = rte_hash_lookup_data(ipsec_ctx->cdev_map, &key,
> +				(void **)&cdev_id_qp);
> +		if (ret < 0) {
> +			RTE_LOG(ERR, IPSEC, "No cryptodev: core %u, "
> +					"cipher_algo %u, "
> +					"auth_algo %u\n",
> +					key.lcore_id, key.cipher_algo,
> +					key.auth_algo);
> +			return -1;
> +		}
>   	}
>   
>   	RTE_LOG_DP(DEBUG, IPSEC, "Create session for SA spi %u on cryptodev "
> @@ -79,7 +83,8 @@ create_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa)
>   				sa->crypto_session, sa->xforms,
>   				ipsec_ctx->session_pool);
>   
> -		rte_cryptodev_info_get(ipsec_ctx->tbl[cdev_id_qp].id, &cdev_info);
> +		rte_cryptodev_info_get(ipsec_ctx->tbl[cdev_id_qp].id,
> +				&cdev_info);
>   		if (cdev_info.sym.max_nb_sessions_per_qp > 0) {
>   			ret = rte_cryptodev_queue_pair_attach_sym_session(
>   					ipsec_ctx->tbl[cdev_id_qp].id,
> @@ -146,6 +151,7 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
>   	struct ipsec_mbuf_metadata *priv;
>   	struct rte_crypto_sym_op *sym_cop;
>   	struct ipsec_sa *sa;
> +	struct cdev_qp *cqp;
>   
>   	for (i = 0; i < nb_pkts; i++) {
>   		if (unlikely(sas[i] == NULL)) {
> @@ -202,8 +208,31 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
>   			}
>   			break;
>   		case RTE_SECURITY_SESS_ETH_PROTO_OFFLOAD:
> -		case RTE_SECURITY_SESS_ETH_INLINE_CRYPTO:
>   			break;
> +		case RTE_SECURITY_SESS_ETH_INLINE_CRYPTO:
> +			priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
> +			priv->cop.status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
> +
> +			rte_prefetch0(&priv->sym_cop);
> +
> +			if ((unlikely(sa->sec_session == NULL)) &&
> +					create_session(ipsec_ctx, sa)) {
> +				rte_pktmbuf_free(pkts[i]);
> +				continue;
> +			}
> +
> +			rte_security_attach_session(&priv->cop,
> +					sa->sec_session);
> +
> +			ret = xform_func(pkts[i], sa, &priv->cop);
I believe you are returning from the xform_func (esp_inbound and 
esp_outbound) without doing anything. It is better that you do not call 
it from here. It will save some of your cycles.
> +			if (unlikely(ret)) {
> +				rte_pktmbuf_free(pkts[i]);
> +				continue;
> +			}
> +
> +			cqp = &ipsec_ctx->tbl[sa->cdev_id_qp];
> +			cqp->ol_pkts[cqp->ol_pkts_cnt++] = pkts[i];
> +			continue;
>   		}
>   
>   		RTE_ASSERT(sa->cdev_id_qp < ipsec_ctx->nb_qps);
> @@ -228,6 +257,20 @@ ipsec_dequeue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
>   		if (ipsec_ctx->last_qp == ipsec_ctx->nb_qps)
>   			ipsec_ctx->last_qp %= ipsec_ctx->nb_qps;
>   
> +
> +		while (cqp->ol_pkts_cnt > 0 && nb_pkts < max_pkts) {
> +			pkt = cqp->ol_pkts[--cqp->ol_pkts_cnt];
> +			rte_prefetch0(pkt);
> +			priv = get_priv(pkt);
> +			sa = priv->sa;
> +			ret = xform_func(pkt, sa, &priv->cop);
> +			if (unlikely(ret)) {
> +				rte_pktmbuf_free(pkt);
> +				continue;
> +			}
> +			pkts[nb_pkts++] = pkt;
> +		}
> +
>   		if (cqp->in_flight == 0)
>   			continue;
>   
> diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
> index 6291d86..685304b 100644
> --- a/examples/ipsec-secgw/ipsec.h
> +++ b/examples/ipsec-secgw/ipsec.h
> @@ -142,6 +142,8 @@ struct cdev_qp {
>   	uint16_t in_flight;
>   	uint16_t len;
>   	struct rte_crypto_op *buf[MAX_PKT_BURST] __rte_aligned(sizeof(void *));
> +	struct rte_mbuf *ol_pkts[MAX_PKT_BURST] __rte_aligned(sizeof(void *));
> +	uint16_t ol_pkts_cnt;
>   };
>   
>   struct ipsec_ctx {
> diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
> index 851262b..11b31d0 100644
> --- a/examples/ipsec-secgw/sa.c
> +++ b/examples/ipsec-secgw/sa.c
> @@ -613,11 +613,13 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
>   		if (status->status < 0)
>   			return;
>   	} else {
> -		APP_CHECK(cipher_algo_p == 1, status, "missing cipher or AEAD options");
> +		APP_CHECK(cipher_algo_p == 1, status,
> +			  "missing cipher or AEAD options");
>   		if (status->status < 0)
>   			return;
>   
> -		APP_CHECK(auth_algo_p == 1, status, "missing auth or AEAD options");
> +		APP_CHECK(auth_algo_p == 1, status,
> +			"missing auth or AEAD options");
>   		if (status->status < 0)
>   			return;
>   	}
> @@ -763,14 +765,31 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
>   			sa->dst.ip.ip4 = rte_cpu_to_be_32(sa->dst.ip.ip4);
>   		}
>   
> -		if (sa->type == RTE_SECURITY_SESS_CRYPTO_PROTO_OFFLOAD) {
> -			sa_ctx->xf[idx].c.cipher_alg = sa->cipher_algo;
> -			sa_ctx->xf[idx].c.auth_alg = sa->auth_algo;
> -			sa_ctx->xf[idx].c.cipher_key.data = sa->cipher_key;
> -			sa_ctx->xf[idx].c.auth_key.data = sa->auth_key;
> -			sa_ctx->xf[idx].c.cipher_key.length =
> +		if (sa->type == RTE_SECURITY_SESS_CRYPTO_PROTO_OFFLOAD ||
> +			sa->type == RTE_SECURITY_SESS_ETH_INLINE_CRYPTO) {
> +
> +			if (sa->aead_algo == RTE_CRYPTO_AEAD_AES_GCM) {
> +				sa_ctx->xf[idx].c.aead_alg =
> +						sa->aead_algo;
> +				sa_ctx->xf[idx].c.aead_key.data =
> +						sa->cipher_key;
> +				sa_ctx->xf[idx].c.aead_key.length =
> +						sa->cipher_key_len;
> +
> +			} else {
> +				sa_ctx->xf[idx].c.cipher_alg = sa->cipher_algo;
> +				sa_ctx->xf[idx].c.auth_alg = sa->auth_algo;
> +				sa_ctx->xf[idx].c.cipher_key.data =
> +						sa->cipher_key;
> +				sa_ctx->xf[idx].c.auth_key.data =
> +						sa->auth_key;
> +				sa_ctx->xf[idx].c.cipher_key.length =
>   						sa->cipher_key_len;
> -			sa_ctx->xf[idx].c.auth_key.length = sa->auth_key_len;
> +				sa_ctx->xf[idx].c.auth_key.length =
> +						sa->auth_key_len;
> +				sa_ctx->xf[idx].c.salt = sa->salt;
> +			}
> +
>   			sa_ctx->xf[idx].c.op = (inbound == 1)?
>   						RTE_SECURITY_IPSEC_OP_DECAP :
>   						RTE_SECURITY_IPSEC_OP_ENCAP;
> @@ -835,9 +854,11 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
>   			}
>   
>   			if (inbound) {
> -				sa_ctx->xf[idx].b.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
> +				sa_ctx->xf[idx].b.type =
> +						RTE_CRYPTO_SYM_XFORM_CIPHER;
We are not touching this code here. Is it not recommended to handle the 
checkpatch errors separately. Similar comment for other changes in this 
patch also.
>   				sa_ctx->xf[idx].b.cipher.algo = sa->cipher_algo;
> -				sa_ctx->xf[idx].b.cipher.key.data = sa->cipher_key;
> +				sa_ctx->xf[idx].b.cipher.key.data =
> +						sa->cipher_key;
>   				sa_ctx->xf[idx].b.cipher.key.length =
>   					sa->cipher_key_len;
>   				sa_ctx->xf[idx].b.cipher.op =
> @@ -846,7 +867,8 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
>   				sa_ctx->xf[idx].b.cipher.iv.offset = IV_OFFSET;
>   				sa_ctx->xf[idx].b.cipher.iv.length = iv_length;
>   
> -				sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_AUTH;
> +				sa_ctx->xf[idx].a.type =
> +						RTE_CRYPTO_SYM_XFORM_AUTH;
>   				sa_ctx->xf[idx].a.auth.algo = sa->auth_algo;
>   				sa_ctx->xf[idx].a.auth.key.data = sa->auth_key;
>   				sa_ctx->xf[idx].a.auth.key.length =
> @@ -856,9 +878,11 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
>   				sa_ctx->xf[idx].a.auth.op =
>   					RTE_CRYPTO_AUTH_OP_VERIFY;
>   			} else { /* outbound */
> -				sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
> +				sa_ctx->xf[idx].a.type =
> +					RTE_CRYPTO_SYM_XFORM_CIPHER;
>   				sa_ctx->xf[idx].a.cipher.algo = sa->cipher_algo;
> -				sa_ctx->xf[idx].a.cipher.key.data = sa->cipher_key;
> +				sa_ctx->xf[idx].a.cipher.key.data =
> +					sa->cipher_key;
>   				sa_ctx->xf[idx].a.cipher.key.length =
>   					sa->cipher_key_len;
>   				sa_ctx->xf[idx].a.cipher.op =
> @@ -867,9 +891,12 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
>   				sa_ctx->xf[idx].a.cipher.iv.offset = IV_OFFSET;
>   				sa_ctx->xf[idx].a.cipher.iv.length = iv_length;
>   
> -				sa_ctx->xf[idx].b.type = RTE_CRYPTO_SYM_XFORM_AUTH;
> -				sa_ctx->xf[idx].b.auth.algo = sa->auth_algo;
> -				sa_ctx->xf[idx].b.auth.key.data = sa->auth_key;
> +				sa_ctx->xf[idx].b.type =
> +					RTE_CRYPTO_SYM_XFORM_AUTH;
> +				sa_ctx->xf[idx].b.auth.algo =
> +					sa->auth_algo;
> +				sa_ctx->xf[idx].b.auth.key.data =
> +						sa->auth_key;
>   				sa_ctx->xf[idx].b.auth.key.length =
>   					sa->auth_key_len;
>   				sa_ctx->xf[idx].b.auth.digest_length =
> @@ -991,8 +1018,8 @@ single_inbound_lookup(struct ipsec_sa *sadb, struct rte_mbuf *pkt,
>   	case IP6_TUNNEL:
>   		src6_addr = RTE_PTR_ADD(ip, offsetof(struct ip6_hdr, ip6_src));
>   		if ((ip->ip_v == IP6_VERSION) &&
> -				!memcmp(&sa->src.ip.ip6.ip6, src6_addr, 16) &&
> -				!memcmp(&sa->dst.ip.ip6.ip6, src6_addr + 16, 16))
> +			!memcmp(&sa->src.ip.ip6.ip6, src6_addr, 16) &&
> +			!memcmp(&sa->dst.ip.ip6.ip6, src6_addr + 16, 16))
>   			*sa_ret = sa;
>   		break;
>   	case TRANSPORT:
> 
Regards,
Akhil

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [RFC PATCH 3/5] rte_security: updates and enabled security operations for ethdev
  2017-08-25 14:57 ` [RFC PATCH 3/5] rte_security: updates and enabled security operations for ethdev Radu Nicolau
@ 2017-08-29 12:14   ` Akhil Goyal
  2017-08-29 13:13     ` Radu Nicolau
  0 siblings, 1 reply; 13+ messages in thread
From: Akhil Goyal @ 2017-08-29 12:14 UTC (permalink / raw)
  To: Radu Nicolau, dev; +Cc: hemant.agrawal, Declan Doherty, Boris Pismenny

Hi Radu,
On 8/25/2017 8:27 PM, Radu Nicolau wrote:
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> ---
>   lib/Makefile                                   |  1 +
>   lib/librte_cryptodev/rte_cryptodev_pmd.h       |  4 +--
>   lib/librte_cryptodev/rte_cryptodev_version.map | 10 ++++++++
>   lib/librte_cryptodev/rte_security.c            | 34 +++++++++++++++++---------
>   lib/librte_cryptodev/rte_security.h            | 12 ++++++---
>   5 files changed, 44 insertions(+), 17 deletions(-)
> 
> diff --git a/lib/Makefile b/lib/Makefile
> index 86caba1..08a1767 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -51,6 +51,7 @@ DEPDIRS-librte_ether += librte_mbuf
>   DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
>   DEPDIRS-librte_cryptodev := librte_eal librte_mempool librte_ring librte_mbuf
>   DEPDIRS-librte_cryptodev += librte_kvargs
> +DEPDIRS-librte_cryptodev += librte_ether
Is the shared build working now?
>   DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
>   DEPDIRS-librte_eventdev := librte_eal librte_ring
>   DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
> diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
> index 219fba6..ab3ecf7 100644
> --- a/lib/librte_cryptodev/rte_cryptodev_pmd.h
> +++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
> @@ -371,7 +371,7 @@ struct rte_cryptodev_ops {
>    *  - Returns -ENOTSUP if crypto device does not support the crypto transform.
>    *  - Returns -ENOMEM if the private session could not be allocated.
>    */
> -typedef int (*security_configure_session_t)(struct rte_cryptodev *dev,
> +typedef int (*security_configure_session_t)(void *dev,
>   		struct rte_security_sess_conf *conf,
>   		struct rte_security_session *sess,
>   		struct rte_mempool *mp);
> @@ -382,7 +382,7 @@ typedef int (*security_configure_session_t)(struct rte_cryptodev *dev,
>    * @param	dev		Crypto device pointer
>    * @param	sess		Security session structure
>    */
> -typedef void (*security_free_session_t)(struct rte_cryptodev *dev,
> +typedef void (*security_free_session_t)(void *dev,
>   		struct rte_security_session *sess);
>   
>   /** Security operations function pointer table */
> diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
> index e9ba88a..20b553e 100644
> --- a/lib/librte_cryptodev/rte_cryptodev_version.map
> +++ b/lib/librte_cryptodev/rte_cryptodev_version.map
> @@ -79,3 +79,13 @@ DPDK_17.08 {
>   	rte_crypto_aead_operation_strings;
>   
>   } DPDK_17.05;
> +
> +DPDK_17.11 {
> +	global:
> +
> +	rte_security_session_create;
> +	rte_security_session_init;
> +	rte_security_attach_session;
> +	rte_security_session_free;
> +
> +} DPDK_17.08;
> diff --git a/lib/librte_cryptodev/rte_security.c b/lib/librte_cryptodev/rte_security.c
> index 7c73c93..5f35355 100644
> --- a/lib/librte_cryptodev/rte_security.c
> +++ b/lib/librte_cryptodev/rte_security.c
> @@ -86,8 +86,12 @@ rte_security_session_init(uint16_t dev_id,
>   			return -EINVAL;
>   		cdev = rte_cryptodev_pmd_get_dev(dev_id);
>   		index = cdev->driver_id;
> +		if (cdev == NULL || sess == NULL || cdev->sec_ops == NULL
> +				|| cdev->sec_ops->session_configure == NULL)
> +			return -EINVAL;
>   		if (sess->sess_private_data[index] == NULL) {
> -			ret = cdev->sec_ops->session_configure(cdev, conf, sess, mp);
> +			ret = cdev->sec_ops->session_configure((void *)cdev,
> +					conf, sess, mp);
>   			if (ret < 0) {
>   				CDEV_LOG_ERR(
>   					"cdev_id %d failed to configure session details",
> @@ -100,14 +104,18 @@ rte_security_session_init(uint16_t dev_id,
>   	case RTE_SECURITY_SESS_ETH_PROTO_OFFLOAD:
>   		dev = &rte_eth_devices[dev_id];
>   		index = dev->data->port_id;
> +		if (dev == NULL || sess == NULL || dev->sec_ops == NULL
> +				|| dev->sec_ops->session_configure == NULL)
> +			return -EINVAL;
>   		if (sess->sess_private_data[index] == NULL) {
> -//			ret = dev->sec_ops->session_configure(dev, conf, sess, mp);
> -//			if (ret < 0) {
> -//				CDEV_LOG_ERR(
> -//					"dev_id %d failed to configure session details",
> -//					dev_id);
> -//				return ret;
> -//			}
> +			ret = dev->sec_ops->session_configure((void *)dev,
> +					conf, sess, mp);
> +			if (ret < 0) {
> +				CDEV_LOG_ERR(
> +					"dev_id %d failed to configure session details",
> +					dev_id);
> +				return ret;
> +			}
>   		}
>   		break;
>   	default:
> @@ -152,16 +160,18 @@ rte_security_session_clear(uint8_t dev_id,
>   	switch (action_type) {
>   	case RTE_SECURITY_SESS_CRYPTO_PROTO_OFFLOAD:
>   		cdev =  rte_cryptodev_pmd_get_dev(dev_id);
> -		if (cdev == NULL || sess == NULL)
> +		if (cdev == NULL || sess == NULL || cdev->sec_ops == NULL
> +				|| cdev->sec_ops->session_clear == NULL)
>   			return -EINVAL;
> -		cdev->sec_ops->session_clear(cdev, sess);
> +		cdev->sec_ops->session_clear((void *)cdev, sess);
>   		break;
>   	case RTE_SECURITY_SESS_ETH_INLINE_CRYPTO:
>   	case RTE_SECURITY_SESS_ETH_PROTO_OFFLOAD:
>   		dev = &rte_eth_devices[dev_id];
> -		if (dev == NULL || sess == NULL)
> +		if (dev == NULL || sess == NULL || dev->sec_ops == NULL
> +				|| dev->sec_ops->session_clear == NULL)
>   			return -EINVAL;
> -//		dev->dev_ops->session_clear(dev, sess);
> +		dev->sec_ops->session_clear((void *)dev, sess);
>   		break;
>   	default:
>   		return -EINVAL;
> diff --git a/lib/librte_cryptodev/rte_security.h b/lib/librte_cryptodev/rte_security.h
> index 9747d5e..0c8b358 100644
> --- a/lib/librte_cryptodev/rte_security.h
> +++ b/lib/librte_cryptodev/rte_security.h
> @@ -20,7 +20,7 @@ extern "C" {
>   #include <rte_memory.h>
>   #include <rte_mempool.h>
>   #include <rte_common.h>
> -#include <rte_crypto.h>
> +#include "rte_crypto.h"
>   
>   /** IPSec protocol mode */
>   enum rte_security_conf_ipsec_sa_mode {
> @@ -70,9 +70,9 @@ struct rte_security_ipsec_tunnel_param {
>   		} ipv4; /**< IPv4 header parameters */
>   
>   		struct {
> -			struct in6_addr *src_addr;
> +			struct in6_addr src_addr;
>   			/**< IPv6 source address */
> -			struct in6_addr *dst_addr;
> +			struct in6_addr dst_addr;
>   			/**< IPv6 destination address */
>   			uint8_t dscp;
>   			/**< IPv6 Differentiated Services Code Point */
> @@ -171,6 +171,12 @@ struct rte_security_ipsec_xform {
>   		uint8_t *data;  /**< pointer to key data */
>   		size_t length;   /**< key length in bytes */
>   	} auth_key;
> +	enum rte_crypto_aead_algorithm aead_alg;
> +	/**< AEAD Algorithm */
> +	struct {
> +		uint8_t *data;  /**< pointer to key data */
> +		size_t length;   /**< key length in bytes */
> +	} aead_key;
I believe it would be better to use a union here.
union {
	struct {
		enum rte_crypto_cipher_algorithm cipher_alg;
	        /**< Cipher Algorithm */
         	struct {
                 	uint8_t *data;  /**< pointer to key data */
	                size_t length;   /**< key length in bytes */
         	} cipher_key;
         	enum rte_crypto_auth_algorithm auth_alg;
         	/**< Authentication Algorithm */
         	struct {
                 	uint8_t *data;  /**< pointer to key data */
                 	size_t length;   /**< key length in bytes */
         	} auth_key;
	};
	struct {
		enum rte_crypto_aead_algorithm aead_alg;
		/**< AEAD Algorithm */
		struct {
			uint8_t *data;  /**< pointer to key data */
			size_t length;   /**< key length in bytes */
		} aead_key;
	};
};

>   	uint32_t salt;	/**< salt for this SA */
>   	enum rte_security_conf_ipsec_sa_mode mode;
>   	/**< IPsec SA Mode - transport/tunnel */
> 

Thanks for the updates. I missed some of the checks.

-Akhil

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [RFC PATCH 0/5] Enable IPSec Inline for IXGBE PMD
  2017-08-25 14:57 [RFC PATCH 0/5] Enable IPSec Inline for IXGBE PMD Radu Nicolau
                   ` (4 preceding siblings ...)
  2017-08-25 14:57 ` [RFC PATCH 5/5] examples/ipsec-secgw: enabled " Radu Nicolau
@ 2017-08-29 13:00 ` Boris Pismenny
  5 siblings, 0 replies; 13+ messages in thread
From: Boris Pismenny @ 2017-08-29 13:00 UTC (permalink / raw)
  To: Radu Nicolau, dev

Hi Radu,

In the our previous RFC, I got the impression that we had a consensus about
using rte_flow for inline and full protocol offload to net PMDs. However, this
patchset doesn't follow this convention. The rte_flow API allows for all future
encapsulations of these protocols to be expressed without re-implementing
rte_flow in rte_security.

Is there any reason not to use rte_flow?

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [RFC PATCH 4/5] ixgbe: enable inline ipsec
  2017-08-28 17:47   ` Ananyev, Konstantin
@ 2017-08-29 13:06     ` Radu Nicolau
  0 siblings, 0 replies; 13+ messages in thread
From: Radu Nicolau @ 2017-08-29 13:06 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev

Hi,

I will try to address all issues in the next iteration; and some 
comments inline.

Regards,

Radu


On 8/28/2017 6:47 PM, Ananyev, Konstantin wrote:
> Hi Radu,
> Few questions comments from me below.
> Thanks
> Konstantin
>
>> Signed-off-by: Radu Nicolau<radu.nicolau@intel.com>
>> ---
>>   config/common_base                     |   1 +
>>   drivers/net/ixgbe/Makefile             |   4 +-
>>   drivers/net/ixgbe/ixgbe_ethdev.c       |   3 +
>>   drivers/net/ixgbe/ixgbe_ethdev.h       |  10 +-
>>   drivers/net/ixgbe/ixgbe_ipsec.c        | 617 +++++++++++++++++++++++++++++++++
>>   drivers/net/ixgbe/ixgbe_ipsec.h        | 142 ++++++++
>>   drivers/net/ixgbe/ixgbe_rxtx.c         |  33 +-
>>   drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c |  44 +++
>>   8 files changed, 850 insertions(+), 4 deletions(-)
>>   create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.c
>>   create mode 100644 drivers/net/ixgbe/ixgbe_ipsec.h
>>
>> diff --git a/config/common_base b/config/common_base
>> index 5e97a08..2084609 100644
>> --- a/config/common_base
>> +++ b/config/common_base
>> @@ -179,6 +179,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
>>   CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
>>   CONFIG_RTE_IXGBE_INC_VECTOR=y
>>   CONFIG_RTE_LIBRTE_IXGBE_BYPASS=n
>> +CONFIG_RTE_LIBRTE_IXGBE_IPSEC=y
>>
>>   #
>>   # Compile burst-oriented I40E PMD driver
>> diff --git a/drivers/net/ixgbe/Makefile b/drivers/net/ixgbe/Makefile
>> index 5e57cb3..1180900 100644
>> --- a/drivers/net/ixgbe/Makefile
>> +++ b/drivers/net/ixgbe/Makefile
>> @@ -118,11 +118,13 @@ SRCS-$(CONFIG_RTE_IXGBE_INC_VECTOR) += ixgbe_rxtx_vec_neon.c
>>   else
>>   SRCS-$(CONFIG_RTE_IXGBE_INC_VECTOR) += ixgbe_rxtx_vec_sse.c
>>   endif
>> -
>>   ifeq ($(CONFIG_RTE_LIBRTE_IXGBE_BYPASS),y)
>>   SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_bypass.c
>>   SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_82599_bypass.c
>>   endif
>> +ifeq ($(CONFIG_RTE_LIBRTE_IXGBE_IPSEC),y)
>> +SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_ipsec.c
>> +endif
>>   SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += rte_pmd_ixgbe.c
>>   SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_tm.c
>>
>> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
>> index 22171d8..73de5e6 100644
>> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
>> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
>> @@ -1135,6 +1135,9 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
>>   	PMD_INIT_FUNC_TRACE();
>>
>>   	eth_dev->dev_ops = &ixgbe_eth_dev_ops;
>> +#ifdef RTE_LIBRTE_IXGBE_IPSEC
>> +	eth_dev->sec_ops = &ixgbe_security_ops;
>> +#endif /* RTE_LIBRTE_IXGBE_IPSEC */
> Do we  really need a new config macro here?
> Can't we make enable/disable IPsec for ixgbe configurable at device startup stage
> (as we doing for other RX/TX offloads)?
I suppose so, the only reason that I've added the config macro is to 
have as little as possible impact when inline ipsec is not needed.
>
>>   	eth_dev->rx_pkt_burst = &ixgbe_recv_pkts;
>>   	eth_dev->tx_pkt_burst = &ixgbe_xmit_pkts;
>>   	eth_dev->tx_pkt_prepare = &ixgbe_prep_pkts;
>> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
>> index caa50c8..d1a84e2 100644
>> --- a/drivers/net/ixgbe/ixgbe_ethdev.h
>> +++ b/drivers/net/ixgbe/ixgbe_ethdev.h
>> @@ -38,6 +38,9 @@
>>   #include "base/ixgbe_dcb_82599.h"
>>   #include "base/ixgbe_dcb_82598.h"
>>   #include "ixgbe_bypass.h"
>> +#ifdef RTE_LIBRTE_IXGBE_IPSEC
>> +#include "ixgbe_ipsec.h"
>> +#endif /* RTE_LIBRTE_IXGBE_IPSEC */
>>   #include <rte_time.h>
>>   #include <rte_hash.h>
>>   #include <rte_pci.h>
>> @@ -529,7 +532,9 @@ struct ixgbe_adapter {
>>   	struct ixgbe_filter_info    filter;
>>   	struct ixgbe_l2_tn_info     l2_tn;
>>   	struct ixgbe_bw_conf        bw_conf;
>> -
>> +#ifdef RTE_LIBRTE_IXGBE_IPSEC
>> +	struct ixgbe_ipsec          ipsec;
>> +#endif /* RTE_LIBRTE_IXGBE_IPSEC */
>>   	bool rx_bulk_alloc_allowed;
>>   	bool rx_vec_allowed;
>>   	struct rte_timecounter      systime_tc;
>> @@ -586,6 +591,9 @@ struct ixgbe_adapter {
>>   #define IXGBE_DEV_PRIVATE_TO_TM_CONF(adapter) \
>>   	(&((struct ixgbe_adapter *)adapter)->tm_conf)
>>
>> +#define IXGBE_DEV_PRIVATE_TO_IPSEC(adapter)\
>> +	(&((struct ixgbe_adapter *)adapter)->ipsec)
>> +
>>   /*
>>    * RX/TX function prototypes
>>    */
>> diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
>> new file mode 100644
>> index 0000000..d866cd8
>> --- /dev/null
>> +++ b/drivers/net/ixgbe/ixgbe_ipsec.c
>> @@ -0,0 +1,617 @@
>> +/*-
>> + *   BSD LICENSE
>> + *
>> + *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
>> + *   All rights reserved.
>> + *
>> + *   Redistribution and use in source and binary forms, with or without
>> + *   modification, are permitted provided that the following conditions
>> + *   are met:
>> + *
>> + *	 * Redistributions of source code must retain the above copyright
>> + *	 notice, this list of conditions and the following disclaimer.
>> + *	 * Redistributions in binary form must reproduce the above copyright
>> + *	 notice, this list of conditions and the following disclaimer in
>> + *	 the documentation and/or other materials provided with the
>> + *	 distribution.
>> + *	 * Neither the name of Intel Corporation nor the names of its
>> + *	 contributors may be used to endorse or promote products derived
>> + *	 from this software without specific prior written permission.
>> + *
>> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
>> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
>> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
>> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
>> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
>> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
>> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
>> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
>> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
>> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
>> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
>> + */
>> +
>> +#include <rte_ethdev.h>
>> +#include <rte_ethdev_pci.h>
>> +#include <rte_security.h>
>> +#include <rte_ip.h>
>> +#include <rte_jhash.h>
>> +#include <rte_cryptodev_pmd.h>
>> +
>> +#include "base/ixgbe_type.h"
>> +#include "base/ixgbe_api.h"
>> +#include "ixgbe_ethdev.h"
>> +#include "ixgbe_ipsec.h"
>> +
>> +
>> +#define IXGBE_WAIT_RW(__reg, __rw)			\
>> +{							\
>> +	IXGBE_WRITE_REG(hw, (__reg), reg);		\
>> +	while ((IXGBE_READ_REG(hw, (__reg))) & (__rw))	\
>> +	;						\
>> +}
>> +#define IXGBE_WAIT_RREAD  IXGBE_WAIT_RW(IXGBE_IPSRXIDX, IPSRXIDX_READ)
>> +#define IXGBE_WAIT_RWRITE IXGBE_WAIT_RW(IXGBE_IPSRXIDX, IPSRXIDX_WRITE)
>> +#define IXGBE_WAIT_TREAD  IXGBE_WAIT_RW(IXGBE_IPSTXIDX, IPSRXIDX_READ)
>> +#define IXGBE_WAIT_TWRITE IXGBE_WAIT_RW(IXGBE_IPSTXIDX, IPSRXIDX_WRITE)
>> +
>> +#define CMP_IP(a, b)	\
>> +		((a).ipv6[0] == (b).ipv6[0] && (a).ipv6[1] == (b).ipv6[1] && \
>> +		(a).ipv6[2] == (b).ipv6[2] && (a).ipv6[3] == (b).ipv6[3])
>> +
>> +
>> +static void
>> +ixgbe_crypto_clear_ipsec_tables(struct rte_eth_dev *dev)
>> +{
>> +	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>> +	int i = 0;
>> +
>> +	/* clear Rx IP table*/
>> +	for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
>> +		uint16_t index = i << 3;
>> +		uint32_t reg = IPSRXIDX_WRITE | IPSRXIDX_TABLE_IP | index;
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3), 0);
>> +		IXGBE_WAIT_RWRITE;
>> +	}
>> +
>> +	/* clear Rx SPI and Rx/Tx SA tables*/
>> +	for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
>> +		uint32_t index = i << 3;
>> +		uint32_t reg = IPSRXIDX_WRITE | IPSRXIDX_TABLE_SPI | index;
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI, 0);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX, 0);
>> +		IXGBE_WAIT_RWRITE;
>> +		reg = IPSRXIDX_WRITE | IPSRXIDX_TABLE_KEY | index;
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0), 0);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1), 0);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2), 0);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3), 0);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT, 0);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD, 0);
>> +		IXGBE_WAIT_RWRITE;
>> +		reg = IPSRXIDX_WRITE | index;
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0), 0);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1), 0);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2), 0);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3), 0);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT, 0);
>> +		IXGBE_WAIT_TWRITE;
>> +	}
>> +}
>> +
>> +static int
>> +ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
>> +{
>> +	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>> +	struct rte_eth_link link;
>> +	uint32_t reg;
>> +
>> +	/* Halt the data paths */
>> +	reg = IXGBE_SECTXCTRL_TX_DIS;
>> +	IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL, reg);
>> +	reg = IXGBE_SECRXCTRL_RX_DIS;
>> +	IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, reg);
>> +
>> +	/* Wait for Tx path to empty */
>> +	do {
>> +		rte_eth_link_get_nowait(dev->data->port_id, &link);
>> +		if (link.link_status != ETH_LINK_UP) {
>> +			/* Fix for HSD:4426139
>> +			 * If the Tx FIFO has data but no link,
>> +			 * we can't clear the Tx Sec block. So set MAC
>> +			 * loopback before block clear
>> +			 */
>> +			reg = IXGBE_READ_REG(hw, IXGBE_MACC);
>> +			reg |= IXGBE_MACC_FLU;
>> +			IXGBE_WRITE_REG(hw, IXGBE_MACC, reg);
>> +
>> +			reg = IXGBE_READ_REG(hw, IXGBE_HLREG0);
>> +			reg |= IXGBE_HLREG0_LPBK;
>> +			IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
>> +			struct timespec time;
>> +			time.tv_sec = 0;
>> +			time.tv_nsec = 1000000 * 3;
>> +			nanosleep(&time, NULL);
>> +		}
>> +
>> +		reg = IXGBE_READ_REG(hw, IXGBE_SECTXSTAT);
>> +
>> +		rte_eth_link_get_nowait(dev->data->port_id, &link);
>> +		if (link.link_status != ETH_LINK_UP) {
>> +			reg = IXGBE_READ_REG(hw, IXGBE_MACC);
>> +			reg &= ~(IXGBE_MACC_FLU);
>> +			IXGBE_WRITE_REG(hw, IXGBE_MACC, reg);
>> +
>> +			reg = IXGBE_READ_REG(hw, IXGBE_HLREG0);
>> +			reg &= ~(IXGBE_HLREG0_LPBK);
>> +			IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
>> +		}
>> +	} while (!(reg & IXGBE_SECTXSTAT_SECTX_RDY));
>> +
>> +	/* Wait for Rx path to empty*/
>> +	do {
>> +		reg = IXGBE_READ_REG(hw, IXGBE_SECRXSTAT);
>> +	} while (!(reg & IXGBE_SECRXSTAT_SECRX_RDY));
>> +
>> +	/* Set IXGBE_SECTXBUFFAF to 0x15 as required in the datasheet*/
>> +	IXGBE_WRITE_REG(hw, IXGBE_SECTXBUFFAF, 0x15);
>> +
>> +	/* IFG needs to be set to 3 when we are using security. Otherwise a Tx
>> +	 * hang will occur with heavy traffic.
>> +	 */
>> +	reg = IXGBE_READ_REG(hw, IXGBE_SECTXMINIFG);
>> +	reg = (reg & 0xFFFFFFF0) | 0x3;
>> +	IXGBE_WRITE_REG(hw, IXGBE_SECTXMINIFG, reg);
>> +
>> +	reg = IXGBE_READ_REG(hw, IXGBE_HLREG0);
>> +	reg |= IXGBE_HLREG0_TXCRCEN | IXGBE_HLREG0_RXCRCSTRP;
>> +	IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
>> +
>> +	/* Enable the Tx crypto engine and restart the Tx data path;
>> +	 * set the STORE_FORWARD bit for IPSec.
>> +	 */
>> +	IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL, IXGBE_SECTXCTRL_STORE_FORWARD);
>> +
>> +	/* Enable the Rx crypto engine and restart the Rx data path*/
>> +	IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, 0);
>> +
>> +	/* Test if crypto was enabled */
>> +	reg = IXGBE_READ_REG(hw, IXGBE_SECTXCTRL);
>> +	if (reg != IXGBE_SECTXCTRL_STORE_FORWARD) {
>> +		PMD_DRV_LOG(ERR, "Error enabling Tx Crypto");
>> +		return -1;
>> +	}
>> +	reg = IXGBE_READ_REG(hw, IXGBE_SECRXCTRL);
>> +	if (reg != 0) {
>> +		PMD_DRV_LOG(ERR, "Error enabling Rx Crypto");
>> +		return -1;
>> +	}
>> +
>> +	ixgbe_crypto_clear_ipsec_tables(dev);
>> +
>> +	/* create hash table*/
>> +	{
>> +		struct ixgbe_ipsec *internals = IXGBE_DEV_PRIVATE_TO_IPSEC(
>> +				dev->data->dev_private);
>> +		struct rte_hash_parameters params = { 0 };
>> +		params.entries = IPSEC_MAX_SA_COUNT;
>> +		params.key_len = sizeof(uint32_t);
>> +		params.hash_func = rte_jhash;
>> +		params.hash_func_init_val = 0;
>> +		params.socket_id = rte_socket_id();
>> +		params.name = "tx_spi_sai_hash";
>> +		internals->tx_spi_sai_hash = rte_hash_create(&params);
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +
>> +static int
>> +ixgbe_crypto_add_sa(struct ixgbe_crypto_session *sess)
>> +{
>> +	struct rte_eth_dev *dev = sess->dev;
>> +	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>> +	struct ixgbe_ipsec *priv = IXGBE_DEV_PRIVATE_TO_IPSEC(
>> +			dev->data->dev_private);
>> +	uint32_t reg;
>> +	int sa_index = -1;
>> +
>> +	if (!(priv->flags & IS_INITIALIZED)) {
>> +		if (ixgbe_crypto_enable_ipsec(dev) == 0)
>> +			priv->flags |= IS_INITIALIZED;
>> +	}
>> +
>> +	if (sess->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
>> +		int i, ip_index = -1;
>> +
>> +		/* Find a match in the IP table*/
>> +		for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
>> +			if (CMP_IP(priv->rx_ip_table[i].ip,
>> +				 sess->dst_ip)) {
>> +				ip_index = i;
>> +				break;
>> +			}
>> +		}
>> +		/* If no match, find a free entry in the IP table*/
>> +		if (ip_index < 0) {
>> +			for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
>> +				if (priv->rx_ip_table[i].ref_count == 0) {
>> +					ip_index = i;
>> +					break;
>> +				}
>> +			}
>> +		}
>> +
>> +		/* Fail if no match and no free entries*/
>> +		if (ip_index < 0) {
>> +			PMD_DRV_LOG(ERR, "No free entry left "
>> +					"in the Rx IP table\n");
>> +			return -1;
>> +		}
>> +
>> +		/* Find a free entry in the SA table*/
>> +		for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
>> +			if (priv->rx_sa_table[i].used == 0) {
>> +				sa_index = i;
>> +				break;
>> +			}
>> +		}
>> +		/* Fail if no free entries*/
>> +		if (sa_index < 0) {
>> +			PMD_DRV_LOG(ERR, "No free entry left in "
>> +					"the Rx SA table\n");
>> +			return -1;
>> +		}
>> +
>> +		priv->rx_ip_table[ip_index].ip.ipv6[0] =
>> +				rte_cpu_to_be_32(sess->dst_ip.ipv6[0]);
>> +		priv->rx_ip_table[ip_index].ip.ipv6[1] =
>> +				rte_cpu_to_be_32(sess->dst_ip.ipv6[1]);
>> +		priv->rx_ip_table[ip_index].ip.ipv6[2] =
>> +				rte_cpu_to_be_32(sess->dst_ip.ipv6[2]);
>> +		priv->rx_ip_table[ip_index].ip.ipv6[3] =
>> +				rte_cpu_to_be_32(sess->dst_ip.ipv6[3]);
>> +		priv->rx_ip_table[ip_index].ref_count++;
>> +
>> +		priv->rx_sa_table[sa_index].spi =
>> +				rte_cpu_to_be_32(sess->spi);
>> +		priv->rx_sa_table[sa_index].ip_index = ip_index;
>> +		priv->rx_sa_table[sa_index].key[3] =
>> +				rte_cpu_to_be_32(*(uint32_t *)&sess->key[0]);
>> +		priv->rx_sa_table[sa_index].key[2] =
>> +				rte_cpu_to_be_32(*(uint32_t *)&sess->key[4]);
>> +		priv->rx_sa_table[sa_index].key[1] =
>> +				rte_cpu_to_be_32(*(uint32_t *)&sess->key[8]);
>> +		priv->rx_sa_table[sa_index].key[0] =
>> +				rte_cpu_to_be_32(*(uint32_t *)&sess->key[12]);
>> +		priv->rx_sa_table[sa_index].salt =
>> +				rte_cpu_to_be_32(sess->salt);
>> +		priv->rx_sa_table[sa_index].mode = IPSRXMOD_VALID;
>> +		if (sess->op == IXGBE_OP_AUTHENTICATED_DECRYPTION)
>> +			priv->rx_sa_table[sa_index].mode |=
>> +					(IPSRXMOD_PROTO | IPSRXMOD_DECRYPT);
>> +		if (sess->dst_ip.type == IPv6)
>> +			priv->rx_sa_table[sa_index].mode |= IPSRXMOD_IPV6;
>> +		priv->rx_sa_table[sa_index].used = 1;
>> +
>> +		/* write IP table entry*/
>> +		reg = IPSRXIDX_RX_EN | IPSRXIDX_WRITE
>> +				| IPSRXIDX_TABLE_IP | (ip_index << 3);
>> +		if (priv->rx_ip_table[ip_index].ip.type == IPv4) {
>> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
>> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
>> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
>> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3),
>> +					priv->rx_ip_table[ip_index].ip.ipv4);
>> +		} else {
>> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0),
>> +					priv->rx_ip_table[ip_index].ip.ipv6[0]);
>> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1),
>> +					priv->rx_ip_table[ip_index].ip.ipv6[1]);
>> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2),
>> +					priv->rx_ip_table[ip_index].ip.ipv6[2]);
>> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3),
>> +					priv->rx_ip_table[ip_index].ip.ipv6[3]);
>> +		}
>> +		IXGBE_WAIT_RWRITE;
>> +
>> +		/* write SPI table entry*/
>> +		reg = IPSRXIDX_RX_EN | IPSRXIDX_WRITE
>> +				| IPSRXIDX_TABLE_SPI | (sa_index << 3);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI,
>> +				priv->rx_sa_table[sa_index].spi);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX,
>> +				priv->rx_sa_table[sa_index].ip_index);
>> +		IXGBE_WAIT_RWRITE;
>> +
>> +		/* write Key table entry*/
>> +		reg = IPSRXIDX_RX_EN | IPSRXIDX_WRITE
>> +				| IPSRXIDX_TABLE_KEY | (sa_index << 3);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0),
>> +				priv->rx_sa_table[sa_index].key[0]);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1),
>> +				priv->rx_sa_table[sa_index].key[1]);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2),
>> +				priv->rx_sa_table[sa_index].key[2]);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3),
>> +				priv->rx_sa_table[sa_index].key[3]);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT,
>> +				priv->rx_sa_table[sa_index].salt);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD,
>> +				priv->rx_sa_table[sa_index].mode);
>> +		IXGBE_WAIT_RWRITE;
>> +
>> +	} else { /* sess->dir == RTE_CRYPTO_OUTBOUND */
>> +		int i;
>> +
>> +		/* Find a free entry in the SA table*/
>> +		for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
>> +			if (priv->tx_sa_table[i].used == 0) {
>> +				sa_index = i;
>> +				break;
>> +			}
>> +		}
>> +		/* Fail if no free entries*/
>> +		if (sa_index < 0) {
>> +			PMD_DRV_LOG(ERR, "No free entry left in "
>> +					"the Tx SA table\n");
>> +			return -1;
>> +		}
>> +
>> +		priv->tx_sa_table[sa_index].spi =
>> +				rte_cpu_to_be_32(sess->spi);
>> +		priv->tx_sa_table[sa_index].key[3] =
>> +				rte_cpu_to_be_32(*(uint32_t *)&sess->key[0]);
>> +		priv->tx_sa_table[sa_index].key[2] =
>> +				rte_cpu_to_be_32(*(uint32_t *)&sess->key[4]);
>> +		priv->tx_sa_table[sa_index].key[1] =
>> +				rte_cpu_to_be_32(*(uint32_t *)&sess->key[8]);
>> +		priv->tx_sa_table[sa_index].key[0] =
>> +				rte_cpu_to_be_32(*(uint32_t *)&sess->key[12]);
>> +		priv->tx_sa_table[sa_index].salt =
>> +				rte_cpu_to_be_32(sess->salt);
>> +
>> +		reg = IPSRXIDX_RX_EN | IPSRXIDX_WRITE | (sa_index << 3);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0),
>> +				priv->tx_sa_table[sa_index].key[0]);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1),
>> +				priv->tx_sa_table[sa_index].key[1]);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2),
>> +				priv->tx_sa_table[sa_index].key[2]);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3),
>> +				priv->tx_sa_table[sa_index].key[3]);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT,
>> +				priv->tx_sa_table[sa_index].salt);
>> +		IXGBE_WAIT_TWRITE;
>> +
>> +		rte_hash_add_key_data(priv->tx_spi_sai_hash,
>> +				&priv->tx_sa_table[sa_index].spi,
>> +				(void *)(uint64_t)sa_index);
>> +		priv->tx_sa_table[i].used = 1;
>> +		sess->sa_index = sa_index;
>> +	}
>> +
>> +	return sa_index;
>> +}
>> +
>> +static int
>> +ixgbe_crypto_remove_sa(struct rte_eth_dev *dev,
>> +		     struct ixgbe_crypto_session *sess)
>> +{
>> +	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>> +	struct ixgbe_ipsec *priv = IXGBE_DEV_PRIVATE_TO_IPSEC(
>> +			dev->data->dev_private);
>> +	uint32_t reg;
>> +	int sa_index = -1;
>> +
>> +	if (sess->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
>> +		int i, ip_index = -1;
>> +
>> +		/* Find a match in the IP table*/
>> +		for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
>> +			if (CMP_IP(priv->rx_ip_table[i].ip, sess->dst_ip)) {
>> +				ip_index = i;
>> +				break;
>> +			}
>> +		}
>> +
>> +		/* Fail if no match*/
>> +		if (ip_index < 0) {
>> +			PMD_DRV_LOG(ERR, "Entry not found in the Rx IP table\n");
>> +			return -1;
>> +		}
>> +
>> +		/* Find a free entry in the SA table*/
>> +		for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
>> +			if (priv->rx_sa_table[i].spi ==
>> +					rte_cpu_to_be_32(sess->spi)) {
>> +				sa_index = i;
>> +				break;
>> +			}
>> +		}
>> +		/* Fail if no match*/
>> +		if (sa_index < 0) {
>> +			PMD_DRV_LOG(ERR, "Entry not found in the Rx SA table\n");
>> +			return -1;
>> +		}
>> +
>> +		/* Disable and clear Rx SPI and key table table entryes*/
>> +		reg = IPSRXIDX_WRITE | IPSRXIDX_TABLE_SPI | (sa_index << 3);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI, 0);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX, 0);
>> +		IXGBE_WAIT_RWRITE;
>> +		reg = IPSRXIDX_WRITE | IPSRXIDX_TABLE_KEY | (sa_index << 3);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(0), 0);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(1), 0);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(2), 0);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXKEY(3), 0);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXSALT, 0);
>> +		IXGBE_WRITE_REG(hw, IXGBE_IPSRXMOD, 0);
>> +		IXGBE_WAIT_RWRITE;
>> +		priv->rx_sa_table[sa_index].used = 0;
>> +
>> +		/* If last used then clear the IP table entry*/
>> +		priv->rx_ip_table[ip_index].ref_count--;
>> +		if (priv->rx_ip_table[ip_index].ref_count == 0) {
>> +			reg = IPSRXIDX_WRITE | IPSRXIDX_TABLE_IP
>> +					| (ip_index << 3);
>> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(0), 0);
>> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(1), 0);
>> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(2), 0);
>> +			IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPADDR(3), 0);
>> +		}
>> +		} else { /* sess->dir == RTE_CRYPTO_OUTBOUND */
>> +			int i;
>> +
>> +			/* Find a match in the SA table*/
>> +			for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
>> +				if (priv->tx_sa_table[i].spi ==
>> +						rte_cpu_to_be_32(sess->spi)) {
>> +					sa_index = i;
>> +					break;
>> +				}
>> +			}
>> +			/* Fail if no match entries*/
>> +			if (sa_index < 0) {
>> +				PMD_DRV_LOG(ERR, "Entry not found in the "
>> +						"Tx SA table\n");
>> +				return -1;
>> +			}
>> +			reg = IPSRXIDX_WRITE | (sa_index << 3);
>> +			IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(0), 0);
>> +			IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(1), 0);
>> +			IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(2), 0);
>> +			IXGBE_WRITE_REG(hw, IXGBE_IPSTXKEY(3), 0);
>> +			IXGBE_WRITE_REG(hw, IXGBE_IPSTXSALT, 0);
>> +			IXGBE_WAIT_TWRITE;
>> +
>> +			priv->tx_sa_table[sa_index].used = 0;
>> +			rte_hash_del_key(priv->tx_spi_sai_hash,
>> +					&priv->tx_sa_table[sa_index].spi);
>> +		}
>> +
>> +	return 0;
>> +}
>> +
>> +static int
>> +ixgbe_crypto_create_session(void *dev,
>> +		struct rte_security_sess_conf *sess_conf,
>> +		struct rte_security_session *sess,
>> +		struct rte_mempool *mempool)
>> +{
>> +	struct ixgbe_crypto_session *session = NULL;
>> +	struct rte_security_ipsec_xform *ipsec_xform = sess_conf->ipsec_xform;
>> +
>> +	if (rte_mempool_get(mempool, (void **)&session)) {
>> +		PMD_DRV_LOG(ERR, "Cannot get object from session mempool");
>> +		return -ENOMEM;
>> +	}
>> +	if (ipsec_xform->aead_alg != RTE_CRYPTO_AEAD_AES_GCM) {
>> +		PMD_DRV_LOG(ERR, "Unsupported IPsec mode\n");
>> +		return -ENOTSUP;
>> +	}
>> +
>> +	session->op = (ipsec_xform->op == RTE_SECURITY_IPSEC_OP_DECAP) ?
>> +			IXGBE_OP_AUTHENTICATED_DECRYPTION :
>> +			IXGBE_OP_AUTHENTICATED_ENCRYPTION;
>> +	session->key = ipsec_xform->aead_key.data;
>> +	memcpy(&session->salt,
>> +	     &ipsec_xform->aead_key.data[ipsec_xform->aead_key.length], 4);
>> +	session->spi = ipsec_xform->spi;
>> +
>> +	if (ipsec_xform->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
>> +		uint32_t sip = ipsec_xform->tunnel.ipv4.src_ip.s_addr;
>> +		uint32_t dip = ipsec_xform->tunnel.ipv4.dst_ip.s_addr;
>> +		session->src_ip.type = IPv4;
>> +		session->dst_ip.type = IPv4;
>> +		session->src_ip.ipv4 = rte_cpu_to_be_32(sip);
>> +		session->dst_ip.ipv4 = rte_cpu_to_be_32(dip);
>> +
>> +	} else {
>> +		uint32_t *sip = (uint32_t *)&ipsec_xform->tunnel.ipv6.src_addr;
>> +		uint32_t *dip = (uint32_t *)&ipsec_xform->tunnel.ipv6.dst_addr;
>> +		session->src_ip.type = IPv6;
>> +		session->dst_ip.type = IPv6;
>> +		session->src_ip.ipv6[0] = rte_cpu_to_be_32(sip[0]);
>> +		session->src_ip.ipv6[1] = rte_cpu_to_be_32(sip[1]);
>> +		session->src_ip.ipv6[2] = rte_cpu_to_be_32(sip[2]);
>> +		session->src_ip.ipv6[3] = rte_cpu_to_be_32(sip[3]);
>> +		session->dst_ip.ipv6[0] = rte_cpu_to_be_32(dip[0]);
>> +		session->dst_ip.ipv6[1] = rte_cpu_to_be_32(dip[1]);
>> +		session->dst_ip.ipv6[2] = rte_cpu_to_be_32(dip[2]);
>> +		session->dst_ip.ipv6[3] = rte_cpu_to_be_32(dip[3]);
>> +	}
>> +
>> +	session->dev = (struct rte_eth_dev *)dev;
>> +	set_sec_session_private_data(sess, 0, session);
>> +
>> +	if (ixgbe_crypto_add_sa(session)) {
>> +		PMD_DRV_LOG(ERR, "Failed to add SA\n");
>> +		return -EPERM;
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +static void
>> +ixgbe_crypto_remove_session(void *dev,
>> +		struct rte_security_session *session)
>> +{
>> +	struct ixgbe_crypto_session *sess =
>> +		(struct ixgbe_crypto_session *)
>> +		get_sec_session_private_data(session, 0);
>> +	if (dev != sess->dev) {
>> +		PMD_DRV_LOG(ERR, "Session not bound to this device\n");
>> +		return;
>> +	}
>> +
>> +	if (ixgbe_crypto_remove_sa(dev, sess)) {
>> +		PMD_DRV_LOG(ERR, "Failed to remove session\n");
>> +		return;
>> +	}
>> +
>> +	rte_free(session);
>> +}
>> +
>> +uint64_t
>> +ixgbe_crypto_get_txdesc_flags(uint16_t port_id, struct rte_mbuf *mb) {
>> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>> +	struct ixgbe_ipsec *priv =
>> +			IXGBE_DEV_PRIVATE_TO_IPSEC(dev->data->dev_private);
>> +	struct ipv4_hdr *ip4 =
>> +			rte_pktmbuf_mtod_offset(mb, struct ipv4_hdr*,
>> +						sizeof(struct ether_hdr));
>> +	uint32_t spi = 0;
>> +	uintptr_t sa_index;
>> +	struct ixgbe_crypto_tx_desc_metadata mdata = {0};
>> +
>> +	if (ip4->version_ihl == 0x45)
>> +		spi = *rte_pktmbuf_mtod_offset(mb, uint32_t*,
>> +					sizeof(struct ether_hdr) +
>> +					sizeof(struct ipv4_hdr));
>> +	else
>> +		spi = *rte_pktmbuf_mtod_offset(mb, uint32_t*,
>> +					sizeof(struct ether_hdr) +
>> +					sizeof(struct ipv6_hdr));
> Instead of using hardcoded values for L2/L3 len, why not to require user to setup mbuf's
> l2_len/l3_len fields properly (as we do for other TX offloads)?
I will look into this.
>
>> +
>> +	if (priv->tx_spi_sai_hash &&
>> +			rte_hash_lookup_data(priv->tx_spi_sai_hash, &spi,
>> +					(void **)&sa_index) == 0) {
>> +		mdata.enc = 1;
>> +		mdata.sa_idx = (uint32_t)sa_index;
>> +		mdata.pad_len = *rte_pktmbuf_mtod_offset(mb, uint8_t *,
>> +					rte_pktmbuf_pkt_len(mb) - 18);
>> +	}
> Might be worse to introduce some uint64_t security_id inside mbuf's second cache line.
> Then you can move whole functionality into tx_prepare().
> I suppose current implementation will hit TX performance quite badly.
I assume you mean worth :)
We tried to have ipsec related data in mbuf, but we got a lot of pushback.
Maybe something more generic, like an "extended metadata" field will be 
more acceptable. Then we can used it for passing the security session, 
which will be much faster than the current solution.
>
>> +
>> +	return mdata.data;
>> +}
>> +
>> +
>> +struct rte_security_ops ixgbe_security_ops = {
>> +		.session_configure = ixgbe_crypto_create_session,
>> +		.session_clear = ixgbe_crypto_remove_session,
>> +};
>> diff --git a/drivers/net/ixgbe/ixgbe_ipsec.h b/drivers/net/ixgbe/ixgbe_ipsec.h
>> new file mode 100644
>> index 0000000..fd479eb
>> --- /dev/null
>> +++ b/drivers/net/ixgbe/ixgbe_ipsec.h
>> @@ -0,0 +1,142 @@
>> +/*-
>> + *   BSD LICENSE
>> + *
>> + *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
>> + *   All rights reserved.
>> + *
>> + *   Redistribution and use in source and binary forms, with or without
>> + *   modification, are permitted provided that the following conditions
>> + *   are met:
>> + *
>> + *     * Redistributions of source code must retain the above copyright
>> + *       notice, this list of conditions and the following disclaimer.
>> + *     * Redistributions in binary form must reproduce the above copyright
>> + *       notice, this list of conditions and the following disclaimer in
>> + *       the documentation and/or other materials provided with the
>> + *       distribution.
>> + *     * Neither the name of Intel Corporation nor the names of its
>> + *       contributors may be used to endorse or promote products derived
>> + *       from this software without specific prior written permission.
>> + *
>> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
>> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
>> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
>> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
>> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
>> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
>> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
>> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
>> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
>> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
>> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
>> + */
>> +
>> +#ifndef IXGBE_IPSEC_H_
>> +#define IXGBE_IPSEC_H_
>> +
>> +#include <rte_security.h>
>> +
>> +#define IPSRXIDX_RX_EN                                    0x00000001
>> +#define IPSRXIDX_TABLE_IP                                 0x00000002
>> +#define IPSRXIDX_TABLE_SPI                                0x00000004
>> +#define IPSRXIDX_TABLE_KEY                                0x00000006
>> +#define IPSRXIDX_WRITE                                    0x80000000
>> +#define IPSRXIDX_READ                                     0x40000000
>> +#define IPSRXMOD_VALID                                    0x00000001
>> +#define IPSRXMOD_PROTO                                    0x00000004
>> +#define IPSRXMOD_DECRYPT                                  0x00000008
>> +#define IPSRXMOD_IPV6                                     0x00000010
>> +#define IXGBE_ADVTXD_POPTS_IPSEC                          0x00000400
>> +#define IXGBE_ADVTXD_TUCMD_IPSEC_TYPE_ESP                 0x00002000
>> +#define IXGBE_ADVTXD_TUCMD_IPSEC_ENCRYPT_EN               0x00004000
>> +#define IXGBE_RXDADV_IPSEC_STATUS_SECP                    0x00020000
>> +#define IXGBE_RXDADV_IPSEC_ERROR_BIT_MASK                 0x18000000
>> +#define IXGBE_RXDADV_IPSEC_ERROR_INVALID_PROTOCOL         0x08000000
>> +#define IXGBE_RXDADV_IPSEC_ERROR_INVALID_LENGTH           0x10000000
>> +#define IXGBE_RXDADV_IPSEC_ERROR_AUTHENTICATION_FAILED    0x18000000
>> +
>> +#define IPSEC_MAX_RX_IP_COUNT           128
>> +#define IPSEC_MAX_SA_COUNT              1024
>> +
>> +enum ixgbe_operation {
>> +	IXGBE_OP_AUTHENTICATED_ENCRYPTION, IXGBE_OP_AUTHENTICATED_DECRYPTION
>> +};
>> +
>> +enum ixgbe_gcm_key {
>> +	IXGBE_GCM_KEY_128, IXGBE_GCM_KEY_256
>> +};
>> +
>> +/**
>> + * Generic IP address structure
>> + * TODO: Find better location for this rte_net.h possibly.
>> + **/
>> +struct ipaddr {
>> +	enum ipaddr_type {
>> +		IPv4, IPv6
>> +	} type;
>> +	/**< IP Address Type - IPv4/IPv6 */
>> +
>> +	union {
>> +		uint32_t ipv4;
>> +		uint32_t ipv6[4];
>> +	};
>> +};
>> +
>> +/** inline crypto crypto private session structure */
>> +struct ixgbe_crypto_session {
>> +	enum ixgbe_operation op;
>> +	uint8_t *key;
>> +	uint32_t salt;
>> +	uint32_t sa_index;
>> +	uint32_t spi;
>> +	struct ipaddr src_ip;
>> +	struct ipaddr dst_ip;
>> +	struct rte_eth_dev *dev;
>> +} __rte_cache_aligned;
>> +
>> +struct ixgbe_crypto_rx_ip_table {
>> +	struct ipaddr ip;
>> +	uint16_t ref_count;
>> +};
>> +struct ixgbe_crypto_rx_sa_table {
>> +	uint32_t spi;
>> +	uint32_t ip_index;
>> +	uint32_t key[4];
>> +	uint32_t salt;
>> +	uint8_t mode;
>> +	uint8_t used;
>> +};
>> +
>> +struct ixgbe_crypto_tx_sa_table {
>> +	uint32_t spi;
>> +	uint32_t key[4];
>> +	uint32_t salt;
>> +	uint8_t used;
>> +};
>> +
>> +struct ixgbe_crypto_tx_desc_metadata {
>> +	union {
>> +		uint64_t data;
>> +		struct {
>> +			uint32_t sa_idx;
>> +			uint8_t pad_len;
>> +			uint8_t enc;
>> +		};
>> +	};
>> +};
>> +
>> +struct ixgbe_ipsec {
>> +#define IS_INITIALIZED (1 << 0)
>> +	uint8_t flags;
>> +	struct ixgbe_crypto_rx_ip_table rx_ip_table[IPSEC_MAX_RX_IP_COUNT];
>> +	struct ixgbe_crypto_rx_sa_table rx_sa_table[IPSEC_MAX_SA_COUNT];
>> +	struct ixgbe_crypto_tx_sa_table tx_sa_table[IPSEC_MAX_SA_COUNT];
>> +	struct rte_hash *tx_spi_sai_hash;
>> +};
>> +
>> +extern struct rte_security_ops ixgbe_security_ops;
>> +
>> +uint64_t ixgbe_crypto_get_txdesc_flags(uint16_t port_id, struct rte_mbuf *mb);
>> +
>> +
>> +#endif /*IXGBE_IPSEC_H_*/
>> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
>> index 64bff25..76be27a 100644
>> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
>> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> The patch seems not complete...
> I think you need new DEV_RX_OFFLOAD_*/ DEV_TX_OFFLOAD_* flags to advertisize new
> offload capabilities.
> Plus changes in dev_configure() and/or rx(/tx)_queue_setup() to allow user to enable/disable
> these offloads at setup stage and select proper rx/tx function.
> Also, I think you'll need new fields in ixgbe_tx_offload and realted mask, plus changes
> in the code that manages txq->ctx_cache.
I will add the offload capabilities flags and functionality and rework 
the tx descriptor update.
>
>> @@ -395,7 +395,8 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
>>   static inline void
>>   ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
>>   		volatile struct ixgbe_adv_tx_context_desc *ctx_txd,
>> -		uint64_t ol_flags, union ixgbe_tx_offload tx_offload)
>> +		uint64_t ol_flags, union ixgbe_tx_offload tx_offload,
>> +		__rte_unused struct rte_mbuf *mb)
>>   {
>>   	uint32_t type_tucmd_mlhl;
>>   	uint32_t mss_l4len_idx = 0;
>> @@ -479,6 +480,20 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
>>   		seqnum_seed |= tx_offload.l2_len
>>   			       << IXGBE_ADVTXD_TUNNEL_LEN;
>>   	}
>> +#ifdef RTE_LIBRTE_IXGBE_IPSEC
>> +	if (mb->ol_flags & PKT_TX_SECURITY_OFFLOAD) {
>> +		struct ixgbe_crypto_tx_desc_metadata mdata = {
>> +			.data = ixgbe_crypto_get_txdesc_flags(txq->port_id, mb),
>> +		};
>> +		seqnum_seed |=
>> +			(IXGBE_ADVTXD_IPSEC_SA_INDEX_MASK & mdata.sa_idx);
>> +		type_tucmd_mlhl |= mdata.enc ?
>> +			(IXGBE_ADVTXD_TUCMD_IPSEC_TYPE_ESP |
>> +				IXGBE_ADVTXD_TUCMD_IPSEC_ENCRYPT_EN) : 0;
>> +		type_tucmd_mlhl |=
>> +			(mdata.pad_len & IXGBE_ADVTXD_IPSEC_ESP_LEN_MASK);
>> +	}
>> +#endif /* RTE_LIBRTE_IXGBE_IPSEC */
>>
>>   	txq->ctx_cache[ctx_idx].flags = ol_flags;
>>   	txq->ctx_cache[ctx_idx].tx_offload.data[0]  =
>> @@ -855,7 +870,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
>>   				}
>>
>>   				ixgbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
>> -					tx_offload);
>> +					tx_offload, tx_pkt);
>>
>>   				txe->last_id = tx_last;
>>   				tx_id = txe->next_id;
>> @@ -872,7 +887,13 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
>>   			olinfo_status |= ctx << IXGBE_ADVTXD_IDX_SHIFT;
>>   		}
>>
>> +#ifdef RTE_LIBRTE_IXGBE_IPSEC
>> +		olinfo_status |= ((pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT) |
>> +				(((ol_flags & PKT_TX_SECURITY_OFFLOAD) != 0)
>> +					* IXGBE_ADVTXD_POPTS_IPSEC));
>> +#else /* RTE_LIBRTE_IXGBE_IPSEC */
>>   		olinfo_status |= (pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
>> +#endif /* RTE_LIBRTE_IXGBE_IPSEC */
>>
>>   		m_seg = tx_pkt;
>>   		do {
>> @@ -1447,6 +1468,14 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status)
>>   		pkt_flags |= PKT_RX_EIP_CKSUM_BAD;
>>   	}
>>
>> +#ifdef RTE_LIBRTE_IXGBE_IPSEC
>> +	if (rx_status & IXGBE_RXD_STAT_SECP) {
>> +		pkt_flags |= PKT_RX_SECURITY_OFFLOAD;
>> +		if (rx_status & IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG)
>> +			pkt_flags |= PKT_RX_SECURITY_OFFLOAD_FAILED;
>> +	}
>> +#endif /* RTE_LIBRTE_IXGBE_IPSEC */
>> +
>>   	return pkt_flags;
>>   }
>>
>> diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
>> index e704a7f..8673a01 100644
>> --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
>> +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
>> @@ -128,6 +128,9 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
>>   {
>>   	__m128i ptype0, ptype1, vtag0, vtag1, csum;
>>   	__m128i rearm0, rearm1, rearm2, rearm3;
>> +#ifdef RTE_LIBRTE_IXGBE_IPSEC
>> +	__m128i sterr0, sterr1, sterr2, sterr3, tmp1, tmp2;
>> +#endif /* RTE_LIBRTE_IXGBE_IPSEC */
>>
>>   	/* mask everything except rss type */
>>   	const __m128i rsstype_msk = _mm_set_epi16(
>> @@ -174,6 +177,19 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
>>   		0, PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
>>   		PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t));
>>
>> +#ifdef RTE_LIBRTE_IXGBE_IPSEC
>> +	const __m128i ipsec_sterr_msk = _mm_set_epi32(
>> +		0, IXGBE_RXD_STAT_SECP | IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG,
>> +		0, 0);
>> +	const __m128i ipsec_proc_msk  = _mm_set_epi32(
>> +		0, IXGBE_RXD_STAT_SECP, 0, 0);
>> +	const __m128i ipsec_err_flag  = _mm_set_epi32(
>> +		0, PKT_RX_SECURITY_OFFLOAD_FAILED | PKT_RX_SECURITY_OFFLOAD,
>> +		0, 0);
>> +	const __m128i ipsec_proc_flag = _mm_set_epi32(
>> +		0, PKT_RX_SECURITY_OFFLOAD, 0, 0);
>> +#endif /* RTE_LIBRTE_IXGBE_IPSEC */
>> +
>>   	ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
>>   	ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
>>   	vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
>> @@ -221,6 +237,34 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
>>   	rearm2 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vtag1, 4), 0x10);
>>   	rearm3 = _mm_blend_epi16(mbuf_init, _mm_slli_si128(vtag1, 2), 0x10);
>>
>> +#ifdef RTE_LIBRTE_IXGBE_IPSEC
>> +	/*inline ipsec, extract the flags from the descriptors*/
>> +	sterr0 = _mm_and_si128(descs[0], ipsec_sterr_msk);
>> +	sterr1 = _mm_and_si128(descs[1], ipsec_sterr_msk);
>> +	sterr2 = _mm_and_si128(descs[2], ipsec_sterr_msk);
>> +	sterr3 = _mm_and_si128(descs[3], ipsec_sterr_msk);
>> +	tmp1 = _mm_cmpeq_epi32(sterr0, ipsec_sterr_msk);
>> +	tmp2 = _mm_cmpeq_epi32(sterr0, ipsec_proc_msk);
>> +	sterr0 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
>> +				_mm_and_si128(tmp2, ipsec_proc_flag));
>> +	tmp1 = _mm_cmpeq_epi32(sterr1, ipsec_sterr_msk);
>> +	tmp2 = _mm_cmpeq_epi32(sterr1, ipsec_proc_msk);
>> +	sterr1 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
>> +				_mm_and_si128(tmp2, ipsec_proc_flag));
>> +	tmp1 = _mm_cmpeq_epi32(sterr2, ipsec_sterr_msk);
>> +	tmp2 = _mm_cmpeq_epi32(sterr2, ipsec_proc_msk);
>> +	sterr2 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
>> +				_mm_and_si128(tmp2, ipsec_proc_flag));
>> +	tmp1 = _mm_cmpeq_epi32(sterr3, ipsec_sterr_msk);
>> +	tmp2 = _mm_cmpeq_epi32(sterr3, ipsec_proc_msk);
>> +	sterr3 = _mm_or_si128(_mm_and_si128(tmp1, ipsec_err_flag),
>> +				_mm_and_si128(tmp2, ipsec_proc_flag));
>> +	rearm0 = _mm_or_si128(rearm0, sterr0);
>> +	rearm1 = _mm_or_si128(rearm1, sterr1);
>> +	rearm2 = _mm_or_si128(rearm2, sterr2);
>> +	rearm3 = _mm_or_si128(rearm3, sterr3);
>> +#endif /* RTE_LIBRTE_IXGBE_IPSEC */
>> +
> Wonder what is the performance drop (if any) when ipsec RX is on?
> Would it make sense to introduce a new RX function for that case?
I will try to find a better solution that will have minimal impact when 
ipsec is off.
>
>>   	/* write the rearm data and the olflags in one write */
>>   	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
>>   			offsetof(struct rte_mbuf, rearm_data) + 8);
>> --
>> 2.7.5

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [RFC PATCH 3/5] rte_security: updates and enabled security operations for ethdev
  2017-08-29 12:14   ` Akhil Goyal
@ 2017-08-29 13:13     ` Radu Nicolau
  2017-08-29 13:19       ` Akhil Goyal
  0 siblings, 1 reply; 13+ messages in thread
From: Radu Nicolau @ 2017-08-29 13:13 UTC (permalink / raw)
  To: Akhil Goyal, dev; +Cc: hemant.agrawal, Declan Doherty, Boris Pismenny

Hi,


On 8/29/2017 1:14 PM, Akhil Goyal wrote:
> Hi Radu,
> On 8/25/2017 8:27 PM, Radu Nicolau wrote:
>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>> ---
>>   lib/Makefile                                   |  1 +
>>   lib/librte_cryptodev/rte_cryptodev_pmd.h       |  4 +--
>>   lib/librte_cryptodev/rte_cryptodev_version.map | 10 ++++++++
>>   lib/librte_cryptodev/rte_security.c            | 34 
>> +++++++++++++++++---------
>>   lib/librte_cryptodev/rte_security.h            | 12 ++++++---
>>   5 files changed, 44 insertions(+), 17 deletions(-)
>>
>> diff --git a/lib/Makefile b/lib/Makefile
>> index 86caba1..08a1767 100644
>> --- a/lib/Makefile
>> +++ b/lib/Makefile
>> @@ -51,6 +51,7 @@ DEPDIRS-librte_ether += librte_mbuf
>>   DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
>>   DEPDIRS-librte_cryptodev := librte_eal librte_mempool librte_ring 
>> librte_mbuf
>>   DEPDIRS-librte_cryptodev += librte_kvargs
>> +DEPDIRS-librte_cryptodev += librte_ether
> Is the shared build working now?
It does only for core dpdk (but not for the sample app), but after 
updating the exported symbols list it works for both.
>>   DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
>>   DEPDIRS-librte_eventdev := librte_eal librte_ring
>>   DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
>> diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h 
>> b/lib/librte_cryptodev/rte_cryptodev_pmd.h
>> index 219fba6..ab3ecf7 100644
>> --- a/lib/librte_cryptodev/rte_cryptodev_pmd.h
>> +++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
>> @@ -371,7 +371,7 @@ struct rte_cryptodev_ops {
>>    *  - Returns -ENOTSUP if crypto device does not support the crypto 
>> transform.
>>    *  - Returns -ENOMEM if the private session could not be allocated.
>>    */
>> -typedef int (*security_configure_session_t)(struct rte_cryptodev *dev,
>> +typedef int (*security_configure_session_t)(void *dev,
>>           struct rte_security_sess_conf *conf,
>>           struct rte_security_session *sess,
>>           struct rte_mempool *mp);
>> @@ -382,7 +382,7 @@ typedef int 
>> (*security_configure_session_t)(struct rte_cryptodev *dev,
>>    * @param    dev        Crypto device pointer
>>    * @param    sess        Security session structure
>>    */
>> -typedef void (*security_free_session_t)(struct rte_cryptodev *dev,
>> +typedef void (*security_free_session_t)(void *dev,
>>           struct rte_security_session *sess);
>>     /** Security operations function pointer table */
>> diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map 
>> b/lib/librte_cryptodev/rte_cryptodev_version.map
>> index e9ba88a..20b553e 100644
>> --- a/lib/librte_cryptodev/rte_cryptodev_version.map
>> +++ b/lib/librte_cryptodev/rte_cryptodev_version.map
>> @@ -79,3 +79,13 @@ DPDK_17.08 {
>>       rte_crypto_aead_operation_strings;
>>     } DPDK_17.05;
>> +
>> +DPDK_17.11 {
>> +    global:
>> +
>> +    rte_security_session_create;
>> +    rte_security_session_init;
>> +    rte_security_attach_session;
>> +    rte_security_session_free;
>> +
>> +} DPDK_17.08;
>> diff --git a/lib/librte_cryptodev/rte_security.c 
>> b/lib/librte_cryptodev/rte_security.c
>> index 7c73c93..5f35355 100644
>> --- a/lib/librte_cryptodev/rte_security.c
>> +++ b/lib/librte_cryptodev/rte_security.c
>> @@ -86,8 +86,12 @@ rte_security_session_init(uint16_t dev_id,
>>               return -EINVAL;
>>           cdev = rte_cryptodev_pmd_get_dev(dev_id);
>>           index = cdev->driver_id;
>> +        if (cdev == NULL || sess == NULL || cdev->sec_ops == NULL
>> +                || cdev->sec_ops->session_configure == NULL)
>> +            return -EINVAL;
>>           if (sess->sess_private_data[index] == NULL) {
>> -            ret = cdev->sec_ops->session_configure(cdev, conf, sess, 
>> mp);
>> +            ret = cdev->sec_ops->session_configure((void *)cdev,
>> +                    conf, sess, mp);
>>               if (ret < 0) {
>>                   CDEV_LOG_ERR(
>>                       "cdev_id %d failed to configure session details",
>> @@ -100,14 +104,18 @@ rte_security_session_init(uint16_t dev_id,
>>       case RTE_SECURITY_SESS_ETH_PROTO_OFFLOAD:
>>           dev = &rte_eth_devices[dev_id];
>>           index = dev->data->port_id;
>> +        if (dev == NULL || sess == NULL || dev->sec_ops == NULL
>> +                || dev->sec_ops->session_configure == NULL)
>> +            return -EINVAL;
>>           if (sess->sess_private_data[index] == NULL) {
>> -//            ret = dev->sec_ops->session_configure(dev, conf, sess, 
>> mp);
>> -//            if (ret < 0) {
>> -//                CDEV_LOG_ERR(
>> -//                    "dev_id %d failed to configure session details",
>> -//                    dev_id);
>> -//                return ret;
>> -//            }
>> +            ret = dev->sec_ops->session_configure((void *)dev,
>> +                    conf, sess, mp);
>> +            if (ret < 0) {
>> +                CDEV_LOG_ERR(
>> +                    "dev_id %d failed to configure session details",
>> +                    dev_id);
>> +                return ret;
>> +            }
>>           }
>>           break;
>>       default:
>> @@ -152,16 +160,18 @@ rte_security_session_clear(uint8_t dev_id,
>>       switch (action_type) {
>>       case RTE_SECURITY_SESS_CRYPTO_PROTO_OFFLOAD:
>>           cdev =  rte_cryptodev_pmd_get_dev(dev_id);
>> -        if (cdev == NULL || sess == NULL)
>> +        if (cdev == NULL || sess == NULL || cdev->sec_ops == NULL
>> +                || cdev->sec_ops->session_clear == NULL)
>>               return -EINVAL;
>> -        cdev->sec_ops->session_clear(cdev, sess);
>> +        cdev->sec_ops->session_clear((void *)cdev, sess);
>>           break;
>>       case RTE_SECURITY_SESS_ETH_INLINE_CRYPTO:
>>       case RTE_SECURITY_SESS_ETH_PROTO_OFFLOAD:
>>           dev = &rte_eth_devices[dev_id];
>> -        if (dev == NULL || sess == NULL)
>> +        if (dev == NULL || sess == NULL || dev->sec_ops == NULL
>> +                || dev->sec_ops->session_clear == NULL)
>>               return -EINVAL;
>> -//        dev->dev_ops->session_clear(dev, sess);
>> +        dev->sec_ops->session_clear((void *)dev, sess);
>>           break;
>>       default:
>>           return -EINVAL;
>> diff --git a/lib/librte_cryptodev/rte_security.h 
>> b/lib/librte_cryptodev/rte_security.h
>> index 9747d5e..0c8b358 100644
>> --- a/lib/librte_cryptodev/rte_security.h
>> +++ b/lib/librte_cryptodev/rte_security.h
>> @@ -20,7 +20,7 @@ extern "C" {
>>   #include <rte_memory.h>
>>   #include <rte_mempool.h>
>>   #include <rte_common.h>
>> -#include <rte_crypto.h>
>> +#include "rte_crypto.h"
>>     /** IPSec protocol mode */
>>   enum rte_security_conf_ipsec_sa_mode {
>> @@ -70,9 +70,9 @@ struct rte_security_ipsec_tunnel_param {
>>           } ipv4; /**< IPv4 header parameters */
>>             struct {
>> -            struct in6_addr *src_addr;
>> +            struct in6_addr src_addr;
>>               /**< IPv6 source address */
>> -            struct in6_addr *dst_addr;
>> +            struct in6_addr dst_addr;
>>               /**< IPv6 destination address */
>>               uint8_t dscp;
>>               /**< IPv6 Differentiated Services Code Point */
>> @@ -171,6 +171,12 @@ struct rte_security_ipsec_xform {
>>           uint8_t *data;  /**< pointer to key data */
>>           size_t length;   /**< key length in bytes */
>>       } auth_key;
>> +    enum rte_crypto_aead_algorithm aead_alg;
>> +    /**< AEAD Algorithm */
>> +    struct {
>> +        uint8_t *data;  /**< pointer to key data */
>> +        size_t length;   /**< key length in bytes */
>> +    } aead_key;
> I believe it would be better to use a union here.
> union {
>     struct {
>         enum rte_crypto_cipher_algorithm cipher_alg;
>             /**< Cipher Algorithm */
>             struct {
>                     uint8_t *data;  /**< pointer to key data */
>                     size_t length;   /**< key length in bytes */
>             } cipher_key;
>             enum rte_crypto_auth_algorithm auth_alg;
>             /**< Authentication Algorithm */
>             struct {
>                     uint8_t *data;  /**< pointer to key data */
>                     size_t length;   /**< key length in bytes */
>             } auth_key;
>     };
>     struct {
>         enum rte_crypto_aead_algorithm aead_alg;
>         /**< AEAD Algorithm */
>         struct {
>             uint8_t *data;  /**< pointer to key data */
>             size_t length;   /**< key length in bytes */
>         } aead_key;
>     };
> };
Probably the best way will be to have a chain of transforms, I will 
follow up in the next patchset.

>
>>       uint32_t salt;    /**< salt for this SA */
>>       enum rte_security_conf_ipsec_sa_mode mode;
>>       /**< IPsec SA Mode - transport/tunnel */
>>
>
> Thanks for the updates. I missed some of the checks.
>
> -Akhil

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [RFC PATCH 3/5] rte_security: updates and enabled security operations for ethdev
  2017-08-29 13:13     ` Radu Nicolau
@ 2017-08-29 13:19       ` Akhil Goyal
  0 siblings, 0 replies; 13+ messages in thread
From: Akhil Goyal @ 2017-08-29 13:19 UTC (permalink / raw)
  To: Radu Nicolau, dev; +Cc: hemant.agrawal, Declan Doherty, Boris Pismenny

Hi Radu,
On 8/29/2017 6:43 PM, Radu Nicolau wrote:
>>> @@ -70,9 +70,9 @@ struct rte_security_ipsec_tunnel_param {
>>>           } ipv4; /**< IPv4 header parameters */
>>>             struct {
>>> -            struct in6_addr *src_addr;
>>> +            struct in6_addr src_addr;
>>>               /**< IPv6 source address */
>>> -            struct in6_addr *dst_addr;
>>> +            struct in6_addr dst_addr;
>>>               /**< IPv6 destination address */
>>>               uint8_t dscp;
>>>               /**< IPv6 Differentiated Services Code Point */
>>> @@ -171,6 +171,12 @@ struct rte_security_ipsec_xform {
>>>           uint8_t *data;  /**< pointer to key data */
>>>           size_t length;   /**< key length in bytes */
>>>       } auth_key;
>>> +    enum rte_crypto_aead_algorithm aead_alg;
>>> +    /**< AEAD Algorithm */
>>> +    struct {
>>> +        uint8_t *data;  /**< pointer to key data */
>>> +        size_t length;   /**< key length in bytes */
>>> +    } aead_key;
>> I believe it would be better to use a union here.
>> union {
>>     struct {
>>         enum rte_crypto_cipher_algorithm cipher_alg;
>>             /**< Cipher Algorithm */
>>             struct {
>>                     uint8_t *data;  /**< pointer to key data */
>>                     size_t length;   /**< key length in bytes */
>>             } cipher_key;
>>             enum rte_crypto_auth_algorithm auth_alg;
>>             /**< Authentication Algorithm */
>>             struct {
>>                     uint8_t *data;  /**< pointer to key data */
>>                     size_t length;   /**< key length in bytes */
>>             } auth_key;
>>     };
>>     struct {
>>         enum rte_crypto_aead_algorithm aead_alg;
>>         /**< AEAD Algorithm */
>>         struct {
>>             uint8_t *data;  /**< pointer to key data */
>>             size_t length;   /**< key length in bytes */
>>         } aead_key;
>>     };
>> };
> Probably the best way will be to have a chain of transforms, I will 
> follow up in the next patchset.

Will it be chain of crypto xforms? If yes, then we may not be needing 
the fields like iv and digest length and op will need to assigned twice.

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2017-08-29 13:20 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-08-25 14:57 [RFC PATCH 0/5] Enable IPSec Inline for IXGBE PMD Radu Nicolau
2017-08-25 14:57 ` [RFC PATCH 1/5] mbuff: added security offload flags Radu Nicolau
2017-08-25 14:57 ` [RFC PATCH 2/5] ethdev: added security ops struct pointer Radu Nicolau
2017-08-25 14:57 ` [RFC PATCH 3/5] rte_security: updates and enabled security operations for ethdev Radu Nicolau
2017-08-29 12:14   ` Akhil Goyal
2017-08-29 13:13     ` Radu Nicolau
2017-08-29 13:19       ` Akhil Goyal
2017-08-25 14:57 ` [RFC PATCH 4/5] ixgbe: enable inline ipsec Radu Nicolau
2017-08-28 17:47   ` Ananyev, Konstantin
2017-08-29 13:06     ` Radu Nicolau
2017-08-25 14:57 ` [RFC PATCH 5/5] examples/ipsec-secgw: enabled " Radu Nicolau
2017-08-29 12:04   ` Akhil Goyal
2017-08-29 13:00 ` [RFC PATCH 0/5] Enable IPSec Inline for IXGBE PMD Boris Pismenny

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.