From: Nithin Dabilpuram <ndabilpuram@marvell.com>
To: <jerinj@marvell.com>, Nithin Dabilpuram <ndabilpuram@marvell.com>,
	"Kiran Kumar K" <kirankumark@marvell.com>,
	Sunil Kumar Kori <skori@marvell.com>,
	Satha Rao <skoteshwar@marvell.com>,
	Pavan Nikhilesh <pbhagavatula@marvell.com>,
	Shijith Thotton <sthotton@marvell.com>,
	"Anatoly Burakov" <anatoly.burakov@intel.com>
Cc: <dev@dpdk.org>
Subject: [dpdk-dev] [PATCH v2 17/28] net/cnxk: support inline security setup for cn10k
Date: Thu, 30 Sep 2021 22:31:02 +0530
Message-ID: <20210930170113.29030-18-ndabilpuram@marvell.com>
In-Reply-To: <20210930170113.29030-1-ndabilpuram@marvell.com>

Add support for inline inbound and outbound IPsec: SA create and
destroy operations, and the related NIX / CPT LF configuration.

This patch also changes dpdk-devbind.py to list the new inline
device as a misc device.
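
For reference, below is a minimal sketch of how an application could
create an inline protocol session on top of this support. The port_id,
AEAD xform, mempools and userdata are placeholders the application
provides; tunnel addresses and key material are omitted for brevity:

  struct rte_security_ctx *ctx = rte_eth_dev_get_sec_ctx(port_id);
  struct rte_security_session_conf conf = {
          .action_type = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
          .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
          .ipsec = {
                  .spi = 100, /* Within ipsec_in_max_spi for inbound */
                  .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
                  .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
                  .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
          },
          .crypto_xform = &aead_xform, /* e.g. AES-GCM sym xform */
          .userdata = app_sa_ctx, /* Saved per SA by the driver */
  };
  struct rte_security_session *sess =
          rte_security_session_create(ctx, &conf, sess_mp, sess_priv_mp);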

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 doc/guides/nics/cnxk.rst                 | 102 ++++++++
 doc/guides/nics/features/cnxk.ini        |   1 +
 doc/guides/nics/features/cnxk_vec.ini    |   1 +
 doc/guides/nics/features/cnxk_vf.ini     |   1 +
 doc/guides/rel_notes/release_21_11.rst   |   2 +
 drivers/event/cnxk/cnxk_eventdev_adptr.c |  36 ++-
 drivers/net/cnxk/cn10k_ethdev.c          |  36 ++-
 drivers/net/cnxk/cn10k_ethdev.h          |  43 ++++
 drivers/net/cnxk/cn10k_ethdev_sec.c      | 426 +++++++++++++++++++++++++++++++
 drivers/net/cnxk/cn10k_rx.h              |   1 +
 drivers/net/cnxk/cn10k_tx.h              |   1 +
 drivers/net/cnxk/meson.build             |   1 +
 usertools/dpdk-devbind.py                |   8 +-
 13 files changed, 654 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/cnxk/cn10k_ethdev_sec.c

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 90d27db..b542437 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -34,6 +34,7 @@ Features of the CNXK Ethdev PMD are:
 - Vector Poll mode driver
 - Debug utilities - Context dump and error interrupt support
 - Support Rx interrupt
+- Inline IPsec processing support
 
 Prerequisites
 -------------
@@ -185,6 +186,74 @@ Runtime Config Options
 
       -a 0002:02:00.0,tag_as_xor=1
 
+- ``Max SPI for inbound inline IPsec`` (default ``255``)
+
+   The maximum SPI supported for inbound inline IPsec processing can be
+   specified by the ``ipsec_in_max_spi`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:02:00.0,ipsec_in_max_spi=128
+
+   With the above configuration, the application can enable inline IPsec
+   processing for 128 inbound SAs (SPI 0-127).
+
+- ``Max SAs for outbound inline IPsec`` (default ``4096``)
+
+   The maximum number of SAs supported for outbound inline IPsec processing
+   can be specified by the ``ipsec_out_max_sa`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:02:00.0,ipsec_out_max_sa=128
+
+   With the above configuration, the application can enable inline IPsec
+   processing for 128 outbound SAs.
+
+- ``Outbound CPT LF queue size`` (default ``8200``)
+
+   The size of the outbound CPT LF queue, in number of descriptors, can be
+   specified by the ``outb_nb_desc`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:02:00.0,outb_nb_desc=16384
+
+   With the above configuration, the outbound CPT LF will be created to
+   accommodate at most 16384 descriptors at any given time.
+
+- ``Outbound CPT LF count`` (default ``1``)
+
+   The number of CPT LFs to attach for outbound processing can be specified
+   by the ``outb_nb_crypto_qs`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:02:00.0,outb_nb_crypto_qs=2
+
+   With the above configuration, two CPT LFs are set up and distributed among
+   all the Tx queues for outbound processing.
+
+- ``Force using inline IPsec device for inbound`` (default ``0``)
+
+   In CN10K, in event mode, the driver can work in one of two modes:
+
+   1. Inbound encrypted traffic is received by the probed IPsec inline device,
+      while plain traffic post decryption is received by the ethdev.
+
+   2. Both inbound encrypted traffic and plain traffic post decryption are
+      received by the ethdev.
+
+   By default, event mode works without using the inline device, i.e. in mode
+   ``2``. This behaviour can be changed to pick mode ``1`` by using the
+   ``force_inb_inl_dev`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:02:00.0,force_inb_inl_dev=1 -a 0002:03:00.0,force_inb_inl_dev=1
+
+   With the above configuration, inbound encrypted traffic from both the
+   ports is received by the IPsec inline device.
 
 .. note::
 
@@ -250,6 +319,39 @@ Example usage in testpmd::
    testpmd> flow create 0 ingress pattern eth / raw relative is 0 pattern \
           spec ab pattern mask ab offset is 4 / end actions queue index 1 / end
 
+Inline device support for CN10K
+-------------------------------
+
+CN10K HW provides a misc device, the Inline device, that supports ethernet
+devices in providing the following features:
+
+  - Aggregate all the inline IPsec inbound traffic from all the CN10K ethernet
+    devices to be processed by a single inline IPsec device. This allows a
+    single rte_security session to accept traffic from multiple ports.
+
+  - Support for event generation on outbound inline IPsec processing errors.
+
+  - Support CN106xx poll mode of operation for inline IPsec inbound processing.
+
+The inline IPsec device is identified by PCI PF vendor:device ID ``177D:A0F0``
+or VF ``177D:A0F1``.
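+
+Being listed as a misc device, the inline device can be bound with the
+updated ``usertools/dpdk-devbind.py`` like any other DPDK device, for
+example (the BDF shown is only illustrative)::
+
+   ./usertools/dpdk-devbind.py -b vfio-pci 0002:1d:00.0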
+
+Runtime Config Options for inline device
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+- ``Max SPI for inbound inline IPsec`` (default ``255``)
+
+   The maximum SPI supported for inbound inline IPsec processing can be
+   specified by the ``ipsec_in_max_spi`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:1d:00.0,ipsec_in_max_spi=128
+
+   With the above configuration, the application can enable inline IPsec
+   processing for 128 inbound SAs (SPI 0-127) for traffic aggregated on the
+   inline device.
+
+
 Debugging Options
 -----------------
 
diff --git a/doc/guides/nics/features/cnxk.ini b/doc/guides/nics/features/cnxk.ini
index 5d45625..1ced3ee 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -27,6 +27,7 @@ RSS hash             = Y
 RSS key update       = Y
 RSS reta update      = Y
 Inner RSS            = Y
+Inline protocol      = Y
 Flow control         = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini b/doc/guides/nics/features/cnxk_vec.ini
index abf2b8d..12ca0a5 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -26,6 +26,7 @@ RSS hash             = Y
 RSS key update       = Y
 RSS reta update      = Y
 Inner RSS            = Y
+Inline protocol      = Y
 Flow control         = Y
 Jumbo frame          = Y
 L3 checksum offload  = Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini b/doc/guides/nics/features/cnxk_vf.ini
index 7b4299f..139d9b9 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -22,6 +22,7 @@ RSS hash             = Y
 RSS key update       = Y
 RSS reta update      = Y
 Inner RSS            = Y
+Inline protocol      = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
 L3 checksum offload  = Y
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index f85dc99..8116f1e 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -65,6 +65,8 @@ New Features
 * **Updated Marvell cnxk ethdev driver.**
 
   * Added rte_flow support for dual VLAN insert and strip actions.
+  * Added inline IPsec support for CN9K event mode, and for CN10K poll mode
+    and event mode.
 
 * **Updated Marvell cnxk crypto PMD.**
 
diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c b/drivers/event/cnxk/cnxk_eventdev_adptr.c
index baf2f2a..a34efbb 100644
--- a/drivers/event/cnxk/cnxk_eventdev_adptr.c
+++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c
@@ -123,7 +123,9 @@ cnxk_sso_rxq_enable(struct cnxk_eth_dev *cnxk_eth_dev, uint16_t rq_id,
 		    uint16_t port_id, const struct rte_event *ev,
 		    uint8_t custom_flowid)
 {
+	struct roc_nix *nix = &cnxk_eth_dev->nix;
 	struct roc_nix_rq *rq;
+	int rc;
 
 	rq = &cnxk_eth_dev->rqs[rq_id];
 	rq->sso_ena = 1;
@@ -140,7 +142,24 @@ cnxk_sso_rxq_enable(struct cnxk_eth_dev *cnxk_eth_dev, uint16_t rq_id,
 		rq->tag_mask |= ev->flow_id;
 	}
 
-	return roc_nix_rq_modify(&cnxk_eth_dev->nix, rq, 0);
+	rc = roc_nix_rq_modify(&cnxk_eth_dev->nix, rq, 0);
+	if (rc)
+		return rc;
+
+	if (rq_id == 0 && roc_nix_inl_inb_is_enabled(nix)) {
+		uint32_t sec_tag_const;
+
+		/* The IPsec tag const is the tag_mask right-shifted by 8 bits,
+		 * since HW applies it to bits 32:8 of the tag only.
+		 */
+		sec_tag_const = rq->tag_mask >> 8;
+		rc = roc_nix_inl_inb_tag_update(nix, sec_tag_const,
+						ev->sched_type);
+		if (rc)
+			plt_err("Failed to set tag conf for ipsec, rc=%d", rc);
+	}
+
+	return rc;
 }
 
 static int
@@ -186,6 +205,7 @@ cnxk_sso_rx_adapter_queue_add(
 		rox_nix_fc_npa_bp_cfg(&cnxk_eth_dev->nix,
 				      rxq_sp->qconf.mp->pool_id, true,
 				      dev->force_ena_bp);
+		cnxk_eth_dev->nb_rxq_sso++;
 	}
 
 	if (rc < 0) {
@@ -196,6 +216,14 @@ cnxk_sso_rx_adapter_queue_add(
 
 	dev->rx_offloads |= cnxk_eth_dev->rx_offload_flags;
 
+	/* Switch to use PF/VF's NIX LF instead of the inline device for
+	 * inbound when all the RQs are switched to event dev mode. We do
+	 * this only when use of the inline device is not forced by devargs.
+	 */
+	if (!cnxk_eth_dev->inb.force_inl_dev &&
+	    cnxk_eth_dev->nb_rxq_sso == cnxk_eth_dev->nb_rxq)
+		cnxk_nix_inb_mode_set(cnxk_eth_dev, false);
+
 	return 0;
 }
 
@@ -220,12 +248,18 @@ cnxk_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
 		rox_nix_fc_npa_bp_cfg(&cnxk_eth_dev->nix,
 				      rxq_sp->qconf.mp->pool_id, false,
 				      dev->force_ena_bp);
+		cnxk_eth_dev->nb_rxq_sso--;
 	}
 
 	if (rc < 0)
 		plt_err("Failed to clear Rx adapter config port=%d, q=%d",
 			eth_dev->data->port_id, rx_queue_id);
 
+	/* Removing an RQ from the Rx adapter implies the need to use the
+	 * inline device for CQ/poll mode.
+	 */
+	cnxk_nix_inb_mode_set(cnxk_eth_dev, true);
+
 	return rc;
 }
 
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 7caec6c..fa2343c 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -36,6 +36,9 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	if (!dev->ptype_disable)
 		flags |= NIX_RX_OFFLOAD_PTYPE_F;
 
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+		flags |= NIX_RX_OFFLOAD_SECURITY_F;
+
 	return flags;
 }
 
@@ -101,6 +104,9 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
+	if (conf & DEV_TX_OFFLOAD_SECURITY)
+		flags |= NIX_TX_OFFLOAD_SECURITY_F;
+
 	return flags;
 }
 
@@ -181,8 +187,11 @@ cn10k_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 			 const struct rte_eth_txconf *tx_conf)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct roc_nix *nix = &dev->nix;
+	struct roc_cpt_lf *inl_lf;
 	struct cn10k_eth_txq *txq;
 	struct roc_nix_sq *sq;
+	uint16_t crypto_qid;
 	int rc;
 
 	RTE_SET_USED(socket);
@@ -198,11 +207,24 @@ cn10k_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	txq = eth_dev->data->tx_queues[qid];
 	txq->fc_mem = sq->fc;
 	/* Store lmt base in tx queue for easy access */
-	txq->lmt_base = dev->nix.lmt_base;
+	txq->lmt_base = nix->lmt_base;
 	txq->io_addr = sq->io_addr;
 	txq->nb_sqb_bufs_adj = sq->nb_sqb_bufs_adj;
 	txq->sqes_per_sqb_log2 = sq->sqes_per_sqb_log2;
 
+	/* Fetch CPT LF info for outbound if present */
+	if (dev->outb.lf_base) {
+		crypto_qid = qid % dev->outb.nb_crypto_qs;
+		inl_lf = dev->outb.lf_base + crypto_qid;
+
+		txq->cpt_io_addr = inl_lf->io_addr;
+		txq->cpt_fc = inl_lf->fc_addr;
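+		/* Advertise only ~70% of the LF descriptors as the Tx-side
+		 * CPT credit, leaving some headroom on the queue.
+		 */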
+		txq->cpt_desc = inl_lf->nb_desc * 0.7;
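+		/* SA base is 64KB aligned (asserted below), so the low
+		 * 16 bits are free to carry the port id alongside it.
+		 */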
+		txq->sa_base = (uint64_t)dev->outb.sa_base;
+		txq->sa_base |= eth_dev->data->port_id;
+		PLT_STATIC_ASSERT(ROC_NIX_INL_SA_BASE_ALIGN == BIT_ULL(16));
+	}
+
 	nix_form_default_desc(dev, txq, qid);
 	txq->lso_tun_fmt = dev->lso_tun_fmt;
 	return 0;
@@ -215,6 +237,7 @@ cn10k_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 			 struct rte_mempool *mp)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct cnxk_eth_rxq_sp *rxq_sp;
 	struct cn10k_eth_rxq *rxq;
 	struct roc_nix_rq *rq;
 	struct roc_nix_cq *cq;
@@ -250,6 +273,15 @@ cn10k_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	rxq->data_off = rq->first_skip;
 	rxq->mbuf_initializer = cnxk_nix_rxq_mbuf_setup(dev);
 
+	/* Setup security related info */
+	if (dev->rx_offload_flags & NIX_RX_OFFLOAD_SECURITY_F) {
+		rxq->lmt_base = dev->nix.lmt_base;
+		rxq->sa_base = roc_nix_inl_inb_sa_base_get(&dev->nix,
+							   dev->inb.inl_dev);
+	}
+	rxq_sp = cnxk_eth_rxq_to_sp(rxq);
+	rxq->aura_handle = rxq_sp->qconf.mp->pool_id;
+
 	/* Lookup mem */
 	rxq->lookup_mem = cnxk_nix_fastpath_lookup_mem_get();
 	return 0;
@@ -500,6 +532,8 @@ cn10k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 	nix_eth_dev_ops_override();
 	npc_flow_ops_override();
 
+	cn10k_eth_sec_ops_override();
+
 	/* Common probe */
 	rc = cnxk_nix_probe(pci_drv, pci_dev);
 	if (rc)
diff --git a/drivers/net/cnxk/cn10k_ethdev.h b/drivers/net/cnxk/cn10k_ethdev.h
index 8b6e0f2..a888364 100644
--- a/drivers/net/cnxk/cn10k_ethdev.h
+++ b/drivers/net/cnxk/cn10k_ethdev.h
@@ -5,6 +5,7 @@
 #define __CN10K_ETHDEV_H__
 
 #include <cnxk_ethdev.h>
+#include <cnxk_security.h>
 
 struct cn10k_eth_txq {
 	uint64_t send_hdr_w0;
@@ -15,6 +16,10 @@ struct cn10k_eth_txq {
 	rte_iova_t io_addr;
 	uint16_t sqes_per_sqb_log2;
 	int16_t nb_sqb_bufs_adj;
+	rte_iova_t cpt_io_addr;
+	uint64_t sa_base;
+	uint64_t *cpt_fc;
+	uint16_t cpt_desc;
 	uint64_t cmd[4];
 	uint64_t lso_tun_fmt;
 } __plt_cache_aligned;
@@ -30,12 +35,50 @@ struct cn10k_eth_rxq {
 	uint32_t qmask;
 	uint32_t available;
 	uint16_t data_off;
+	uint64_t sa_base;
+	uint64_t lmt_base;
+	uint64_t aura_handle;
 	uint16_t rq;
 	struct cnxk_timesync_info *tstamp;
 } __plt_cache_aligned;
 
+/* Private data in sw rsvd area of struct roc_ot_ipsec_inb_sa */
+struct cn10k_inb_priv_data {
+	void *userdata;
+	struct cnxk_eth_sec_sess *eth_sec;
+};
+
+/* Private data in sw rsvd area of struct roc_ot_ipsec_outb_sa */
+struct cn10k_outb_priv_data {
+	void *userdata;
+	/* Rlen computation data */
+	struct cnxk_ipsec_outb_rlens rlens;
+	/* Back pointer to eth sec session */
+	struct cnxk_eth_sec_sess *eth_sec;
+	/* SA index */
+	uint32_t sa_idx;
+};
+
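+/* Fast path per-session info; packed so that it fits in the single 64-bit
+ * word stored as the session private data.
+ */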
+struct cn10k_sec_sess_priv {
+	union {
+		struct {
+			uint32_t sa_idx;
+			uint8_t inb_sa : 1;
+			uint8_t rsvd1 : 2;
+			uint8_t roundup_byte : 5;
+			uint8_t roundup_len;
+			uint16_t partial_len;
+		};
+
+		uint64_t u64;
+	};
+} __rte_packed;
+
 /* Rx and Tx routines */
 void cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev);
 void cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev);
 
+/* Security context setup */
+void cn10k_eth_sec_ops_override(void);
+
 #endif /* __CN10K_ETHDEV_H__ */
diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
new file mode 100644
index 0000000..3ffd824
--- /dev/null
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -0,0 +1,426 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <rte_cryptodev.h>
+#include <rte_eventdev.h>
+#include <rte_security.h>
+#include <rte_security_driver.h>
+
+#include <cn10k_ethdev.h>
+#include <cnxk_security.h>
+
+static struct rte_cryptodev_capabilities cn10k_eth_sec_crypto_caps[] = {
+	{	/* AES GCM */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+			{.aead = {
+				.algo = RTE_CRYPTO_AEAD_AES_GCM,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.aad_size = {
+					.min = 8,
+					.max = 12,
+					.increment = 4
+				},
+				.iv_size = {
+					.min = 12,
+					.max = 12,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static const struct rte_security_capability cn10k_eth_sec_capabilities[] = {
+	{	/* IPsec Inline Protocol ESP Tunnel Ingress */
+		.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+		.ipsec = {
+			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+			.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+			.options = { 0 }
+		},
+		.crypto_capabilities = cn10k_eth_sec_crypto_caps,
+		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+	},
+	{	/* IPsec Inline Protocol ESP Tunnel Egress */
+		.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+		.ipsec = {
+			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+			.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+			.options = { 0 }
+		},
+		.crypto_capabilities = cn10k_eth_sec_crypto_caps,
+		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+	},
+	{
+		.action = RTE_SECURITY_ACTION_TYPE_NONE
+	}
+};
+
+static void
+cn10k_eth_sec_sso_work_cb(uint64_t *gw, void *args)
+{
+	struct rte_eth_event_ipsec_desc desc;
+	struct cn10k_sec_sess_priv sess_priv;
+	struct cn10k_outb_priv_data *priv;
+	struct roc_ot_ipsec_outb_sa *sa;
+	struct cpt_cn10k_res_s *res;
+	struct rte_eth_dev *eth_dev;
+	struct cnxk_eth_dev *dev;
+	uint16_t dlen_adj, rlen;
+	struct rte_mbuf *mbuf;
+	uintptr_t sa_base;
+	uintptr_t nixtx;
+	uint8_t port;
+
+	RTE_SET_USED(args);
+
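+	/* Event type is carried in bits [31:28] of the first GW word */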
+	switch ((gw[0] >> 28) & 0xF) {
+	case RTE_EVENT_TYPE_ETHDEV:
+		/* Event from inbound inline dev due to IPsec packet with bad L4 */
+		mbuf = (struct rte_mbuf *)(gw[1] - sizeof(struct rte_mbuf));
+		plt_nix_dbg("Received mbuf %p from inline dev inbound", mbuf);
+		rte_pktmbuf_free(mbuf);
+		return;
+	case RTE_EVENT_TYPE_CPU:
+		/* Check for subtype */
+		if (((gw[0] >> 20) & 0xFF) == CNXK_ETHDEV_SEC_OUTB_EV_SUB) {
+			/* Event from outbound inline error */
+			mbuf = (struct rte_mbuf *)gw[1];
+			break;
+		}
+		/* Fall through */
+	default:
+		plt_err("Unknown event gw[0] = 0x%016lx, gw[1] = 0x%016lx",
+			gw[0], gw[1]);
+		return;
+	}
+
+	/* Get ethdev port from tag */
+	port = gw[0] & 0xFF;
+	eth_dev = &rte_eth_devices[port];
+	dev = cnxk_eth_pmd_priv(eth_dev);
+
+	sess_priv.u64 = *rte_security_dynfield(mbuf);
+	/* Compute the post-encryption length: payload plus roundup_len,
+	 * rounded up to the cipher block size (roundup_byte), plus the fixed
+	 * per-SA overhead (partial_len); dlen_adj is the delta.
+	 */
+	dlen_adj = mbuf->pkt_len - mbuf->l2_len;
+	rlen = (dlen_adj + sess_priv.roundup_len) +
+	       (sess_priv.roundup_byte - 1);
+	rlen &= ~(uint64_t)(sess_priv.roundup_byte - 1);
+	rlen += sess_priv.partial_len;
+	dlen_adj = rlen - dlen_adj;
+
+	/* Find the res area residing on next cacheline after end of data */
+	nixtx = rte_pktmbuf_mtod(mbuf, uintptr_t) + mbuf->pkt_len + dlen_adj;
+	nixtx += BIT_ULL(7);
+	nixtx = (nixtx - 1) & ~(BIT_ULL(7) - 1);
+	res = (struct cpt_cn10k_res_s *)nixtx;
+
+	plt_nix_dbg("Outbound error, mbuf %p, sa_index %u, compcode %x uc %x",
+		    mbuf, sess_priv.sa_idx, res->compcode, res->uc_compcode);
+
+	sa_base = dev->outb.sa_base;
+	sa = roc_nix_inl_ot_ipsec_outb_sa(sa_base, sess_priv.sa_idx);
+	priv = roc_nix_inl_ot_ipsec_outb_sa_sw_rsvd(sa);
+
+	memset(&desc, 0, sizeof(desc));
+
+	switch (res->uc_compcode) {
+	case ROC_IE_OT_UCC_ERR_SA_OVERFLOW:
+		desc.subtype = RTE_ETH_EVENT_IPSEC_ESN_OVERFLOW;
+		break;
+	default:
+		plt_warn("Outbound error, mbuf %p, sa_index %u, "
+			 "compcode %x uc %x", mbuf, sess_priv.sa_idx,
+			 res->compcode, res->uc_compcode);
+		desc.subtype = RTE_ETH_EVENT_IPSEC_UNKNOWN;
+		break;
+	}
+
+	desc.metadata = (uint64_t)priv->userdata;
+	rte_eth_dev_callback_process(eth_dev, RTE_ETH_EVENT_IPSEC, &desc);
+	rte_pktmbuf_free(mbuf);
+}
+
+static int
+cn10k_eth_sec_session_create(void *device,
+			     struct rte_security_session_conf *conf,
+			     struct rte_security_session *sess,
+			     struct rte_mempool *mempool)
+{
+	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct rte_security_ipsec_xform *ipsec;
+	struct cn10k_sec_sess_priv sess_priv;
+	struct rte_crypto_sym_xform *crypto;
+	struct cnxk_eth_sec_sess *eth_sec;
+	bool inbound, inl_dev;
+	int rc = 0;
+
+	if (conf->action_type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
+		return -ENOTSUP;
+
+	if (conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC)
+		return -ENOTSUP;
+
+	if (rte_security_dynfield_register() < 0)
+		return -ENOTSUP;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		roc_nix_inl_cb_register(cn10k_eth_sec_sso_work_cb, NULL);
+
+	ipsec = &conf->ipsec;
+	crypto = conf->crypto_xform;
+	inbound = !!(ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS);
+	inl_dev = !!dev->inb.inl_dev;
+
+	/* Check if a session already exists */
+	if (cnxk_eth_sec_sess_get_by_spi(dev, ipsec->spi, inbound)) {
+		plt_err("%s SA with SPI %u already in use",
+			inbound ? "Inbound" : "Outbound", ipsec->spi);
+		return -EEXIST;
+	}
+
+	if (rte_mempool_get(mempool, (void **)&eth_sec)) {
+		plt_err("Could not allocate security session private data");
+		return -ENOMEM;
+	}
+
+	memset(eth_sec, 0, sizeof(struct cnxk_eth_sec_sess));
+	sess_priv.u64 = 0;
+
+	/* Acquire lock on inline dev for inbound */
+	if (inbound && inl_dev)
+		roc_nix_inl_dev_lock();
+
+	if (inbound) {
+		struct cn10k_inb_priv_data *inb_priv;
+		struct roc_ot_ipsec_inb_sa *inb_sa;
+		uintptr_t sa;
+
+		PLT_STATIC_ASSERT(sizeof(struct cn10k_inb_priv_data) <
+				  ROC_NIX_INL_OT_IPSEC_INB_SW_RSVD);
+
+		/* Get Inbound SA from NIX_RX_IPSEC_SA_BASE */
+		sa = roc_nix_inl_inb_sa_get(&dev->nix, inl_dev, ipsec->spi);
+		if (!sa && dev->inb.inl_dev) {
+			plt_err("Failed to create ingress sa, inline dev "
+				"not found or spi not in range");
+			rc = -ENOTSUP;
+			goto mempool_put;
+		} else if (!sa) {
+			plt_err("Failed to create ingress sa");
+			rc = -EFAULT;
+			goto mempool_put;
+		}
+
+		inb_sa = (struct roc_ot_ipsec_inb_sa *)sa;
+
+		/* Check if SA is already in use */
+		if (inb_sa->w2.s.valid) {
+			plt_err("Inbound SA with SPI %u already in use",
+				ipsec->spi);
+			rc = -EBUSY;
+			goto mempool_put;
+		}
+
+		memset(inb_sa, 0, sizeof(struct roc_ot_ipsec_inb_sa));
+
+		/* Fill inbound sa params */
+		rc = cnxk_ot_ipsec_inb_sa_fill(inb_sa, ipsec, crypto);
+		if (rc) {
+			plt_err("Failed to init inbound sa, rc=%d", rc);
+			goto mempool_put;
+		}
+
+		inb_priv = roc_nix_inl_ot_ipsec_inb_sa_sw_rsvd(inb_sa);
+		/* Back pointer to get eth_sec */
+		inb_priv->eth_sec = eth_sec;
+		/* Save userdata in inb private area */
+		inb_priv->userdata = conf->userdata;
+
+		/* Save SA index/SPI in cookie for now */
+		inb_sa->w1.s.cookie = rte_cpu_to_be_32(ipsec->spi);
+
+		/* Prepare session priv */
+		sess_priv.inb_sa = 1;
+		sess_priv.sa_idx = ipsec->spi;
+
+		/* Pointer from eth_sec -> inb_sa */
+		eth_sec->sa = inb_sa;
+		eth_sec->sess = sess;
+		eth_sec->sa_idx = ipsec->spi;
+		eth_sec->spi = ipsec->spi;
+		eth_sec->inl_dev = !!dev->inb.inl_dev;
+		eth_sec->inb = true;
+
+		TAILQ_INSERT_TAIL(&dev->inb.list, eth_sec, entry);
+		dev->inb.nb_sess++;
+	} else {
+		struct cn10k_outb_priv_data *outb_priv;
+		struct roc_ot_ipsec_outb_sa *outb_sa;
+		struct cnxk_ipsec_outb_rlens *rlens;
+		uint64_t sa_base = dev->outb.sa_base;
+		uint32_t sa_idx;
+
+		PLT_STATIC_ASSERT(sizeof(struct cn10k_outb_priv_data) <
+				  ROC_NIX_INL_OT_IPSEC_OUTB_SW_RSVD);
+
+		/* Alloc an sa index */
+		rc = cnxk_eth_outb_sa_idx_get(dev, &sa_idx);
+		if (rc)
+			goto mempool_put;
+
+		outb_sa = roc_nix_inl_ot_ipsec_outb_sa(sa_base, sa_idx);
+		outb_priv = roc_nix_inl_ot_ipsec_outb_sa_sw_rsvd(outb_sa);
+		rlens = &outb_priv->rlens;
+
+		memset(outb_sa, 0, sizeof(struct roc_ot_ipsec_outb_sa));
+
+		/* Fill outbound sa params */
+		rc = cnxk_ot_ipsec_outb_sa_fill(outb_sa, ipsec, crypto);
+		if (rc) {
+			plt_err("Failed to init outbound sa, rc=%d", rc);
+			rc |= cnxk_eth_outb_sa_idx_put(dev, sa_idx);
+			goto mempool_put;
+		}
+
+		/* Save userdata */
+		outb_priv->userdata = conf->userdata;
+		outb_priv->sa_idx = sa_idx;
+		outb_priv->eth_sec = eth_sec;
+
+		/* Save rlen info */
+		cnxk_ipsec_outb_rlens_get(rlens, ipsec, crypto);
+
+		/* Prepare session priv */
+		sess_priv.sa_idx = outb_priv->sa_idx;
+		sess_priv.roundup_byte = rlens->roundup_byte;
+		sess_priv.roundup_len = rlens->roundup_len;
+		sess_priv.partial_len = rlens->partial_len;
+
+		/* Pointer from eth_sec -> outb_sa */
+		eth_sec->sa = outb_sa;
+		eth_sec->sess = sess;
+		eth_sec->sa_idx = sa_idx;
+		eth_sec->spi = ipsec->spi;
+
+		TAILQ_INSERT_TAIL(&dev->outb.list, eth_sec, entry);
+		dev->outb.nb_sess++;
+	}
+
+	/* Sync session in context cache */
+	roc_nix_inl_sa_sync(&dev->nix, eth_sec->sa, eth_sec->inb,
+			    ROC_NIX_INL_SA_OP_RELOAD);
+
+	if (inbound && inl_dev)
+		roc_nix_inl_dev_unlock();
+
+	plt_nix_dbg("Created %s session with spi=%u, sa_idx=%u inl_dev=%u",
+		    inbound ? "inbound" : "outbound", eth_sec->spi,
+		    eth_sec->sa_idx, eth_sec->inl_dev);
+	/*
+	 * Update fast path info in priv area.
+	 */
+	set_sec_session_private_data(sess, (void *)sess_priv.u64);
+
+	return 0;
+mempool_put:
+	if (inbound && inl_dev)
+		roc_nix_inl_dev_unlock();
+	rte_mempool_put(mempool, eth_sec);
+	return rc;
+}
+
+static int
+cn10k_eth_sec_session_destroy(void *device, struct rte_security_session *sess)
+{
+	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct roc_ot_ipsec_inb_sa *inb_sa;
+	struct roc_ot_ipsec_outb_sa *outb_sa;
+	struct cnxk_eth_sec_sess *eth_sec;
+	struct rte_mempool *mp;
+
+	eth_sec = cnxk_eth_sec_sess_get_by_sess(dev, sess);
+	if (!eth_sec)
+		return -ENOENT;
+
+	if (eth_sec->inl_dev)
+		roc_nix_inl_dev_lock();
+
+	if (eth_sec->inb) {
+		inb_sa = eth_sec->sa;
+		/* Disable SA */
+		inb_sa->w2.s.valid = 0;
+
+		TAILQ_REMOVE(&dev->inb.list, eth_sec, entry);
+		dev->inb.nb_sess--;
+	} else {
+		outb_sa = eth_sec->sa;
+		/* Disable SA */
+		outb_sa->w2.s.valid = 0;
+
+		/* Release Outbound SA index */
+		cnxk_eth_outb_sa_idx_put(dev, eth_sec->sa_idx);
+		TAILQ_REMOVE(&dev->outb.list, eth_sec, entry);
+		dev->outb.nb_sess--;
+	}
+
+	/* Sync session in context cache */
+	roc_nix_inl_sa_sync(&dev->nix, eth_sec->sa, eth_sec->inb,
+			    ROC_NIX_INL_SA_OP_RELOAD);
+
+	if (eth_sec->inl_dev)
+		roc_nix_inl_dev_unlock();
+
+	plt_nix_dbg("Destroyed %s session with spi=%u, sa_idx=%u, inl_dev=%u",
+		    eth_sec->inb ? "inbound" : "outbound", eth_sec->spi,
+		    eth_sec->sa_idx, eth_sec->inl_dev);
+
+	/* Put eth_sec object back to pool */
+	mp = rte_mempool_from_obj(eth_sec);
+	set_sec_session_private_data(sess, NULL);
+	rte_mempool_put(mp, eth_sec);
+	return 0;
+}
+
+static const struct rte_security_capability *
+cn10k_eth_sec_capabilities_get(void *device __rte_unused)
+{
+	return cn10k_eth_sec_capabilities;
+}
+
+void
+cn10k_eth_sec_ops_override(void)
+{
+	static int init_once;
+
+	if (init_once)
+		return;
+	init_once = 1;
+
+	/* Update platform specific ops */
+	cnxk_eth_sec_ops.session_create = cn10k_eth_sec_session_create;
+	cnxk_eth_sec_ops.session_destroy = cn10k_eth_sec_session_destroy;
+	cnxk_eth_sec_ops.capabilities_get = cn10k_eth_sec_capabilities_get;
+}
diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index 68219b8..d27a231 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -16,6 +16,7 @@
 #define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(3)
 #define NIX_RX_OFFLOAD_TSTAMP_F	     BIT(4)
 #define NIX_RX_OFFLOAD_VLAN_STRIP_F  BIT(5)
+#define NIX_RX_OFFLOAD_SECURITY_F    BIT(6)
 
 /* Flags to control cqe_to_mbuf conversion function.
  * Defining it from backwards to denote its been
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index f75cae0..8577a7b 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -13,6 +13,7 @@
 #define NIX_TX_OFFLOAD_MBUF_NOFF_F    BIT(3)
 #define NIX_TX_OFFLOAD_TSO_F	      BIT(4)
 #define NIX_TX_OFFLOAD_TSTAMP_F	      BIT(5)
+#define NIX_TX_OFFLOAD_SECURITY_F     BIT(6)
 
 /* Flags to control xmit_prepare function.
  * Defining it from backwards to denote its been
diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build
index 6cc30c3..d1d4b4e 100644
--- a/drivers/net/cnxk/meson.build
+++ b/drivers/net/cnxk/meson.build
@@ -37,6 +37,7 @@ sources += files(
 # CN10K
 sources += files(
         'cn10k_ethdev.c',
+        'cn10k_ethdev_sec.c',
         'cn10k_rte_flow.c',
         'cn10k_rx.c',
         'cn10k_rx_mseg.c',
diff --git a/usertools/dpdk-devbind.py b/usertools/dpdk-devbind.py
index 74d16e4..5f0e817 100755
--- a/usertools/dpdk-devbind.py
+++ b/usertools/dpdk-devbind.py
@@ -49,6 +49,8 @@
              'SVendor': None, 'SDevice': None}
 cnxk_bphy_cgx = {'Class': '08', 'Vendor': '177d', 'Device': 'a059,a060',
                  'SVendor': None, 'SDevice': None}
+cnxk_inl_dev = {'Class': '08', 'Vendor': '177d', 'Device': 'a0f0,a0f1',
+                'SVendor': None, 'SDevice': None}
 
 intel_dlb = {'Class': '0b', 'Vendor': '8086', 'Device': '270b,2710,2714',
              'SVendor': None, 'SDevice': None}
@@ -73,9 +75,9 @@
 mempool_devices = [cavium_fpa, octeontx2_npa]
 compress_devices = [cavium_zip]
 regex_devices = [octeontx2_ree]
-misc_devices = [cnxk_bphy, cnxk_bphy_cgx, intel_ioat_bdw, intel_ioat_skx, intel_ioat_icx, intel_idxd_spr,
-                intel_ntb_skx, intel_ntb_icx,
-                octeontx2_dma]
+misc_devices = [cnxk_bphy, cnxk_bphy_cgx, cnxk_inl_dev, intel_ioat_bdw,
+                intel_ioat_skx, intel_ioat_icx, intel_idxd_spr, intel_ntb_skx,
+                intel_ntb_icx, octeontx2_dma]
 
 # global dict ethernet devices present. Dictionary indexed by PCI address.
 # Each device within this is itself a dictionary of device properties
-- 
2.8.4

