All of lore.kernel.org
 help / color / mirror / Atom feed
From: "Zhao1, Wei" <wei.zhao1@intel.com>
To: "Wang, Xiao W" <xiao.w.wang@intel.com>,
	"Yang, Qiming" <qiming.yang@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v2 1/3] net/ice: enable switch filter
Date: Fri, 14 Jun 2019 09:46:53 +0000	[thread overview]
Message-ID: <A2573D2ACFCADC41BB3BE09C6DE313CA07F044F7@PGSMSX103.gar.corp.intel.com> (raw)
In-Reply-To: <B7F2E978279D1D49A3034B7786DACF407AF452EB@SHSMSX106.ccr.corp.intel.com>

Hi, xiao

> -----Original Message-----
> From: Wang, Xiao W
> Sent: Thursday, June 13, 2019 4:24 PM
> To: Yang, Qiming <qiming.yang@intel.com>; dev@dpdk.org
> Cc: Zhao1, Wei <wei.zhao1@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 1/3] net/ice: enable switch filter
> 
> Hi,
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Qiming Yang
> > Sent: Wednesday, June 12, 2019 3:50 PM
> > To: dev@dpdk.org
> > Cc: Zhao1, Wei <wei.zhao1@intel.com>
> > Subject: [dpdk-dev] [PATCH v2 1/3] net/ice: enable switch filter
> >
> > From: wei zhao <wei.zhao1@intel.com>
> >
> > The patch enables the backend of rte_flow. It transfers rte_flow_xxx
> > to device specific data structure and configures packet process
> > engine's binary classifier
> > (switch) properly.
> >
> > Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> > ---
> >  drivers/net/ice/Makefile            |   1 +
> >  drivers/net/ice/ice_ethdev.h        |   6 +
> >  drivers/net/ice/ice_switch_filter.c | 502
> > ++++++++++++++++++++++++++++++++++++
> >  drivers/net/ice/ice_switch_filter.h |  28 ++
> >  drivers/net/ice/meson.build         |   3 +-
> >  5 files changed, 539 insertions(+), 1 deletion(-)  create mode 100644
> > drivers/net/ice/ice_switch_filter.c
> >  create mode 100644 drivers/net/ice/ice_switch_filter.h
> >
> > diff --git a/drivers/net/ice/Makefile b/drivers/net/ice/Makefile index
> > 0e5c55e..b10d826 100644
> > --- a/drivers/net/ice/Makefile
> > +++ b/drivers/net/ice/Makefile
> > @@ -60,6 +60,7 @@ ifeq ($(CONFIG_RTE_ARCH_X86), y)
> >  SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_rxtx_vec_sse.c  endif
> >
> > +SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_switch_filter.c
> >  ifeq ($(findstring
> > RTE_MACHINE_CPUFLAG_AVX2,$(CFLAGS)),RTE_MACHINE_CPUFLAG_AVX2)
> >  	CC_AVX2_SUPPORT=1
> >  else
> > diff --git a/drivers/net/ice/ice_ethdev.h
> > b/drivers/net/ice/ice_ethdev.h index 1385afa..67a358a 100644
> > --- a/drivers/net/ice/ice_ethdev.h
> > +++ b/drivers/net/ice/ice_ethdev.h
> > @@ -234,6 +234,12 @@ struct ice_vsi {
> >  	bool offset_loaded;
> >  };
> >
> > +/* Struct to store flow created. */
> > +struct rte_flow {
> > +	TAILQ_ENTRY(rte_flow) node;
> > +void *rule;
> > +};
> > +
> >  struct ice_pf {
> >  	struct ice_adapter *adapter; /* The adapter this PF associate to */
> >  	struct ice_vsi *main_vsi; /* pointer to main VSI structure */ diff
> > --git a/drivers/net/ice/ice_switch_filter.c
> > b/drivers/net/ice/ice_switch_filter.c
> > new file mode 100644
> > index 0000000..e679675
> > --- /dev/null
> > +++ b/drivers/net/ice/ice_switch_filter.c
> > @@ -0,0 +1,502 @@
> 
> SPDX-License-Identifier missing.

Ok, Updated in v3
> 
> > +#include <sys/queue.h>
> > +#include <stdio.h>
> > +#include <errno.h>
> > +#include <stdint.h>
> > +#include <string.h>
> > +#include <unistd.h>
> > +#include <stdarg.h>
> > +
> > +#include <rte_debug.h>
> > +#include <rte_ether.h>
> > +#include <rte_ethdev_driver.h>
> > +#include <rte_log.h>
> > +#include <rte_malloc.h>
> > +#include <rte_eth_ctrl.h>
> > +#include <rte_tailq.h>
> > +#include <rte_flow_driver.h>
> > +
> > +#include "ice_logs.h"
> > +#include "base/ice_type.h"
> > +#include "ice_switch_filter.h"
> > +
> > +static int
> > +ice_parse_switch_filter(
> > +			const struct rte_flow_item pattern[],
> > +			const struct rte_flow_action actions[],
> > +			struct rte_flow_error *error,
> > +			struct ice_adv_rule_info *rule_info,
> > +			struct ice_adv_lkup_elem **lkup_list,
> > +			uint16_t *lkups_num)
> > +{
> > +	const struct rte_flow_item *item = pattern;
> > +	enum rte_flow_item_type item_type;
> > +	const struct rte_flow_item_eth *eth_spec, *eth_mask;
> > +	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_mask;
> > +	const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
> > +	const struct rte_flow_item_tcp *tcp_spec, *tcp_mask;
> > +	const struct rte_flow_item_udp *udp_spec, *udp_mask;
> > +	const struct rte_flow_item_sctp *sctp_spec, *sctp_mask;
> > +	const struct rte_flow_item_nvgre  *nvgre_spec, *nvgre_mask;
> > +	const struct rte_flow_item_vxlan  *vxlan_spec, *vxlan_mask;
> > +	struct ice_adv_lkup_elem *list;
> > +	uint16_t i, j, t = 0;
> > +	uint16_t item_num = 0;
> > +	enum ice_sw_tunnel_type tun_type = ICE_NON_TUN;
> > +
> > +	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
> > +		if (item->type == RTE_FLOW_ITEM_TYPE_ETH ||
> > +			item->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
> > +			item->type == RTE_FLOW_ITEM_TYPE_IPV6 ||
> > +			item->type == RTE_FLOW_ITEM_TYPE_UDP ||
> > +			item->type == RTE_FLOW_ITEM_TYPE_TCP ||
> > +			item->type == RTE_FLOW_ITEM_TYPE_SCTP ||
> > +			item->type == RTE_FLOW_ITEM_TYPE_VXLAN ||
> > +			item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
> > +			item_num++;
> > +	}
> > +
> > +	list = rte_zmalloc(NULL, item_num * sizeof(*list), 0);
> > +	if (!list) {
> > +		rte_flow_error_set(error, EINVAL,
> > +				   RTE_FLOW_ERROR_TYPE_ITEM, actions,
> > +				   "no memory malloc");
> 
> {RTE_FLOW_ERROR_TYPE_ITEM_NUM, item, "No memory for PMD internal
> items"} is more appropriate.
> Refer to i40e implementation.

Ok, Updated in v3

> 
> > +		goto out;
> > +	}
> > +	*lkup_list = list;
> > +
> > +	for (item = pattern, i = 0; item->type !=
> > +			RTE_FLOW_ITEM_TYPE_END; item++, i++) {
> 
> It seems we don't need the "i" variable.

Ok, Updated in v3

> 
> > +		item_type = item->type;
> > +
> > +		switch (item_type) {
> > +		case RTE_FLOW_ITEM_TYPE_ETH:
> > +			eth_spec = item->spec;
> > +			eth_mask = item->mask;
> > +			if (eth_spec && eth_mask) {
> > +				list[t].type = (tun_type == ICE_NON_TUN) ?
> > +					ICE_MAC_OFOS : ICE_MAC_IL;
> > +				for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
> > +					if (eth_mask->src.addr_bytes[j] ==
> > +								UINT8_MAX) {
> > +						list[t].h_u.eth_hdr.
> > +							src_addr[j] =
> > +						eth_spec->src.addr_bytes[j];
> > +						list[t].m_u.eth_hdr.
> > +							src_addr[j] =
> > +						eth_mask->src.addr_bytes[j];
> > +					}
> > +					if (eth_mask->dst.addr_bytes[j] ==
> > +								UINT8_MAX) {
> > +						list[t].h_u.eth_hdr.
> > +							dst_addr[j] =
> > +						eth_spec->dst.addr_bytes[j];
> > +						list[t].m_u.eth_hdr.
> > +							dst_addr[j] =
> > +						eth_mask->dst.addr_bytes[j];
> > +					}
> > +				}
> > +				if (eth_mask->type == UINT16_MAX) {
> > +					list[t].h_u.eth_hdr.ethtype_id =
> > +					rte_be_to_cpu_16(eth_spec->type);
> > +					list[t].m_u.eth_hdr.ethtype_id =
> > +						UINT16_MAX;
> > +				}
> > +				t++;
> 
> A lot of "t++" below, can we move it outside the switch{ } to have only one "t++"?

By now, we can not, because share code can not handle  if (!eth_spec && !eth_mask)  case,  if we t++
For that case, that item will put into list[t], and share code will report error.
> 
> > +			} else if (!eth_spec && !eth_mask) {
> > +				list[t].type = (tun_type == ICE_NON_TUN) ?
> > +					ICE_MAC_OFOS : ICE_MAC_IL;
> > +			}
> > +			break;
> > +
> > +		case RTE_FLOW_ITEM_TYPE_IPV4:
> > +			ipv4_spec = item->spec;
> > +			ipv4_mask = item->mask;
> > +			if (ipv4_spec && ipv4_mask) {
> > +				list[t].type = (tun_type == ICE_NON_TUN) ?
> > +					ICE_IPV4_OFOS : ICE_IPV4_IL;
> > +				if (ipv4_mask->hdr.src_addr == UINT32_MAX)
> > {
> > +					list[t].h_u.ipv4_hdr.src_addr =
> > +						ipv4_spec->hdr.src_addr;
> > +					list[t].m_u.ipv4_hdr.src_addr =
> > +						UINT32_MAX;
> > +				}
> > +				if (ipv4_mask->hdr.dst_addr == UINT32_MAX)
> > {
> > +					list[t].h_u.ipv4_hdr.dst_addr =
> > +						ipv4_spec->hdr.dst_addr;
> > +					list[t].m_u.ipv4_hdr.dst_addr =
> > +						UINT32_MAX;
> > +				}
> > +				if (ipv4_mask->hdr.time_to_live ==
> > UINT8_MAX) {
> > +					list[t].h_u.ipv4_hdr.time_to_live =
> > +						ipv4_spec->hdr.time_to_live;
> > +					list[t].m_u.ipv4_hdr.time_to_live =
> > +						UINT8_MAX;
> > +				}
> > +				if (ipv4_mask->hdr.next_proto_id ==
> > UINT8_MAX) {
> > +					list[t].h_u.ipv4_hdr.protocol =
> > +						ipv4_spec-
> > >hdr.next_proto_id;
> > +					list[t].m_u.ipv4_hdr.protocol =
> > +						UINT8_MAX;
> > +				}
> > +				if (ipv4_mask->hdr.type_of_service ==
> > +						UINT8_MAX) {
> > +					list[t].h_u.ipv4_hdr.tos =
> > +						ipv4_spec-
> > >hdr.type_of_service;
> > +					list[t].m_u.ipv4_hdr.tos = UINT8_MAX;
> > +				}
> > +				t++;
> > +			} else if (!ipv4_spec && !ipv4_mask) {
> > +				list[t].type = (tun_type == ICE_NON_TUN) ?
> > +					ICE_IPV4_OFOS : ICE_IPV4_IL;
> > +			}
> > +			break;
> > +
> > +		case RTE_FLOW_ITEM_TYPE_IPV6:
> > +			ipv6_spec = item->spec;
> > +			ipv6_mask = item->mask;
> > +			if (ipv6_spec && ipv6_mask) {
> > +				list[t].type = (tun_type == ICE_NON_TUN) ?
> > +					ICE_IPV6_OFOS : ICE_IPV6_IL;
> > +				for (j = 0; j < ICE_IPV6_ADDR_LENGTH; j++) {
> > +					if (ipv6_mask->hdr.src_addr[j] ==
> > +								UINT8_MAX) {
> > +						list[t].h_u.ice_ipv6_ofos_hdr.
> > +							src_addr[j] =
> > +						ipv6_spec->hdr.src_addr[j];
> > +						list[t].m_u.ice_ipv6_ofos_hdr.
> > +							src_addr[j] =
> > +						ipv6_mask->hdr.src_addr[j];
> > +					}
> > +					if (ipv6_mask->hdr.dst_addr[j] ==
> > +								UINT8_MAX) {
> > +						list[t].h_u.ice_ipv6_ofos_hdr.
> > +							dst_addr[j] =
> > +						ipv6_spec->hdr.dst_addr[j];
> > +						list[t].m_u.ice_ipv6_ofos_hdr.
> > +							dst_addr[j] =
> > +						ipv6_mask->hdr.dst_addr[j];
> > +					}
> > +				}
> > +				if (ipv6_mask->hdr.proto == UINT8_MAX) {
> > +
> > 	list[t].h_u.ice_ipv6_ofos_hdr.next_hdr =
> > +						ipv6_spec->hdr.proto;
> > +
> > 	list[t].m_u.ice_ipv6_ofos_hdr.next_hdr =
> > +						UINT8_MAX;
> > +				}
> > +				if (ipv6_mask->hdr.hop_limits == UINT8_MAX)
> > {
> > +					list[t].h_u.ice_ipv6_ofos_hdr.
> > +					hop_limit = ipv6_spec-
> > >hdr.hop_limits;
> > +					list[t].m_u.ice_ipv6_ofos_hdr.
> > +						hop_limit  = UINT8_MAX;
> > +				}
> > +				t++;
> > +			} else if (!ipv6_spec && !ipv6_mask) {
> > +				list[t].type = (tun_type == ICE_NON_TUN) ?
> > +					ICE_IPV4_OFOS : ICE_IPV4_IL;
> > +			}
> > +			break;
> > +
> > +		case RTE_FLOW_ITEM_TYPE_UDP:
> > +			udp_spec = item->spec;
> > +			udp_mask = item->mask;
> > +			if (udp_spec && udp_mask) {
> > +				list[t].type = ICE_UDP_ILOS;
> > +				if (udp_mask->hdr.src_port == UINT16_MAX)
> > {
> > +					list[t].h_u.l4_hdr.src_port =
> > +						udp_spec->hdr.src_port;
> > +					list[t].m_u.l4_hdr.src_port =
> > +						udp_mask->hdr.src_port;
> > +				}
> > +				if (udp_mask->hdr.dst_port == UINT16_MAX)
> > {
> > +					list[t].h_u.l4_hdr.dst_port =
> > +						udp_spec->hdr.dst_port;
> > +					list[t].m_u.l4_hdr.dst_port =
> > +						udp_mask->hdr.dst_port;
> > +				}
> > +				t++;
> > +			} else if (!udp_spec && !udp_mask) {
> > +				list[t].type = ICE_UDP_ILOS;
> > +			}
> > +			break;
> > +
> > +		case RTE_FLOW_ITEM_TYPE_TCP:
> > +			tcp_spec = item->spec;
> > +			tcp_mask = item->mask;
> > +			if (tcp_spec && tcp_mask) {
> > +				list[t].type = ICE_TCP_IL;
> > +				if (tcp_mask->hdr.src_port == UINT16_MAX) {
> > +					list[t].h_u.l4_hdr.src_port =
> > +						tcp_spec->hdr.src_port;
> > +					list[t].m_u.l4_hdr.src_port =
> > +						tcp_mask->hdr.src_port;
> > +				}
> > +				if (tcp_mask->hdr.dst_port == UINT16_MAX) {
> > +					list[t].h_u.l4_hdr.dst_port =
> > +						tcp_spec->hdr.dst_port;
> > +					list[t].m_u.l4_hdr.dst_port =
> > +						tcp_mask->hdr.dst_port;
> > +				}
> > +				t++;
> > +			} else if (!tcp_spec && !tcp_mask) {
> > +				list[t].type = ICE_TCP_IL;
> > +			}
> > +			break;
> > +
> > +		case RTE_FLOW_ITEM_TYPE_SCTP:
> > +			sctp_spec = item->spec;
> > +			sctp_mask = item->mask;
> > +			if (sctp_spec && sctp_mask) {
> > +				list[t].type = ICE_SCTP_IL;
> > +				if (sctp_mask->hdr.src_port == UINT16_MAX)
> > {
> > +					list[t].h_u.sctp_hdr.src_port =
> > +						sctp_spec->hdr.src_port;
> > +					list[t].m_u.sctp_hdr.src_port =
> > +						sctp_mask->hdr.src_port;
> > +				}
> > +				if (sctp_mask->hdr.dst_port == UINT16_MAX)
> > {
> > +					list[t].h_u.sctp_hdr.dst_port =
> > +						sctp_spec->hdr.dst_port;
> > +					list[t].m_u.sctp_hdr.dst_port =
> > +						sctp_mask->hdr.dst_port;
> > +				}
> > +				t++;
> > +			} else if (!sctp_spec && !sctp_mask) {
> > +				list[t].type = ICE_SCTP_IL;
> > +			}
> > +			break;
> > +
> > +		case RTE_FLOW_ITEM_TYPE_VXLAN:
> > +			vxlan_spec = item->spec;
> > +			vxlan_mask = item->mask;
> > +			tun_type = ICE_SW_TUN_VXLAN;
> > +			if (vxlan_spec && vxlan_mask) {
> > +				list[t].type = ICE_VXLAN;
> > +				if (vxlan_mask->vni[0] == UINT8_MAX &&
> > +					vxlan_mask->vni[1] == UINT8_MAX
> > &&
> > +					vxlan_mask->vni[2] == UINT8_MAX) {
> > +					list[t].h_u.tnl_hdr.vni =
> > +						(vxlan_spec->vni[1] << 8) |
> > +						vxlan_spec->vni[0];
> > +					list[t].m_u.tnl_hdr.vni =
> > +						UINT16_MAX;
> 
> vxlan_spec->vni[2] does not need to be put into the list?

Old Share code only support 16bit vni, not 24bit ,share code bug.
But we will updated in v3 to 24bit.


> 
> > +				}
> > +				t++;
> > +			} else if (!vxlan_spec && !vxlan_mask) {
> > +				list[t].type = ICE_VXLAN;
> > +			}
> > +			break;
> > +
> > +		case RTE_FLOW_ITEM_TYPE_NVGRE:
> > +			nvgre_spec = item->spec;
> > +			nvgre_mask = item->mask;
> > +			tun_type = ICE_SW_TUN_NVGRE;
> > +			if (nvgre_spec && nvgre_mask) {
> > +				list[t].type = ICE_NVGRE;
> > +				if (nvgre_mask->tni[0] == UINT8_MAX &&
> > +					nvgre_mask->tni[1] == UINT8_MAX
> > &&
> > +					nvgre_mask->tni[2] == UINT8_MAX) {
> > +					list[t].h_u.nvgre_hdr.tni =
> > +						(nvgre_spec->tni[1] << 8) |
> > +						nvgre_spec->tni[0];
> > +					list[t].m_u.nvgre_hdr.tni =
> > +						UINT16_MAX;
> > +				}
> > +				t++;
> > +			} else if (!nvgre_spec && !nvgre_mask) {
> > +				list[t].type = ICE_NVGRE;
> > +			}
> > +			break;
> > +
> > +		case RTE_FLOW_ITEM_TYPE_VOID:
> > +		case RTE_FLOW_ITEM_TYPE_END:
> > +			break;
> > +
> > +		default:
> > +			rte_flow_error_set(error, EINVAL,
> > +				   RTE_FLOW_ERROR_TYPE_ITEM, actions,
> > +				   "Invalid pattern item.");
> > +			goto out;
> > +		}
> > +	}
> > +
> > +	rule_info->tun_type = tun_type;
> > +	*lkups_num = t;
> > +
> > +	return 0;
> > +out:
> 
> We may need to free the allocated list before return.

No, list[] memory will be used in later function.
we can not free now.

> 
> > +	return -rte_errno;
> > +}
> > +
> > +/* By now ice switch filter action code implement only
> > +* supports QUEUE or DROP.
> > +*/
> > +static int
> > +ice_parse_switch_action(struct ice_pf *pf,
> > +				 const struct rte_flow_action *actions,
> > +				 struct rte_flow_error *error,
> > +				 struct ice_adv_rule_info *rule_info) {
> > +	struct ice_hw *hw = ICE_PF_TO_HW(pf);
> > +	struct ice_vsi *vsi = pf->main_vsi;
> > +	const struct rte_flow_action *act;
> > +	const struct rte_flow_action_queue *act_q;
> > +	uint16_t base_queue, index = 0;
> > +	uint32_t reg;
> > +
> > +	/* Check if the first non-void action is QUEUE or DROP. */
> > +	NEXT_ITEM_OF_ACTION(act, actions, index);
> > +	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE &&
> > +	    act->type != RTE_FLOW_ACTION_TYPE_DROP) {
> > +		rte_flow_error_set(error, EINVAL,
> > RTE_FLOW_ERROR_TYPE_ACTION,
> > +				   act, "Not supported action.");
> > +		return -rte_errno;
> > +	}
> > +	reg = ICE_READ_REG(hw, PFLAN_RX_QALLOC);
> > +	if (reg & PFLAN_RX_QALLOC_VALID_M) {
> > +		base_queue = reg & PFLAN_RX_QALLOC_FIRSTQ_M;
> 
> Can we get this register info earlier in dev_start() or somewhere else? Then we
> can use the base_queue directly.

Maybe, it can be used for other case.


> 
> > +	} else {
> > +		rte_flow_error_set(error, EINVAL,
> > +			RTE_FLOW_ERROR_TYPE_ACTION,
> > +			act, "Invalid queue register");
> > +		return -rte_errno;
> > +	}
> > +	if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
> > +		act_q = act->conf;
> > +		rule_info->sw_act.fltr_act = ICE_FWD_TO_Q;
> > +		rule_info->sw_act.fwd_id.q_id = base_queue + act_q->index;
> > +		if (act_q->index >= pf->dev_data->nb_rx_queues) {
> > +			rte_flow_error_set(error, EINVAL,
> > +				RTE_FLOW_ERROR_TYPE_ACTION,
> > +				act, "Invalid queue ID for"
> > +				" switch filter.");
> > +			return -rte_errno;
> > +		}
> > +	} else {
> > +		rule_info->sw_act.fltr_act = ICE_DROP_PACKET;
> > +	}
> > +
> > +	rule_info->sw_act.vsi_handle = vsi->idx;
> > +	rule_info->rx = 1;
> > +	rule_info->sw_act.src = vsi->idx;
> > +
> > +	/* Check if the next non-void item is END */
> > +	index++;
> > +	NEXT_ITEM_OF_ACTION(act, actions, index);
> > +	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
> > +		rte_flow_error_set(error, EINVAL,
> > RTE_FLOW_ERROR_TYPE_ACTION,
> > +				   act, "Not supported action.");
> > +		return -rte_errno;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static int
> > +ice_switch_rule_set(struct ice_pf *pf,
> > +			struct ice_adv_lkup_elem *list,
> > +			uint16_t lkups_cnt,
> > +			struct ice_adv_rule_info *rule_info,
> > +			struct rte_flow *flow)
> > +{
> > +	struct ice_hw *hw = ICE_PF_TO_HW(pf);
> > +	int ret;
> > +	struct ice_rule_query_data rule_added = {0};
> > +	struct ice_rule_query_data *filter_ptr;
> > +
> > +	if (lkups_cnt > ICE_MAX_CHAIN_WORDS) {
> > +		PMD_DRV_LOG(ERR, "item number too large for rule");
> 
> Why not rte_flow_error_set() to report error?

Ok, Update in v3

> 
> > +		return -ENOTSUP;
> > +	}
> > +	if (!list) {
> > +		PMD_DRV_LOG(ERR, "lookup list should not be NULL");
> 
> Ditto.
> 
> > +		return -ENOTSUP;
> > +	}
> > +
> > +	ret = ice_add_adv_rule(hw, list, lkups_cnt, rule_info, &rule_added);
> > +
> > +	if (!ret) {
> > +		filter_ptr = rte_zmalloc("ice_switch_filter",
> > +			sizeof(struct ice_rule_query_data), 0);
> > +		if (!filter_ptr) {
> > +			PMD_DRV_LOG(ERR, "failed to allocate memory");
> > +			return -EINVAL;
> > +		}
> > +		flow->rule = filter_ptr;
> > +		rte_memcpy(filter_ptr,
> > +			&rule_added,
> > +			sizeof(struct ice_rule_query_data));
> > +	}
> > +
> > +	return ret;
> > +}
> > +
> > +int
> > +ice_create_switch_filter(struct ice_pf *pf,
> > +			const struct rte_flow_item pattern[],
> > +			const struct rte_flow_action actions[],
> > +			struct rte_flow *flow,
> > +			struct rte_flow_error *error)
> > +{
> > +	int ret = 0;
> > +	struct ice_adv_rule_info rule_info = {0};
> > +	struct ice_adv_lkup_elem *list = NULL;
> > +	uint16_t lkups_num = 0;
> > +
> > +	ret = ice_parse_switch_filter(pattern, actions, error,
> > +			&rule_info, &list, &lkups_num);
> > +	if (ret)
> > +		goto out;
> > +
> > +	ret = ice_parse_switch_action(pf, actions, error, &rule_info);
> > +	if (ret)
> > +		goto out;
> > +
> > +	ret = ice_switch_rule_set(pf, list, lkups_num, &rule_info, flow);
> > +	if (ret)
> > +		goto out;
> > +
> > +	rte_free(list);
> > +	return 0;
> > +
> > +out:
> > +	rte_free(list);
> > +
> > +	return -rte_errno;
> > +}
> > +
> > +int
> > +ice_destroy_switch_filter(struct ice_pf *pf,
> > +			struct rte_flow *flow)
> > +{
> > +	struct ice_hw *hw = ICE_PF_TO_HW(pf);
> > +	int ret;
> > +	struct ice_rule_query_data *filter_ptr;
> > +	struct ice_rule_query_data rule_added;
> > +
> > +	filter_ptr = (struct ice_rule_query_data *)
> > +			flow->rule;
> > +	rte_memcpy(&rule_added, filter_ptr,
> > +		sizeof(struct ice_rule_query_data));
> > +
> > +	if (!filter_ptr) {
> > +		PMD_DRV_LOG(ERR, "no such flow"
> > +			    " create by switch filter");
> > +		return -EINVAL;
> > +	}
> 
> We should do this check at least before rte_memcpy.

Ok, update in v3

> 
> > +
> > +	ret = ice_rem_adv_rule_by_id(hw, &rule_added);
> 
> We can use filter_ptr directly for the switch rule delete.


Ok, update in v3

> 
> > +
> > +	rte_free(filter_ptr);
> > +
> > +	return ret;
> > +}
> > +
> > +void
> > +ice_free_switch_filter_rule(void *rule) {
> > +	struct ice_rule_query_data *filter_ptr;
> > +
> > +	filter_ptr = (struct ice_rule_query_data *)rule;
> > +
> > +	rte_free(filter_ptr);
> > +}
> > diff --git a/drivers/net/ice/ice_switch_filter.h
> > b/drivers/net/ice/ice_switch_filter.h
> > new file mode 100644
> > index 0000000..957d0d1
> > --- /dev/null
> > +++ b/drivers/net/ice/ice_switch_filter.h
> > @@ -0,0 +1,28 @@
> 
> Also a license is needed in new file.


Ok, update in v3

> 
> BRs,
> Xiao
> 
> > +#ifndef _ICE_SWITCH_FILTER_H_
> > +#define _ICE_SWITCH_FILTER_H_


  reply	other threads:[~2019-06-14  9:47 UTC|newest]

Thread overview: 73+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2019-06-03  9:05 [dpdk-dev] [PATCH 0/2] Enable rte_flow API in ice driver Qiming Yang
2019-06-03  9:05 ` [dpdk-dev] [PATCH 1/2] net/ice: enable switch filter Qiming Yang
2019-06-03 17:07   ` Aaron Conole
2019-06-04  2:02     ` Zhao1, Wei
2019-06-03  9:05 ` [dpdk-dev] [PATCH 2/2] net/ice: add generic flow API Qiming Yang
2019-06-12  7:50 ` [dpdk-dev] [PATCH v2 0/3] Enable rte_flow API in ice driver Qiming Yang
2019-06-12  7:50   ` [dpdk-dev] [PATCH v2 1/3] net/ice: enable switch filter Qiming Yang
2019-06-13  8:23     ` Wang, Xiao W
2019-06-14  9:46       ` Zhao1, Wei [this message]
2019-06-17  8:28         ` Wang, Xiao W
2019-06-18  1:57           ` Zhao1, Wei
2019-06-17  5:27     ` Xing, Beilei
2019-06-17  8:23       ` Zhao1, Wei
2019-06-17  8:51       ` Zhao1, Wei
2019-06-18  1:50         ` Xing, Beilei
2019-06-18  9:40     ` Ye Xiaolong
2019-06-19  3:06       ` Zhao1, Wei
2019-06-12  7:50   ` [dpdk-dev] [PATCH v2 2/3] net/ice: add generic flow API Qiming Yang
2019-06-17  5:50     ` Xing, Beilei
2019-06-17  6:02     ` Xing, Beilei
2019-06-17  9:19     ` Wang, Xiao W
2019-06-12  7:50   ` [dpdk-dev] [PATCH v2 3/3] net/ice: add UDP tunnel port support Qiming Yang
2019-06-20  5:34 ` [dpdk-dev] [PATCH v3 0/3] Enable rte_flow API in ice driver Qiming Yang
2019-06-20  5:34   ` [dpdk-dev] [PATCH v3 1/3] net/ice: enable switch filter Qiming Yang
2019-06-20  9:01     ` Wang, Xiao W
2019-06-20  9:12       ` Zhao1, Wei
2019-06-20  5:34   ` [dpdk-dev] [PATCH v3 2/3] net/ice: add generic flow API Qiming Yang
2019-06-20  9:32     ` Wang, Xiao W
2019-06-21  5:47       ` Yang, Qiming
2019-06-20 10:21     ` Wang, Xiao W
2019-06-20 13:33     ` Aaron Conole
2019-06-21  2:18       ` Yang, Qiming
2019-06-20  5:34   ` [dpdk-dev] [PATCH v3 3/3] net/ice: add UDP tunnel port support Qiming Yang
2019-06-21  6:13 ` [dpdk-dev] [PATCH v4 0/3] Enable rte_flow API in ice driver Qiming Yang
2019-06-21  6:13   ` [dpdk-dev] [PATCH v4 1/3] net/ice: enable switch filter Qiming Yang
2019-06-21  6:13   ` [dpdk-dev] [PATCH v4 2/3] net/ice: add generic flow API Qiming Yang
2019-06-21  6:13   ` [dpdk-dev] [PATCH v4 3/3] net/ice: add UDP tunnel port support Qiming Yang
2019-06-21  9:21 ` [dpdk-dev] [PATCH v5 0/3] Enable rte_flow API in ice driver Qiming Yang
2019-06-21  9:21   ` [dpdk-dev] [PATCH v5 1/3] net/ice: enable switch filter Qiming Yang
2019-06-21  9:21   ` [dpdk-dev] [PATCH v5 2/3] net/ice: add generic flow API Qiming Yang
2019-06-21  9:21   ` [dpdk-dev] [PATCH v5 3/3] net/ice: add UDP tunnel port support Qiming Yang
2019-06-21 14:46   ` [dpdk-dev] [PATCH v5 0/3] Enable rte_flow API in ice driver Aaron Conole
2019-06-24  6:15 ` [dpdk-dev] [PATCH v6 " Qiming Yang
2019-06-24  6:15   ` [dpdk-dev] [PATCH v6 1/3] net/ice: enable switch filter Qiming Yang
2019-06-24  6:15   ` [dpdk-dev] [PATCH v6 2/3] net/ice: add generic flow API Qiming Yang
2019-06-24  6:15   ` [dpdk-dev] [PATCH v6 3/3] net/ice: add UDP tunnel port support Qiming Yang
2019-06-25  6:48 ` [dpdk-dev] [PATCH v7 0/3] Enable rte_flow API in ice driver Qiming Yang
2019-06-25  6:48   ` [dpdk-dev] [PATCH v7 1/3] net/ice: enable switch filter Qiming Yang
2019-06-25  6:48   ` [dpdk-dev] [PATCH v7 2/3] net/ice: add generic flow API Qiming Yang
2019-06-25  6:48   ` [dpdk-dev] [PATCH v7 3/3] net/ice: add UDP tunnel port support Qiming Yang
2019-06-26  7:07     ` Xing, Beilei
2019-06-25 14:58   ` [dpdk-dev] [PATCH v7 0/3] Enable rte_flow API in ice driver Aaron Conole
2019-06-26  1:52     ` Yang, Qiming
2019-06-26  7:42       ` Ferruh Yigit
2019-06-26  8:26         ` Yang, Qiming
2019-06-26 15:52   ` Ye Xiaolong
2019-06-26  8:03 ` [dpdk-dev] [PATCH v8 " Qiming Yang
2019-06-26  8:03   ` [dpdk-dev] [PATCH v8 1/3] net/ice: enable switch filter Qiming Yang
2019-06-26  8:03   ` [dpdk-dev] [PATCH v8 2/3] net/ice: add generic flow API Qiming Yang
2019-06-26  8:03   ` [dpdk-dev] [PATCH v8 3/3] net/ice: add UDP tunnel port support Qiming Yang
2019-06-26  8:58 ` [dpdk-dev] [PATCH v8 0/4] Enable rte_flow API in ice driver Qiming Yang
2019-06-26  8:58   ` [dpdk-dev] [PATCH v8 1/4] net/ice: enable switch filter Qiming Yang
2019-06-26  8:58   ` [dpdk-dev] [PATCH v8 2/4] net/ice: add generic flow API Qiming Yang
2019-06-26  8:58   ` [dpdk-dev] [PATCH v8 3/4] net/ice: add UDP tunnel port support Qiming Yang
2019-06-26  8:58   ` [dpdk-dev] [PATCH v8 4/4] doc: add release note for generic flow Qiming Yang
2019-06-26 21:27     ` Thomas Monjalon
2019-06-27  2:04       ` Yang, Qiming
2019-06-26 13:25   ` [dpdk-dev] [PATCH v8 0/4] Enable rte_flow API in ice driver Xing, Beilei
2019-07-01  8:32 ` [dpdk-dev] [PATCH v9 0/3] " Qiming Yang
2019-07-01  8:32   ` [dpdk-dev] [PATCH v9 1/3] net/ice: enable switch filter Qiming Yang
2019-07-01  8:32   ` [dpdk-dev] [PATCH v9 2/3] net/ice: add generic flow API Qiming Yang
2019-07-01  8:32   ` [dpdk-dev] [PATCH v9 3/3] net/ice: add UDP tunnel port support Qiming Yang
2019-07-01 11:38   ` [dpdk-dev] [PATCH v9 0/3] Enable rte_flow API in ice driver Zhang, Qi Z

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=A2573D2ACFCADC41BB3BE09C6DE313CA07F044F7@PGSMSX103.gar.corp.intel.com \
    --to=wei.zhao1@intel.com \
    --cc=dev@dpdk.org \
    --cc=qiming.yang@intel.com \
    --cc=xiao.w.wang@intel.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.