From: Adrien Mazarguil
To: Qiming Yang
Cc: dev@dpdk.org
Subject: Re: [RFC] New packet type query API
Date: Tue, 16 Jan 2018 16:55:32 +0100
Message-ID: <20180116155532.GH4256@6wind.com>
In-Reply-To: <20180111160405.182159-1-qiming.yang@intel.com>
References: <20180111160405.182159-1-qiming.yang@intel.com>

On Fri, Jan 12, 2018 at 12:04:05AM +0800, Qiming Yang wrote:
> This RFC contains a proposal to add a new packet type query API to DPDK.
>
> Motivation
> ==========
> In the current DPDK implementation, when a packet is received, the driver
> looks up the packet type mapping table and writes the result to struct
> mbuf. The application analyzes mbuf->packet_type only when the user wants
> to display the packet type name. This wastes a lot of cycles, because the
> user often does not care what the packet type is, yet the look-up always
> happens. We think the packet type name is not needed all the time, so the
> look-up is not needed either. We therefore add a flag; when it is enabled,
> the ptype value in the descriptor is passed through to mbuf->packet_type
> directly. A new API to get the packet type name by value is also added; it
> is called when the user wants to know what the packet type name is. The
> flag thus controls which value is written to the mbuf (the direct ptype
> value or the ptype name value).
>
> The current ptype representation is limited by bit width (32 bits maximum,
> 28 bits already used). If we support more packet types in the future, for
> example next-generation NICs with flexible pipelines that introduce new
> packet types, the remaining bits may not be enough. So I propose to use
> rte_flow items to represent packet types. The advantages of rte_flow items
> are:
> 1. They are generic enough that every NIC can use them to describe its
>    packets;
> 2. They are flexible; new items can be added without limit to support new
>    packet types.
>
> Proposed solution
> ================
> The new API is used to query the exact packet type name given the packet
> type value from the descriptor.
>
> /** Packet type name structure */
> struct rte_eth_ptype_name {
>     uint64_t ptype_value;
>     enum *name;
> };
>
> /**
>  * Query the exact packet type names from packet type values.
>  *
>  * @param port_id Port identifier of Ethernet device.
>  * @param names Pointer to packet type name buffer.
>  * @param size The size of the ptype_values array (number of elements).
>  * @param ptype_values Pointer to a table of ptype values.
>  * @return
>  *   - On success returns zero
>  *   - On failure returns non-zero
>  */
> int rte_eth_ptype_name_get_by_value(uint8_t port_id,
>                                     struct rte_eth_ptype_name *names,
>                                     int size, uint64_t *ptype_values);
>
> Add a new flag 'enable_ptype_direct' to struct rte_eth_rxmode to enable
> the packet type direct mode. Direct mode means the packet type value is
> passed through directly from HW to the driver without looking up the
> ptype mapping table. Users can call rte_eth_ptype_name_get_by_value() to
> query what a packet type value means whenever they need to.
>
> struct rte_eth_rxmode {
>     ...
>     uint16_t header_split    : 1, /**< Header Split enable. */
>         hw_ip_checksum       : 1, /**< IP/UDP/TCP checksum offload enable. */
>         hw_vlan_filter       : 1, /**< VLAN filter enable. */
>         hw_vlan_strip        : 1, /**< VLAN strip enable. */
>         hw_vlan_extend       : 1, /**< Extended VLAN enable. */
>         jumbo_frame          : 1, /**< Jumbo Frame Receipt enable. */
>         hw_strip_crc         : 1, /**< Enable CRC stripping by hardware. */
>         enable_scatter       : 1, /**< Enable scatter packets rx handler. */
> +       enable_ptype_direct  : 1, /**< Enable packet type get direct mode. */
>         enable_lro           : 1; /**< Enable LRO */
> };
>
> This flag is configured in dev_configure() and can be queried by the user
> through dev->data->dev_conf.rxmode.enable_ptype_direct. In the receive
> function, the driver stores the HW packet type value in mbuf->packet_type
> if direct mode is enabled; if not, the existing code path is kept.
>
> The driver maintains a new mapping table between ptype values and rte_flow
> items; when a profile is downloaded and new packet types are supported,
> the SW mapping table is updated according to the ptype information
> extracted from the profile.
>
> Future work
> ===========
> Support configuring the packet type direct mode per queue.
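To make sure we are talking about the same usage model, here is a rough
sketch of how an application would presumably combine the flag with the new
API. This is illustration only: the declarations come from the RFC text
above, none of it exists in DPDK today, and the 'name' field is assumed to
resolve to a printable string.

  /*
   * Sketch only: rte_eth_ptype_name_get_by_value() and enable_ptype_direct
   * are the additions proposed above, not existing DPDK APIs.
   */
  #include <inttypes.h>
  #include <stdio.h>

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  static void
  show_ptype(uint8_t port_id, const struct rte_mbuf *m)
  {
          /*
           * With rxmode.enable_ptype_direct set at dev_configure() time,
           * packet_type carries the raw HW value; translate it only when
           * it actually has to be displayed.
           */
          uint64_t value = m->packet_type;
          struct rte_eth_ptype_name name;

          if (rte_eth_ptype_name_get_by_value(port_id, &name, 1, &value) != 0)
                  return;
          /* Assumes 'name.name' can be printed as a string. */
          printf("port %u ptype 0x%" PRIx64 ": %s\n",
                 (unsigned int)port_id, value, (const char *)name.name);
  }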
I understand the motivation behind this proposal; however, since new ideas
must be challenged, I have a few comments:

- How about making packet type recognition an optional offload configurable
  per queue like any other (e.g. DEV_RX_OFFLOAD_PTYPE, sketched below)?
  That way the extra processing cost could be avoided for applications that
  do not care.

- Depending on HW, packet type information inside RX descriptors may not
  necessarily fit in 64 bits, or at least not without transformation. This
  transformation would still cause wasted cycles on the PMD side.

- In case enable_ptype_direct is enabled, the PMD may not waste CPU cycles,
  but the subsequent look-up with the proposed API would translate to a
  higher cost on the application side. As a data plane API, how does this
  benefit applications that want to retrieve packet type information?

- Without a dedicated mbuf flag, an application cannot tell whether the
  enclosed packet type data is in HW format. Even if such a flag is
  present, if port information is discarded or becomes invalid (e.g. mbuf
  stored in an application queue for lengthy periods or passed as is to an
  unrelated application), there is no way to make sense of the data.

In my opinion, mbufs should only contain data fields in a standardized
format. Managing packet types like an offload that can be toggled at will
seems to be the best compromise.

Thoughts?

--
Adrien Mazarguil
6WIND
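For illustration, a minimal sketch of the per-queue offload alternative
mentioned in the first comment above, assuming the per-queue Rx offload
scheme introduced with DPDK 17.11 (rte_eth_rxconf.offloads);
DEV_RX_OFFLOAD_PTYPE is only an example name and does not exist:

  #include <rte_ethdev.h>
  #include <rte_mempool.h>

  /*
   * Hypothetical DEV_RX_OFFLOAD_PTYPE bit: leave it cleared on queues
   * whose consumers never look at mbuf->packet_type so the PMD can skip
   * the ptype look-up there; set it on queues that need it.
   */
  static int
  setup_rxq_without_ptype(uint16_t port_id, uint16_t queue_id,
                          uint16_t nb_desc, struct rte_mempool *mp)
  {
          struct rte_eth_dev_info dev_info;
          struct rte_eth_rxconf rxconf;

          rte_eth_dev_info_get(port_id, &dev_info);
          rxconf = dev_info.default_rxconf;
          rxconf.offloads &= ~DEV_RX_OFFLOAD_PTYPE; /* hypothetical flag */
          return rte_eth_rx_queue_setup(port_id, queue_id, nb_desc,
                                        rte_eth_dev_socket_id(port_id),
                                        &rxconf, mp);
  }

On queues where the offload is enabled, mbuf->packet_type keeps the
standardized RTE_PTYPE_* format, so no per-mbuf flag is needed to interpret
it.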