From: Andrew Rybchenko
Date: Wed, 30 Oct 2019 10:35:16 +0300
To: Slava Ovsiienko, Thomas Monjalon, olivier.matz@6wind.com
Cc: dev@dpdk.org, Matan Azrad, Ori Kam, Yongseok Koh
References: <1571922495-4588-1-git-send-email-viacheslavo@mellanox.com>
 <1572201636-16374-1-git-send-email-viacheslavo@mellanox.com>
Subject: Re: [dpdk-dev] [PATCH v4] ethdev: extend flow metadata

@Olivier, please take a look at the end of the mail.

On 10/29/19 8:19 PM, Slava Ovsiienko wrote:
> Hi, Andrew
>
> Thank you for the review.
>
>> -----Original Message-----
>> From: Andrew Rybchenko
>> Sent: Tuesday, October 29, 2019 18:22
>> To: Slava Ovsiienko; dev@dpdk.org
>> Cc: Thomas Monjalon; Matan Azrad; olivier.matz@6wind.com; Ori Kam;
>> Yongseok Koh
>> Subject: Re: [dpdk-dev] [PATCH v4] ethdev: extend flow metadata
>>
>> On 10/27/19 9:40 PM, Viacheslav Ovsiienko wrote:
>>> Currently, metadata can be set on the egress path via the mbuf
>>> tx_metadata field with the PKT_TX_METADATA flag, and
>>> RTE_FLOW_ITEM_TYPE_META matches metadata.
>>> This patch extends the usability of the metadata feature.
>>>
>>> 1) RTE_FLOW_ACTION_TYPE_SET_META
>>>
>>> When supporting multiple tables, Tx metadata can also be set by a
>>> rule and matched by another rule. This new action allows metadata
>>> to be set as a result of a flow match.
>>>
>>> 2) Metadata on ingress
>>>
>>> There is also a need to support metadata on ingress. Metadata can
>>> be set by the SET_META action and matched by the META item, as on
>>> Tx. The final value set by the action will be delivered to the
>>> application via the metadata dynamic field of the mbuf, which can
>>> be accessed by RTE_FLOW_DYNF_METADATA(). The PKT_RX_DYNF_METADATA
>>> flag will be set along with the data.
>>>
>>> The mbuf dynamic field must be registered by calling
>>> rte_flow_dynf_metadata_register() prior to using the SET_META
>>> action.
>>>
>>> The availability of the dynamic mbuf metadata field can be checked
>>> with the rte_flow_dynf_metadata_avail() routine.
>>>
>>> For loopback/hairpin packets, metadata set on Rx/Tx may or may not
>>> be propagated to the other path depending on hardware capability.
>>>
>>> Signed-off-by: Yongseok Koh
>>> Signed-off-by: Viacheslav Ovsiienko
>>
>> The explanations above lack information about "meta" vs "mark",
>> which may be set on Rx as well and is delivered in another mbuf
>> field. It should be explained why one more field is required, and
>> the rules should be defined.
>
> There is some history behind the metadata features.
> Initially, two metadata-related actions were proposed:
>
> - RTE_FLOW_ACTION_TYPE_FLAG
> - RTE_FLOW_ACTION_TYPE_MARK
>
> These actions set a special flag in the packet metadata, the MARK
> action stores a specified value in the metadata storage, and, on
> packet reception, the PMD puts the flag and value into the mbuf, so
> applications can see that the packet was treated inside the flow
> engine according to the appropriate RTE flow(s). MARK and FLAG are a
> kind of gateway to transfer per-packet information from the flow
> engine to the application via the receiving datapath.
>
> From the datapath point of view, MARK and FLAG are related to the
> receiving side only. It would be useful to have the same gateway on
> the transmitting side, and the RTE_FLOW_ITEM_TYPE_META feature was
> proposed. The application can fill the field in the mbuf, and this
> value will be transferred to some field in the packet metadata
> inside the flow engine.
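
For readers following the thread: the existing Tx-side gateway is used
roughly as in the sketch below (my illustration, not from the patch;
the helper name and the tag value are made up, while the field and the
flag are the pre-patch API):

    #include <rte_mbuf.h>

    /* Tag an outgoing packet so that an egress rule with the META
     * item can match it inside the flow engine.
     */
    static void
    tag_for_tx(struct rte_mbuf *m)
    {
            m->tx_metadata = 0xcafe;        /* opaque per-packet value */
            m->ol_flags |= PKT_TX_METADATA; /* tells the PMD it is valid */
    }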
> It did not matter whether these metadata fields were shared, because
> the MARK and META items belonged to different domains (receiving and
> transmitting) and could be vendor-specific.
>
> So far, so good: DPDK provides entities to control metadata inside
> the flow engine and gateways to exchange these values on a
> per-packet basis via the datapaths.
>
> As we can see, the MARK and META means are not symmetric: the action
> which would allow us to set the META value on the transmitting path
> is absent. So, the action of type
> - RTE_FLOW_ACTION_TYPE_SET_META
> is proposed.
>
> Next, applications raise new requirements for packet metadata. Flow
> engines are getting more complex, internal switches are introduced,
> and multiple ports might be supported within the same flow engine
> namespace. From the DPDK point of view, it means that packets might
> be sent on one eth_dev port and received on another one, while the
> packet path inside the flow engine entirely belongs to the same
> hardware device. The simplest example is SR-IOV with a PF, VFs and
> the representors. And there is a brilliant opportunity to provide an
> out-of-band channel to transfer some extra data from one port to
> another, besides the packet data itself.
>
>> The explanations above lack information about "meta" vs "mark",
>> which may be set on Rx as well and is delivered in another mbuf
>> field. It should be explained why one more field is required, and
>> the rules should be defined. Otherwise we can end up with half of
>> the PMDs supporting mark only, half of the PMDs supporting meta
>> only, and applications in an interesting situation, having to
>> choose which one to use.
>
> There is no "mark" vs "meta". The MARK and META means are kept for
> compatibility reasons, and the legacy part works exactly as before.
> The trials (with rte_flow_validate()) are supposed to check whether
> the PMD supports the MARK or META feature on the appropriate domain.
> It depends on the PMD implementation, configuration and underlying
> HW/FW/kernel capabilities and should be resolved at runtime.

Trials are a way, but a very tricky one. My imagination draws pictures
of what application code could look like in an attempt to use either
mark or meta for Rx only, and these pictures are not nice. Maybe it
will look acceptable when mark becomes dynamic, since using either one
or another dynamic field is definitely easier than using either a
fixed or a dynamic field. Maybe a dynamic field for mark at a fixed
offset should be introduced in this release or in the nearest future?
It would allow us to preserve the ABI up to 20.11 and provide a
future-proof API. The trick is to register the dynamic meta field at a
fixed offset at the start of day, to be sure that it is guaranteed to
succeed. It sounds like a transition mechanism from fixed to dynamic
fields.
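
To make the concern concrete, such a runtime trial would look roughly
like the sketch below (my own illustration; the mark_supported()
helper and the trivial pattern are hypothetical, and the equivalent
META probe would use RTE_FLOW_ACTION_TYPE_SET_META instead):

    #include <rte_flow.h>

    /* Probe whether the PMD accepts MARK on ingress. */
    static int
    mark_supported(uint16_t port_id)
    {
            struct rte_flow_attr attr = { .ingress = 1 };
            struct rte_flow_item pattern[] = {
                    { .type = RTE_FLOW_ITEM_TYPE_ETH },
                    { .type = RTE_FLOW_ITEM_TYPE_END },
            };
            struct rte_flow_action_mark mark = { .id = 1 };
            struct rte_flow_action actions[] = {
                    { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };
            struct rte_flow_error error;

            return rte_flow_validate(port_id, &attr, pattern,
                                     actions, &error) == 0;
    }

And this has to be repeated per port, per domain and per feature,
which is exactly why the resulting application code is not nice.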
[snip]

>>> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
>>> index 4fee105..b821557 100644
>>> --- a/lib/librte_ethdev/rte_flow.h
>>> +++ b/lib/librte_ethdev/rte_flow.h
>>> @@ -28,6 +28,8 @@
>>>  #include
>>>  #include
>>>  #include
>>> +#include
>>> +#include
>>>
>>>  #ifdef __cplusplus
>>>  extern "C" {
>>> @@ -418,7 +420,8 @@ enum rte_flow_item_type {
>>>  	/**
>>>  	 * [META]
>>>  	 *
>>> -	 * Matches a metadata value specified in mbuf metadata field.
>>> +	 * Matches a metadata value.
>>> +	 *
>>>  	 * See struct rte_flow_item_meta.
>>>  	 */
>>>  	RTE_FLOW_ITEM_TYPE_META,
>>> @@ -1263,9 +1266,17 @@ struct rte_flow_item_icmp6_nd_opt_tla_eth {
>>>  #endif
>>>
>>>  /**
>>> - * RTE_FLOW_ITEM_TYPE_META.
>>> + * @warning
>>> + * @b EXPERIMENTAL: this structure may change without prior notice
>>
>> Is it allowed to make experimental back?
>
> I think we should remove EXPERIMENTAL here. We do not introduce a new
> feature, but just extend the apply area.

Agreed.

>>>  *
>>> - * Matches a specified metadata value.
>>> + * RTE_FLOW_ITEM_TYPE_META
>>> + *
>>> + * Matches a specified metadata value. On egress, metadata can be set
>>> + * either by mbuf tx_metadata field with PKT_TX_METADATA flag or
>>> + * RTE_FLOW_ACTION_TYPE_SET_META. On ingress,
>>> + * RTE_FLOW_ACTION_TYPE_SET_META sets metadata for a packet and the
>>> + * metadata will be reported via mbuf metadata dynamic field with
>>> + * PKT_RX_DYNF_METADATA flag. The dynamic mbuf field must be
>>> + * registered in advance by rte_flow_dynf_metadata_register().
>>>  */
>>>  struct rte_flow_item_meta {
>>>  	rte_be32_t data;
>>
>> [snip]
>>
>>> @@ -2429,6 +2447,55 @@ struct rte_flow_action_set_mac {
>>>  	uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
>>>  };
>>>
>>> +/**
>>> + * @warning
>>> + * @b EXPERIMENTAL: this structure may change without prior notice
>>> + *
>>> + * RTE_FLOW_ACTION_TYPE_SET_META
>>> + *
>>> + * Set metadata. Metadata set by mbuf tx_metadata field with
>>> + * PKT_TX_METADATA flag on egress will be overridden by this action. On
>>> + * ingress, the metadata will be carried by mbuf metadata dynamic field
>>> + * with PKT_RX_DYNF_METADATA flag if set. The dynamic mbuf field must be
>>> + * registered in advance by rte_flow_dynf_metadata_register().
>>> + *
>>> + * Altering partial bits is supported with mask. For bits which have never
>>> + * been set, unpredictable value will be seen depending on driver
>>> + * implementation. For loopback/hairpin packet, metadata set on Rx/Tx may
>>> + * or may not be propagated to the other path depending on HW capability.
>>> + *
>>> + * RTE_FLOW_ITEM_TYPE_META matches metadata.
>>> + */
>>> +struct rte_flow_action_set_meta {
>>> +	rte_be32_t data;
>>> +	rte_be32_t mask;
>>
>> As I understand, tx_metadata is host-endian. Just double-checking.
>> Is the new dynamic field host-endian or big-endian?
>> I definitely would like to see motivation in comments why data/mask
>> are big-endian here.
>
> Metadata is an opaque value, endianness does not matter, and there is
> no special motivation for choosing the endianness. The
> rte_flow_item_meta structure provides data with the rte_be32_t type,
> so the meta-related action does the same.

Endianness of meta in the mbuf and in the flow API should match, and
it must be documented. Endianness is important if HW supports fewer
bits, since it gives a hint for the application to use the LSBs first
if the bit space is sufficient.

mark is defined as host-endian (uint32_t) and I think meta should be
the same. Otherwise it complicates either mark or meta usage even
more, as discussed above.

Yes, I think that rte_flow_item_meta should be fixed, since both mark
and tx_metadata are host-endian. (It says nothing about the HW
interface, which is vendor-specific, and vendor PMDs should care about
it.)

> I could assume the origin of selecting the big-endian type was the
> endianness of the metadata field in the Tx descriptor of ConnectX
> NICs.
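
Just to illustrate the asymmetry an application would face with the
current definitions, a sketch with an arbitrary tag value (nothing
below is from the patch itself, only the two structure types):

    #include <stdint.h>
    #include <rte_byteorder.h>
    #include <rte_flow.h>

    /* The same 32-bit tag, delivered through two look-alike APIs: */
    struct rte_flow_action_mark mark = { .id = 0x1234 }; /* CPU order */
    struct rte_flow_action_set_meta meta = {
            .data = rte_cpu_to_be_32(0x1234),            /* network order */
            .mask = rte_cpu_to_be_32(UINT32_MAX),
    };

Mixing the two conversions per tag is exactly the kind of error-prone
detail applications should not have to remember.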
>>> +};
>>> +
>>> +/* Mbuf dynamic field offset for metadata. */
>>> +extern int rte_flow_dynf_metadata_offs;
>>> +
>>> +/* Mbuf dynamic field flag mask for metadata. */
>>> +extern uint64_t rte_flow_dynf_metadata_mask;
>>
>> These two global variables look frightening to me.
>> It does not look good to me.
>
> For me too. But we need the performance: these ones are intended for
> usage in the datapath, and any overhead is painful.

@Olivier, could you share your thoughts, please.

Andrew.
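
P.S. For completeness, my reading of the fast path these globals are
meant to serve, as a sketch assuming the RTE_FLOW_DYNF_METADATA()
accessor and the PKT_RX_DYNF_METADATA flag from this patch:

    /* Per-packet Rx processing: the field offset and the flag mask
     * are consulted for every mbuf, hence the reluctance to hide
     * them behind an accessor function.
     */
    if ((m->ol_flags & PKT_RX_DYNF_METADATA) != 0) {
            uint32_t meta = *RTE_FLOW_DYNF_METADATA(m);
            /* ... dispatch on meta ... */
    }

So the performance argument is clear; the question is whether exposed
globals are the right way to get it.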