From: Viacheslav Ovsiienko
To: dev@dpdk.org
Cc: matan@mellanox.com, rasland@mellanox.com, thomas@monjalon.net, olivier.matz@6wind.com, arybchenko@solarflare.com, orika@mellanox.com, Yongseok Koh
Date: Wed, 30 Oct 2019 17:12:27 +0000
Message-Id: <1572455548-23420-2-git-send-email-viacheslavo@mellanox.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1572455548-23420-1-git-send-email-viacheslavo@mellanox.com>
References: <1572377502-13620-1-git-send-email-viacheslavo@mellanox.com> <1572455548-23420-1-git-send-email-viacheslavo@mellanox.com>
Subject: [dpdk-dev] [PATCH v6 1/2] ethdev: extend flow metadata

Currently, metadata can be set on the egress path via the mbuf tx_metadata
field with the PKT_TX_METADATA flag, and RTE_FLOW_ITEM_TYPE_META matches
that metadata. This patch extends the usability of the metadata feature.

1) RTE_FLOW_ACTION_TYPE_SET_META

When multiple tables are supported, Tx metadata can also be set by a rule
and matched by another rule. This new action allows metadata to be set as
a result of a flow match.

2) Metadata on ingress

There is also a need to support metadata on ingress. Metadata can be set
by the SET_META action and matched by the META item, just as on Tx (an
illustrative sketch follows below).
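For illustration only, not part of this patch: a minimal sketch of a rule
that sets metadata in group 0 and a second rule that matches it in group 1.
The function name, group numbers, metadata value and queue index are
placeholders, and whether metadata set in one table is visible to a later
table depends on the PMD and the hardware. It assumes
rte_flow_dynf_metadata_register() has already been called.

    #include <stdint.h>
    #include <rte_flow.h>

    /* Sketch: group 0 sets metadata 0xcafe and jumps to group 1;
     * group 1 matches that metadata and steers packets to Rx queue 1.
     */
    static int
    meta_rules_sketch(uint16_t port_id, struct rte_flow_error *err)
    {
        struct rte_flow_attr attr0 = { .group = 0, .ingress = 1 };
        struct rte_flow_attr attr1 = { .group = 1, .ingress = 1 };
        struct rte_flow_action_set_meta set_meta = {
            .data = 0xcafe,
            .mask = UINT32_MAX, /* update all 32 bits */
        };
        struct rte_flow_action_jump jump = { .group = 1 };
        struct rte_flow_action_queue queue = { .index = 1 };
        struct rte_flow_item_meta meta_spec = { .data = 0xcafe };
        struct rte_flow_item pattern0[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions0[] = {
            { .type = RTE_FLOW_ACTION_TYPE_SET_META, .conf = &set_meta },
            { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_item pattern1[] = {
            { .type = RTE_FLOW_ITEM_TYPE_META,
              .spec = &meta_spec, .mask = &rte_flow_item_meta_mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions1[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        /* Rule 1: set metadata as a result of a flow match. */
        if (rte_flow_create(port_id, &attr0, pattern0, actions0, err) == NULL)
            return -1;
        /* Rule 2: match the metadata set by the first rule. */
        if (rte_flow_create(port_id, &attr1, pattern1, actions1, err) == NULL)
            return -1;
        return 0;
    }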
The final value set by the action is delivered to the application via the
metadata dynamic field of the mbuf, which can be accessed with the
RTE_FLOW_DYNF_METADATA() macro or with the rte_flow_dynf_metadata_set()
and rte_flow_dynf_metadata_get() helper routines. The PKT_RX_DYNF_METADATA
flag is set along with the data. The mbuf dynamic field must be registered
by calling rte_flow_dynf_metadata_register() prior to using the SET_META
action, and its availability can be checked with the
rte_flow_dynf_metadata_avail() routine. If an application is going to use
the metadata feature, it registers the metadata dynamic field; the PMD
then checks the field's availability and handles it in the datapath. For
loopback/hairpin packets, metadata set on Rx/Tx may or may not be
propagated to the other path, depending on hardware capability.

MARK and METADATA look similar and might operate in a similar way, but
they do not interact. Initially, two metadata-related actions were
proposed:

- RTE_FLOW_ACTION_TYPE_FLAG
- RTE_FLOW_ACTION_TYPE_MARK

The FLAG action sets a special flag in the packet metadata, and the MARK
action stores a specified value in the metadata storage. On packet
reception, the PMD puts the flag and value into the mbuf, so the
application can see that the packet was treated inside the flow engine
according to the appropriate RTE flow(s). MARK and FLAG act as a gateway
for transferring per-packet information from the flow engine to the
application via the receiving datapath. There is also the item of type
RTE_FLOW_ITEM_TYPE_MARK, which extends the flow match pattern with the
capability to match metadata values set by MARK/FLAG actions in other
flows.

From the datapath point of view, MARK and FLAG are related to the
receiving side only. It would be useful to have the same gateway on the
transmitting side, so the item of type RTE_FLOW_ITEM_TYPE_META was
proposed. The application fills the field in the mbuf, and this value is
transferred to a field in the packet metadata inside the flow engine. It
did not matter whether these metadata fields were shared, because the MARK
and META items belonged to different domains (receiving and transmitting)
and could be vendor-specific.

So far, so good: DPDK provides entities to control metadata inside the
flow engine and gateways to exchange these values on a per-packet basis
via the datapaths. However, MARK and META are not symmetric; there is no
action that would allow setting the META value on the transmitting path.
Hence, the action of type RTE_FLOW_ACTION_TYPE_SET_META was proposed.

Next, applications raised new requirements for packet metadata. Flow
engines are getting more complex: internal switches are introduced, and
multiple ports might be supported within the same flow engine namespace.
From the DPDK point of view, this means packets might be sent on one
eth_dev port and received on another one, while the packet path inside the
flow engine belongs entirely to the same hardware device. The simplest
example is SR-IOV with a PF, VFs and their representors. This provides an
out-of-band channel to transfer extra data from one port to another
besides the packet data itself, and applications would like to use it.

An application is supposed to use trials (with rte_flow_validate) to
detect which metadata features (FLAG, MARK, META) are actually supported
by the PMD and the underlying hardware, as in the sketch below.
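Not part of the patch, a hedged sketch assuming the helpers introduced
above: register the dynamic field, probe SET_META support with
rte_flow_validate(), and read the metadata delivered with received
packets. The function names and the probing rule (an ETH pattern with a
single SET_META action) are illustrative only.

    #include <stdio.h>
    #include <stdint.h>
    #include <rte_errno.h>
    #include <rte_mbuf.h>
    #include <rte_flow.h>

    /* Sketch: ask the PMD whether a SET_META rule would be accepted. */
    static int
    metadata_probe(uint16_t port_id)
    {
        struct rte_flow_error err;
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_action_set_meta set_meta = {
            .data = 1,
            .mask = UINT32_MAX,
        };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_SET_META, .conf = &set_meta },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        /* Register the dynamic field/flag before using SET_META. */
        if (rte_flow_dynf_metadata_register() < 0)
            return -rte_errno;
        /* Trial rule: 0 means the PMD would accept such a rule. */
        return rte_flow_validate(port_id, &attr, pattern, actions, &err);
    }

    /* Sketch: print the metadata delivered with a received packet. */
    static void
    dump_rx_metadata(struct rte_mbuf *m)
    {
        if (rte_flow_dynf_metadata_avail() &&
            (m->ol_flags & PKT_RX_DYNF_METADATA))
            printf("Rx metadata: 0x%x\n", rte_flow_dynf_metadata_get(m));
    }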
It might depend on PMD configuration, system software, hardware settings, etc., and should be detected in run time. Signed-off-by: Yongseok Koh Signed-off-by: Viacheslav Ovsiienko --- v6: - minor code style issues - is combined in series with followed egress metadata patch v5: - http://patches.dpdk.org/patch/62179/ - addressed code style issues from comments - Tx metadata deprecation notice removed (dedicated tx_metadata patch is coming) - MBUF_DYNF_METADATA_NAME is splitted into FIELD and FLAG dedicated ones, RTE suffix is added - metadata historic retrospective is added to log message - rebased v4: - http://patches.dpdk.org/patch/62065/ - documentation comments addressed - deprecation notice for Tx metadata offload flag - rebased v3: - http://patches.dpdk.org/patch/61902/ - rebased, neat updates v2: - http://patches.dpdk.org/patch/60909/ v1: - http://patches.dpdk.org/patch/56104/ - rfc: http://patches.dpdk.org/patch/54271/ app/test-pmd/cmdline_flow.c | 57 ++++++++++++++++- app/test-pmd/util.c | 5 ++ doc/guides/prog_guide/rte_flow.rst | 72 ++++++++++++++++----- doc/guides/rel_notes/release_19_11.rst | 13 ++++ lib/librte_ethdev/rte_ethdev_version.map | 3 + lib/librte_ethdev/rte_flow.c | 40 ++++++++++++ lib/librte_ethdev/rte_flow.h | 103 +++++++++++++++++++++++++++++-- lib/librte_mbuf/rte_mbuf_dyn.h | 8 ++- 8 files changed, 279 insertions(+), 22 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 0d0bc0a..e4ef066 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -316,6 +316,9 @@ enum index { ACTION_RAW_ENCAP_INDEX_VALUE, ACTION_RAW_DECAP_INDEX, ACTION_RAW_DECAP_INDEX_VALUE, + ACTION_SET_META, + ACTION_SET_META_DATA, + ACTION_SET_META_MASK, }; /** Maximum size for pattern in struct rte_flow_item_raw. 
*/ @@ -1067,6 +1070,7 @@ struct parse_action_priv { ACTION_DEC_TCP_ACK, ACTION_RAW_ENCAP, ACTION_RAW_DECAP, + ACTION_SET_META, ZERO, }; @@ -1265,6 +1269,13 @@ struct parse_action_priv { ZERO, }; +static const enum index action_set_meta[] = { + ACTION_SET_META_DATA, + ACTION_SET_META_MASK, + ACTION_NEXT, + ZERO, +}; + static int parse_set_raw_encap_decap(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -1329,6 +1340,10 @@ static int parse_vc_action_raw_encap_index(struct context *, static int parse_vc_action_raw_decap_index(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_vc_action_set_meta(struct context *ctx, + const struct token *token, const char *str, + unsigned int len, void *buf, + unsigned int size); static int parse_destroy(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -3378,7 +3393,31 @@ static int comp_set_raw_index(struct context *, const struct token *, .help = "index of raw_encap/raw_decap data", .next = NEXT(next_item), .call = parse_port, - } + }, + [ACTION_SET_META] = { + .name = "set_meta", + .help = "set metadata", + .priv = PRIV_ACTION(SET_META, + sizeof(struct rte_flow_action_set_meta)), + .next = NEXT(action_set_meta), + .call = parse_vc_action_set_meta, + }, + [ACTION_SET_META_DATA] = { + .name = "data", + .help = "metadata value", + .next = NEXT(action_set_meta, NEXT_ENTRY(UNSIGNED)), + .args = ARGS(ARGS_ENTRY_HTON + (struct rte_flow_action_set_meta, data)), + .call = parse_vc_conf, + }, + [ACTION_SET_META_MASK] = { + .name = "mask", + .help = "mask for metadata value", + .next = NEXT(action_set_meta, NEXT_ENTRY(UNSIGNED)), + .args = ARGS(ARGS_ENTRY_HTON + (struct rte_flow_action_set_meta, mask)), + .call = parse_vc_conf, + }, }; /** Remove and return last entry from argument stack. */ @@ -4818,6 +4857,22 @@ static int comp_set_raw_index(struct context *, const struct token *, return ret; } +static int +parse_vc_action_set_meta(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size) +{ + int ret; + + ret = parse_vc(ctx, token, str, len, buf, size); + if (ret < 0) + return ret; + ret = rte_flow_dynf_metadata_register(); + if (ret < 0) + return -1; + return len; +} + /** Parse tokens for destroy command. */ static int parse_destroy(struct context *ctx, const struct token *token, diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c index f20531d..56075b3 100644 --- a/app/test-pmd/util.c +++ b/app/test-pmd/util.c @@ -82,6 +82,11 @@ mb->vlan_tci, mb->vlan_tci_outer); else if (ol_flags & PKT_RX_VLAN) printf(" - VLAN tci=0x%x", mb->vlan_tci); + if (ol_flags & PKT_TX_METADATA) + printf(" - Tx metadata: 0x%x", mb->tx_metadata); + if (ol_flags & PKT_RX_DYNF_METADATA) + printf(" - Rx metadata: 0x%x", + *RTE_FLOW_DYNF_METADATA(mb)); if (mb->packet_type) { rte_get_ptype_name(mb->packet_type, buf, sizeof(buf)); printf(" - hw ptype: %s", buf); diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 159ce19..c943aca 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -658,6 +658,32 @@ the physical device, with virtual groups in the PMD or not at all. | ``mask`` | ``id`` | zeroed to match any value | +----------+----------+---------------------------+ +Item: ``META`` +^^^^^^^^^^^^^^^^^ + +Matches 32 bit metadata item set. 
+ +On egress, metadata can be set either by mbuf metadata field with +PKT_TX_METADATA flag or ``SET_META`` action. On ingress, ``SET_META`` +action sets metadata for a packet and the metadata will be reported via +``metadata`` dynamic field of ``rte_mbuf`` with PKT_RX_DYNF_METADATA flag. + +- Default ``mask`` matches the specified Rx metadata value. + +.. _table_rte_flow_item_meta: + +.. table:: META + + +----------+----------+---------------------------------------+ + | Field | Subfield | Value | + +==========+==========+=======================================+ + | ``spec`` | ``data`` | 32 bit metadata value | + +----------+----------+---------------------------------------+ + | ``last`` | ``data`` | upper range value | + +----------+----------+---------------------------------------+ + | ``mask`` | ``data`` | bit-mask applies to "spec" and "last" | + +----------+----------+---------------------------------------+ + Data matching item types ~~~~~~~~~~~~~~~~~~~~~~~~ @@ -1232,21 +1258,6 @@ Matches a PPPoE session protocol identifier. - ``proto_id``: PPP protocol identifier. - Default ``mask`` matches proto_id only. - -.. _table_rte_flow_item_meta: - -.. table:: META - - +----------+----------+---------------------------------------+ - | Field | Subfield | Value | - +==========+==========+=======================================+ - | ``spec`` | ``data`` | 32 bit metadata value | - +----------+--------------------------------------------------+ - | ``last`` | ``data`` | upper range value | - +----------+----------+---------------------------------------+ - | ``mask`` | ``data`` | bit-mask applies to "spec" and "last" | - +----------+----------+---------------------------------------+ - Item: ``NSH`` ^^^^^^^^^^^^^ @@ -2466,6 +2477,37 @@ Value to decrease TCP acknowledgment number by is a big-endian 32 bit integer. Using this action on non-matching traffic will result in undefined behavior. +Action: ``SET_META`` +^^^^^^^^^^^^^^^^^^^^^^^ + +Set metadata. Item ``META`` matches metadata. + +Metadata set by mbuf metadata field with PKT_TX_METADATA flag on egress will be +overridden by this action. On ingress, the metadata will be carried by +``metadata`` dynamic field of ``rte_mbuf`` which can be accessed by +``RTE_FLOW_DYNF_METADATA()``. PKT_RX_DYNF_METADATA flag will be set along +with the data. + +The mbuf dynamic field must be registered by calling +``rte_flow_dynf_metadata_register()`` prior to use ``SET_META`` action. + +Altering partial bits is supported with ``mask``. For bits which have never been +set, unpredictable value will be seen depending on driver implementation. For +loopback/hairpin packet, metadata set on Rx/Tx may or may not be propagated to +the other path depending on HW capability. + +.. _table_rte_flow_action_set_meta: + +.. table:: SET_META + + +----------+----------------------------+ + | Field | Value | + +==========+============================+ + | ``data`` | 32 bit metadata value | + +----------+----------------------------+ + | ``mask`` | bit-mask applies to "data" | + +----------+----------------------------+ + Negative types ~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst index f6e90cb..963c4f8 100644 --- a/doc/guides/rel_notes/release_19_11.rst +++ b/doc/guides/rel_notes/release_19_11.rst @@ -237,6 +237,14 @@ New Features On supported NICs, we can now setup haipin queue which will offload packets from the wire, backto the wire. 
+* **Extended metadata support in rte_flow.** + + Flow metadata is extended to both Rx and Tx. + + * Tx metadata can also be set by SET_META action of rte_flow. + * Rx metadata is delivered to host via a dynamic field of ``rte_mbuf`` with + PKT_RX_DYNF_METADATA. + Removed Items ------------- @@ -344,6 +352,11 @@ API Changes has been introduced in this release is used when used when all the packets enqueued in the tx adapter are destined for the same Ethernet port & Tx queue. +* metadata: RTE_FLOW_ITEM_TYPE_META data endianness altered to host one. + Due to the new dynamic metadata field in mbuf is host-endian either, there + is the minor compatibility issue for applications in case of 32-bit values + supported. + * sched: The pipe nodes configuration parameters such as number of pipes, pipe queue sizes, pipe profiles, etc., are moved from port level structure to subport level. This allows different subports of the same port to diff --git a/lib/librte_ethdev/rte_ethdev_version.map b/lib/librte_ethdev/rte_ethdev_version.map index 48b5389..e593f34 100644 --- a/lib/librte_ethdev/rte_ethdev_version.map +++ b/lib/librte_ethdev/rte_ethdev_version.map @@ -291,4 +291,7 @@ EXPERIMENTAL { rte_eth_rx_hairpin_queue_setup; rte_eth_tx_hairpin_queue_setup; rte_eth_dev_hairpin_capability_get; + rte_flow_dynf_metadata_offs; + rte_flow_dynf_metadata_mask; + rte_flow_dynf_metadata_register; }; diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c index ca0f680..b0490cd 100644 --- a/lib/librte_ethdev/rte_flow.c +++ b/lib/librte_ethdev/rte_flow.c @@ -12,10 +12,18 @@ #include #include #include +#include +#include #include "rte_ethdev.h" #include "rte_flow_driver.h" #include "rte_flow.h" +/* Mbuf dynamic field name for metadata. */ +int rte_flow_dynf_metadata_offs = -1; + +/* Mbuf dynamic field flag bit number for metadata. */ +uint64_t rte_flow_dynf_metadata_mask; + /** * Flow elements description tables. */ @@ -157,8 +165,40 @@ struct rte_flow_desc_data { MK_FLOW_ACTION(DEC_TCP_SEQ, sizeof(rte_be32_t)), MK_FLOW_ACTION(INC_TCP_ACK, sizeof(rte_be32_t)), MK_FLOW_ACTION(DEC_TCP_ACK, sizeof(rte_be32_t)), + MK_FLOW_ACTION(SET_META, sizeof(struct rte_flow_action_set_meta)), }; +int +rte_flow_dynf_metadata_register(void) +{ + int offset; + int flag; + + static const struct rte_mbuf_dynfield desc_offs = { + .name = RTE_MBUF_DYNFIELD_METADATA_NAME, + .size = sizeof(uint32_t), + .align = __alignof__(uint32_t), + }; + static const struct rte_mbuf_dynflag desc_flag = { + .name = RTE_MBUF_DYNFLAG_METADATA_NAME, + }; + + offset = rte_mbuf_dynfield_register(&desc_offs); + if (offset < 0) + goto error; + flag = rte_mbuf_dynflag_register(&desc_flag); + if (flag < 0) + goto error; + rte_flow_dynf_metadata_offs = offset; + rte_flow_dynf_metadata_mask = (1ULL << flag); + return 0; + +error: + rte_flow_dynf_metadata_offs = -1; + rte_flow_dynf_metadata_mask = 0ULL; + return -rte_errno; +} + static int flow_err(uint16_t port_id, int ret, struct rte_flow_error *error) { diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h index 4fee105..f6e050c 100644 --- a/lib/librte_ethdev/rte_flow.h +++ b/lib/librte_ethdev/rte_flow.h @@ -28,6 +28,8 @@ #include #include #include +#include +#include #ifdef __cplusplus extern "C" { @@ -418,7 +420,8 @@ enum rte_flow_item_type { /** * [META] * - * Matches a metadata value specified in mbuf metadata field. + * Matches a metadata value. + * * See struct rte_flow_item_meta. 
*/ RTE_FLOW_ITEM_TYPE_META, @@ -1263,18 +1266,23 @@ struct rte_flow_item_icmp6_nd_opt_tla_eth { #endif /** - * RTE_FLOW_ITEM_TYPE_META. + * RTE_FLOW_ITEM_TYPE_META * - * Matches a specified metadata value. + * Matches a specified metadata value. On egress, metadata can be set either by + * mbuf tx_metadata field with PKT_TX_METADATA flag or + * RTE_FLOW_ACTION_TYPE_SET_META. On ingress, RTE_FLOW_ACTION_TYPE_SET_META sets + * metadata for a packet and the metadata will be reported via mbuf metadata + * dynamic field with PKT_RX_DYNF_METADATA flag. The dynamic mbuf field must be + * registered in advance by rte_flow_dynf_metadata_register(). */ struct rte_flow_item_meta { - rte_be32_t data; + uint32_t data; }; /** Default mask for RTE_FLOW_ITEM_TYPE_META. */ #ifndef __cplusplus static const struct rte_flow_item_meta rte_flow_item_meta_mask = { - .data = RTE_BE32(UINT32_MAX), + .data = UINT32_MAX, }; #endif @@ -1942,6 +1950,13 @@ enum rte_flow_action_type { * undefined behavior. */ RTE_FLOW_ACTION_TYPE_DEC_TCP_ACK, + + /** + * Set metadata on ingress or egress path. + * + * See struct rte_flow_action_set_meta. + */ + RTE_FLOW_ACTION_TYPE_SET_META, }; /** @@ -2429,6 +2444,57 @@ struct rte_flow_action_set_mac { uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; }; +/** + * @warning + * @b EXPERIMENTAL: this structure may change without prior notice + * + * RTE_FLOW_ACTION_TYPE_SET_META + * + * Set metadata. Metadata set by mbuf tx_metadata field with + * PKT_TX_METADATA flag on egress will be overridden by this action. On + * ingress, the metadata will be carried by mbuf metadata dynamic field + * with PKT_RX_DYNF_METADATA flag if set. The dynamic mbuf field must be + * registered in advance by rte_flow_dynf_metadata_register(). + * + * Altering partial bits is supported with mask. For bits which have never + * been set, unpredictable value will be seen depending on driver + * implementation. For loopback/hairpin packet, metadata set on Rx/Tx may + * or may not be propagated to the other path depending on HW capability. + * + * RTE_FLOW_ITEM_TYPE_META matches metadata. + */ +struct rte_flow_action_set_meta { + uint32_t data; + uint32_t mask; +}; + +/* Mbuf dynamic field offset for metadata. */ +extern int rte_flow_dynf_metadata_offs; + +/* Mbuf dynamic field flag mask for metadata. */ +extern uint64_t rte_flow_dynf_metadata_mask; + +/* Mbuf dynamic field pointer for metadata. */ +#define RTE_FLOW_DYNF_METADATA(m) \ + RTE_MBUF_DYNFIELD((m), rte_flow_dynf_metadata_offs, uint32_t *) + +/* Mbuf dynamic flag for metadata. */ +#define PKT_RX_DYNF_METADATA (rte_flow_dynf_metadata_mask) + +__rte_experimental +static inline uint32_t +rte_flow_dynf_metadata_get(struct rte_mbuf *m) +{ + return *RTE_FLOW_DYNF_METADATA(m); +} + +__rte_experimental +static inline void +rte_flow_dynf_metadata_set(struct rte_mbuf *m, uint32_t v) +{ + *RTE_FLOW_DYNF_METADATA(m) = v; +} + /* * Definition of a single action. * @@ -2662,6 +2728,33 @@ enum rte_flow_conv_op { }; /** + * Check if mbuf dynamic field for metadata is registered. + * + * @return + * True if registered, false otherwise. + */ +__rte_experimental +static inline int +rte_flow_dynf_metadata_avail(void) +{ + return !!rte_flow_dynf_metadata_mask; +} + +/** + * Register mbuf dynamic field and flag for metadata. + * + * This function must be called prior to use SET_META action in order to + * register the dynamic mbuf field. Otherwise, the data cannot be delivered to + * application. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +__rte_experimental +int +rte_flow_dynf_metadata_register(void); + +/** * Check whether a flow rule can be created on a given port. * * The flow rule is validated for correctness and whether it could be accepted diff --git a/lib/librte_mbuf/rte_mbuf_dyn.h b/lib/librte_mbuf/rte_mbuf_dyn.h index 2e9d418..de651c1 100644 --- a/lib/librte_mbuf/rte_mbuf_dyn.h +++ b/lib/librte_mbuf/rte_mbuf_dyn.h @@ -234,6 +234,12 @@ int rte_mbuf_dynflag_lookup(const char *name, __rte_experimental void rte_mbuf_dyn_dump(FILE *out); -/* Placeholder for dynamic fields and flags declarations. */ +/* + * Placeholder for dynamic fields and flags declarations. + * This is centralizing point to gather all field names + * and parameters together. + */ +#define RTE_MBUF_DYNFIELD_METADATA_NAME "rte_flow_dynfield_metadata" +#define RTE_MBUF_DYNFLAG_METADATA_NAME "rte_flow_dynflag_metadata" #endif -- 1.8.3.1