From: Kevin Traynor
Subject: Re: [PATCH 01/22] ethdev: introduce generic flow API
Date: Wed, 14 Dec 2016 16:11:15 +0000
To: Adrien Mazarguil
Cc: dev@dpdk.org, Thomas Monjalon, Pablo de Lara, Olivier Matz,
 sugesh.chandran@intel.com
In-Reply-To: <20161214135423.GZ10340@6wind.com>

On 12/14/2016 01:54 PM, Adrien Mazarguil wrote:
>>
>>>>>>> + * @param[out] error
>>>>>>> + *   Perform verbose error reporting if not NULL.
>>>>>>> + *
>>>>>>> + * @return
>>>>>>> + *   0 on success, a negative errno value otherwise and rte_errno is set.
>>>>>>> + */
>>>>>>> +int
>>>>>>> +rte_flow_query(uint8_t port_id,
>>>>>>> +               struct rte_flow *flow,
>>>>>>> +               enum rte_flow_action_type action,
>>>>>>> +               void *data,
>>>>>>> +               struct rte_flow_error *error);
>>>>>>> +
>>>>>>> +#ifdef __cplusplus
>>>>>>> +}
>>>>>>> +#endif
>>>>>>
>>>>>> I don't see a way to dump all the rules for a port out. I think this is
>>>>>> necessary for debugging. You could have a look through dpif.h in OVS
>>>>>> and see how dpif_flow_dump_next() is used, it might be a good reference.
>>>>>
>>>>> DPDK does not maintain flow rules and, depending on hardware capabilities
>>>>> and level of compliance, PMDs do not necessarily do it either, particularly
>>>>> since it requires space and applications probably have a better method to
>>>>> store these pointers for their own needs.
>>>>
>>>> understood
>>>>
>>>>>
>>>>> What you see here is only a PMD interface. Depending on application needs,
>>>>> generic helper functions built on top of these may be added to manage flow
>>>>> rules in the future.
>>>>
>>>> I'm thinking of the case where something goes wrong and I want to get a
>>>> dump of all the flow rules from hardware, not query the rules I think I
>>>> have. I don't see a way to do it or something to build a helper on top of?
>>>
>>> Generic helper functions would exist on top of this API and would likely
>>> maintain a list of flow rules themselves. The dump in that case would be
>>> entirely implemented in software. I think that recovering flow rules from HW
>>> may be complicated in many cases (even without taking storage allocation and
>>> rule conversion issues into account), therefore if there is really a need
>>> for it, we could perhaps add a dump() function that PMDs are free to
>>> implement later.
>>>
>>
>> ok. Maybe there are some more generic stats that could be read from the
>> hardware that would help debugging and would suffice, like total flow
>> rule hits/misses (i.e. not on a per flow rule basis).
>>
>> You can get this from the software flow caches and it's widely used for
>> debugging. e.g.
>>
>> pmd thread numa_id 0 core_id 3:
>>         emc hits:0
>>         megaflow hits:0
>>         avg. subtable lookups per hit:0.00
>>         miss:0
>>
>
> Perhaps a rule such as the following could do the trick:
>
>  group: 42 (or priority 42)
>  pattern: void
>  actions: count / passthru
>
> Assuming useful flow rules are defined with higher priorities (using a lower
> group ID or priority level) and provide a terminating action, this one would
> count all packets that were not caught by them.
>
> That is one example to illustrate how "global" counters can be requested by
> applications.
>
> Otherwise you could just make sure all rules contain mark / flag actions, in
> which case mbufs would tell directly if they went through them or need
> additional SW processing.
>
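Just to check my understanding, that catch-all rule would look something
like this with the structures from this series (a rough sketch only; I'm
going by the names in the patch as posted, so details may not be exact):

#include <rte_flow.h>

/* Lowest-priority catch-all rule that counts packets no terminating
 * rule matched. Group/priority choice and field names follow my reading
 * of this series and may differ in the final API. */
static struct rte_flow *
create_miss_counter(uint8_t port_id, struct rte_flow_error *error)
{
        struct rte_flow_attr attr = {
                .group = 42, /* or a suitably low priority level */
                .ingress = 1,
        };
        const struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_VOID }, /* match anything */
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        const struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_COUNT },
                { .type = RTE_FLOW_ACTION_TYPE_PASSTHRU }, /* non-terminating */
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, error);
}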
ok, sounds like there are some options at least to work with on this,
which is good. thanks.
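For completeness, I'd then expect reading the "miss" total back through
the rte_flow_query() prototype above to look roughly like this (assuming
the rte_flow_query_count structure as I read it in this series; again a
sketch, not tested):

#include <inttypes.h>
#include <stdio.h>
#include <rte_flow.h>

/* Read (and reset) the counter attached to the catch-all rule created
 * above; hits on that rule are packets missed by all other rules. */
static void
print_miss_count(uint8_t port_id, struct rte_flow *flow)
{
        struct rte_flow_query_count count = { .reset = 1 };
        struct rte_flow_error error;

        if (rte_flow_query(port_id, flow, RTE_FLOW_ACTION_TYPE_COUNT,
                           &count, &error) == 0)
                printf("flow rule misses: %" PRIu64 " packets\n",
                       count.hits);
}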