From: Adrien Mazarguil
Subject: Re: [RFC] New packet type query API
Date: Mon, 5 Feb 2018 20:29:11 +0100
Message-ID: <20180205192911.GP4256@6wind.com>
References: <20180111160405.182159-1-qiming.yang@intel.com>
 <20180116155532.GH4256@6wind.com>
Cc: Andrew Rybchenko, Shahaf Shuler, "dev@dpdk.org", "Lu, Wenzhuo",
 "Wu, Jingjing"
To: "Yang, Qiming"

On Tue, Jan 23, 2018 at 02:46:19AM +0000, Yang, Qiming wrote:
> Sorry for replying so late. Answered inline.

Same for me, please see below.

> > > This flag will be configured in dev_configure(), and can be queried
> > > by the user through dev->data->dev_conf.rxmode.enable_ptype_direct.
> > > In the receive function, the driver will store the HW's packet type
> > > value in mbuf->packet_type if direct mode is enabled; if not,
> > > maintain the existing code.
> > >
> > > The driver maintains a new ptype_value and rte_flow item mapping
> > > table; when a profile is downloaded and new packet types are
> > > supported, the SW mapping table will be updated according to the
> > > ptype information analyzed from the profile.
> > >
> > > Future work
> > > ===========
> > > Support to configure the packet type direct mode per queue.
> >
> > I understand the motivation behind this proposal, however since new
> > ideas must be challenged, I have a few comments:
> >
> > - How about making packet type recognition an optional offload
> >   configurable per queue like any other (e.g. DEV_RX_OFFLOAD_PTYPE)?
> >   That way the extra processing cost could be avoided for applications
> >   that do not care.
>
> It's acceptable to me, but I'm afraid the name can lead to confusion by
> using OFFLOAD, because getting the packet type directly is not a HW
> offload, it's SW work. I don't know what 'extra processing cost' can be
> avoided? I think the two ways should be the same.

Offloads generally require a bit of processing from PMDs, if only to
convert raw values to a consistent format. Collecting and converting
scattered bits of data inside RX descriptors to expose them to
applications is an acceptable trade-off, and one only a PMD can do anyway.
It can be defined as an offload because doing the same purely in software
(packet parsing) would be significantly more expensive.

For the above scenario, unless I'm mistaken, HW performs most of the
packet processing to inform the PMD, otherwise there would be no such
thing as an "internal packet type format" to feed back to the PMD, right?
This, in my opinion, is enough to call it an offload.

> > - Depending on HW, packet type information inside RX descriptors may
> >   not necessarily fit 64-bit, or at least not without transformation.
> >   This transformation would still cause wasted cycles on the PMD side.
>
> Do you mean that transforming the packet type information in RX
> descriptors to 64 bits will cost many cycles?

Depending on one's definition of "many". Transferring some data from a
memory location (RX descriptor) to another (mbuf) is the main expense;
adding a couple of binary operations (shifts, XORs and whatnot) is nothing
in comparison, and in many cases it is done while performing the copy,
with the result of these operations being directly assigned to the
destination buffer.
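As a rough illustration (a hypothetical sketch only, not taken from any
existing PMD; the descriptor layout, bit positions and look-up table below
are invented), the per-packet work typically boils down to something like:

  #include <stdint.h>
  #include <rte_mbuf.h>

  /* Hypothetical RX descriptor layout, for illustration only. */
  struct hw_rx_desc {
          uint64_t addr;
          uint16_t length;
          uint32_t flags; /* assume bits 11:4 hold the HW ptype index */
  };

  /* Hypothetical table mapping HW ptype indices to RTE_PTYPE_* values,
   * filled by the PMD, e.g. when a profile is downloaded. */
  static uint32_t hw_ptype_table[256];

  static inline void
  fill_mbuf_from_desc(struct rte_mbuf *m, const struct hw_rx_desc *desc)
  {
          m->data_len = desc->length;
          m->pkt_len = desc->length;
          /* The shift/mask and table look-up piggyback on the copy the
           * PMD performs anyway; the result is assigned directly to the
           * destination field. */
          m->packet_type = hw_ptype_table[(desc->flags >> 4) & 0xff];
  }

The extra shift, mask and look-up amount to a handful of cycles next to
the stores into the mbuf itself.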
> > - In case enable_ptype_direct is enabled, the PMD may not waste CPU
> >   cycles but the subsequent look-up with the proposed API would
> >   translate to a higher cost on the application side. As a data plane
> >   API, how does this benefit applications that want to retrieve packet
> >   type information?
>
> And the flag stored in the mbuf now may not match other APIs' format
> requirements. They still need to translate the flag to an exact name or
> another format. And not all use cases need the packet type. So we think
> transferring the descriptor's packet type value to the mbuf is more
> useful and flexible. If an API always needs the ptype, using the existing
> mode (with direct mode off) is more suitable.

Extrapolating a bit, why stop at packet type? The whole mbuf could be
somehow used as an opaque storage back-end for RX descriptors, with all
fields interpreted on an as-needed basis as long as a known mbuf flag says
so. This would make RX super fast, assuming an application doesn't do
anything with such mbufs. Problem is, that's not the kind of use case
targeted by DPDK, which is why we have to agree on a consistent format
PMDs must conform to, and limit the proliferation of extra function calls
needed to interpret data structures. That software cost is to be absorbed
by PMDs. We can add as many toggles as needed for offloads (particularly
since some of them have a non-negligible software cost, e.g. TSO), as long
as the exposed data structures use a defined format, hence my suggestion
of adding a new toggle for packet type.

> > - Without a dedicated mbuf flag, an application cannot tell whether
> >   enclosed packet type data is in HW format. Even if present, if port
> >   information is discarded or becomes invalid (e.g. mbuf stored in an
> >   application queue for lengthy periods or passed as is to an unrelated
> >   application), there is no way to make sense of the data.
>
> Could you tell me which use case can lead to port information being
> discarded? We assume the mbuf is always in sync with the ptype
> configuration in the port information.

Hot-plug for one. The sudden loss of an ethdev, possibly followed by the
reuse of its port ID for a different device, may be an issue for any
application buffering traffic for any length of time, for instance TCP or
IPsec stacks. Port ID information tells an application where the mbuf
originally came from, and that's about it. It can't be relied on for mbuf
data post-processing.

> > In my opinion, mbufs should only contain data fields in a standardized
> > format. Managing packet types like an offload which can be toggled at
> > will seems to be the best compromise. Thoughts?
>
> Agreed, and we keep the existing mbuf flags, so most of the time mbufs
> use the standard format.

Great, looking forward to your next proposal then :)

-- 
Adrien Mazarguil
6WIND