From: "Yang, Qiming" <qiming.yang@intel.com>
To: Adrien Mazarguil <adrien.mazarguil@6wind.com>,
	Andrew Rybchenko <arybchenko@solarflare.com>,
	Shahaf Shuler <shahafs@mellanox.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"Lu, Wenzhuo" <wenzhuo.lu@intel.com>,
	"Wu, Jingjing" <jingjing.wu@intel.com>
Subject: Re: [RFC] New packet type query API
Date: Tue, 23 Jan 2018 02:46:19 +0000
Message-ID: <F5DF4F0E3AFEF648ADC1C3C33AD4DBF16F828A84@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <20180116155532.GH4256@6wind.com>

Sorry for replying so late. Answers inline.

> > This flag will be configured in dev_configure() and can be queried by the
> > user through dev->data->dev_conf.rxmode.enable_ptype_direct. In the receive
> > function, the driver will store the HW's packet type value in
> > mbuf->packet_type if direct mode is enabled; if not, the existing code path
> > is kept.
> >
> > The driver maintains a new mapping table between ptype values and rte_flow
> > items; when a downloaded profile supports new packet types, the SW mapping
> > table will be updated according to the ptype information parsed from the
> > profile.
> >
> > Future work
> > ===========
> > Support configuring the packet type direct mode per queue.
> 
> I understand the motivation behind this proposal, however since new ideas must
> be challenged, I have a few comments:
> 
> - How about making packet type recognition an optional offload configurable
>   per queue like any other (e.g. DEV_RX_OFFLOAD_PTYPE)? That way the extra
>   processing cost could be avoided for applications that do not care.
> 
That's acceptable to me, but I'm afraid the OFFLOAD name could cause confusion,
because fetching the packet type directly is not a HW offload; it is SW work.
Also, I don't see which 'extra processing cost' would be avoided; I think the
two approaches should cost the same.
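
For illustration, here is how I read the offload approach; DEV_RX_OFFLOAD_PTYPE
is the placeholder name from your mail, not an existing DPDK flag:

  #include <string.h>
  #include <rte_ethdev.h>

  /* Hypothetical sketch: request SW ptype translation as a regular per-port
   * RX offload instead of a new rxmode field. */
  static int
  configure_with_ptype_offload(uint16_t port_id, uint16_t nb_rxq,
                               uint16_t nb_txq)
  {
          struct rte_eth_conf conf;

          memset(&conf, 0, sizeof(conf));
          conf.rxmode.offloads |= DEV_RX_OFFLOAD_PTYPE; /* placeholder flag */
          return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
  }

Applications that do not care would leave the flag cleared, and the PMD would
skip the translation entirely.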

> - Depending on HW, packet type information inside RX descriptors may not
>   necessarily fit 64-bit, or at least not without transformation. This
>   transformation would still cause wasted cycles on the PMD side.
> 
Do you mean that transforming the packet type information in the RX descriptors
into a 64-bit value would cost many cycles?
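
If I understand correctly, this is the usual per-packet table translation,
roughly like the sketch below (table entries invented for illustration):

  #include <stdint.h>
  #include <rte_mbuf_ptype.h>

  /* The PMD maps a small HW ptype index from the RX descriptor to an
   * RTE_PTYPE_* combination through a static table, costing one table
   * load per packet. */
  static const uint32_t hw_ptype_table[256] = {
          [0x18] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
          [0x1a] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
  };

  static inline uint32_t
  translate_hw_ptype(uint8_t hw_idx)
  {
          return hw_ptype_table[hw_idx];
  }

In direct mode this load would be skipped on the hot path and deferred to the
query API.
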
> - In case enable_ptype_direct is enabled, the PMD may not waste CPU cycles
>   but the subsequent look-up with the proposed API would translate to a
>   higher cost on the application side. As a data plane API, how does this
>   benefit applications that want to retrieve packet type information?
> 
The flag stored in the mbuf now may not match other APIs' format requirements;
applications would still need to translate it into an exact name or another
format. And not all use cases need the packet type, so we think transferring
the descriptor's packet type value to the mbuf is more useful and flexible. If
an application always needs the ptype, using the existing mode (with direct
mode off) is more suitable.
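
Roughly, the application-side flow we have in mind looks like this;
rte_eth_ptype_translate() is a placeholder name standing in for the query API
proposed in this RFC:

  #include <stdint.h>
  #include <rte_mbuf.h>

  /* Placeholder sketch: translate the raw HW value only when the
   * application actually needs the packet type. direct_mode mirrors
   * dev_conf.rxmode.enable_ptype_direct. */
  static inline uint32_t
  app_get_ptype(const struct rte_mbuf *m, int direct_mode)
  {
          if (direct_mode)
                  return rte_eth_ptype_translate(m->port, m->packet_type);
          return m->packet_type; /* already standard RTE_PTYPE_* */
  }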

> - Without a dedicated mbuf flag, an application cannot tell whether enclosed
>   packet type data is in HW format. Even if present, if port information is
>   discarded or becomes invalid (e.g. mbuf stored in an application queue for
>   lengthy periods or passed as is to an unrelated application), there is no
>   way to make sense of the data.
> 
Could you tell me in which use case the port information can be discarded?
We assume the mbuf always stays in sync with the ptype configuration in the
port information.
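
If I follow, the dedicated flag you describe would let any consumer check the
format before using the field; PKT_RX_PTYPE_RAW below is an invented ol_flags
bit, not an existing DPDK definition:

  /* Without such a marker, an mbuf handed to unrelated code gives no way to
   * tell whether packet_type holds raw HW data or RTE_PTYPE_* values. */
  if (m->ol_flags & PKT_RX_PTYPE_RAW) /* invented flag */
          ptype = rte_eth_ptype_translate(m->port, m->packet_type);
  else
          ptype = m->packet_type;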

> In my opinion, mbufs should only contain data fields in a standardized format.
> Managing packet types like an offload which can be toggled at will seems to be
> the best compromise. Thoughts?
> 
Agreed. And we keep the existing mbuf flags, so most of the time mbufs use the
standard format.

> --
> Adrien Mazarguil
> 6WIND
