* [RFC] New packet type query API
@ 2018-01-11 16:04 Qiming Yang
  2018-01-16 15:55 ` Adrien Mazarguil
  0 siblings, 1 reply; 8+ messages in thread
From: Qiming Yang @ 2018-01-11 16:04 UTC (permalink / raw)
  To: dev


This RFC contains a proposal to add a new packet type query API to DPDK. 

Motivation
==========
In the current DPDK implementation, whenever a packet is received, the driver looks up the packet type mapping table and writes the result into struct mbuf. The application only examines mbuf->packet_type when the user wants to show the packet type name. This wastes a lot of cycles, because the user often does not care about the packet type, yet the look-up always happens. Since the packet type name is not needed all the time, the look-up is not needed either. We therefore propose a flag: when it is enabled, the ptype value from the descriptor is passed through to mbuf->packet_type directly. A new API is also added to get the packet type name from that value; it is called only when the user wants to know what the packet type name is. The flag thus controls which value is written to the mbuf (the raw descriptor ptype value or the translated ptype value).

The current ptype representation is limited by its bit width (32 bits maximum, 28 bits already used). If we support more packet types in the future, for example next-generation NICs with flexible pipelines that introduce new packet types, the remaining bits may not be enough. So I propose to use rte_flow items to represent packet types; a sketch follows the list below. The advantages of rte_flow items are:
1. They are generic enough that every NIC can use them to describe their packets;
2. They are flexible: new items can be added without limit to support new packet types.
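
For illustration, here is a rough sketch of what such a representation could look like, assuming a packet type is simply an array of enum rte_flow_item_type values terminated by RTE_FLOW_ITEM_TYPE_END (the exact encoding is not defined by this RFC):

#include <rte_flow.h>

/* Hypothetical example only: an Ether/IPv4/UDP packet type expressed as a
 * list of rte_flow item types instead of a fixed-width ptype value. */
static const enum rte_flow_item_type example_udp_ptype[] = {
	RTE_FLOW_ITEM_TYPE_ETH,
	RTE_FLOW_ITEM_TYPE_IPV4,
	RTE_FLOW_ITEM_TYPE_UDP,
	RTE_FLOW_ITEM_TYPE_END,
};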

Proposed solution
================
The new API is used to query the exact packet type name for a packet type value taken from the descriptor.

/** Packet type name structure */
struct rte_eth_ptype_name {
	uint64_t ptype_value;
	const char *name; /**< Packet type name string. */
};

/**
 * Query the exact packet type names for an array of packet type values.
 *
 * @param port_id       Port identifier of the Ethernet device.
 * @param names         Pointer to the packet type name buffer.
 * @param size          The size of the ptype_values array (number of elements).
 * @param ptype_values  Pointer to a table of ptype values.
 * @return
 *   - On success, returns zero.
 *   - On failure, returns non-zero.
 */
int rte_eth_ptype_name_get_by_value(uint8_t port_id, struct rte_eth_ptype_name *names, int size, uint64_t *ptype_values);
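
For illustration, a rough usage sketch based only on the names proposed in this RFC (the enable_ptype_direct flag and the query function do not exist in DPDK yet, so this is an assumption of how they would be used):

#include <inttypes.h>
#include <stdio.h>
#include <string.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch: request direct mode when configuring the port. */
static int
example_configure(uint8_t port_id)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.enable_ptype_direct = 1; /* proposed flag, see below */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}

/* Sketch: translate the raw descriptor ptype value only when it is needed,
 * e.g. when dumping a received packet. */
static void
example_dump_ptype(uint8_t port_id, const struct rte_mbuf *mbuf)
{
	uint64_t value = mbuf->packet_type; /* raw HW value in direct mode */
	struct rte_eth_ptype_name name;

	if (rte_eth_ptype_name_get_by_value(port_id, &name, 1, &value) == 0)
		printf("ptype 0x%" PRIx64 " -> %s\n", value, name.name);
}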

Add a new flag 'enable_ptype_direct' to structure rte_eth_rxmode to enable packet type direct mode. In direct mode, the packet type value is passed straight from HW through the driver without looking up the ptype mapping table. Users can call rte_eth_ptype_name_get_by_value() to query what a packet type value means whenever they need it.

struct rte_eth_rxmode {
	… …
	uint16_t header_split : 1, /**< Header Split enable. */
		hw_ip_checksum   : 1, /**< IP/UDP/TCP checksum offload enable. */
		hw_vlan_filter   : 1, /**< VLAN filter enable. */
		hw_vlan_strip    : 1, /**< VLAN strip enable. */
		hw_vlan_extend   : 1, /**< Extended VLAN enable. */
		jumbo_frame      : 1, /**< Jumbo Frame Receipt enable. */
		hw_strip_crc     : 1, /**< Enable CRC stripping by hardware. */
		enable_scatter   : 1, /**< Enable scatter packets rx handler */
+		enable_ptype_direct : 1, /**< Enable packet type direct mode. */
		enable_lro       : 1; /**< Enable LRO */
};
 
This flag will be configured in dev_configure() and can be queried by the user through dev->data->dev_conf.rxmode.enable_ptype_direct. In the receive function, the driver stores the HW packet type value in mbuf->packet_type if direct mode is enabled; if not, the existing code path is kept.
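
For illustration, a minimal sketch of the receive-path branch this implies inside a PMD; the raw descriptor value and the mapping helper below are placeholders, since the actual names depend on the driver:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Placeholder for the driver's existing ptype mapping table look-up. */
static uint32_t
example_ptype_table_lookup(uint32_t ptype_raw)
{
	(void)ptype_raw;
	return RTE_PTYPE_UNKNOWN;
}

/* Sketch only: per-packet branch in a PMD RX function when the proposed
 * enable_ptype_direct flag is set. */
static inline void
example_fill_ptype(struct rte_mbuf *mbuf, uint32_t ptype_raw,
		   const struct rte_eth_rxmode *rxmode)
{
	if (rxmode->enable_ptype_direct)
		mbuf->packet_type = ptype_raw; /* pass the HW value through */
	else
		mbuf->packet_type = example_ptype_table_lookup(ptype_raw);
}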

The driver maintains a new mapping table between ptype values and rte_flow items. When a profile is downloaded and new packet types are supported, the SW mapping table is updated according to the ptype information extracted from the profile.

Future work
===========
Support configuring the packet type direct mode per queue.


* Re: [RFC] New packet type query API
  2018-01-11 16:04 [RFC] New packet type query API Qiming Yang
@ 2018-01-16 15:55 ` Adrien Mazarguil
  2018-01-17  8:08   ` Andrew Rybchenko
  2018-01-23  2:46   ` Yang, Qiming
  0 siblings, 2 replies; 8+ messages in thread
From: Adrien Mazarguil @ 2018-01-16 15:55 UTC (permalink / raw)
  To: Qiming Yang; +Cc: dev

On Fri, Jan 12, 2018 at 12:04:05AM +0800, Qiming Yang wrote:
> This RFC contains a proposal to add a new packet type query API to DPDK. 
> 
> Motivation
> ==========
> In the current DPDK implementation, whenever a packet is received, the driver looks up the packet type mapping table and writes the result into struct mbuf. The application only examines mbuf->packet_type when the user wants to show the packet type name. This wastes a lot of cycles, because the user often does not care about the packet type, yet the look-up always happens. Since the packet type name is not needed all the time, the look-up is not needed either. We therefore propose a flag: when it is enabled, the ptype value from the descriptor is passed through to mbuf->packet_type directly. A new API is also added to get the packet type name from that value; it is called only when the user wants to know what the packet type name is. The flag thus controls which value is written to the mbuf (the raw descriptor ptype value or the translated ptype value).
> 
> The current ptype representation is limited by its bit width (32 bits maximum, 28 bits already used). If we support more packet types in the future, for example next-generation NICs with flexible pipelines that introduce new packet types, the remaining bits may not be enough. So I propose to use rte_flow items to represent packet types. The advantages of rte_flow items are:
> 1. They are generic enough that every NIC can use them to describe their packets;
> 2. They are flexible: new items can be added without limit to support new packet types.
> 
> Proposed solution
> ================
> The new API is used to query the exact packet type name for a packet type value taken from the descriptor.
> 
> /** Packet type name structure */
> struct rte_eth_ptype_name {
> 	uint64_t ptype_value;
> 	const char *name; /**< Packet type name string. */
> };
> 
> /**
>  * Query the exact packet type names for an array of packet type values.
>  *
>  * @param port_id       Port identifier of the Ethernet device.
>  * @param names         Pointer to the packet type name buffer.
>  * @param size          The size of the ptype_values array (number of elements).
>  * @param ptype_values  Pointer to a table of ptype values.
>  * @return
>  *   - On success, returns zero.
>  *   - On failure, returns non-zero.
>  */
> int rte_eth_ptype_name_get_by_value(uint8_t port_id, struct rte_eth_ptype_name *names, int size, uint64_t *ptype_values);
> 
> Add a new flag 'enable_ptype_direct' to structure rte_eth_rxmode to enable packet type direct mode. In direct mode, the packet type value is passed straight from HW through the driver without looking up the ptype mapping table. Users can call rte_eth_ptype_name_get_by_value() to query what a packet type value means whenever they need it.
> 
> struct rte_eth_rxmode {
> 	… …
> 	uint16_t header_split : 1, /**< Header Split enable. */
> 		hw_ip_checksum   : 1, /**< IP/UDP/TCP checksum offload enable. */
> 		hw_vlan_filter   : 1, /**< VLAN filter enable. */
> 		hw_vlan_strip    : 1, /**< VLAN strip enable. */
> 		hw_vlan_extend   : 1, /**< Extended VLAN enable. */
> 		jumbo_frame      : 1, /**< Jumbo Frame Receipt enable. */
> 		hw_strip_crc     : 1, /**< Enable CRC stripping by hardware. */
> 		enable_scatter   : 1, /**< Enable scatter packets rx handler */
> +		enable_ptype_direct : 1, /**< Enable packet type direct mode. */
> 		enable_lro       : 1; /**< Enable LRO */
> };
>  
> This flag will be configured in dev_configure() and can be queried by the user through dev->data->dev_conf.rxmode.enable_ptype_direct. In the receive function, the driver stores the HW packet type value in mbuf->packet_type if direct mode is enabled; if not, the existing code path is kept.
> 
> The driver maintains a new mapping table between ptype values and rte_flow items. When a profile is downloaded and new packet types are supported, the SW mapping table is updated according to the ptype information extracted from the profile.
> 
> Future work
> ===========
> Support configuring the packet type direct mode per queue.

I understand the motivation behind this proposal, however since new ideas
must be challenged, I have a few comments:

- How about making packet type recognition an optional offload configurable
  per queue like any other (e.g. DEV_RX_OFFLOAD_PTYPE)? That way the extra
  processing cost could be avoided for applications that do not care.

- Depending on HW, packet type information inside RX descriptors may not
  necessarily fit 64-bit, or at least not without transformation. This
  transformation would still cause wasted cycles on the PMD side.

- In case enable_ptype_direct is enabled, the PMD may not waste CPU cycles
  but the subsequent look-up with the proposed API would translate to a
  higher cost on the application side. As a data plane API, how does this
  benefit applications that want to retrieve packet type information?

- Without a dedicated mbuf flag, an application cannot tell whether enclosed
  packet type data is in HW format. Even if present, if port information is
  discarded or becomes invalid (e.g. mbuf stored in an application queue for
  lengthy periods or passed as is to an unrelated application), there is no
  way to make sense of the data.

In my opinion, mbufs should only contain data fields in a standardized
format. Managing packet types like an offload which can be toggled at will
seems to be the best compromise. Thoughts?
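
For illustration, a minimal sketch of what this could look like on the application side with the per-queue offload mechanism; DEV_RX_OFFLOAD_PTYPE and its value are hypothetical here, not an existing DPDK define:

#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Hypothetical per-queue RX offload flag for packet type recognition. */
#define DEV_RX_OFFLOAD_PTYPE 0x80000000

/* Sketch: enable packet type recognition on one queue only. */
static int
example_setup_queue(uint16_t port_id, uint16_t queue_id, uint16_t nb_desc,
		    struct rte_mempool *mp)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxconf rxconf;

	rte_eth_dev_info_get(port_id, &dev_info);
	rxconf = dev_info.default_rxconf;
	rxconf.offloads |= DEV_RX_OFFLOAD_PTYPE; /* this queue wants ptypes */
	return rte_eth_rx_queue_setup(port_id, queue_id, nb_desc,
				      rte_eth_dev_socket_id(port_id),
				      &rxconf, mp);
}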

-- 
Adrien Mazarguil
6WIND


* Re: [RFC] New packet type query API
  2018-01-16 15:55 ` Adrien Mazarguil
@ 2018-01-17  8:08   ` Andrew Rybchenko
  2018-01-17 14:34     ` Shahaf Shuler
  2018-01-23  2:46     ` Yang, Qiming
  2018-01-23  2:46   ` Yang, Qiming
  1 sibling, 2 replies; 8+ messages in thread
From: Andrew Rybchenko @ 2018-01-17  8:08 UTC (permalink / raw)
  To: Adrien Mazarguil, Qiming Yang; +Cc: dev

On 01/16/2018 06:55 PM, Adrien Mazarguil wrote:
> I understand the motivation behind this proposal, however since new ideas
> must be challenged, I have a few comments:
>
> - How about making packet type recognition an optional offload configurable
>    per queue like any other (e.g. DEV_RX_OFFLOAD_PTYPE)? That way the extra
>    processing cost could be avoided for applications that do not care.
>
> - Depending on HW, packet type information inside RX descriptors may not
>    necessarily fit 64-bit, or at least not without transformation. This
>    transformation would still cause wasted cycles on the PMD side.
>
> - In case enable_ptype_direct is enabled, the PMD may not waste CPU cycles
>    but the subsequent look-up with the proposed API would translate to a
>    higher cost on the application side. As a data plane API, how does this
>    benefit applications that want to retrieve packet type information?
>
> - Without a dedicated mbuf flag, an application cannot tell whether enclosed
>    packet type data is in HW format. Even if present, if port information is
>    discarded or becomes invalid (e.g. mbuf stored in an application queue for
>    lengthy periods or passed as is to an unrelated application), there is no
>    way to make sense of the data.
>
> In my opinion, mbufs should only contain data fields in a standardized
> format. Managing packet types like an offload which can be toggled at will
> seems to be the best compromise. Thoughts?

+1


* Re: [RFC] New packet type query API
  2018-01-17  8:08   ` Andrew Rybchenko
@ 2018-01-17 14:34     ` Shahaf Shuler
  2018-01-23  2:48       ` Yang, Qiming
  2018-01-23  2:46     ` Yang, Qiming
  1 sibling, 1 reply; 8+ messages in thread
From: Shahaf Shuler @ 2018-01-17 14:34 UTC (permalink / raw)
  To: Andrew Rybchenko, Adrien Mazarguil, Qiming Yang; +Cc: dev

Wednesday, January 17, 2018 10:09 AM, Andrew Rybchenko:
> On 01/16/2018 06:55 PM, Adrien Mazarguil wrote:
> > I understand the motivation behind this proposal, however since new
> > ideas must be challenged, I have a few comments:
> >
> > - How about making packet type recognition an optional offload configurable
> >    per queue like any other (e.g. DEV_RX_OFFLOAD_PTYPE)? That way the extra
> >    processing cost could be avoided for applications that do not care.
> >
> > - Depending on HW, packet type information inside RX descriptors may not
> >    necessarily fit 64-bit, or at least not without transformation. This
> >    transformation would still cause wasted cycles on the PMD side.
> >
> > - In case enable_ptype_direct is enabled, the PMD may not waste CPU cycles
> >    but the subsequent look-up with the proposed API would translate to a
> >    higher cost on the application side. As a data plane API, how does this
> >    benefit applications that want to retrieve packet type information?
> >
> > - Without a dedicated mbuf flag, an application cannot tell whether enclosed
> >    packet type data is in HW format. Even if present, if port information is
> >    discarded or becomes invalid (e.g. mbuf stored in an application queue for
> >    lengthy periods or passed as is to an unrelated application), there is no
> >    way to make sense of the data.
> >
> > In my opinion, mbufs should only contain data fields in a standardized
> > format. Managing packet types like an offload which can be toggled at
> > will seems to be the best compromise. Thoughts?
> 
> +1

Yes.
PTYPE is yet another offload the PMD provides. It should be enabled/disabled in the same way all other offloads are.
Applications that are not interested in it and want the extra performance should not enable it.



* Re: [RFC] New packet type query API
  2018-01-16 15:55 ` Adrien Mazarguil
  2018-01-17  8:08   ` Andrew Rybchenko
@ 2018-01-23  2:46   ` Yang, Qiming
  2018-02-05 19:29     ` Adrien Mazarguil
  1 sibling, 1 reply; 8+ messages in thread
From: Yang, Qiming @ 2018-01-23  2:46 UTC (permalink / raw)
  To: Adrien Mazarguil, Andrew Rybchenko, Shahaf Shuler
  Cc: dev, Lu, Wenzhuo, Wu, Jingjing

Sorry for replying so late. Answered inline.

> > This flag will be configured in dev_configure() and can be queried by the user
> > through dev->data->dev_conf.rxmode.enable_ptype_direct. In the receive function,
> > the driver stores the HW packet type value in mbuf->packet_type if direct mode is
> > enabled; if not, the existing code path is kept.
> >
> > The driver maintains a new mapping table between ptype values and rte_flow items.
> > When a profile is downloaded and new packet types are supported, the SW mapping
> > table is updated according to the ptype information extracted from the profile.
> >
> > Future work
> > ===========
> > Support configuring the packet type direct mode per queue.
> 
> I understand the motivation behind this proposal, however since new ideas must
> be challenged, I have a few comments:
> 
> - How about making packet type recognition an optional offload configurable
>   per queue like any other (e.g. DEV_RX_OFFLOAD_PTYPE)? That way the extra
>   processing cost could be avoided for applications that do not care.
> 
It's acceptable to me, but I'm afraid using OFFLOAD in the name could be confusing,
because getting the packet type directly is not a HW offload, it's SW work.
I also don't see which 'extra processing cost' can be avoided; I think the two approaches
should cost the same.

> - Depending on HW, packet type information inside RX descriptors may not
>   necessarily fit 64-bit, or at least not without transformation. This
>   transformation would still cause wasted cycles on the PMD side.
> 
Do you mean that transforming the packet type information in RX descriptors to 64 bits
will cost many cycles?
	
> - In case enable_ptype_direct is enabled, the PMD may not waste CPU cycles
>   but the subsequent look-up with the proposed API would translate to a
>   higher cost on the application side. As a data plane API, how does this
>   benefit applications that want to retrieve packet type information?
> 
The value stored in the mbuf today may not match other APIs' format requirements either;
applications still need to translate it to an exact name or some other format. And not all
use cases need the packet type. So we think passing the descriptor's packet type value
through to the mbuf is more useful and flexible. If an application always needs the ptype,
the existing mode (with direct mode off) is more suitable.

> - Without a dedicated mbuf flag, an application cannot tell whether enclosed
>   packet type data is in HW format. Even if present, if port information is
>   discarded or becomes invalid (e.g. mbuf stored in an application queue for
>   lengthy periods or passed as is to an unrelated application), there is no
>   way to make sense of the data.
> 
Could you tell me in which use case the port information can be discarded?
We assume the mbuf is always in sync with the port's ptype configuration.

> In my opinion, mbufs should only contain data fields in a standardized format.
> Managing packet types like an offload which can be toggled at will seems to be
> the best compromise. Thoughts?
> 
Agreed, and we keep the existing mbuf flags, so most of the time mbufs use the
standard format.

> --
> Adrien Mazarguil
> 6WIND


* Re: [RFC] New packet type query API
  2018-01-17  8:08   ` Andrew Rybchenko
  2018-01-17 14:34     ` Shahaf Shuler
@ 2018-01-23  2:46     ` Yang, Qiming
  1 sibling, 0 replies; 8+ messages in thread
From: Yang, Qiming @ 2018-01-23  2:46 UTC (permalink / raw)
  To: Andrew Rybchenko, Adrien Mazarguil; +Cc: dev

Answered in Adrien's email.

From: Andrew Rybchenko [mailto:arybchenko@solarflare.com]
Sent: Wednesday, January 17, 2018 4:09 PM
To: Adrien Mazarguil <adrien.mazarguil@6wind.com>; Yang, Qiming <qiming.yang@intel.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [RFC] New packet type query API

On 01/16/2018 06:55 PM, Adrien Mazarguil wrote:
> I understand the motivation behind this proposal, however since new ideas
> must be challenged, I have a few comments:
>
> - How about making packet type recognition an optional offload configurable
>   per queue like any other (e.g. DEV_RX_OFFLOAD_PTYPE)? That way the extra
>   processing cost could be avoided for applications that do not care.
>
> - Depending on HW, packet type information inside RX descriptors may not
>   necessarily fit 64-bit, or at least not without transformation. This
>   transformation would still cause wasted cycles on the PMD side.
>
> - In case enable_ptype_direct is enabled, the PMD may not waste CPU cycles
>   but the subsequent look-up with the proposed API would translate to a
>   higher cost on the application side. As a data plane API, how does this
>   benefit applications that want to retrieve packet type information?
>
> - Without a dedicated mbuf flag, an application cannot tell whether enclosed
>   packet type data is in HW format. Even if present, if port information is
>   discarded or becomes invalid (e.g. mbuf stored in an application queue for
>   lengthy periods or passed as is to an unrelated application), there is no
>   way to make sense of the data.
>
> In my opinion, mbufs should only contain data fields in a standardized
> format. Managing packet types like an offload which can be toggled at will
> seems to be the best compromise. Thoughts?

+1


* Re: [RFC] New packet type query API
  2018-01-17 14:34     ` Shahaf Shuler
@ 2018-01-23  2:48       ` Yang, Qiming
  0 siblings, 0 replies; 8+ messages in thread
From: Yang, Qiming @ 2018-01-23  2:48 UTC (permalink / raw)
  To: Shahaf Shuler, Andrew Rybchenko, Adrien Mazarguil; +Cc: dev

> -----Original Message-----
> From: Shahaf Shuler [mailto:shahafs@mellanox.com]
> Sent: Wednesday, January 17, 2018 10:34 PM
> To: Andrew Rybchenko <arybchenko@solarflare.com>; Adrien Mazarguil
> <adrien.mazarguil@6wind.com>; Yang, Qiming <qiming.yang@intel.com>
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [RFC] New packet type query API
> 
> Wednesday, January 17, 2018 10:09 AM, Andrew Rybchenko:
> > On 01/16/2018 06:55 PM, Adrien Mazarguil wrote:
> > > I understand the motivation behind this proposal, however since new
> > > ideas must be challenged, I have a few comments:
> > >
> > > - How about making packet type recognition an optional offload
> > configurable
> > >    per queue like any other (e.g. DEV_RX_OFFLOAD_PTYPE)? That way
> > > the
> > extra
> > >    processing cost could be avoided for applications that do not care.
> > >
> > > - Depending on HW, packet type information inside RX descriptors may not
> > >    necessarily fit 64-bit, or at least not without transformation. This
> > >    transformation would still cause wasted cycles on the PMD side.
> > >
> > > - In case enable_ptype_direct is enabled, the PMD may not waste CPU cycles
> > >    but the subsequent look-up with the proposed API would translate to a
> > >    higher cost on the application side. As a data plane API, how does this
> > >    benefit applications that want to retrieve packet type information?
> > >
> > > - Without a dedicated mbuf flag, an application cannot tell whether enclosed
> > >    packet type data is in HW format. Even if present, if port information is
> > >    discarded or becomes invalid (e.g. mbuf stored in an application queue for
> > >    lengthy periods or passed as is to an unrelated application), there is no
> > >    way to make sense of the data.
> > >
> > > In my opinion, mbufs should only contain data fields in a
> > > standardized format. Managing packet types like an offload which can
> > > be toggled at will seems to be the best compromise. Thoughts?
> >
> > +1
> 
> Yes.
> PTYPE is yet another offload the PMD provides. It should be enabled/disabled in
> the same way all other offloads are.
> Applications that are not interested in it and want the extra performance should
> not enable it.

Agreed, thank you for your advice, and comments in Adrien's email. 


* Re: [RFC] New packet type query API
  2018-01-23  2:46   ` Yang, Qiming
@ 2018-02-05 19:29     ` Adrien Mazarguil
  0 siblings, 0 replies; 8+ messages in thread
From: Adrien Mazarguil @ 2018-02-05 19:29 UTC (permalink / raw)
  To: Yang, Qiming
  Cc: Andrew Rybchenko, Shahaf Shuler, dev, Lu, Wenzhuo, Wu, Jingjing


On Tue, Jan 23, 2018 at 02:46:19AM +0000, Yang, Qiming wrote:
> Sorry for reply so late. Answered in line.

Same for me, please see below.

> > > This flag will be configured in dev_configure() and can be queried by the user
> > > through dev->data->dev_conf.rxmode.enable_ptype_direct. In the receive function,
> > > the driver stores the HW packet type value in mbuf->packet_type if direct mode is
> > > enabled; if not, the existing code path is kept.
> > >
> > > The driver maintains a new mapping table between ptype values and rte_flow items.
> > > When a profile is downloaded and new packet types are supported, the SW mapping
> > > table is updated according to the ptype information extracted from the profile.
> > >
> > > Future work
> > > ===========
> > > Support configuring the packet type direct mode per queue.
> > 
> > I understand the motivation behind this proposal, however since new ideas must
> > be challenged, I have a few comments:
> > 
> > - How about making packet type recognition an optional offload configurable
> >   per queue like any other (e.g. DEV_RX_OFFLOAD_PTYPE)? That way the extra
> >   processing cost could be avoided for applications that do not care.
> > 
> It's acceptable to me, but I'm afraid using OFFLOAD in the name could be confusing,
> because getting the packet type directly is not a HW offload, it's SW work.
> I also don't see which 'extra processing cost' can be avoided; I think the two
> approaches should cost the same.

Offloads generally require a bit of processing from PMDs, if only to convert
raw values to a consistent format. Collecting and converting scattered bits
of data inside RX descriptors to expose them to applications is an
acceptable trade-off only a PMD can do anyway. It can be defined as an
offload because doing the same purely in software (packet parsing) would be
significantly more expensive on the software side.

For the above scenario, unless I'm mistaken, HW performs most of the packet
processing to inform the PMD, otherwise there would be no such thing as an
"internal packet type format" to feed back to the PMD right? This, in my
opinion, is enough to call it an offload.

> > - Depending on HW, packet type information inside RX descriptors may not
> >   necessarily fit 64-bit, or at least not without transformation. This
> >   transformation would still cause wasted cycles on the PMD side.
> > 
> Do you mean that transforming the packet type information in RX descriptors to
> 64 bits will cost many cycles?

Depending on one's definition of "many". Transferring some data from a
memory location (RX descriptor) to another (mbuf) is the main expense,
adding a couple of binary operations (shifts, XORs and whatnot) is nothing
in comparison and in many cases is done while performing the copy, with
the result of these operations being directly assigned to the destination
buffer.
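
For illustration, a toy sketch of the kind of conversion meant here; the bit positions and the table contents are invented for the example:

#include <stdint.h>
#include <rte_mbuf.h>

/* Toy example: a ptype index is extracted from a descriptor qword with a
 * shift and a mask while the data is being copied, and a small table resolves
 * it to the standard RTE_PTYPE_* value written into the mbuf. */
static const uint32_t example_ptype_table[256] = {
	[0] = RTE_PTYPE_UNKNOWN,
	[1] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
	/* ... */
};

static inline void
example_desc_to_mbuf(struct rte_mbuf *mbuf, uint64_t desc_qword)
{
	mbuf->packet_type = example_ptype_table[(desc_qword >> 30) & 0xFF];
}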

> > - In case enable_ptype_direct is enabled, the PMD may not waste CPU cycles
> >   but the subsequent look-up with the proposed API would translate to a
> >   higher cost on the application side. As a data plane API, how does this
> >   benefit applications that want to retrieve packet type information?
> > 
> The value stored in the mbuf today may not match other APIs' format requirements
> either; applications still need to translate it to an exact name or some other format.
> And not all use cases need the packet type. So we think passing the descriptor's
> packet type value through to the mbuf is more useful and flexible. If an application
> always needs the ptype, the existing mode (with direct mode off) is more suitable.

Extrapolating a bit, why stop at packet type. The whole mbuf could be
somehow used as an opaque storage back-end for RX descriptors, with all
fields interpreted on a needed basis as long as a known mbuf flag says so.

This would make RX super fast assuming an application doesn't do anything
with such mbufs. Problem is, it's not the kind of use case targeted by DPDK,
that's why we have to agree on a consistent format PMDs must conform and
limit the proliferation of extra function calls needed to interpret data
structures. That software cost is to be absorbed by PMDs.

We can add as many toggles as needed for offloads (particularly since some
of them have a non-negligible software cost, e.g. TSO), as long as the
exposed data structures use a defined format, hence my suggestion of adding
a new toggle for packet type.

> > - Without a dedicated mbuf flag, an application cannot tell whether enclosed
> >   packet type data is in HW format. Even if present, if port information is
> >   discarded or becomes invalid (e.g. mbuf stored in an application queue for
> >   lengthy periods or passed as is to an unrelated application), there is no
> >   way to make sense of the data.
> > 
> Could you tell me in which use case the port information can be discarded?
> We assume the mbuf is always in sync with the port's ptype configuration.

Hot-plug for one. The sudden loss of an ethdev possibly followed by the
subsequent reuse of its port ID for a different device may be an issue for
any application buffering traffic for any length of time, for instance TCP
or IPsec stacks.

Port ID information tells an application where the mbuf originally came from
and that's about it. It can't be relied on for mbuf data post-processing.

> > In my opinion, mbufs should only contain data fields in a standardized format.
> > Managing packet types like an offload which can be toggled at will seems to be
> > the best compromise. Thoughts?
> > 
> Agreed, and we keep the existing mbuf flags, so most of the time mbufs use the
> standard format.

Great, looking forward to your next proposal then :)

-- 
Adrien Mazarguil
6WIND


Thread overview: 8+ messages
2018-01-11 16:04 [RFC] New packet type query API Qiming Yang
2018-01-16 15:55 ` Adrien Mazarguil
2018-01-17  8:08   ` Andrew Rybchenko
2018-01-17 14:34     ` Shahaf Shuler
2018-01-23  2:48       ` Yang, Qiming
2018-01-23  2:46     ` Yang, Qiming
2018-01-23  2:46   ` Yang, Qiming
2018-02-05 19:29     ` Adrien Mazarguil
