From: Jerin Jacob <jerin.jacob@caviumnetworks.com>
To: "Van Haaren, Harry" <harry.van.haaren@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"Richardson, Bruce" <bruce.richardson@intel.com>
Subject: Re: [PATCH v5 06/20] event/sw: add support for event queues
Date: Tue, 28 Mar 2017 16:13:02 +0530	[thread overview]
Message-ID: <20170328104301.ysxnlgyxvnqfv674@localhost.localdomain> (raw)
In-Reply-To: <E923DB57A917B54B9182A2E928D00FA612A1FEB6@IRSMSX102.ger.corp.intel.com>

On Mon, Mar 27, 2017 at 03:17:48PM +0000, Van Haaren, Harry wrote:
> > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > Sent: Monday, March 27, 2017 8:45 AM
> > To: Van Haaren, Harry <harry.van.haaren@intel.com>
> > Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>
> > Subject: Re: [PATCH v5 06/20] event/sw: add support for event queues
> > 
> > On Fri, Mar 24, 2017 at 04:53:01PM +0000, Harry van Haaren wrote:
> > > From: Bruce Richardson <bruce.richardson@intel.com>
> > >
> > > Add in the data structures for the event queues, and the eventdev
> > > functions to create and destroy those queues.
> > >
> > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
> > > ---
> 
> <snip>
> 
> > > +static int32_t
> > > +qid_init(struct sw_evdev *sw, unsigned int idx, int type,
> > > +		const struct rte_event_queue_conf *queue_conf)
> > > +{
> > > +	unsigned int i;
> > > +	int dev_id = sw->data->dev_id;
> > > +	int socket_id = sw->data->socket_id;
> > > +	char buf[IQ_RING_NAMESIZE];
> > > +	struct sw_qid *qid = &sw->qids[idx];
> > > +
> > > +	for (i = 0; i < SW_IQS_MAX; i++) {
> > 
> > Just for my understanding, are 4 (SW_IQS_MAX) IQ rings created to address
> > a different priority for each enqueue operation? What is the significance
> > of 4 (SW_IQS_MAX) here?
> 
> Yes, each IQ represents a priority level. There is a compile-time define (SW_IQS_MAX) which allows setting the number of internal queues at each queue stage. The default number of priorities is currently 4.

OK. The reason I asked is that, if I understood it correctly, PRIO_TO_IQ
does not normalize the priority correctly when SW_IQS_MAX == 4.

I thought the following mapping would be the correct normalization for
SW_IQS_MAX == 4.

What do you think?

priority      iq
  0 - 63   -> 0
 64 - 127  -> 1
128 - 191  -> 2
192 - 255  -> 3
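
The following macro would give that mapping (a hypothetical sketch from my
side, assuming SW_IQS_MAX divides the 256 priority levels evenly; not a
quote of the PMD code):

/* Map the 8-bit event priority (0..255) onto SW_IQS_MAX internal queues */
#define PRIO_TO_IQ(prio) ((prio) / (256 / SW_IQS_MAX))

With SW_IQS_MAX == 4 this is equivalent to ((prio) >> 6).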

Snippet from header file:
uint8_t priority;
/**< Event priority relative to other events in the
 * event queue. The requested priority should be in the
 * range of [RTE_EVENT_DEV_PRIORITY_HIGHEST,
 * RTE_EVENT_DEV_PRIORITY_LOWEST].
 * The implementation shall normalize the requested
 * priority to supported priority value.
 * Valid when the device has
 * RTE_EVENT_DEV_CAP_EVENT_QOS capability.
 */

> 
> 
> > > +static int
> > > +sw_queue_setup(struct rte_eventdev *dev, uint8_t queue_id,
> > > +		const struct rte_event_queue_conf *conf)
> > > +{
> > > +	int type;
> > > +
> > > +	switch (conf->event_queue_cfg) {
> > > +	case RTE_EVENT_QUEUE_CFG_SINGLE_LINK:
> > > +		type = SW_SCHED_TYPE_DIRECT;
> > > +		break;
> > 
> > event_queue_cfg is a bitmap. It is valid to have
> > RTE_EVENT_QUEUE_CFG_SINGLE_LINK | RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY.
> > i.e. an atomic schedule type queue that has only one port linked to
> > dequeue the events.
> > So in the above context, the switch case is not correct, i.e. it
> > falls through to the default case. Right?
> > Is this intentional?
> > 
> > If I understand it correctly, based on the use case (queue-based event
> > pipelining) you have shared in the documentation patch,
> > RTE_EVENT_QUEUE_CFG_SINGLE_LINK is used for the last stage (last
> > queue). One option is: if the SW PMD cannot support the
> > RTE_EVENT_QUEUE_CFG_SINGLE_LINK | RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY mode,
> > then even though the application sets RTE_EVENT_QUEUE_CFG_SINGLE_LINK |
> > RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY, the driver can ignore
> > RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY. But I am not sure about the case where
> > the application sets RTE_EVENT_QUEUE_CFG_SINGLE_LINK in the middle of
> > the pipeline.
> > 
> > Thoughts?
> 
> 
> I don't like the idea of the SW PMD ignoring flags for queues - the PMD has no idea if the queue is the final or middle stage of the pipeline, as it's the application's usage which defines that.
> 
> 
> Does anybody have a need for a queue to be both Atomic *and* Single-link?  I understand the current API doesn't prohibit it, but I don't see the actual use-case in which that may be useful. Atomic implies load-balancing is occurring, single link implies there is only one consuming core. Those seem like opposites to me?
> 
> Unless anybody sees value in queues having both, I suggest we update the documentation to specify that a queue is either load-balanced or single-link, and that setting both flags will result in -ENOTSUP being returned. (This check can be added to the eventdev layer if consistent for all PMDs).

If I understand it correctly (based on the previous discussions), the HW
implementations (Cavium or NXP) do not need to use the
RTE_EVENT_QUEUE_CFG_* flags for their operation (the sched type will be
derived from event.sched_type on enqueue). So that means we are free to
tailor the header file based on the SW PMD requirement here, but
semantically it has to be in line with the rest of the header file. We can
work together to make it happen.
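
To make the switch-case point above concrete, a mask-based dispatch would
accept SINGLE_LINK combined with a scheduling type instead of falling into
the default case. A hypothetical sketch (flag names as in rte_eventdev.h;
the dispatch itself is only illustrative, not the PMD code):

	/* single-link takes precedence regardless of the sched-type bits */
	if (conf->event_queue_cfg & RTE_EVENT_QUEUE_CFG_SINGLE_LINK)
		return SW_SCHED_TYPE_DIRECT;

	switch (conf->event_queue_cfg & RTE_EVENT_QUEUE_CFG_TYPE_MASK) {
	case RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY:
		return RTE_SCHED_TYPE_ATOMIC;
	case RTE_EVENT_QUEUE_CFG_ORDERED_ONLY:
		return RTE_SCHED_TYPE_ORDERED;
	case RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY:
		return RTE_SCHED_TYPE_PARALLEL;
	default: /* RTE_EVENT_QUEUE_CFG_ALL_TYPES */
		return -ENOTSUP;
	}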

A few questions, for everyone's benefit:

1) Does RTE_EVENT_QUEUE_CFG_SINGLE_LINK have any meaning other than an
event queue linked to only a single port? Based on the discussions, it was
added in the header file so that the SW PMD can know upfront that only a
single port will be linked to the given event queue; it was added as an
optimization for the SW PMD. Does it have any functional expectation?


2) Based on the following topology, given in the documentation patch for
queue-based event pipelining:

  rx_port        w1_port
	 \     /         \
	  qid0 - w2_port - qid1
	       \         /     \
		 w3_port        tx_port

a) I understand rx_port is feeding events to qid0.
b) But do you see any issue with the following model? IMO, it scales
linearly with the number of cores available to do work (since it is ATOMIC
to ATOMIC). There is nothing wrong with qid1 connecting only to tx_port; I
am just trying to understand the rationale behind it:

  rx_port        w1_port         w1_port
	 \     /         \     /
	  qid0 - w2_port - qid1- w2_port
	       \         /     \
		 w3_port         w3_port
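
In code, the model above is just each worker port linked to both queues (a
hypothetical sketch; worker_port_id[] and nb_workers are illustrative
names, not from the patch):

	uint8_t qids[] = {0, 1};
	unsigned int i;
	for (i = 0; i < nb_workers; i++)
		rte_event_port_link(dev_id, worker_port_id[i], qids, NULL, 2);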

3)
> Does anybody have a need for a queue to be both Atomic *and* Single-link?  I understand the current API doesn't prohibit it, but I don't see the actual use-case in which that may be useful. Atomic implies load-balancing is occurring, single link implies there is only one consuming core. Those seem like opposites to me?

I can think about the following use case:

topology:

  rx_port        w1_port
	 \     /         \
	  qid0 - w2_port - qid1
	       \         /     \
		 w3_port        tx_port

Use case:

Queue-based event pipelining:
ORDERED (Stage 1) to ATOMIC (Stage 2) pipeline:
- for ingress order maintenance
- for executing Stage 1 in parallel for better scaling
i.e. a fat flow can be sprayed over N cores while maintaining the ingress
order when it is sent out on the wire (after being consumed from tx_port)

I am not sure how the SW PMD works in the use case of ingress order
maintenance.

But the HW and the header file expect this form:
Snippet from header file:
--
 * The source flow ordering from an event queue is maintained when events are
 * enqueued to their destination queue within the same ordered flow context.
 *
 * Events from the source queue appear in their original order when dequeued
 * from a destination queue.
--
Here qid0 is the source queue with ORDERED sched_type and qid1 is the
destination queue with ATOMIC sched_type. qid1 can be linked to only one
port (tx_port).
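
For concreteness, setting up that ORDERED -> ATOMIC pipeline would look
roughly like this (a hypothetical sketch, assuming the SINGLE_LINK |
ATOMIC_ONLY combination were accepted; dev_id and tx_port_id are
illustrative names):

	struct rte_event_queue_conf conf = {
		.nb_atomic_flows = 1024,
		.nb_atomic_order_sequences = 1024,
		.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY,
		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
	};
	rte_event_queue_setup(dev_id, 0, &conf);  /* qid0: Stage 1, ORDERED */

	conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY |
			RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
	rte_event_queue_setup(dev_id, 1, &conf);  /* qid1: Stage 2, ATOMIC */

	/* only tx_port dequeues from qid1 */
	uint8_t q = 1;
	rte_event_port_link(dev_id, tx_port_id, &q, NULL, 1);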

Are we on the same page? If not, let me know the differences and we will
try to accommodate them in the header file.

> Counter-thoughts?

> > > +	case RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY:
> > > +		type = RTE_SCHED_TYPE_ATOMIC;
> > > +		break;
> > > +	case RTE_EVENT_QUEUE_CFG_ORDERED_ONLY:
> > > +		type = RTE_SCHED_TYPE_ORDERED;
> > > +		break;
> > > +	case RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY:
> > > +		type = RTE_SCHED_TYPE_PARALLEL;
> > > +		break;
> > > +	case RTE_EVENT_QUEUE_CFG_ALL_TYPES:
> > > +		SW_LOG_ERR("QUEUE_CFG_ALL_TYPES not supported\n");
> > > +		return -ENOTSUP;
> > > +	default:
> > > +		SW_LOG_ERR("Unknown queue type %d requested\n",
> > > +			   conf->event_queue_cfg);
> > > +		return -EINVAL;
> > > +	}
> > > +
> > > +	struct sw_evdev *sw = sw_pmd_priv(dev);
> > > +	return qid_init(sw, queue_id, type, conf);
> > > +}
