From: Jerin Jacob <jerinjacobk@gmail.com>
To: Zhirun Yan <zhirun.yan@intel.com>
Cc: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com,
	 cunming.liang@intel.com, haiyue.wang@intel.com
Subject: Re: [RFC, v1 0/6] graph enhancement for multi-core dispatch
Date: Tue, 20 Sep 2022 15:03:05 +0530
Message-ID: <CALBAE1OAd2U7eA-Rb-uTCDDU_OoD8i2RTJnX8HZ+fVFqEjcbGA@mail.gmail.com>
In-Reply-To: <20220908020959.1675953-1-zhirun.yan@intel.com>

On Thu, Sep 8, 2022 at 7:40 AM Zhirun Yan <zhirun.yan@intel.com> wrote:
>
> Currently, the rte_graph_walk() and rte_node_enqueue* fast-path API
> functions in the graph library are designed to work on a single core.
>
> This RFC proposes a cross-core dispatching mechanism to enhance the
> graph scaling strategy. We introduce a scheduler workqueue so that
> streams can be dispatched directly to another worker core to which a
> specific node has been affinitized.
>
> This RFC:
>   1. Introduce core affinity API and graph clone API.
>   2. Introduce key functions to enqueue/dequeue for dispatching streams.
>   3. Enhance rte_graph_walk by cross-core dispatch.
>   4. Add l2fwd-graph example and stats for cross-core dispatching.
>
> With this patch set, streams can easily be planned and orchestrated on
> multi-core systems.
>
> Future work:
>   1. Support affinitizing a set of lcores to one node.
>   2. Use l3fwd-graph instead of l2fwd-graph as the example in patch 06.
>   3. Add a new parameter, e.g. --node(nodeid, lcoreid), to configure a
>   node's core affinity.
>
> Comments and suggestions are welcome. Thanks!
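
For context, a minimal usage sketch of the proposed control-path APIs. This
is a sketch only: the function names rte_node_set_lcore_affinity and
rte_graph_clone come from this thread, but the exact signatures, return
conventions and the node name used below are assumptions.

#include <rte_graph.h>

/* Hypothetical setup: affinitize one node to a worker lcore and clone
 * the graph for that core. Signatures are assumed, not from the RFC. */
static int
setup_dispatch_graph(rte_graph_t id)
{
        rte_graph_t clone_id;

        /* Bind the "ip4_lookup" node instance to lcore 2 (assumed API). */
        if (rte_node_set_lcore_affinity("ip4_lookup", 2) < 0)
                return -1;

        /* Clone the graph so the affinitized lcore walks its own copy
         * (assumed API). */
        clone_id = rte_graph_clone(id, "worker-2");
        if (clone_id == RTE_GRAPH_ID_INVALID)
                return -1;

        return 0;
}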

Some top level comments.

1) Yes, it makes sense not to create l2fwd-graph. Please enhance
l3fwd-graph instead and compare the performance in multi-core scenarios.

2) It is good to have multiple graph walk schemes such as the one you
have introduced here. However, I am not sure about the performance
aspects, specifically when a node is used with multiple producers and
multiple consumers.

If you have a use case for the new worker scheme, then we can add it.
I think it would call for the following:

a) We need a separate rte_graph_worker.h for each implementation so that
the schemes do not impact each other's performance.
That may boil down to:
i) Create lib/graph/rte_graph_worker_common.h
ii) Treat the existing rte_graph_worker.h as the default scheme and make
it include rte_graph_worker_common.h
iii) Add a new rte_graph_worker_xxx.h for the new scheme (the diff from
the default worker), leveraging rte_graph_worker_common.h

The application can then select the worker model with:

#define RTE_GRAPH_WORKER_MODEL_XXX
//#define RTE_GRAPH_WORKER_MODEL_YYY
#include <rte_graph_worker.h>
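
To illustrate (a), one possible layering of the umbrella header. The file
names follow the list above, except rte_graph_worker_default.h, which is an
assumed name for the header holding the existing default scheme.

#ifndef _RTE_GRAPH_WORKER_H_
#define _RTE_GRAPH_WORKER_H_

#include <rte_graph_worker_common.h>   /* shared fast-path definitions */

#ifdef RTE_GRAPH_WORKER_MODEL_XXX
#include <rte_graph_worker_xxx.h>      /* new cross-core dispatch walk */
#else
#include <rte_graph_worker_default.h>  /* existing single-core walk */
#endif

#endif /* _RTE_GRAPH_WORKER_H_ */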

b) Introduce a new enum rte_graph_model (or similar) to express this
new model and other models in the future.
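
One possible shape for such an enum; the enumerator names are only
illustrative, not part of the proposal:

enum rte_graph_model {
        RTE_GRAPH_MODEL_DEFAULT,    /* existing single-core walk */
        RTE_GRAPH_MODEL_CROSS_CORE, /* cross-core dispatch proposed in this RFC */
};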

c) Each core has its own node instance, so we do not need explicit
critical-section management when dealing with node instances.
In this new scheme, can we leverage the existing node implementations?
If not, we need separate node implementations for the different graph
models, which will be a maintenance issue. If we really need to take
that path, each node probably needs to declare the models it supports
as part of its capability (using enum rte_graph_model). This can then
be used for sanity checking, e.g. to verify compatibility when cloning
or creating a graph; a sketch is given below.
I think this is the biggest issue with adding a new model: nodes would
need to be written based on the model, and I suspect this is the reason
VPP has not added other models.
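
For example, a node registration could carry a capability mask. The
.models field below does not exist in the current struct rte_node_register
and is purely hypothetical; the enumerators reuse the enum sketched under
(b) above.

#include <rte_bitops.h>
#include <rte_graph.h>
#include <rte_graph_worker.h>

static uint16_t
my_node_process(struct rte_graph *graph, struct rte_node *node,
                void **objs, uint16_t nb_objs)
{
        /* Trivial processing: forward everything to the first edge. */
        rte_node_enqueue(graph, node, 0, objs, nb_objs);
        return nb_objs;
}

static struct rte_node_register my_node = {
        .name = "my_node",
        .process = my_node_process,
        /* Hypothetical field: bitmask of supported models, validated
         * when the graph is created or cloned. */
        .models = RTE_BIT64(RTE_GRAPH_MODEL_DEFAULT) |
                  RTE_BIT64(RTE_GRAPH_MODEL_CROSS_CORE),
        .nb_edges = 1,
        .next_nodes = { "pkt_drop" },
};
RTE_NODE_REGISTER(my_node);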

d) For all the new slow-path APIs such as rte_node_set_lcore_affinity
and rte_graph_clone, we need to fix the namespace to something like
rte_graph_model_<model_name>_<operation> so that application writers
understand these APIs apply only to this model. (We can also use
enum rte_graph_model for sanity checks, etc.) See the sketch below.
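
For instance, the prototypes might become something like the following;
the names and signatures are only illustrative of the suggested
convention, not an agreed API.

/* Namespaced slow-path APIs (illustrative names and signatures only). */
int rte_graph_model_dispatch_node_lcore_affinity_set(const char *node_name,
                                                     unsigned int lcore_id);
rte_graph_t rte_graph_model_dispatch_graph_clone(rte_graph_t id,
                                                 const char *name);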



>


Thread overview: 11+ messages
2022-09-08  2:09 [RFC, v1 0/6] graph enhancement for multi-core dispatch Zhirun Yan
2022-09-08  2:09 ` [RFC, v1 1/6] graph: introduce core affinity API into graph Zhirun Yan
2022-09-08  2:09 ` [RFC, v1 2/6] graph: introduce graph clone API for other worker core Zhirun Yan
2022-09-08  2:09 ` [RFC, v1 3/6] graph: enable stream moving cross cores Zhirun Yan
2022-09-08  2:09 ` [RFC, v1 4/6] graph: enhance graph walk by cross-core dispatch Zhirun Yan
2022-09-08  5:27   ` [EXT] " Pavan Nikhilesh Bhagavatula
2022-09-15  1:52     ` Yan, Zhirun
2022-09-08  2:09 ` [RFC, v1 5/6] graph: add stats for corss-core dispatching Zhirun Yan
2022-09-08  2:09 ` [RFC, v1 6/6] examples: add l2fwd-graph Zhirun Yan
2022-09-20  9:33 ` Jerin Jacob [this message]
2022-09-30  6:41   ` [RFC, v1 0/6] graph enhancement for multi-core dispatch Yan, Zhirun
