From: "Carrillo, Erik G" <erik.g.carrillo@intel.com>
To: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"rsanford@akamai.com" <rsanford@akamai.com>,
	Stephen Hemminger <stephen@networkplumber.org>,
	"Wiles, Keith" <keith.wiles@intel.com>
Subject: Re: [PATCH v2 1/3] timer: add per-installer pending lists for each lcore
Date: Tue, 5 Sep 2017 21:52:24 +0000
Message-ID: <BE54F058557D9A4FAC1D84E2FC6D87570ED690F6@fmsmsx115.amr.corp.intel.com>

Hi all,

Another approach that I'd like to put out for consideration is as follows:

Let's say we introduce one flag per lcore - multi_pendlists.  This flag indicates whether that lcore supports multiple pending lists (one per source lcore) or not, and by default it's set to false.

At rte_timer_subsystem_init() time, each lcore will be configured to use a single pending list (rather than one per source lcore).

A new API, rte_timer_subsystem_set_multi_pendlists(unsigned lcore_id), can be called to enable multi_pendlists for a particular lcore.  It should be called after rte_timer_subsystem_init(), and before any timers are started for that lcore.
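
To make that concrete, a rough sketch follows.  The function name matches the proposal above, but the struct layout, field names, and state array are just illustrative assumptions, not the actual rte_timer internals:

#include <stdbool.h>
#include <rte_lcore.h>   /* RTE_MAX_LCORE */

/* Illustrative per-lcore timer state; field names are assumptions. */
struct priv_timer_state {
	bool multi_pendlists;   /* false by default: single pending list */
	/* ... skiplist heads, locks, etc. ... */
};

static struct priv_timer_state timer_state[RTE_MAX_LCORE];

/*
 * Proposed API: enable multiple pending lists (one per source lcore) for
 * one target lcore.  Call after rte_timer_subsystem_init() and before any
 * timers are started for that lcore.
 */
int
rte_timer_subsystem_set_multi_pendlists(unsigned int lcore_id)
{
	if (lcore_id >= RTE_MAX_LCORE)
		return -1;
	timer_state[lcore_id].multi_pendlists = true;
	return 0;
}

In this scheme an application would call rte_timer_subsystem_init() once and then opt individual lcores into multi_pendlists before arming any timers on them.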

When timers are started for a particular lcore, that lcore's multi_pendlists flag will be inspected to determine whether it should go into a single list, or one of several lists.

When an lcore processes its timers with rte_timer_manage(), it will look at the multi_pendlists flag, and if it is false, only process a single list.  This should bring the overhead nearly back down to what it was originally.  And if multi_pendlists is true, it will pull the run lists out of the per-source pending lists in sequence and process them, as in the current patch.
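
Continuing the sketch above, the run-time branching could look roughly like this.  process_pending_list() is a hypothetical stand-in for the real skiplist walk and run-list processing, not an actual rte_timer function:

/* Hypothetical stand-in: walk one pending skiplist, move expired timers
 * to the run list, and execute them. */
static void
process_pending_list(struct priv_timer_state *st, unsigned int src_lcore)
{
	(void)st;
	(void)src_lcore;
}

/* Sketch of the flag check inside timer processing. */
void
timer_manage_sketch(void)
{
	unsigned int lcore = rte_lcore_id();
	struct priv_timer_state *st = &timer_state[lcore];

	if (!st->multi_pendlists) {
		/* Single-list mode: one walk, overhead close to the original. */
		process_pending_list(st, 0);
		return;
	}

	/* Multi-pendlists mode: visit each source lcore's pending list in
	 * sequence and process its expired timers, as in the current patch. */
	for (unsigned int src = 0; src < RTE_MAX_LCORE; src++)
		process_pending_list(st, src);
}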
	
Thoughts or comments?

Thanks,
Gabriel

> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Tuesday, August 29, 2017 5:57 AM
> To: Carrillo, Erik G <erik.g.carrillo@intel.com>; rsanford@akamai.com
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2 1/3] timer: add per-installer pending lists
> for each lcore
> 
> Hi Gabriel,
> 
> >
> > Instead of each priv_timer struct containing a single skiplist, this
> > commit adds a skiplist for each enabled lcore to priv_timer. In the
> > case that multiple lcores repeatedly install timers on the same target
> > lcore, this change reduces lock contention for the target lcore's
> > skiplists and increases performance.
> 
> I am not an rte_timer expert, but there is one thing that worries me:
> It seems that the complexity of timer_manage() has increased quite a bit
> with that patch:
> now it has to check/process up to RTE_MAX_LCORE skiplists instead of one,
> and it also has to somehow properly sort up to RTE_MAX_LCORE lists of
> retrieved (ready-to-run) timers.
> Wouldn't all that affect its running time?
> 
> I understand your intention to reduce lock contention, but I suppose it
> could at least be done in a configurable way.
> Let's say we allow the user to specify the dimension of pending_lists[] at
> init phase or so.
> Then a timer from lcore_id=N will end up in
> pending_lists[N % RTE_DIM(pending_lists)].
> 
> Another thought - it might be better to divide the pending timer list not by
> client (lcore) id, but by expiration time - some analog of a timer wheel or so.
> That, I think, might greatly decrease the probability that timer_manage() and
> timer_add() will try to access the same list.
> On the other hand, timer_manage() would still have to consume the skip-lists
> one by one.
> Though I suppose that's quite a radical change from what we have right now.
> Konstantin
> 
