LKML Archive on lore.kernel.org
From: Jan Kara <jack@suse.cz>
To: Davidlohr Bueso <dave@stgolabs.net>
Cc: akpm@linux-foundation.org, mingo@kernel.org,
	peterz@infradead.org, jack@suse.cz,
	torvalds@linux-foundation.org, kirill.shutemov@linux.intel.com,
	hch@infradead.org, ldufour@linux.vnet.ibm.com, mhocko@suse.com,
	mgorman@techsingularity.net, linux-kernel@vger.kernel.org,
	axboe@fb.com, linux-block@vger.kernel.org,
	Davidlohr Bueso <dbueso@suse.de>
Subject: Re: [PATCH 17/17] block/cfq: cache rightmost rb_node
Date: Wed, 19 Jul 2017 09:59:40 +0200
Message-ID: <20170719075940.GC12546@quack2.suse.cz> (raw)
In-Reply-To: <20170719014603.19029-18-dave@stgolabs.net>

On Tue 18-07-17 18:46:03, Davidlohr Bueso wrote:
> For the same reasons we already cache the leftmost pointer,
> apply the same optimization for rb_last() calls. Users must
> explicitly do this as rb_root_cached only deals with the
> smallest node.
> 
> Cc: axboe@fb.com
> Cc: linux-block@vger.kernel.org
> Signed-off-by: Davidlohr Bueso <dbueso@suse.de>

Hmm, as I'm reading the code, we have about a 1:1 ratio of cached
lookups to tree inserts / deletes, and each insert / delete has to
maintain the cached value. Is the optimization worth it in such a case?
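
For reference, the pattern being added reduces to roughly the sketch
below (simplified, with made-up "example" names, not the exact patch
code). The cost side of the trade-off is visible here: every insert
and erase has to keep the extra pointer in sync, while only the
rb_last() call sites benefit:

	#include <linux/rbtree.h>

	struct example_node {
		struct rb_node rb_node;
		u64 key;
	};

	struct example_root {
		struct rb_root_cached rb;	/* leftmost is cached internally */
		struct rb_node *rb_rightmost;	/* rightmost is caller-maintained */
	};

	static void example_insert(struct example_root *root,
				   struct example_node *node)
	{
		struct rb_node **link = &root->rb.rb_root.rb_node;
		struct rb_node *parent = NULL;
		bool leftmost = true, rightmost = true;

		while (*link) {
			struct example_node *entry;

			parent = *link;
			entry = rb_entry(parent, struct example_node, rb_node);
			if (node->key < entry->key) {
				link = &parent->rb_left;
				rightmost = false;	/* went left at least once */
			} else {
				link = &parent->rb_right;
				leftmost = false;	/* went right at least once */
			}
		}

		if (rightmost)
			root->rb_rightmost = &node->rb_node;

		rb_link_node(&node->rb_node, parent, link);
		rb_insert_color_cached(&node->rb_node, &root->rb, leftmost);
	}

	static void example_erase(struct example_root *root,
				  struct example_node *node)
	{
		/*
		 * The rightmost node has no successor, so fall back to
		 * its predecessor when it goes away.
		 */
		if (root->rb_rightmost == &node->rb_node)
			root->rb_rightmost = rb_prev(&node->rb_node);

		rb_erase_cached(&node->rb_node, &root->rb);
	}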

								Honza

> ---
> This is part of the rbtree internal caching series:
> https://lwn.net/Articles/726809/
> 
>  block/cfq-iosched.c | 19 ++++++++++++++-----
>  1 file changed, 14 insertions(+), 5 deletions(-)
> 
> diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
> index 92c31683a2bb..57ec45fd4590 100644
> --- a/block/cfq-iosched.c
> +++ b/block/cfq-iosched.c
> @@ -94,11 +94,13 @@ struct cfq_ttime {
>   */
>  struct cfq_rb_root {
>  	struct rb_root_cached rb;
> +	struct rb_node *rb_rightmost;
>  	unsigned count;
>  	u64 min_vdisktime;
>  	struct cfq_ttime ttime;
>  };
>  #define CFQ_RB_ROOT	(struct cfq_rb_root) { .rb = RB_ROOT_CACHED, \
> +			.rb_rightmost = NULL,			     \
>  			.ttime = {.last_end_request = ktime_get_ns(),},}
>  
>  /*
> @@ -1183,6 +1185,9 @@ static struct cfq_group *cfq_rb_first_group(struct cfq_rb_root *root)
>  
>  static void cfq_rb_erase(struct rb_node *n, struct cfq_rb_root *root)
>  {
> +	if (root->rb_rightmost == n)
> +		root->rb_rightmost = rb_prev(n); /* rightmost has no successor */
> +
>  	rb_erase_cached(n, &root->rb);
>  	RB_CLEAR_NODE(n);
>  
> @@ -1239,20 +1244,24 @@ __cfq_group_service_tree_add(struct cfq_rb_root *st, struct cfq_group *cfqg)
>  	struct rb_node *parent = NULL;
>  	struct cfq_group *__cfqg;
>  	s64 key = cfqg_key(st, cfqg);
> -	bool leftmost = true;
> +	bool leftmost = true, rightmost = true;
>  
>  	while (*node != NULL) {
>  		parent = *node;
>  		__cfqg = rb_entry_cfqg(parent);
>  
> -		if (key < cfqg_key(st, __cfqg))
> +		if (key < cfqg_key(st, __cfqg)) {
>  			node = &parent->rb_left;
> -		else {
> +			rightmost = false;
> +		} else {
>  			node = &parent->rb_right;
>  			leftmost = false;
>  		}
>  	}
>  
> +	if (rightmost)
> +		st->rb_rightmost = &cfqg->rb_node;
> +
>  	rb_link_node(&cfqg->rb_node, parent, node);
>  	rb_insert_color_cached(&cfqg->rb_node, &st->rb, leftmost);
>  }
> @@ -1355,7 +1364,7 @@ cfq_group_notify_queue_add(struct cfq_data *cfqd, struct cfq_group *cfqg)
>  	 * so that groups get lesser vtime based on their weights, so that
>  	 * if group does not loose all if it was not continuously backlogged.
>  	 */
> -	n = rb_last(&st->rb.rb_root);
> +	n = st->rb_rightmost;
>  	if (n) {
>  		__cfqg = rb_entry_cfqg(n);
>  		cfqg->vdisktime = __cfqg->vdisktime +
> @@ -2204,7 +2213,7 @@ static void cfq_service_tree_add(struct cfq_data *cfqd, struct cfq_queue *cfqq,
>  	st = st_for(cfqq->cfqg, cfqq_class(cfqq), cfqq_type(cfqq));
>  	if (cfq_class_idle(cfqq)) {
>  		rb_key = CFQ_IDLE_DELAY;
> -		parent = rb_last(&st->rb.rb_root);
> +		parent = st->rb_rightmost;
>  		if (parent && parent != &cfqq->rb_node) {
>  			__cfqq = rb_entry(parent, struct cfq_queue, rb_node);
>  			rb_key += __cfqq->rb_key;
> -- 
> 2.12.0
> 
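For context, the rb_last() calls being replaced are each an O(log n)
walk down the right spine of the tree; rb_last() is essentially this
loop (see lib/rbtree.c):

	struct rb_node *rb_last(const struct rb_root *root)
	{
		struct rb_node	*n;

		n = root->rb_node;
		if (!n)
			return NULL;
		while (n->rb_right)
			n = n->rb_right;
		return n;
	}

The cached pointer turns each of those calls into a single load, at the
price of the extra bookkeeping on every insert / erase, which is the
trade-off the question above comes down to.
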
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

Thread overview: 27+ messages
2017-07-19  1:45 [PATCH -next v4 00/17] rbtree: cache leftmost node internally Davidlohr Bueso
2017-07-19  1:45 ` [PATCH 01/17] " Davidlohr Bueso
2017-07-19  1:45 ` [PATCH 02/17] rbtree: optimize root-check during rebalancing loop Davidlohr Bueso
2017-07-19  1:45 ` [PATCH 03/17] rbtree: add some additional comments for rebalancing cases Davidlohr Bueso
2017-07-19  1:45 ` [PATCH 04/17] lib/rbtree_test.c: make input module parameters Davidlohr Bueso
2017-07-19  1:45 ` [PATCH 05/17] lib/rbtree_test.c: add (inorder) traversal test Davidlohr Bueso
2017-07-19  1:45 ` [PATCH 06/17] lib/rbtree_test.c: support rb_root_cached Davidlohr Bueso
2017-07-19  1:45 ` [PATCH 07/17] sched/fair: replace cfs_rq->rb_leftmost Davidlohr Bueso
2017-07-19  1:45 ` [PATCH 08/17] sched/deadline: replace earliest dl and rq leftmost caching Davidlohr Bueso
2017-07-19  1:45 ` [PATCH 09/17] locking/rtmutex: replace top-waiter and pi_waiters " Davidlohr Bueso
2017-07-19  1:45 ` [PATCH 10/17] block/cfq: replace cfq_rb_root " Davidlohr Bueso
2017-07-19  7:46   ` Jan Kara
2017-07-19  1:45 ` [PATCH 11/17] lib/interval_tree: fast overlap detection Davidlohr Bueso
2017-07-22 17:52   ` Doug Ledford
2017-08-01 17:16   ` Michael S. Tsirkin
2017-07-19  1:45 ` [PATCH 12/17] lib/interval-tree: correct comment wrt generic flavor Davidlohr Bueso
2017-07-19  1:45 ` [PATCH 13/17] procfs: use faster rb_first_cached() Davidlohr Bueso
2017-07-19  1:46 ` [PATCH 14/17] fs/epoll: " Davidlohr Bueso
2017-07-19  1:46 ` [PATCH 15/17] fs/ext4: use cached rbtrees Davidlohr Bueso
2017-07-19  7:40   ` Jan Kara
2017-07-19 22:50     ` Davidlohr Bueso
2017-07-19  1:46 ` [PATCH 16/17] mem/memcg: cache rightmost node Davidlohr Bueso
2017-07-19  7:50   ` Michal Hocko
2017-07-26 21:09     ` Andrew Morton
2017-07-27  7:06       ` Michal Hocko
2017-07-19  1:46 ` [PATCH 17/17] block/cfq: cache rightmost rb_node Davidlohr Bueso
2017-07-19  7:59   ` Jan Kara [this message]
