From: Jan Kara <jack@suse.cz>
To: Waiman Long <longman@redhat.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>,
	Jan Kara <jack@suse.com>, Jeff Layton <jlayton@poochiereds.net>,
	"J. Bruce Fields" <bfields@fieldses.org>,
	Tejun Heo <tj@kernel.org>,
	Christoph Lameter <cl@linux-foundation.org>,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Andi Kleen <andi@firstfloor.org>,
	Dave Chinner <dchinner@redhat.com>,
	Boqun Feng <boqun.feng@gmail.com>,
	Davidlohr Bueso <dave@stgolabs.net>,
	Davidlohr Bueso <dbueso@suse.de>
Subject: Re: [PATCH v9 5/5] lib/dlock-list: Scale dlock_lists_empty()
Date: Thu, 4 Oct 2018 09:16:00 +0200
Message-ID: <20181004071600.GC29482@quack2.suse.cz>
In-Reply-To: <1536780532-4092-6-git-send-email-longman@redhat.com>

On Wed 12-09-18 15:28:52, Waiman Long wrote:
> From: Davidlohr Bueso <dave@stgolabs.net>
> 
> Instead of the current O(N) implementation, we can, at the cost of
> adding an atomic counter, convert the call to an atomic_read(). The
> counter only serves to account for empty to non-empty transitions,
> and vice versa; it is therefore modified only twice for each of the
> lists during the lifetime of the dlock (while used).
> 
> In addition, to be able to unaccount a list_del(), we
> add a dlist pointer to each head, thus minimizing the
> overall memory footprint.
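
As a side note, the accounting scheme described above boils down to
something like the following userspace sketch (C11 atomics; the toy_*
names and layout are mine for illustration, not the kernel API):

#include <stdatomic.h>
#include <stdbool.h>

struct toy_head {
	int nitems;			/* protected by a per-head lock */
};

struct toy_dlist {
	struct toy_head *heads;		/* one head per group of CPUs */
	int nheads;
	atomic_int used_lists;		/* number of non-empty heads */
};

static void toy_add(struct toy_dlist *d, struct toy_head *h)
{
	/* caller holds h's lock */
	if (h->nitems++ == 0)		/* empty -> non-empty */
		atomic_fetch_add(&d->used_lists, 1);
}

static void toy_del(struct toy_dlist *d, struct toy_head *h)
{
	/* caller holds h's lock */
	if (--h->nitems == 0)		/* non-empty -> empty */
		atomic_fetch_sub(&d->used_lists, 1);
}

static bool toy_empty(struct toy_dlist *d)
{
	/* O(1) instead of walking all d->nheads heads */
	return atomic_load(&d->used_lists) == 0;
}

The twist in the actual patch is that dlock_lists_add() bumps the
counter optimistically before taking the head lock and rechecks
afterwards, because dlock_lists_empty() reads the counter without
holding any of the per-head locks.
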
> 
> Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
> Acked-by: Waiman Long <longman@redhat.com>

So I was wondering: Is this really worth it? AFAICS we have a single call
site for dlock_lists_empty() and that happens during umount where we don't
really care about this optimization. So it seems like an unnecessary
complication to me at this point? If someone comes up with a usecase that
needs a fast dlock_lists_empty(), then sure, we can do this...
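
For reference, the call site I mean is the busy-inodes check in
generic_shutdown_super() as converted earlier in this series - quoting
from memory, so take the exact form as approximate:

	if (!dlock_lists_empty(&sb->s_inodes))
		printk("VFS: Busy inodes after unmount of %s. "
		       "Self-destruct in 5 seconds.  Have a nice day...\n",
		       sb->s_id);

That runs once per umount as a sanity check, so even the O(N) walk over
the per-CPU heads is lost in the noise there.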

								Honza

> ---
>  include/linux/dlock-list.h |  8 ++++++
>  lib/dlock-list.c           | 67 +++++++++++++++++++++++++++++++++++++++-------
>  2 files changed, 65 insertions(+), 10 deletions(-)
> 
> diff --git a/include/linux/dlock-list.h b/include/linux/dlock-list.h
> index 327cb9e..ac1a2e3 100644
> --- a/include/linux/dlock-list.h
> +++ b/include/linux/dlock-list.h
> @@ -32,10 +32,18 @@
>  struct dlock_list_head {
>  	struct list_head list;
>  	spinlock_t lock;
> +	struct dlock_list_heads *dlist;
>  } ____cacheline_aligned_in_smp;
>  
> +/*
> + * This is the main dlist data structure, with the array of heads
> + * and a counter that atomically tracks if any of the lists are
> + * being used. That is, empty to non-empty (and vice versa)
> + * head->list transitions.
> + */
>  struct dlock_list_heads {
>  	struct dlock_list_head *heads;
> +	atomic_t used_lists;
>  };
>  
>  /*
> diff --git a/lib/dlock-list.c b/lib/dlock-list.c
> index e286094..04da20d 100644
> --- a/lib/dlock-list.c
> +++ b/lib/dlock-list.c
> @@ -122,8 +122,11 @@ int __alloc_dlock_list_heads(struct dlock_list_heads *dlist,
>  
>  		INIT_LIST_HEAD(&head->list);
>  		head->lock = __SPIN_LOCK_UNLOCKED(&head->lock);
> +		head->dlist = dlist;
>  		lockdep_set_class(&head->lock, key);
>  	}
> +
> +	atomic_set(&dlist->used_lists, 0);
>  	return 0;
>  }
>  EXPORT_SYMBOL(__alloc_dlock_list_heads);
> @@ -139,29 +142,36 @@ void free_dlock_list_heads(struct dlock_list_heads *dlist)
>  {
>  	kfree(dlist->heads);
>  	dlist->heads = NULL;
> +	atomic_set(&dlist->used_lists, 0);
>  }
>  EXPORT_SYMBOL(free_dlock_list_heads);
>  
>  /**
>   * dlock_lists_empty - Check if all the dlock lists are empty
>   * @dlist: Pointer to the dlock_list_heads structure
> - * Return: true if list is empty, false otherwise.
>   *
> - * This can be a pretty expensive function call. If this function is required
> - * in a performance critical path, we may have to maintain a global count
> - * of the list entries in the global dlock_list_heads structure instead.
> + * Return: true if all dlock lists are empty, false otherwise.
>   */
>  bool dlock_lists_empty(struct dlock_list_heads *dlist)
>  {
> -	int idx;
> -
>  	/* Shouldn't be called before nr_dlock_lists is initialized */
>  	WARN_ON_ONCE(!nr_dlock_lists);
>  
> -	for (idx = 0; idx < nr_dlock_lists; idx++)
> -		if (!list_empty(&dlist->heads[idx].list))
> -			return false;
> -	return true;
> +	/*
> +	 * Serialize dlist->used_lists such that a 0->1 transition is not
> +	 * missed by another thread checking if any of the dlock lists are
> +	 * used.
> +	 *
> +	 * CPU0				    CPU1
> +	 * dlock_list_add()                 dlock_lists_empty()
> +	 *   [S] atomic_inc(used_lists);
> +	 *       smp_mb__after_atomic();
> +	 *					  smp_mb__before_atomic();
> +	 *				      [L] atomic_read(used_lists)
> +	 *       list_add()
> +	 */
> +	smp_mb__before_atomic();
> +	return !atomic_read(&dlist->used_lists);
>  }
>  EXPORT_SYMBOL(dlock_lists_empty);
>  
> @@ -177,11 +187,39 @@ void dlock_lists_add(struct dlock_list_node *node,
>  		     struct dlock_list_heads *dlist)
>  {
>  	struct dlock_list_head *head = &dlist->heads[this_cpu_read(cpu2idx)];
> +	bool list_empty_before_lock = false;
> +
> +	/*
> +	 * Optimistically bump the used_lists counter _before_ taking
> +	 * the head->lock such that we don't miss a thread adding itself
> +	 * to a list while spinning for the lock.
> +	 *
> +	 * Then, after taking the lock, recheck if the empty to non-empty
> +	 * transition changed and (un)account for ourselves accordingly.
> +	 * Note that all these scenarios are corner cases, and not the
> +	 * common scenario, where the lists are actually populated most
> +	 * of the time.
> +	 */
> +	if (unlikely(list_empty_careful(&head->list))) {
> +		list_empty_before_lock = true;
> +		atomic_inc(&dlist->used_lists);
> +		smp_mb__after_atomic();
> +	}
>  
>  	/*
>  	 * There is no need to disable preemption
>  	 */
>  	spin_lock(&head->lock);
> +
> +	if (unlikely(!list_empty_before_lock && list_empty(&head->list))) {
> +		atomic_inc(&dlist->used_lists);
> +		smp_mb__after_atomic();
> +	}
> +	if (unlikely(list_empty_before_lock && !list_empty(&head->list))) {
> +		atomic_dec(&dlist->used_lists);
> +		smp_mb__after_atomic();
> +	}
> +
>  	WRITE_ONCE(node->head, head);
>  	list_add(&node->list, &head->list);
>  	spin_unlock(&head->lock);
> @@ -212,6 +250,15 @@ void dlock_lists_del(struct dlock_list_node *node)
>  		spin_lock(&head->lock);
>  		if (likely(head == READ_ONCE(node->head))) {
>  			list_del_init(&node->list);
> +
> +			if (unlikely(list_empty(&head->list))) {
> +				struct dlock_list_heads *dlist;
> +				dlist = node->head->dlist;
> +
> +				atomic_dec(&dlist->used_lists);
> +				smp_mb__after_atomic();
> +			}
> +
>  			WRITE_ONCE(node->head, NULL);
>  			retry = false;
>  		} else {
> -- 
> 1.8.3.1
> 
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR
