From: Al Viro <viro@zeniv.linux.org.uk>
To: Dave Chinner <david@fromorbit.com>
Cc: linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
	linux-cachefs@redhat.com, dhowells@redhat.com,
	gfs2@lists.linux.dev, dm-devel@lists.linux.dev,
	linux-security-module@vger.kernel.org, selinux@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 01/11] lib/dlock-list: Distributed and lock-protected lists
Date: Thu, 7 Dec 2023 02:23:57 +0000
Message-ID: <20231207022357.GS1674809@ZenIV>
In-Reply-To: <20231206060629.2827226-2-david@fromorbit.com>

On Wed, Dec 06, 2023 at 05:05:30PM +1100, Dave Chinner wrote:

> +static inline struct dlock_list_node *
> +__dlock_list_next_entry(struct dlock_list_node *curr,
> +			struct dlock_list_iter *iter)
> +{
> +	/*
> +	 * Find next entry
> +	 */
> +	if (curr)
> +		curr = list_next_entry(curr, list);
> +
> +	if (!curr || (&curr->list == &iter->entry->list)) {

Hmm...  hlist, perhaps?  I mean, that way the thing becomes
	if (curr)
		curr = hlist_entry_safe(curr->node.next,
					struct dlock_list_node, node);
	if (!curr)
		curr = __dlock_list_next_list(iter);
	return curr;

BTW, does anybody have objections against

#define hlist_first_entry(head, type, member) \
	hlist_entry_safe((head)->first, type, member)

#define hlist_next_entry(pos, member) \
	hlist_entry_safe((pos)->member.next, typeof(*pos), member)

added in list.h?
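
For illustration, with those two helpers (and assuming the hlist member
in dlock_list_node is called "node", as in the sketch above) the whole
helper would shrink to

static inline struct dlock_list_node *
__dlock_list_next_entry(struct dlock_list_node *curr,
			struct dlock_list_iter *iter)
{
	if (curr)
		curr = hlist_next_entry(curr, node);
	if (!curr)
		curr = __dlock_list_next_list(iter);
	return curr;
}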

> +static int __init cpu2idx_init(void)
> +{
> +	int idx, cpu;
> +
> +	idx = 0;
> +	for_each_possible_cpu(cpu)
> +		per_cpu(cpu2idx, cpu) = idx++;
> +	return 0;
> +}
> +postcore_initcall(cpu2idx_init);

Is it early enough?  Feels like that ought to be done from smp_init() or
right after it...
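
Just to illustrate one way of making that earlier (a sketch only, not
necessarily the smp_init() hook above): the same body, reusing the
patch's cpu2idx percpu variable, but registered as an early_initcall()
so it runs from do_pre_smp_initcalls(), before smp_init() and well
before the postcore level; the possible-CPU mask is already final by
that point.

static int __init cpu2idx_init(void)
{
	int idx = 0;
	int cpu;

	for_each_possible_cpu(cpu)
		per_cpu(cpu2idx, cpu) = idx++;
	return 0;
}
early_initcall(cpu2idx_init);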

> +/**
> + * dlock_lists_empty - Check if all the dlock lists are empty
> + * @dlist: Pointer to the dlock_list_heads structure
> + * Return: true if list is empty, false otherwise.
> + *
> + * This can be a pretty expensive function call. If this function is required
> + * in a performance critical path, we may have to maintain a global count
> + * of the list entries in the global dlock_list_heads structure instead.
> + */
> +bool dlock_lists_empty(struct dlock_list_heads *dlist)
> +{
> +	int idx;
> +
> +	for (idx = 0; idx < nr_cpu_ids; idx++)
> +		if (!list_empty(&dlist->heads[idx].list))
> +			return false;
> +	return true;
> +}

Umm...  How would one use it, anyway?  You'd need to stop all insertions
first, wouldn't you?
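
FWIW, a sketch of the counter-based variant that comment alludes to
(the "used" field is made up here, the existing layout is assumed); it
makes the check O(1), but the answer is still only meaningful if the
caller has quiesced insertions:

struct dlock_list_heads {
	struct dlock_list_head *heads;
	atomic_long_t used;	/* made-up field: bumped on insert,
				   dropped on removal */
};

bool dlock_lists_empty(struct dlock_list_heads *dlist)
{
	return !atomic_long_read(&dlist->used);
}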

> + */
> +struct dlock_list_node *__dlock_list_next_list(struct dlock_list_iter *iter)
> +{
> +	struct dlock_list_node *next;
> +	struct dlock_list_head *head;
> +
> +restart:
> +	if (iter->entry) {
> +		spin_unlock(&iter->entry->lock);
> +		iter->entry = NULL;
> +	}
> +
> +next_list:
> +	/*
> +	 * Try next list
> +	 */
> +	if (++iter->index >= nr_cpu_ids)
> +		return NULL;	/* All the entries iterated */
> +
> +	if (list_empty(&iter->head[iter->index].list))
> +		goto next_list;
> +
> +	head = iter->entry = &iter->head[iter->index];
> +	spin_lock(&head->lock);
> +	/*
> +	 * There is a slight chance that the list may become empty just
> +	 * before the lock is acquired. So an additional check is
> +	 * needed to make sure that a valid node will be returned.
> +	 */
> +	if (list_empty(&head->list))
> +		goto restart;
> +
> +	next = list_entry(head->list.next, struct dlock_list_node,
> +			  list);
> +	WARN_ON_ONCE(next->head != head);
> +
> +	return next;
> +}

Perhaps something like

	if (iter->entry) {
		spin_unlock(&iter->entry->lock);
		iter->entry = NULL;
	}
	while (++iter->index < nr_cpu_ids) {
		struct dlock_list_head *head = &iter->head[iter->index];

		if (list_empty(&head->list))
			continue;

		spin_lock(&head->lock);
		// recheck under lock
		if (unlikely(list_empty(&head->list))) {
			spin_unlock(&head->lock);
			continue;
		}

		iter->entry = head;
		next = list_first_entry(&head->list,
					struct dlock_list_node, list);
		WARN_ON_ONCE(next->head != head);
		return next;
	}
	return NULL;
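
For completeness, the caller-visible locking doesn't change with that;
given a struct dlock_list_heads *dlist, a walk over everything would
still look roughly like this (iterator setup assumed here, not copied
from the patch):

	struct dlock_list_iter iter = {
		.index = -1,		/* so ++index starts at 0 */
		.head  = dlist->heads,
	};
	struct dlock_list_node *node = NULL;

	while ((node = __dlock_list_next_entry(node, &iter)) != NULL) {
		/*
		 * iter.entry->lock is held here for the sublist node
		 * lives on; it gets dropped when the walk moves on to
		 * the next sublist or terminates.
		 */
	}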
