From: Davidlohr Bueso <>
To: Waiman Long <>
Cc: Jan Kara <>, Alexander Viro <>,
	Jan Kara <>, Jeff Layton <>,
	"J. Bruce Fields" <>,
	Tejun Heo <>,
	Christoph Lameter <>,
	Ingo Molnar <>,
	Peter Zijlstra <>,
	Andi Kleen <>,
	Dave Chinner <>,
	Boqun Feng <>,
	Davidlohr Bueso <>
Subject: Re: [PATCH v9 5/5] lib/dlock-list: Scale dlock_lists_empty()
Date: Tue, 16 Oct 2018 19:51:47 -0700	[thread overview]
Message-ID: <20181017025147.wfk7cktcn3emlb6b@linux-r8p5> (raw)
In-Reply-To: <>

On Thu, 04 Oct 2018, Waiman Long wrote:

>On 10/04/2018 03:16 AM, Jan Kara wrote:
>> On Wed 12-09-18 15:28:52, Waiman Long wrote:
>>> From: Davidlohr Bueso <>
>>> Instead of the current O(N) implementation, at the cost
>>> of adding an atomic counter, we can convert the call to
>>> an atomic_read(). The counter only serves for accounting
>>> empty to non-empty transitions, and vice versa; therefore
>>> only modified twice for each of the lists during the
>>> lifetime of the dlock (while used).
>>> In addition, to be able to unaccount a list_del(), we
>>> add a dlist pointer to each head, thus minimizing the
>>> overall memory footprint.
>>> Signed-off-by: Davidlohr Bueso <>
>>> Acked-by: Waiman Long <>
>> So I was wondering: Is this really worth it? AFAICS we have a single call
>> site for dlock_lists_empty() and that happens during umount where we don't
>> really care about this optimization. So it seems like unnecessary
>> complication to me at this point? If someone comes up with a usecase that
>> needs fast dlock_lists_empty(), then sure, we can do this...
>Yes, that is true. We can skip this patch for the time being until a use
>case comes up which requires dlock_lists_empty() to be used in the fast
>path.

So fyi I ended up porting the epoll ready-list to this api, where
dlock_lists_empty() performance _does_ matter. However, list
iteration is a common enough operation that its cost outweighs the
benefits of the percpu add/delete operations. For example, when
sending ready events to userspace (ep_send_events_proc()), each item
must drop the iter lock and also do a delete operation -- similarly
for checking for ready events (ep_read_events_proc()). This ends up
hurting workloads more than benefiting them.

Anyway, so yeah, I have no need for this patch, and the added complexity +
atomics are unjustified.


Thread overview: 12+ messages
2018-09-12 19:28 [PATCH v9 0/5] vfs: Use dlock list for SB's s_inodes list Waiman Long
2018-09-12 19:28 ` [PATCH v9 1/5] lib/dlock-list: Distributed and lock-protected lists Waiman Long
2018-09-12 19:28 ` [PATCH v9 2/5] vfs: Remove unnecessary list_for_each_entry_safe() variants Waiman Long
2018-09-12 19:28 ` [PATCH v9 3/5] vfs: Use dlock list for superblock's inode list Waiman Long
2018-09-17 14:15   ` Davidlohr Bueso
2018-09-17 14:46     ` Waiman Long
2018-09-12 19:28 ` [PATCH v9 4/5] lib/dlock-list: Make sibling CPUs share the same linked list Waiman Long
2018-09-12 19:28 ` [PATCH v9 5/5] lib/dlock-list: Scale dlock_lists_empty() Waiman Long
2018-10-04  7:16   ` Jan Kara
2018-10-04 13:41     ` Waiman Long
2018-10-17  2:51       ` Davidlohr Bueso [this message]
2018-09-17 15:18 ` [PATCH v9 6/6] prefetch: Remove spin_lock_prefetch() Waiman Long
