* [RESEND PATCHv2 0/2] A couple of i_dir_seq fixes for fs/dcache.c
@ 2018-03-06 19:05 Will Deacon
From: Will Deacon @ 2018-03-06 19:05 UTC
  To: linux-fsdevel, viro, torvalds; +Cc: peterz, willy, linux-kernel, Will Deacon

Hi again,

This is a resend of the patches I previously posted here:

v1: http://lkml.kernel.org/r/1518526731-26546-1-git-send-email-will.deacon@arm.com
v2: https://lkml.org/lkml/2018/2/19/360

Since this fixes a livelock that we're seeing in practice, I'd really like
to get these merged if possible.

Thanks,

Will

--->8

Will Deacon (2):
  fs: dcache: Avoid livelock between d_alloc_parallel and __d_add
  fs: dcache: Use READ_ONCE when accessing i_dir_seq

 fs/dcache.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

-- 
2.1.4


* [RESEND PATCHv2 1/2] fs: dcache: Avoid livelock between d_alloc_parallel and __d_add
From: Will Deacon @ 2018-03-06 19:05 UTC
  To: linux-fsdevel, viro, torvalds; +Cc: peterz, willy, linux-kernel, Will Deacon

If d_alloc_parallel runs concurrently with __d_add, it is possible for
d_alloc_parallel to continuously retry whilst i_dir_seq has been
incremented to an odd value by __d_add:

CPU0:
__d_add
	n = start_dir_add(dir);
		cmpxchg(&dir->i_dir_seq, n, n + 1) == n

CPU1:
d_alloc_parallel
retry:
	seq = smp_load_acquire(&parent->d_inode->i_dir_seq) & ~1;
	hlist_bl_lock(b);
		bit_spin_lock(0, (unsigned long *)b); // Always succeeds

CPU0:
	__d_lookup_done(dentry)
		hlist_bl_lock
			bit_spin_lock(0, (unsigned long *)b); // Never succeeds

CPU1:
	if (unlikely(parent->d_inode->i_dir_seq != seq)) {
		hlist_bl_unlock(b);
		goto retry;
	}

Since the simple bit_spin_lock used to implement hlist_bl_lock does not
provide any fairness guarantees, CPU1 can starve CPU0 of the lock and
prevent it from ever reaching end_dir_add(dir). CPU1 therefore cannot
exit its retry loop, because the sequence number always has the bottom
bit set.

This patch resolves the livelock by not taking hlist_bl_lock in
d_alloc_parallel if the sequence counter is odd, since any subsequent
masked comparison with i_dir_seq will fail anyway.
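
As an aside for reviewers (this is an illustration only, not part of the
patch and not kernel code), the fixed retry pattern can be sketched as a
standalone userspace C program built with something like "cc -pthread";
the names dir_seq, bucket_lock, dir_add and lookup are invented for the
sketch. A lookup that observes an odd sequence now backs off without
taking the shared bit lock, so the concurrent dir add can always reach
end_dir_add() and make the sequence even again:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_uint dir_seq;			/* stands in for i_dir_seq */
static atomic_flag bucket_lock = ATOMIC_FLAG_INIT; /* stands in for the bucket bit lock */

static void lock_bucket(void)
{
	/* Unfair test-and-set spin, like bit_spin_lock(). */
	while (atomic_flag_test_and_set_explicit(&bucket_lock, memory_order_acquire))
		;
}

static void unlock_bucket(void)
{
	atomic_flag_clear_explicit(&bucket_lock, memory_order_release);
}

static void *dir_add(void *arg)		/* plays the role of __d_add() on CPU0 */
{
	for (int i = 0; i < 1000000; i++) {
		/* start_dir_add(): sequence becomes odd */
		atomic_fetch_add_explicit(&dir_seq, 1, memory_order_release);
		lock_bucket();		/* __d_lookup_done() needs the bucket lock */
		unlock_bucket();
		/* end_dir_add(): sequence becomes even again */
		atomic_fetch_add_explicit(&dir_seq, 1, memory_order_release);
	}
	return arg;
}

static void *lookup(void *arg)		/* plays the role of d_alloc_parallel() on CPU1 */
{
	for (int i = 0; i < 1000000; i++) {
		unsigned int seq;
retry:
		seq = atomic_load_explicit(&dir_seq, memory_order_acquire);
		if (seq & 1)		/* the fix: back off without taking the lock */
			goto retry;
		lock_bucket();
		if (atomic_load_explicit(&dir_seq, memory_order_relaxed) != seq) {
			unlock_bucket();
			goto retry;
		}
		/* ... lookup/insert under the bucket lock ... */
		unlock_bucket();
	}
	return arg;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, dir_add, NULL);
	pthread_create(&b, NULL, lookup, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	puts("no livelock");
	return 0;
}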

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Reported-by: Naresh Madhusudana <naresh.madhusudana@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Matthew Wilcox <mawilcox@microsoft.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 fs/dcache.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/fs/dcache.c b/fs/dcache.c
index 7c38f39958bc..b243deec298c 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -2474,7 +2474,7 @@ struct dentry *d_alloc_parallel(struct dentry *parent,
 
 retry:
 	rcu_read_lock();
-	seq = smp_load_acquire(&parent->d_inode->i_dir_seq) & ~1;
+	seq = smp_load_acquire(&parent->d_inode->i_dir_seq);
 	r_seq = read_seqbegin(&rename_lock);
 	dentry = __d_lookup_rcu(parent, name, &d_seq);
 	if (unlikely(dentry)) {
@@ -2495,6 +2495,12 @@ struct dentry *d_alloc_parallel(struct dentry *parent,
 		rcu_read_unlock();
 		goto retry;
 	}
+
+	if (unlikely(seq & 1)) {
+		rcu_read_unlock();
+		goto retry;
+	}
+
 	hlist_bl_lock(b);
 	if (unlikely(parent->d_inode->i_dir_seq != seq)) {
 		hlist_bl_unlock(b);
-- 
2.1.4


* [RESEND PATCHv2 2/2] fs: dcache: Use READ_ONCE when accessing i_dir_seq
From: Will Deacon @ 2018-03-06 19:05 UTC
  To: linux-fsdevel, viro, torvalds; +Cc: peterz, willy, linux-kernel, Will Deacon

i_dir_seq is subject to concurrent modification by a cmpxchg or
store-release operation, so ensure that the relaxed access in
d_alloc_parallel uses READ_ONCE.
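
As a rough illustration (this is not the kernel's actual READ_ONCE
definition, and it assumes GCC-style typeof), the marked access amounts
to a volatile read of a properly sized scalar, which stops the compiler
from refetching or tearing the load while another CPU updates it:

/* Simplified userspace sketch of a READ_ONCE-style marked access. */
#define READ_ONCE_SKETCH(x)	(*(const volatile typeof(x) *)&(x))

/* Hypothetical helper: sample a concurrently-updated sequence exactly once. */
static int dir_seq_changed(unsigned int *i_dir_seq, unsigned int seq)
{
	return READ_ONCE_SKETCH(*i_dir_seq) != seq;
}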

Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 fs/dcache.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/dcache.c b/fs/dcache.c
index b243deec298c..5716ab04e1db 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -2502,7 +2502,7 @@ struct dentry *d_alloc_parallel(struct dentry *parent,
 	}
 
 	hlist_bl_lock(b);
-	if (unlikely(parent->d_inode->i_dir_seq != seq)) {
+	if (unlikely(READ_ONCE(parent->d_inode->i_dir_seq) != seq)) {
 		hlist_bl_unlock(b);
 		rcu_read_unlock();
 		goto retry;
-- 
2.1.4

