* [PATCH] mbcache: Avoid nesting of cache->c_list_lock under bit locks
@ 2022-09-08  9:10 Jan Kara
  2022-09-30  3:19 ` Theodore Ts'o
  0 siblings, 1 reply; 2+ messages in thread
From: Jan Kara @ 2022-09-08  9:10 UTC (permalink / raw)
  To: Ted Tso
  Cc: linux-ext4, Mike Galbraith, Sebastian Andrzej Siewior, LKML, Jan Kara

Commit 307af6c87937 ("mbcache: automatically delete entries from cache
on freeing") started nesting cache->c_list_lock under the bit locks
protecting hash buckets of the mbcache hash table in
mb_cache_entry_create(). This causes problems for real-time kernels
because on those kernels spinlocks are sleeping locks while bit locks
stay atomic, so a sleeping lock must not be acquired inside a bit-lock
critical section. Luckily the nesting is easy to avoid by holding an
entry reference until the entry is added to the LRU list; this makes
sure we cannot race with entry deletion.
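
To make the resulting ordering concrete, here is a minimal sketch
(illustrative only, not the exact fs/mbcache.c code; it reuses the lock
and helper names from the diff below):

	/* Before (307af6c87937): sleeping lock nested under a bit lock */
	hlist_bl_lock(head);
	hlist_bl_add_head(&entry->e_hash_list, head);
	spin_lock(&cache->c_list_lock);		/* sleeping lock on RT */
	list_add_tail(&entry->e_list, &cache->c_list);
	spin_unlock(&cache->c_list_lock);
	hlist_bl_unlock(head);			/* bit lock stays atomic on RT */

	/* After (this patch): take an extra reference instead of nesting */
	atomic_set(&entry->e_refcnt, 2);	/* hash ref + temporary setup ref */
	hlist_bl_lock(head);
	hlist_bl_add_head(&entry->e_hash_list, head);
	hlist_bl_unlock(head);
	/* No bit lock held any more, taking the sleeping lock is fine. */
	spin_lock(&cache->c_list_lock);
	list_add_tail(&entry->e_list, &cache->c_list);
	spin_unlock(&cache->c_list_lock);
	mb_cache_entry_put(cache, entry);	/* drop the setup reference */

The temporary second reference keeps mb_cache_entry_delete_or_get()
from dropping the last reference and freeing the entry in the window
where it is already visible in the hash table but not yet linked into
the LRU list.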

Fixes: 307af6c87937 ("mbcache: automatically delete entries from cache on freeing")
Reported-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/mbcache.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/fs/mbcache.c b/fs/mbcache.c
index 47ccfcbe0a22..e272ad738faf 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -90,8 +90,14 @@ int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
 		return -ENOMEM;
 
 	INIT_LIST_HEAD(&entry->e_list);
-	/* Initial hash reference */
-	atomic_set(&entry->e_refcnt, 1);
+	/*
+	 * We create the entry with two references. One reference is kept by
+	 * the hash table, the other reference protects us from
+	 * mb_cache_entry_delete_or_get() until the entry is fully set up.
+	 * This avoids nesting cache->c_list_lock into hash table bit locks,
+	 * which is problematic for RT.
+	 */
+	atomic_set(&entry->e_refcnt, 2);
 	entry->e_key = key;
 	entry->e_value = value;
 	entry->e_reusable = reusable;
@@ -106,15 +112,12 @@ int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
 		}
 	}
 	hlist_bl_add_head(&entry->e_hash_list, head);
-	/*
-	 * Add entry to LRU list before it can be found by
-	 * mb_cache_entry_delete() to avoid races
-	 */
+	hlist_bl_unlock(head);
 	spin_lock(&cache->c_list_lock);
 	list_add_tail(&entry->e_list, &cache->c_list);
 	cache->c_entry_count++;
 	spin_unlock(&cache->c_list_lock);
-	hlist_bl_unlock(head);
+	mb_cache_entry_put(cache, entry);
 
 	return 0;
 }
-- 
2.35.3



* Re: [PATCH] mbcache: Avoid nesting of cache->c_list_lock under bit locks
  2022-09-08  9:10 [PATCH] mbcache: Avoid nesting of cache->c_list_lock under bit locks Jan Kara
@ 2022-09-30  3:19 ` Theodore Ts'o
  0 siblings, 0 replies; 2+ messages in thread
From: Theodore Ts'o @ 2022-09-30  3:19 UTC (permalink / raw)
  To: Jan Kara; +Cc: Theodore Ts'o, linux-ext4, bigeasy, LKML, efault

On Thu, 8 Sep 2022 11:10:32 +0200, Jan Kara wrote:
> Commit 307af6c87937 ("mbcache: automatically delete entries from cache
> on freeing") started nesting cache->c_list_lock under the bit locks
> protecting hash buckets of the mbcache hash table in
> mb_cache_entry_create(). This causes problems for real-time kernels
> because on those kernels spinlocks are sleeping locks while bit locks
> stay atomic, so a sleeping lock must not be acquired inside a bit-lock
> critical section. Luckily the nesting is easy to avoid by holding an
> entry reference until the entry is added to the LRU list; this makes
> sure we cannot race with entry deletion.
> 
> [...]

Applied, thanks!

[1/1] mbcache: Avoid nesting of cache->c_list_lock under bit locks
      commit: 9cbf99ae41e3a051cc9ec738f2c436ec1725e0e8

Best regards,
-- 
Theodore Ts'o <tytso@mit.edu>
