From: David Sterba <dsterba@suse.cz>
To: Nikolay Borisov <nborisov@suse.com>
Cc: dsterba@suse.cz, Filipe Manana <fdmanana@kernel.org>,
	linux-btrfs@vger.kernel.org, dsterba@suse.com
Subject: Re: [PATCH 0/3] btrfs: fix a couple sleeps while holding a spinlock
Date: Fri, 15 Jul 2022 18:52:33 +0200
Message-ID: <20220715165232.GC13489@twin.jikos.cz>
In-Reply-To: <383472a6-75e2-8c70-2c1e-a27234e7b761@suse.com>

On Fri, Jul 15, 2022 at 03:47:34PM +0300, Nikolay Borisov wrote:
> 
> 
> > On 15.07.22 at 15:01, David Sterba wrote:
> > On Wed, Jul 13, 2022 at 02:59:55PM +0100, Filipe Manana wrote:
> >> On Wed, Jul 06, 2022 at 10:09:44AM +0100, fdmanana@kernel.org wrote:
> >>> From: Filipe Manana <fdmanana@suse.com>
> >>>
> >>> After the recent conversions of a couple of radix trees to XArrays, we
> >>> can now end up attempting to sleep while holding a spinlock. This happens
> >>> because if xa_insert() allocates memory (using GFP_NOFS) it may need to
> >>> sleep (more likely to happen when under memory pressure). In the old
> >>> code this did not happen because we had radix_tree_preload() called
> >>> before taking the spinlocks.
> >>>
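
[Editor's note: to make the problem concrete, here is a minimal sketch of
the two patterns. This is an illustration, not code from the thread or from
the actual patches; the field names delayed_nodes/delayed_nodes_tree and
the surrounding variables are assumptions based on the description above.]

  /* Sketch only, not actual btrfs code.  After the xarray conversion,
   * xa_insert() may allocate internal nodes with the gfp flags it is
   * given; GFP_NOFS still permits the allocator to sleep, so calling
   * it under a spinlock can sleep while atomic. */
  spin_lock(&root->inode_lock);           /* no sleeping allowed from here */
  ret = xa_insert(&root->delayed_nodes, ino, node, GFP_NOFS); /* may sleep */
  spin_unlock(&root->inode_lock);

  /* The old radix tree code avoided this by preallocating outside the
   * lock, so radix_tree_insert() drew from the per-CPU preload instead
   * of allocating under the spinlock. */
  ret = radix_tree_preload(GFP_NOFS);     /* may sleep, but lock not held */
  if (ret)
          return ret;
  spin_lock(&root->inode_lock);
  ret = radix_tree_insert(&root->delayed_nodes_tree, ino, node);
  spin_unlock(&root->inode_lock);
  radix_tree_preload_end();
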
> >>> Filipe Manana (3):
> >>>    btrfs: fix sleep while under a spinlock when allocating delayed inode
> >>>    btrfs: fix sleep while under a spinlock when inserting a fs root
> >>>    btrfs: free qgroup metadata without holding the fs roots lock
> >>
> >> David, are you going to pick these up or revert the patches that did the
> >> radix tree to xarray conversion?
> > 
> > Switching a spinlock to a mutex seems quite heavyweight, and reverting
> > the xarray conversion is intrusive, so it's a choice between two bad
> > options, made worse by the fact that we didn't identify the problems
> > earlier. Doing such changes in rc6 is quite unpleasant; I'll explore
> > the options.
> 
> 
> I'm actually in favor of using the mutexes. For example, looking at the
> users of root->inode_lock:
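
[Editor's note: for reference, a hypothetical sketch of the mutex option
under discussion. This is not a posted patch; it assumes root->inode_lock
were changed from a spinlock_t to a struct mutex, and reuses the
illustrative names from the sketch above.]

  /* Hypothetical: under a mutex, sleeping inside xa_insert() is legal,
   * at the cost of making the lock heavier for every user, including
   * those that only need a short, non-sleeping critical section --
   * the trade-off David calls heavyweight above. */
  mutex_lock(&root->inode_lock);
  ret = xa_insert(&root->delayed_nodes, ino, node, GFP_NOFS);
  mutex_unlock(&root->inode_lock);
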

We want to do the xarray conversion eventually, but it would be better
done over a whole development cycle, so I'm going to send the revert.

Thread overview: 11+ messages
2022-07-06  9:09 [PATCH 0/3] btrfs: fix a couple sleeps while holding a spinlock fdmanana
2022-07-06  9:09 ` [PATCH 1/3] btrfs: fix sleep while under a spinlock when allocating delayed inode fdmanana
2022-07-06  9:09 ` [PATCH 2/3] btrfs: fix sleep while under a spinlock when inserting a fs root fdmanana
2022-07-06  9:09 ` [PATCH 3/3] btrfs: free qgroup metadata without holding the fs roots lock fdmanana
2022-07-07 16:31 ` [PATCH 0/3] btrfs: fix a couple sleeps while holding a spinlock David Sterba
2022-07-08  0:24   ` Matthew Wilcox
2022-07-12 11:45 ` Nikolay Borisov
2022-07-13 13:59 ` Filipe Manana
2022-07-15 12:01   ` David Sterba
2022-07-15 12:47     ` Nikolay Borisov
2022-07-15 16:52       ` David Sterba [this message]

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save this message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20220715165232.GC13489@twin.jikos.cz \
    --to=dsterba@suse.cz \
    --cc=dsterba@suse.com \
    --cc=fdmanana@kernel.org \
    --cc=linux-btrfs@vger.kernel.org \
    --cc=nborisov@suse.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link for this message.

Be sure your reply has a Subject: header at the top and a blank line
before the message body.