From: Qu Wenruo <quwenruo.btrfs@gmx.com>
To: Christoph Hellwig <hch@infradead.org>
Cc: David Ryskalczyk <david@rysk.us>, linux-btrfs@vger.kernel.org
Subject: Re: Kernel panic due to stack recursion when copying data from a damaged filesystem
Date: Thu, 23 Mar 2023 15:57:57 +0800	[thread overview]
Message-ID: <9628f5a7-2752-4f74-70e1-f8efd345bdc7@gmx.com> (raw)
In-Reply-To: <ZBwC7n9crUsk4Pfi@infradead.org>



On 2023/3/23 15:42, Christoph Hellwig wrote:
> On Thu, Mar 23, 2023 at 02:41:53PM +0800, Qu Wenruo wrote:
>>> [  252.806147] BTRFS error (device sdh): level verify failed on logical 27258408435712 mirror 2 wanted 0 found 1
>>> [  252.848313] BTRFS error (device sdh): level verify failed on logical 27258408435712 mirror 3 wanted 0 found 1
>>> [  252.898989] BTRFS error (device sdh): level verify failed on logical 27258408435712 mirror 4 wanted 0 found 1
>>> ** Above four lines repeated an additional 24 times
>>
>> It's the data repair path, and the involved bad tree block seems to be the
>> csum tree block.
>>
>> CC Christoph, as he did quite some updates on the newer read repair path.
> 
> Well, the above is the metadata verification, which I haven't really
> touched (yet).
> 
> What is interesting above is that it tries to recover from 4 mirrors,
> which seems very unusual.  I wonder if something went wrong
> in btrfs_read_extent_buffer or btrfs_num_copies.

It's metadata, but that's not the cause of the stack recursion.

If you keep only the frames we can identify with certainty, the stack looks like this:

stack_trace_save (kernel/stacktrace.c:123)
kasan_save_stack (mm/kasan/common.c:46)
__kasan_record_aux_stack (mm/kasan/generic.c:493)
insert_work (./include/linux/instrumented.h:72 
./include/asm-generic/bitops/instrumented-non-atomic.h:141 
kernel/workqueue.c:635 kernel/workqueue.c:642 kernel/workqueue.c:1361)
__queue_work (kernel/workqueue.c:1520)
mod_delayed_work_on (./arch/x86/include/asm/irqflags.h:137 
kernel/workqueue.c:1740)
kblockd_mod_delayed_work_on (block/blk-core.c:1039)
blk_mq_sched_insert_requests (./include/linux/rcupdate.h:771 
./include/linux/percpu-refcount.h:330 
./include/linux/percpu-refcount.h:351 block/blk-mq-sched.c:494)
blk_mq_flush_plug_list (block/blk-mq.c:2808)
__blk_flush_plug (block/blk-core.c:1152)
io_schedule (kernel/sched/core.c:8871)
folio_wait_bit_common (mm/filemap.c:1286)
read_extent_buffer_pages (./include/linux/pagemap.h:1024 
./include/linux/pagemap.h:1036 fs/btrfs/extent_io.c:5029) btrfs
btrfs_read_extent_buffer (fs/btrfs/disk-io.c:303) btrfs
read_tree_block (fs/btrfs/disk-io.c:1025) btrfs
read_block_for_search (fs/btrfs/ctree.c:1620) btrfs
btrfs_search_slot (fs/btrfs/ctree.c:2225) btrfs
btrfs_lookup_csum (fs/btrfs/file-item.c:221) btrfs
btrfs_lookup_bio_sums (fs/btrfs/file-item.c:315 
fs/btrfs/file-item.c:484) btrfs
btrfs_submit_data_read_bio (fs/btrfs/inode.c:2787) btrfs
btrfs_repair_one_sector (fs/btrfs/extent_io.c:775) btrfs
end_compressed_bio_read (fs/btrfs/compression.c:197) btrfs
btrfs_repair_one_sector (fs/btrfs/extent_io.c:775) btrfs
end_compressed_bio_read (fs/btrfs/compression.c:197) btrfs
btrfs_repair_one_sector (fs/btrfs/extent_io.c:775) btrfs
end_compressed_bio_read (fs/btrfs/compression.c:197) btrfs
btrfs_repair_one_sector (fs/btrfs/extent_io.c:775) btrfs
end_compressed_bio_read (fs/btrfs/compression.c:197) btrfs
btrfs_repair_one_sector (fs/btrfs/extent_io.c:775) btrfs
...

Thus it's still the data repair path causing the stack recursion; the 
metadata read is just the unfortunate trigger.
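
For illustration only, here is a tiny userspace sketch (plain C, not btrfs 
code; the function names and the retry count are made up) of the call 
pattern the trace shows: each failed repair attempt completes synchronously, 
and the completion handler immediately starts the next attempt, so the stack 
grows by one repair/end_io pair per retried mirror instead of unwinding:

#include <stdio.h>

#define FAILED_RETRIES 30   /* made-up number of retries that keep failing */

static void repair_one_sector(int depth);

/* Plays the role of end_compressed_bio_read(): the re-read failed again,
 * so another repair attempt is started right away, on the same stack. */
static void end_bio_read(int depth)
{
    repair_one_sector(depth + 1);
}

/* Plays the role of btrfs_repair_one_sector(): it resubmits the read, and
 * because the submission fails synchronously here, the completion handler
 * runs before this function ever returns. */
static void repair_one_sector(int depth)
{
    printf("repair attempt at stack depth %d\n", depth);
    if (depth < FAILED_RETRIES)
        end_bio_read(depth);   /* synchronous completion -> recursion */
}

int main(void)
{
    repair_one_sector(0);      /* first read failure starts the chain */
    return 0;
}

In the real code the chain only unwinds once a mirror finally reads back 
good data or all copies are exhausted, so with enough bad copies the depth 
can exceed the kernel stack, which is consistent with the panic reported here.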

Thanks,
Qu


Thread overview: 9+ messages
2023-03-23  1:20 Kernel panic due to stack recursion when copying data from a damaged filesystem David Ryskalczyk
2023-03-23  6:41 ` Qu Wenruo
2023-03-23  7:42   ` Christoph Hellwig
2023-03-23  7:57     ` Qu Wenruo [this message]
2023-03-23  8:09       ` Christoph Hellwig
2023-03-23 13:17         ` David Ryskalczyk
2023-03-24  1:01           ` Christoph Hellwig
2023-03-24 11:44             ` David Ryskalczyk
2023-03-24 23:06               ` Christoph Hellwig
