From: Erik Jensen <erikjensen@rkjnsn.net>
To: Hugo Mills <hugo@carfax.org.uk>,
	Erik Jensen <erikjensen@rkjnsn.net>,
	linux-btrfs <linux-btrfs@vger.kernel.org>
Subject: Re: "bad tree block start" when trying to mount on ARM
Date: Wed, 26 Jun 2019 00:04:14 -0700
Message-ID: <CAMj6ewOHrJVdwfKrgXZxwfwE=eoTaB9MS57zha33yb1_iOLWiw@mail.gmail.com>
In-Reply-To: <CAMj6ewPKbRA_eT7JYA9ob9Qk9ZROoshOqaJE=4N_X9bPaskLUw@mail.gmail.com>

I'm still seeing this. Anything else I can try?
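
One thing I noticed while staring at the numbers (back-of-the-envelope
arithmetic only, so treat this as an assumption on my part rather than
a diagnosis): the "want" offsets in the dmesg output quoted below are
just past 16 TiB, which is the largest byte offset a 32-bit unsigned
long page index can address with 4 KiB pages:

    # "want" values from the dmesg output quoted below
    want = [17628726968320, 17628727001088]

    # On a 32-bit kernel the page cache indexes pages with an unsigned
    # long, so the largest addressable byte offset with 4 KiB pages is
    # 2**32 * 4096 bytes = 16 TiB.
    limit = (2 ** 32) * 4096  # 17592186044416

    for offset in want:
        print(offset / 2 ** 40, offset > limit)  # ~16.03 TiB, True

I have no idea whether that's actually related, but it would at least
be consistent with the filesystem mounting fine on x86_64 and failing
only on ARM.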

On Wed, May 22, 2019 at 9:02 AM Erik Jensen <erikjensen@rkjnsn.net> wrote:
>
> On Tue, May 21, 2019 at 2:18 AM Hugo Mills <hugo@carfax.org.uk> wrote:
> >
> > On Tue, May 21, 2019 at 01:34:42AM -0700, Erik Jensen wrote:
> > > I have a 5-drive btrfs filesystem (raid5 data, dup metadata). I can
> > > mount it fine on my x86_64 system, and running `btrfs check` there
> > > reveals no errors. However, I am not able to mount the filesystem on
> > > my 32-bit ARM board, which I am hoping to use for lower-power file
> > > serving. dmesg shows the following:
> > >
> > > [   83.066301] BTRFS info (device dm-3): disk space caching is enabled
> > > [   83.072817] BTRFS info (device dm-3): has skinny extents
> > > [   83.553973] BTRFS error (device dm-3): bad tree block start, want
> > > 17628726968320 have 396461950000496896
> > > [   83.554089] BTRFS error (device dm-3): bad tree block start, want
> > > 17628727001088 have 5606876608493751477
> > > [   83.601176] BTRFS error (device dm-3): bad tree block start, want
> > > 17628727001088 have 5606876608493751477
> > > [   83.610811] BTRFS error (device dm-3): failed to verify dev extents
> > > against chunks: -5
> > > [   83.639058] BTRFS error (device dm-3): open_ctree failed
> > >
> > > Is this expected to work? I did notice that there are gotchas on the
> > > wiki related to filesystems over 8TiB on 32-bit systems, but it
> > > sounded like they were mostly related to running the tools, as opposed
> > > to the filesystem driver itself. (Each of the five drives is
> > > 8 TB/7.28 TiB.)
> >
> >    Yes, it should work. We had problems with ARM several years ago,
> > because of its unusual behaviour with unaligned word accesses, but
> > those were in userspace, and, as far as I know, fixed now. Looking at
> > the want/have numbers, it doesn't look like an endianness problem or
> > an ARM-unaligned-access problem.
> >
> > > If this isn't expected, what should I do to help track down the issue?
> >
> >    Can you show us the output of "btrfs check --readonly", on both the
> > x86_64 machine and the ARM machine? It might give some more insight
> > into the nature of the breakage.
>
> On x86_64:
> Opening filesystem to check...
> Checking filesystem on /dev/mapper/storage1
> UUID: aafd9149-9cfe-4970-ae21-f1065c36ed63
> [1/7] checking root items
> [2/7] checking extents
> [3/7] checking free space cache
> [4/7] checking fs roots
> [5/7] checking only csums items (without verifying data)
> [6/7] checking root refs
> [7/7] checking quota groups skipped (not enabled on this FS)
> found 17647861833728 bytes used, no error found
> total csum bytes: 17211131512
> total tree bytes: 19333480448
> total fs tree bytes: 202801152
> total extent tree bytes: 183894016
> btree space waste bytes: 1474174626
> file data blocks allocated: 17628822319104
>  referenced 17625817141248
>
> On ARM:
> Opening filesystem to check...
> Checking filesystem on /dev/mapper/storage1
> UUID: aafd9149-9cfe-4970-ae21-f1065c36ed63
> [1/7] checking root items
> [2/7] checking extents
> [3/7] checking free space cache
> [4/7] checking fs roots
> [5/7] checking only csums items (without verifying data)
> [6/7] checking root refs
> [7/7] checking quota groups skipped (not enabled on this FS)
> found 17647861833728 bytes used, no error found
> total csum bytes: 17211131512
> total tree bytes: 19333480448
> total fs tree bytes: 202801152
> total extent tree bytes: 183894016
> btree space waste bytes: 1474174626
> file data blocks allocated: 17628822319104
>  referenced 17625817141248
>
> >    Possibly also "btrfs inspect dump-super" on both machines.
>
> On x86_64:
> superblock: bytenr=65536, device=/dev/dm-5
> ---------------------------------------------------------
> csum_type        0 (crc32c)
> csum_size        4
> csum            0x737fcf72 [match]
> bytenr            65536
> flags            0x1
>             ( WRITTEN )
> magic            _BHRfS_M [match]
> fsid            aafd9149-9cfe-4970-ae21-f1065c36ed63
> label            Storage
> generation        97532
> root            30687232
> sys_array_size        129
> chunk_root_generation    97526
> root_level        1
> chunk_root        20971520
> chunk_root_level    1
> log_root        0
> log_root_transid    0
> log_root_level        0
> total_bytes        40007732224000
> bytes_used        17647861833728
> sectorsize        4096
> nodesize        16384
> leafsize (deprecated)        16384
> stripesize        4096
> root_dir        6
> num_devices        5
> compat_flags        0x0
> compat_ro_flags        0x0
> incompat_flags        0x1e1
>             ( MIXED_BACKREF |
>               BIG_METADATA |
>               EXTENDED_IREF |
>               RAID56 |
>               SKINNY_METADATA )
> cache_generation    97532
> uuid_tree_generation    97532
> dev_item.uuid        39a9463d-65f5-499b-bca8-dae6b52eb729
> dev_item.fsid        aafd9149-9cfe-4970-ae21-f1065c36ed63 [match]
> dev_item.type        0
> dev_item.total_bytes    8001546444800
> dev_item.bytes_used    4436709605376
> dev_item.io_align    4096
> dev_item.io_width    4096
> dev_item.sector_size    4096
> dev_item.devid        5
> dev_item.dev_group    0
> dev_item.seek_speed    0
> dev_item.bandwidth    0
> dev_item.generation    0
>
> On ARM:
> superblock: bytenr=65536, device=/dev/dm-2
> ---------------------------------------------------------
> csum_type        0 (crc32c)
> csum_size        4
> csum            0x737fcf72 [match]
> bytenr            65536
> flags            0x1
>             ( WRITTEN )
> magic            _BHRfS_M [match]
> fsid            aafd9149-9cfe-4970-ae21-f1065c36ed63
> metadata_uuid        aafd9149-9cfe-4970-ae21-f1065c36ed63
> label            Storage
> generation        97532
> root            30687232
> sys_array_size        129
> chunk_root_generation    97526
> root_level        1
> chunk_root        20971520
> chunk_root_level    1
> log_root        0
> log_root_transid    0
> log_root_level        0
> total_bytes        40007732224000
> bytes_used        17647861833728
> sectorsize        4096
> nodesize        16384
> leafsize (deprecated)    16384
> stripesize        4096
> root_dir        6
> num_devices        5
> compat_flags        0x0
> compat_ro_flags        0x0
> incompat_flags        0x1e1
>             ( MIXED_BACKREF |
>               BIG_METADATA |
>               EXTENDED_IREF |
>               RAID56 |
>               SKINNY_METADATA )
> cache_generation    97532
> uuid_tree_generation    97532
> dev_item.uuid        39a9463d-65f5-499b-bca8-dae6b52eb729
> dev_item.fsid        aafd9149-9cfe-4970-ae21-f1065c36ed63 [match]
> dev_item.type        0
> dev_item.total_bytes    8001546444800
> dev_item.bytes_used    4436709605376
> dev_item.io_align    4096
> dev_item.io_width    4096
> dev_item.sector_size    4096
> dev_item.devid        5
> dev_item.dev_group    0
> dev_item.seek_speed    0
> dev_item.bandwidth    0
> dev_item.generation    0
>
> The only difference appears to be the extra metadata_uuid line on ARM.
> I assume that's because the ARM system is running btrfs-progs v4.20.2,
> versus v4.19 on the x86_64 system.
>
> > > Also potentially relevant: The x86_64 system is currently running
> > > 4.19.27, while the ARM system is running 5.1.3.
> >
> >    Shouldn't make a difference.
> >
> > > Finally, in case it's relevant, I recently finished re-encrypting
> > > the array, which involved doing a `btrfs replace` on each device.
> >
> >    If you can still mount on x86_64, then the FS is at least
> > reasonably complete and undamaged. I don't think this will make a
> > difference.  However, it's worth checking whether there are any
> > funnies about your encryption layer on ARM (I wouldn't expect any,
> > since it's recognising the decrypted device as btrfs, rather than
> > random crud).
>
> I took the sha256 hash of the first GiB of plaintext on each drive
> and got the same result on both systems, so I think things should be
> okay there.
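>
> For reference, the comparison amounts to something like this (a quick
> Python sketch of the idea; the device path is the one from the check
> output above):
>
>     import hashlib
>
>     def sha256_first_gib(path, chunk_size=1 << 20):
>         # Hash the first GiB of the decrypted device in 1 MiB chunks
>         # so the whole gigabyte never sits in memory at once.
>         h = hashlib.sha256()
>         remaining = 1 << 30  # 1 GiB
>         with open(path, "rb") as f:
>             while remaining > 0:
>                 data = f.read(min(chunk_size, remaining))
>                 if not data:
>                     break
>                 h.update(data)
>                 remaining -= len(data)
>         return h.hexdigest()
>
>     # Run on each machine and compare the two digests:
>     print(sha256_first_gib("/dev/mapper/storage1"))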
>
> >    Hugo.
> >
> > --
> > Hugo Mills             | Prisoner unknown: Return to Zenda.
> > hugo@... carfax.org.uk |
> > http://carfax.org.uk/  |
> > PGP: E2AB1DE4          |
