* RE: XFS Disk Repair failing with err 117 (Help Recovering Data)
@ 2020-12-23 18:06 nitsuga5124
  0 siblings, 0 replies; 5+ messages in thread
From: nitsuga5124 @ 2020-12-23 18:06 UTC (permalink / raw)
  To: linux-xfs

Hi, I'm continuing a series of emails I had with Eric Sandeen 
<sandeen@sandeen.net> on September 19, 2020, about trying to recover 
data from a corrupted file system where xfs_repair fails with `fatal 
error -- couldn't map inode 132, err = 117`
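Side note on the number itself: err = 117 is the Linux errno EUCLEAN ("Structure needs cleaning"), which is the value XFS uses internally for EFSCORRUPTED. A small sketch to decode it, assuming a Linux system with Python 3 available:

```shell
# Decode errno 117; on Linux this prints "Structure needs cleaning"
# (EUCLEAN), the errno XFS reports as EFSCORRUPTED.
python3 -c 'import os; print(os.strerror(117))'
```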

Last time the issue was judged to be a hardware problem; I have since 
bought an identical drive and used ddrescue to clone the disk over to 
the new drive, without the encryption layer, so the data is no longer 
mangled and any on-disk structures should be more readable.
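The clone step can be sketched like this; it is a dry run that only prints the commands (both would overwrite the destination), and the device names are hypothetical:

```shell
# Hypothetical devices: SRC is the failing disk, DST the new one.
SRC=/dev/sdc
DST=/dev/sdd
MAP=rescue.map

# Pass 1: copy everything easily readable, skip scraping bad areas (-n).
CMD1="ddrescue -f -n $SRC $DST $MAP"
# Pass 2: retry the remaining bad areas a few times (-r3),
# resuming from the map file written by pass 1.
CMD2="ddrescue -f -r3 $SRC $DST $MAP"

# Printed instead of executed, since both commands write to $DST.
echo "$CMD1"
echo "$CMD2"
```

The map file is what makes the two-pass approach safe to interrupt and resume.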
Running xfs_repair on the drive clone leads to the same error, but 
`photorec` shows that all the files are still readable, so there must 
be a way to recover them while keeping the directory structure and 
filenames they had, which `photorec` does not preserve. `testdisk` does 
preserve them, but it does not support XFS, so it's sadly not an option.

Since I have a drive clone now, I'm able to try riskier repair 
procedures, such as using dd to rewrite corrupted areas so that 
xfs_repair can finish; but I'm unable to find any information about 
locating inodes on the drive, so help with this would be appreciated.
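On locating things on disk: xfs_db can translate an inode number into a disk address without a full scan, and a daddr (a 512-byte sector relative to the filesystem device) converts to a dd seek offset by simple arithmetic. A sketch, using the corrupt xfs_cntbt block 0x1b8d1230 from the logs later in this thread; the xfs_db and dd invocations are shown only as comments, nothing here touches a disk:

```shell
# Translate inode 132 to a disk address (read-only; run it against the
# opened/unlocked device, not the raw encrypted disk):
#   xfs_db -r -c 'convert ino 132 daddr' /dev/mapper/storage
#
# A daddr is in 512-byte sectors, so the byte offset is daddr * 512:
DADDR=$((0x1b8d1230))     # corrupt xfs_cntbt block from dmesg
OFFSET=$((DADDR * 512))
echo "daddr $DADDR = byte offset $OFFSET"

# To overwrite that region (the dmesg showed "len 8", i.e. 8 sectors),
# one would seek by sector -- destructive, so NOT run here:
#   dd if=/dev/zero of=/dev/mapper/storage bs=512 seek=$DADDR count=8 conv=notrunc
```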
The end goal is a readable drive, so I can then back up all the files 
with their directory structure. I know all the data is there and 
readable, so broken hardware is no longer an excuse: the cloned data is 
on a brand-new drive with zero bad sectors and an all-OK S.M.A.R.T. report.

- Agustín (Austin in English)



* Re: XFS Disk Repair failing with err 117 (Help Recovering Data)
       [not found]               ` <CAFTiD3=6huVFn0gqpKGc2r4avTXWEMTuX2UjKCUTAmd1Gxi8OA@mail.gmail.com>
@ 2020-09-19 17:30                 ` Eric Sandeen
  0 siblings, 0 replies; 5+ messages in thread
From: Eric Sandeen @ 2020-09-19 17:30 UTC (permalink / raw)
  To: Agustín Casasampere Fernandez, xfs

(argh and somehow I lost the list cc: again)

To recap:

> Sep 16 21:47:44 ArchPC kernel: ata3.00: exception Emask 0x0 SAct 0x20 SErr 0x0 action 0x0
> Sep 16 21:47:44 ArchPC kernel: ata3.00: irq_stat 0x40000008
> Sep 16 21:47:44 ArchPC kernel: ata3.00: failed command: READ FPDMA QUEUED
> Sep 16 21:47:44 ArchPC kernel: ata3.00: cmd 60/80:28:80:40:9b/00:00:1b:00:00/40 tag 5 ncq dma 65536 in
>                                         res 43/40:80:b0:40:9b/00:00:1b:00:00/00 Emask 0x409 (media error) <F>
> Sep 16 21:47:44 ArchPC kernel: ata3.00: status: { DRDY SENSE ERR }
> Sep 16 21:47:44 ArchPC kernel: ata3.00: error: { UNC }
> Sep 16 21:47:44 ArchPC kernel: audit: type=1130 audit(1600285664.564:3248): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=spdynu comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
> Sep 16 21:47:44 ArchPC kernel: ata3.00: configured for UDMA/133
> Sep 16 21:47:44 ArchPC kernel: sd 2:0:0:0: [sdc] tag#5 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
> Sep 16 21:47:44 ArchPC kernel: sd 2:0:0:0: [sdc] tag#5 Sense Key : Medium Error [current] 
> Sep 16 21:47:44 ArchPC kernel: sd 2:0:0:0: [sdc] tag#5 Add. Sense: Unrecovered read error - auto reallocate failed
> Sep 16 21:47:44 ArchPC kernel: sd 2:0:0:0: [sdc] tag#5 CDB: Read(16) 88 00 00 00 00 00 1b 9b 40 80 00 00 00 80 00 00
> Sep 16 21:47:44 ArchPC kernel: blk_update_request: I/O error, dev sdc, sector 463159472 op 0x0:(READ) flags 0x0 phys_seg 10 prio class 0
> Sep 16 21:47:44 ArchPC kernel: ata3: EH complete

...

>>> From the dmesg, you have errors on sdc.  What is the physical volume behind dm-1?  Is this sdc, or is this a cdrom?
>> /dev/sdc is the only drive with XFS on my system, I don't have a cdrom.
> 
> Then I think your hardware is failing.


On 9/19/20 12:17 PM, Agustín Casasampere Fernandez wrote:
>> Then I think your hardware is failing.
> Is there any way to recover the data?

There are data recovery firms out there, but I can't help you with hardware issues.

> Like clearing the inode that can't be mapped so xfs_repair can at least try to do something, or some utility that would let me copy some of the files from /dev/mapper/storage over to a different hard drive?
> 
> I still think this is not a hardware issue: the drive is way too new, it's not heavily used, it has never been filled, and its location has been static.

Then perhaps it's a driver error, but the dmesg says "media error" and "Unrecovered read error".

This isn't something XFS can fix or recover from; the problem seems to lie
below the filesystem.

-Eric


* Re: XFS Disk Repair failing with err 117 (Help Recovering Data)
       [not found]       ` <CAFTiD3myam_0wHvBRuuYt9xs0Pj0H-QBz=5sptn2=5zgoPnZEQ@mail.gmail.com>
@ 2020-09-19 16:58         ` Eric Sandeen
       [not found]           ` <CAFTiD3m2Z-G3=iRMv4hJiXj1fg4fkxzU1z4fdp6GVxp7ckTKgg@mail.gmail.com>
  0 siblings, 1 reply; 5+ messages in thread
From: Eric Sandeen @ 2020-09-19 16:58 UTC (permalink / raw)
  To: Agustín Casasampere Fernandez, linux-xfs



On 9/19/20 10:53 AM, Agustín Casasampere Fernandez wrote:
> 
>> What kernel version
> 
> The computer is running Arch Linux, using the zen kernel:
> `uname -a` : Linux ArchPC 5.8.8-zen1-1-zen #1 ZEN SMP PREEMPT Wed, 09 Sep 2020 19:01:48 +0000 x86_64 GNU/Linux
> 
>> What xfsprogs version
> 
> Using the latest from arch linux core/xfsprogs with version 5.8.0-1
> 
>> What were the prior kernel messages
> 
> `sudo dmesg` new output after running xfs_repair:
> ```
> [189493.940996] audit: type=1106 audit(1600527058.983:1136): pid=101786 uid=0 auid=1000 ses=2 msg='op=PAM:session_close grantors=pam_limits,pam_unix,pam_permit acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/4 res=success'
> [189493.941178] audit: type=1104 audit(1600527058.983:1137): pid=101786 uid=0 auid=1000 ses=2 msg='op=PAM:setcred grantors=pam_faillock,pam_permit,pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/4 res=success'
> [189501.583038] audit: type=1101 audit(1600527066.626:1138): pid=101795 uid=1000 auid=1000 ses=2 msg='op=PAM:accounting grantors=pam_permit,pam_time acct="nitsuga" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/1 res=success'
> [189501.583166] audit: type=1110 audit(1600527066.626:1139): pid=101795 uid=0 auid=1000 ses=2 msg='op=PAM:setcred grantors=pam_faillock,pam_permit,pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/1 res=success'
> [189501.584727] audit: type=1105 audit(1600527066.628:1140): pid=101795 uid=0 auid=1000 ses=2 msg='op=PAM:session_open grantors=pam_limits,pam_unix,pam_permit acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/1 res=success'
> [189501.727237] audit: type=1106 audit(1600527066.770:1141): pid=101795 uid=0 auid=1000 ses=2 msg='op=PAM:session_close grantors=pam_limits,pam_unix,pam_permit acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/1 res=success'
> [189501.727288] audit: type=1104 audit(1600527066.770:1142): pid=101795 uid=0 auid=1000 ses=2 msg='op=PAM:setcred grantors=pam_faillock,pam_permit,pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/1 res=success'
> [189503.255383] audit: type=1101 audit(1600527068.298:1143): pid=101955 uid=1000 auid=1000 ses=2 msg='op=PAM:accounting grantors=pam_permit,pam_time acct="nitsuga" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/4 res=success'
> [189503.255567] audit: type=1110 audit(1600527068.299:1144): pid=101955 uid=0 auid=1000 ses=2 msg='op=PAM:setcred grantors=pam_faillock,pam_permit,pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/4 res=success'
> [189503.257034] audit: type=1105 audit(1600527068.300:1145): pid=101955 uid=0 auid=1000 ses=2 msg='op=PAM:session_open grantors=pam_limits,pam_unix,pam_permit acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/4 res=success'
> ```
> I have also attached the dmesg logs from the moment the drive became unreadable until just after `sudo badblocks -v /dev/sdc > badsectors.txt` finished running.
> Before badblocks was run, `sudo xfs_repair /dev/mapper/storage` was run too, followed by `sudo xfs_repair -L /dev/mapper/storage`, as it failed without -L, saying that -L was "required".


Sep 16 21:02:33 ArchPC kernel: XFS (dm-1): Corruption warning: Metadata has LSN (-1868060818:-756175860) ahead of current LSN (925218271:1124568). Please unmount and run xfs_repair (>= v4.3) to resolve.
Sep 16 21:02:33 ArchPC kernel: XFS (dm-1): Metadata CRC error detected at xfs_allocbt_read_verify+0x15/0xd0 [xfs], xfs_cntbt block 0x1b8d1230 
Sep 16 21:02:33 ArchPC kernel: XFS (dm-1): Unmount and run xfs_repair
Sep 16 21:02:33 ArchPC kernel: XFS (dm-1): First 128 bytes of corrupted metadata buffer:
Sep 16 21:02:33 ArchPC kernel: 00000000: 1c 96 2b 47 25 5e 7a 80 ed c2 7c 85 78 0e b2 35  ..+G%^z...|.x..5
Sep 16 21:02:33 ArchPC kernel: 00000010: ca 7a 5e 3b 69 ca 8b 79 90 a7 a7 6e d2 ed ac 0c  .z^;i..y...n....
Sep 16 21:02:33 ArchPC kernel: 00000020: 71 c3 35 e5 95 ff 5c 68 75 19 75 9e 75 7d bb d5  q.5...\hu.u.u}..
Sep 16 21:02:33 ArchPC kernel: 00000030: b1 b6 0a a4 e7 79 60 6c 51 b4 98 59 8a 09 19 72  .....y`lQ..Y...r
Sep 16 21:02:33 ArchPC kernel: 00000040: cf d6 c5 9c cc 6c 8d a9 b7 6a 88 0f 8d c2 ca b9  .....l...j......
Sep 16 21:02:33 ArchPC kernel: 00000050: 8f 2d 3f 5f 1c a0 8c 5e 3f a7 57 ea dd d8 83 6f  .-?_...^?.W....o
Sep 16 21:02:33 ArchPC kernel: 00000060: 60 51 ca 74 72 5c 9b 61 f6 f2 e3 1c 2e 77 79 e6  `Q.tr\.a.....wy.
Sep 16 21:02:33 ArchPC kernel: 00000070: e4 52 3a 90 cd 10 01 cd 48 b1 35 3f a9 33 8b 54  .R:.....H.5?.3.T
Sep 16 21:02:33 ArchPC kernel: XFS (dm-1): metadata I/O error in "xfs_btree_read_buf_block.constprop.0+0xbc/0x100 [xfs]" at daddr 0x1b8d1230 len 8 error 74
Sep 16 21:02:33 ArchPC kernel: XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 312 of file fs/xfs/xfs_trans_buf.c. Return address = 00000000ac9ecd5c
Sep 16 21:02:33 ArchPC kernel: XFS (dm-1): I/O Error Detected. Shutting down filesystem
Sep 16 21:02:33 ArchPC kernel: XFS (dm-1): Please unmount the filesystem and rectify the problem(s)

That looks like a real mess; it's pure garbage data.  Maybe encrypted data?

From the dmesg, you have errors on sdc.  What is the physical volume behind dm-1?  Is this sdc, or is this a cdrom?

Sep 16 21:47:44 ArchPC kernel: ata3.00: exception Emask 0x0 SAct 0x20 SErr 0x0 action 0x0
Sep 16 21:47:44 ArchPC kernel: ata3.00: irq_stat 0x40000008
Sep 16 21:47:44 ArchPC kernel: ata3.00: failed command: READ FPDMA QUEUED
Sep 16 21:47:44 ArchPC kernel: ata3.00: cmd 60/80:28:80:40:9b/00:00:1b:00:00/40 tag 5 ncq dma 65536 in
                                        res 43/40:80:b0:40:9b/00:00:1b:00:00/00 Emask 0x409 (media error) <F>
Sep 16 21:47:44 ArchPC kernel: ata3.00: status: { DRDY SENSE ERR }
Sep 16 21:47:44 ArchPC kernel: ata3.00: error: { UNC }
Sep 16 21:47:44 ArchPC kernel: audit: type=1130 audit(1600285664.564:3248): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=spdynu comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 16 21:47:44 ArchPC kernel: ata3.00: configured for UDMA/133
Sep 16 21:47:44 ArchPC kernel: sd 2:0:0:0: [sdc] tag#5 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
Sep 16 21:47:44 ArchPC kernel: sd 2:0:0:0: [sdc] tag#5 Sense Key : Medium Error [current] 
Sep 16 21:47:44 ArchPC kernel: sd 2:0:0:0: [sdc] tag#5 Add. Sense: Unrecovered read error - auto reallocate failed
Sep 16 21:47:44 ArchPC kernel: sd 2:0:0:0: [sdc] tag#5 CDB: Read(16) 88 00 00 00 00 00 1b 9b 40 80 00 00 00 80 00 00
Sep 16 21:47:44 ArchPC kernel: blk_update_request: I/O error, dev sdc, sector 463159472 op 0x0:(READ) flags 0x0 phys_seg 10 prio class 0
Sep 16 21:47:44 ArchPC kernel: ata3: EH complete




* Re: XFS Disk Repair failing with err 117 (Help Recovering Data)
  2020-09-19 13:40 nitsuga5124
@ 2020-09-19 14:21 ` Eric Sandeen
       [not found]   ` <CAFTiD3mSLZ6nBk+kZJX=jaOFA4JzfhJ9FW5c42z5UqoTpiXaKg@mail.gmail.com>
  0 siblings, 1 reply; 5+ messages in thread
From: Eric Sandeen @ 2020-09-19 14:21 UTC (permalink / raw)
  To: nitsuga5124, linux-xfs

On 9/19/20 8:40 AM, nitsuga5124 wrote:
> First of all, I want to say that I think this is not a hardware issue: the hard drive sounds fine, it hasn't shown any signs of slowness, and it's not very old; I got it on the 10th of January.
> 
> The entire disk is behind luks encryption:
> ```
> #lsblk
> sdc         8:32   0   3.6T  0 disk
> └─storage 254:1    0   3.6T  0 crypt
> ```
> 
> Recently I've been having issues where the drive suddenly becomes unreadable (it seems to be caused by kited), but every time it happened, running `xfs_repair` as `sudo xfs_repair /dev/mapper/storage -v` made the drive readable again. This time I had no such luck: the drive could not be repaired, and xfs_repair ended with this error:
> 
> ```
> Phase 6 - check inode connectivity...
>         - resetting contents of realtime bitmap and summary inodes
>         - traversing filesystem ...
>         - agno = 0
> bad hash table for directory inode 128 (no data entry): rebuilding
> rebuilding directory inode 128
> Invalid inode number 0x0
> Metadata corruption detected at 0x56135ef283e8, inode 0x84 data fork
> 
> fatal error -- couldn't map inode 132, err = 117

We need more info:

What kernel version
What xfsprogs version

What were the prior kernel messages
What were the prior xfs_repair messages

> ```
> 
> I ran `xfs_db` to see what was happening on that inode:
> `sudo xfs_db -x -c 'blockget inode 132' /dev/mapper/storage > xfs_db.log`

FWIW, "blockget inode 132" is not a valid command in xfs_db.

Perhaps you mean "blockget -i 132"?

In any case, "blockget" scans the entire disk, and apparently finds lots of
corruption along the way.

Did you unmount and open/unlock the LUKS volume before you ran repair and/or
xfs_db?
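Since the real recovery need here is mapping inode numbers back to pathnames, xfs_db's blockget/ncheck pair is the usual route. A minimal sketch with a hypothetical device name, printed rather than executed (blockget scans the entire device, which can take hours on a 4 TB drive):

```shell
DEV=/dev/mapper/storage   # the opened LUKS mapping, not the raw disk

# 'blockget -n' builds the block map and records pathnames for inodes;
# 'ncheck' then prints an inode-number -> pathname listing.
CMD="xfs_db -r -c 'blockget -n' -c ncheck $DEV"
echo "$CMD"
```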

> and these are the outputs:
> 
> - stderr:
> ```
> Metadata corruption detected at 0x557ecac7bbfd, xfs_bnobt block 0x1b275ab8/0x1000
> Metadata corruption detected at 0x557ecac7bbfd, xfs_bnobt block 0x1b2d32e0/0x1000

<snip>

> ```
> 
> - stdout:
> It's a 14 GB file, so I will not send it.
> 
> Help recovering the data (moving it to another drive) would be great.
> Ideally I would fix the error, move the data to another drive, shrink the partition, create a new EXT4 partition in the freed space, move data into it, and keep shrinking and expanding until everything is moved to EXT4 (or some other file system, or XFS again without whole-drive encryption). I think these issues are occurring because running XFS on top of LUKS is not a common setup, so it's probably untested and unstable.

It really should be fine.

> 
> http://xfs.9218.n7.nabble.com/Assert-in-xfs-repair-Phase-7-and-other-xfs-restore-problems-td33368.html

(email above is from 8 years ago)

> I have found a possible solution for this problem in this mailing list.
> The solution looks to be to clear the corrupted inode (132), but I'm unable to find a way to do this while the drive is unmounted. Is this possible? If so, how?
> 
> https://forums.unraid.net/topic/66749-xfs-file-system-corruption-safest-way-to-fix/
> 
> I have also found this post on the unraid forums about the same error (which is also where I found this email address); the solution there was to use a much older version of xfs_repair, but sadly that didn't work (probably because it's far too old?). Can you also confirm whether the issue is fixed in xfs_repair version 5.8.0?

It's not clear what the issue is at this point.

XFS doesn't generally see self-induced fs-wide corruption; it is almost always the
result of something gone wrong in the layers below.

While that may sound like deflection, it's my assessment after very many years of
looking into these sorts of issues.

-Eric


* XFS Disk Repair failing with err 117 (Help Recovering Data)
@ 2020-09-19 13:40 nitsuga5124
  2020-09-19 14:21 ` Eric Sandeen
  0 siblings, 1 reply; 5+ messages in thread
From: nitsuga5124 @ 2020-09-19 13:40 UTC (permalink / raw)
  To: linux-xfs

First of all, I want to say that I think this is not a hardware issue: 
the hard drive sounds fine, it hasn't shown any signs of slowness, and 
it's not very old; I got it on the 10th of January.

The entire disk is behind luks encryption:
```
#lsblk
sdc         8:32   0   3.6T  0 disk
└─storage 254:1    0   3.6T  0 crypt
```
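For completeness, the order of operations with a setup like this is: unlock LUKS first, then point the XFS tools at the mapping, never at /dev/sdc directly. A dry-run sketch with the commands printed rather than run (cryptsetup would prompt for a passphrase):

```shell
DISK=/dev/sdc
NAME=storage

# Unlock first; the XFS tools must see plaintext at /dev/mapper/$NAME.
OPEN="cryptsetup open $DISK $NAME"
# No-modify check: -n reports problems without writing anything.
CHECK="xfs_repair -n /dev/mapper/$NAME"

echo "$OPEN"
echo "$CHECK"
```

Running `xfs_repair -n` before any destructive attempt gives a full damage report at no risk.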

Recently I've been having issues where the drive suddenly becomes 
unreadable (it seems to be caused by kited), but every time it 
happened, running `xfs_repair` as `sudo xfs_repair /dev/mapper/storage 
-v` made the drive readable again. This time I had no such luck: the 
drive could not be repaired, and xfs_repair ended with this error:

```
Phase 6 - check inode connectivity...
         - resetting contents of realtime bitmap and summary inodes
         - traversing filesystem ...
         - agno = 0
bad hash table for directory inode 128 (no data entry): rebuilding
rebuilding directory inode 128
Invalid inode number 0x0
Metadata corruption detected at 0x56135ef283e8, inode 0x84 data fork

fatal error -- couldn't map inode 132, err = 117
```

I ran `xfs_db` to see what was happening on that inode:
`sudo xfs_db -x -c 'blockget inode 132' /dev/mapper/storage > xfs_db.log`
and these are the outputs:

- stderr:
```
Metadata corruption detected at 0x557ecac7bbfd, xfs_bnobt block 0x1b275ab8/0x1000
Metadata corruption detected at 0x557ecac7bbfd, xfs_bnobt block 0x1b2d32e0/0x1000
Metadata corruption detected at 0x557ecac7bbfd, xfs_bnobt block 0x1b338de0/0x1000
Metadata corruption detected at 0x557ecac7bbfd, xfs_bnobt block 0x1b38ead0/0x1000
Metadata corruption detected at 0x557ecac7bbfd, xfs_bnobt block 0x1b3e9750/0x1000
Metadata corruption detected at 0x557ecac7bbfd, xfs_bnobt block 0x1b4383b0/0x1000
Metadata corruption detected at 0x557ecac7bbfd, xfs_bnobt block 0x1b47c868/0x1000
Metadata corruption detected at 0x557ecac7bbfd, xfs_bnobt block 0x1b547748/0x1000
Metadata corruption detected at 0x557ecac7bbfd, xfs_bnobt block 0x1b619828/0x1000
Metadata corruption detected at 0x557ecac7bbfd, xfs_bnobt block 0x1b63b9d8/0x1000
Metadata corruption detected at 0x557ecac7bbfd, xfs_bnobt block 0x1b687f38/0x1000
Metadata CRC error detected at 0x557ecac7bc85, xfs_cntbt block 0x1b8d1230/0x1000
Metadata CRC error detected at 0x557ecac7bc85, xfs_cntbt block 0x1b8d1290/0x1000
Metadata corruption detected at 0x557ecacb369b, xfs_refcountbt block 0x1b227dd8/0x1000
Metadata corruption detected at 0x557ecacb0600, xfs_inobt block 0x1afb5e68/0x1000
Metadata corruption detected at 0x557ecacb0600, xfs_finobt block 0x1afe9ef0/0x1000
Metadata corruption detected at 0x557ecac7bbfd, xfs_bnobt block 0x8b928790/0x1000
Metadata corruption detected at 0x557ecacb369b, xfs_refcountbt block 0x8b928780/0x1000
Metadata corruption detected at 0x557ecacb0600, xfs_inobt block 0x8b928310/0x1000
Metadata corruption detected at 0x557ecacb0600, xfs_finobt block 0x8b928468/0x1000
Metadata corruption detected at 0x557ecac7bbfd, xfs_bnobt block 0x11660d9e8/0x1000
Metadata corruption detected at 0x557ecacb369b, xfs_refcountbt block 0x1166000a0/0x1000
Metadata corruption detected at 0x557ecacb0600, xfs_inobt block 0x116531ea8/0x1000
Metadata corruption detected at 0x557ecacb0600, xfs_finobt block 0x116531ed0/0x1000
Metadata corruption detected at 0x557ecac7bbfd, xfs_bnobt block 0x176489440/0x1000
Metadata corruption detected at 0x557ecacb369b, xfs_refcountbt block 0x17603e178/0x1000
Metadata corruption detected at 0x557ecacb0600, xfs_inobt block 0x1760313d0/0x1000
Metadata corruption detected at 0x557ecacb0600, xfs_finobt block 0x176031920/0x1000
```

- stdout:
It's a 14 GB file, so I will not send it.

Help recovering the data (moving it to another drive) would be great.
Ideally I would fix the error, move the data to another drive, shrink 
the partition, create a new EXT4 partition in the freed space, move 
data into it, and keep shrinking and expanding until everything is 
moved to EXT4 (or some other file system, or XFS again without 
whole-drive encryption). I think these issues are occurring because 
running XFS on top of LUKS is not a common setup, so it's probably 
untested and unstable.

http://xfs.9218.n7.nabble.com/Assert-in-xfs-repair-Phase-7-and-other-xfs-restore-problems-td33368.html 


I have found a possible solution for this problem in this mailing list.
The solution looks to be to clear the corrupted inode (132), but I'm 
unable to find a way to do this while the drive is unmounted. Is this 
possible? If so, how?

https://forums.unraid.net/topic/66749-xfs-file-system-corruption-safest-way-to-fix/

I have also found this post on the unraid forums about the same error 
(which is also where I found this email address); the solution there 
was to use a much older version of xfs_repair, but sadly that didn't 
work (probably because it's far too old?). Can you also confirm whether 
the issue is fixed in xfs_repair version 5.8.0?

