On 2019/11/10 2:47 PM, Timothy Pearson wrote:
>
>
> ----- Original Message -----
>> From: "Qu Wenruo"
>> To: "Timothy Pearson" , "linux-btrfs"
>> Sent: Saturday, November 9, 2019 9:38:21 PM
>> Subject: Re: Unusual crash -- data rolled back ~2 weeks?
>
>> On 2019/11/10 6:33 AM, Timothy Pearson wrote:
>>> We just experienced a very unusual crash on a Linux 5.3 file server
>>> using NFS to serve a BTRFS filesystem. NFS went into deadlock (D wait)
>>> with no apparent underlying disk subsystem problems, and when the
>>> server was hard rebooted to clear the D wait, the BTRFS filesystem
>>> remounted itself in the state it was in approximately two weeks
>>> earlier (!).
>>
>> This means that during those two weeks, btrfs never committed a
>> transaction.
>
> Is there any hope of getting the data from that interval back via
> btrfs-recover or a similar tool, or does the lack of commit mean the
> data was stored in RAM only and is therefore gone after the server
> reboot?

If it's a deadlock that prevented new transactions from being
committed, then no metadata was ever written back to disk, so there is
no way to recover the metadata.

You may find some data that was written, but without the metadata it is
of no use.

> If the latter, I'm somewhat surprised given the I/O load on the disk
> array in question, but it would also offer a clue as to why it
> eventually hard-locked the filesystem (presumably on memory
> exhaustion -- the server has something like 128GB of RAM, so it could
> go quite a while before hitting the physical RAM limits).
>
>>
>>> There was also significant corruption of certain files (e.g. LDAP MDB
>>> and MySQL InnoDB) noted -- we restored from backup for those files,
>>> but are concerned about the status of the entire filesystem at this
>>> point.
>>
>> Btrfs check is needed to ensure there is no metadata corruption.
>>
>> We also need the sysrq+w output to determine where the deadlock
>> occurred. Otherwise, it's really hard to find any clue from the
>> report.
>
> It would have been gathered if we'd known the filesystem was in this
> bad state. At the time, the priority was on restoring service, and we
> had assumed NFS had just wedged itself (again). It was only after the
> reboot and remount that the damage slowly came to light.
>
> Do the described symptoms (what little we know of them at this point)
> line up with the issues fixed by
> https://patchwork.kernel.org/patch/11141559/ ? Right now we're hoping
> that this particular issue was fixed by that series, but if not we
> might consider increasing backup frequency to nightly for this
> particular array and seeing if it happens again.

That fix is already in v5.3, so I don't think that's the case.

Thanks,
Qu

>
> Thanks!
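
For reference, a minimal sketch of capturing the sysrq+w output Qu
asked for, assuming a typical Linux setup; this has to be run while the
tasks are still stuck in D wait:

    # Make sure the magic sysrq interface is enabled
    echo 1 > /proc/sys/kernel/sysrq

    # Dump stack traces of all tasks in uninterruptible (D) state
    # to the kernel log
    echo w > /proc/sysrq-trigger

    # Collect the resulting traces for the report
    dmesg > sysrq-w.txt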
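
Similarly, a sketch of the read-only metadata check; /dev/sdX is a
placeholder for the actual array device, which must be unmounted:

    # Read-only check; makes no changes to the filesystem
    btrfs check --readonly /dev/sdX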
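
If any data from the lost interval did reach disk, the usual offline
salvage tool is btrfs restore, run against the unmounted device. This
is a sketch only; /dev/sdX, <bytenr>, and /mnt/recovery are
placeholders:

    # List candidate (possibly older) tree roots on the device
    btrfs-find-root /dev/sdX

    # Dry run against one of the roots found above
    btrfs restore -D -t <bytenr> /dev/sdX /mnt/recovery

    # Perform the actual restore once the dry run looks sane
    btrfs restore -t <bytenr> /dev/sdX /mnt/recovery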
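
To verify for yourself that a given fix landed in a release, the commit
can be checked against the tag in the kernel git tree; <commit> is a
placeholder for the hash of the patch in question:

    # Succeeds (exit 0) if <commit> is an ancestor of the v5.3 tag
    git merge-base --is-ancestor <commit> v5.3 && echo "in v5.3"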