* [linux-lvm] Unsync-ed LVM Mirror
@ 2018-02-03  9:43 Liwei
  2018-02-05  3:21 ` Liwei
  2018-02-05  7:27 ` Eric Ren
  0 siblings, 2 replies; 7+ messages in thread
From: Liwei @ 2018-02-03  9:43 UTC (permalink / raw)
  To: linux-lvm

Hi list,
    I had an LV that I was converting from linear to mirrored (not
raid1) whose source device failed partway through the initial sync.

    I've since recovered the source device, but the mirror still
behaves as if some blocks are unreadable. I'm getting this in my logs,
and the FS is full of errors:

[  +1.613126] device-mapper: raid1: Unable to read primary mirror during recovery
[  +0.000278] device-mapper: raid1: Primary mirror (253:25) failed while out-of-sync: Reads may fail.
[  +0.085916] device-mapper: raid1: Mirror read failed.
[  +0.196562] device-mapper: raid1: Mirror read failed.
[  +0.000237] Buffer I/O error on dev dm-27, logical block 5371800560, async page read
[  +0.592135] device-mapper: raid1: Unable to read primary mirror during recovery
[  +0.082882] device-mapper: raid1: Unable to read primary mirror during recovery
[  +0.246945] device-mapper: raid1: Unable to read primary mirror during recovery
[  +0.107374] device-mapper: raid1: Unable to read primary mirror during recovery
[  +0.083344] device-mapper: raid1: Unable to read primary mirror during recovery
[  +0.114949] device-mapper: raid1: Unable to read primary mirror during recovery
[  +0.085056] device-mapper: raid1: Unable to read primary mirror during recovery
[  +0.203929] device-mapper: raid1: Unable to read primary mirror during recovery
[  +0.157953] device-mapper: raid1: Unable to read primary mirror during recovery
[  +3.065247] recovery_complete: 23 callbacks suppressed
[  +0.000001] device-mapper: raid1: Unable to read primary mirror during recovery
[  +0.128064] device-mapper: raid1: Unable to read primary mirror during recovery
[  +0.103100] device-mapper: raid1: Unable to read primary mirror during recovery
[  +0.107827] device-mapper: raid1: Unable to read primary mirror during recovery
[  +0.140871] device-mapper: raid1: Unable to read primary mirror during recovery
[  +0.132844] device-mapper: raid1: Unable to read primary mirror during recovery
[  +0.124698] device-mapper: raid1: Unable to read primary mirror during recovery
[  +0.138502] device-mapper: raid1: Unable to read primary mirror during recovery
[  +0.117827] device-mapper: raid1: Unable to read primary mirror during recovery
[  +0.125705] device-mapper: raid1: Unable to read primary mirror during recovery
[Feb 3 17:09] device-mapper: raid1: Mirror read failed.
[  +0.167553] device-mapper: raid1: Mirror read failed.
[  +0.000268] Buffer I/O error on dev dm-27, logical block 5367765816, async page read
[  +0.135138] device-mapper: raid1: Mirror read failed.
[  +0.000238] Buffer I/O error on dev dm-27, logical block 5367765816, async page read
[  +0.000365] device-mapper: raid1: Mirror read failed.
[  +0.000315] device-mapper: raid1: Mirror read failed.
[  +0.000213] Buffer I/O error on dev dm-27, logical block 5367896888, async page read
[  +0.000276] device-mapper: raid1: Mirror read failed.
[  +0.000199] Buffer I/O error on dev dm-27, logical block 5367765816, async page read

    However, if I take down the destination device and reactivate the
LV with --activationmode partial, I can read my data and everything
checks out.
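    (For anyone following along, this is roughly what I mean by
partial activation; the VG/LV names below are placeholders, not my
real ones:)

```shell
# Deactivate the LV, then bring it up in partial mode so the missing
# mirror leg is tolerated. "vg0/data" is a placeholder name.
lvchange -an vg0/data
lvchange -ay --activationmode partial vg0/data
```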

    My theory (and what I observed) is that LVM continued the initial
sync even after the source drive stopped responding, and has now
marked the blocks it 'synced' during the outage as dead. How can I
make LVM retry those blocks?
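    The only knob I've found so far is --resync, which, if I
understand the man page correctly, throws away the sync state and
redoes the copy from the primary leg (again with a placeholder LV
name):

```shell
# Force the mirror to re-run its sync from the primary leg.
# Caveat: for the old "mirror" segment type this appears to restart
# the entire copy, not just the regions that failed.
lvchange --resync vg0/data
```

    If that restarts the whole multi-month copy, it's exactly what
I'm trying to avoid.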

    In fact, I no longer trust the mirror. Is there a way to scrub it
once the initial sync is done? I read about --syncaction check, but it
seems that only counts the inconsistencies. Can I have LVM re-mirror
the inconsistent blocks from the source to the destination device? I
trust the source device because a btrfs scrub on it reported all
checksums valid.
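    For reference, what I've gathered about scrubbing so far (and
please correct me if this is wrong): --syncaction seems to apply only
to the raid1 segment type, not the old "mirror" type I'm using, and
repair rewrites mismatched regions without saying which leg wins:

```shell
# Scrubbing commands for the raid1 segtype (placeholder LV name).
lvchange --syncaction check vg0/data    # only counts mismatches
lvs -o name,raid_sync_action,raid_mismatch_count vg0/data
lvchange --syncaction repair vg0/data   # rewrites mismatched regions
```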

    It took months for the mirror sync to get this far (actually, why
does it take months to mirror 20 TB?), and I don't want to start all
over again.

Warm regards,
Liwei

