From: Steve Dodd <>
Subject: Re: [linux-lvm] Scrub errors after extending LVM RAID1 mirror [full email]
Date: Sat, 2 Feb 2019 16:09:44 +0000	[thread overview]
Message-ID: <> (raw)
In-Reply-To: <>


I think I've finally nailed down a test case, if anyone is bored and wants
to play:

I can't file this upstream at the moment, as the Red Hat Bugzilla is down.
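For anyone wanting to try this without risking real data, a generic reproduction sketch along the lines described in the thread might look like the following. Everything here is an assumption on my part: loop-device sizes, VG/LV names, and the extent ranges are illustrative, not taken from the original test case (which was attached to the email and is not reproduced in this archive).

```
# Hypothetical repro sketch (requires root): raid1 LV on loop devices,
# wait for the initial sync, then extend with explicit PE ranges and scrub.
truncate -s 512M /tmp/pv0.img /tmp/pv1.img
LOOP0=$(losetup -f --show /tmp/pv0.img)
LOOP1=$(losetup -f --show /tmp/pv1.img)
vgcreate rvg "$LOOP0" "$LOOP1"
lvcreate --type raid1 -m1 -L 128M -n test rvg

# Wait for the initial resync to complete (Cpy%Sync reaches 100.00)
while [ "$(lvs --noheadings -o sync_percent rvg/test | tr -d ' ')" != "100.00" ]; do
    sleep 1
done

# Extend using explicit physical-extent ranges (PV:start-end syntax),
# mirroring the manual extent placement from the original report
lvextend -l +16 rvg/test "$LOOP0:48-63" "$LOOP1:48-63"

# Scrub and read back the mismatch count
lvchange --syncaction check rvg/test
lvs -o lv_name,raid_sync_action,raid_mismatch_count rvg/test
```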


On Sat, 2 Feb 2019 at 13:34, Steve Dodd <> wrote:

> Weirdly, I thought I had failed to reproduce this bug, but my auto-scrub
> job ran this morning (first Saturday of the month), and I got:
> 03:15:15: Starting scrub of rvg/test ...
> 03:15:15: ... scrub started ...
> 03:18:36: FAILED:      7926656 mismatches
> So I really have no idea what's going on there. I will wade through my
> bash history and see if I can work out what I did last week and what
> triggered this.
> S.
> On Wed, 23 Jan 2019 at 10:46, Steve Dodd <> wrote:
>> Sorry, user error: I sent the last email before I'd finished typing.
>> Trying again..
>> Hi everyone,
>> I am experiencing a mystery scrub failure after extending a particular LV
>> which is a raid1 type mirror. I am using Ubuntu 18.04, LVM
>> 2.02.176-4.1ubuntu3, Ubuntu kernel 4.15.0-29-generic. I mentioned this on
>> IRC, thought an email might reach more people and allow me to provide more
>> detail.
>> As far as I can tell, the LV was *not* created with --nosync:
>> # lvs rvg/backups
>>>   LV      VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>>>   backups rvg rwi-aor--- 96.64G                                    100.00
>> The only odd thing I tend to do is specify extents for the extension
>> manually, being a bit OCD about on-disk segment layouts. Having mined
>> .bash_history, it seems that last time I ran:
>> lvextend -l+2561 rvg/backups /dev/sdc3:20480-23041 /dev/sdb3:80097-82658
>> After that, a *lvchange --syncaction check rvg/backups* showed a huge
>> number for raid_mismatch_count (seemed roughly consistent with the newly
>> extended portion not being synced), but dumping the actual filesystem with
>> partclone from both legs of the mirror through md5sum showed no
>> inconsistencies; the contents are mostly borg repositories and for good
>> measure I verified the data in those using borg as well - no problems.
>> After a full resync, all is well again. This is the second time this has
>> happened to me on the same LV (I think; certainly the same VG).
>> Any clues? Are there any known bugs fixed recently that might not have
>> made it into Ubuntu 18.04? I am trying to reproduce this with a test LV
>> but can't. The only other thing I can think of that might be relevant is
>> that the volume was mounted (but quiescent) at the time.
>> Thanks,
>> Steve
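For reference, the scrub cycle described above (check, inspect the mismatch count, then force a resync) can be driven with commands like these. This is a minimal sketch assuming the LV name from the thread; `raid_sync_action` and `raid_mismatch_count` are standard lvs(8) report fields.

```
# Kick off a scrub: read both legs and count mismatches, without repairing
lvchange --syncaction check rvg/backups

# Once the check has finished, inspect the result
lvs -o lv_name,raid_sync_action,raid_mismatch_count rvg/backups

# If mismatches were reported, trigger a repair/resync pass
lvchange --syncaction repair rvg/backups
```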



Thread overview: 4 messages
2019-01-23 10:34 [linux-lvm] Scrub errors after extending LVM RAID1 mirror Steve Dodd
2019-01-23 10:46 ` [linux-lvm] Scrub errors after extending LVM RAID1 mirror [full email] Steve Dodd
2019-02-02 13:34   ` Steve Dodd
2019-02-02 16:09     ` Steve Dodd [this message]
