From: Qu Wenruo <quwenruo.btrfs@gmx.com>
To: jn <jn@forever.cz>, linux-btrfs@vger.kernel.org
Subject: Re: slow single -> raid1 conversion (heavy write to original LVM volume)
Date: Wed, 15 Jan 2020 11:26:55 +0800
Message-ID: <424ffd2a-2a9b-1cef-c3ac-ad2814f037a1@gmx.com>
In-Reply-To: <107f8e94-78bc-f891-0e1b-2db7903e8bde@forever.cz>


On 2020/1/14 9:16 PM, jn wrote:
> Hello,
> 
> I am experiencing a very slow balance conversion from single-disk BTRFS
> to raid1 (a new disk was added):
> 
> What I have done:
> 
> I added a new disk to a nearly full (circa 85%) BTRFS filesystem on an
> LVM volume, with the intention of converting it to raid1:
> 
> btrfs balance start -dconvert=raid1 -mconvert=raid1 /data/
> 
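For context, a minimal sketch of the usual single -> raid1 conversion
sequence, using the device and mount point from this report:

  # add the new device to the existing filesystem
  btrfs device add /dev/sdb3 /data
  # rewrite existing data and metadata chunks into the raid1 profile
  btrfs balance start -dconvert=raid1 -mconvert=raid1 /data
  # check progress from another shell
  btrfs balance status /data
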
>> Jan 10 08:14:04 sopa kernel: [155893.485617] BTRFS info (device dm-0):
>> disk added /dev/sdb3
>> Jan 10 08:15:06 sopa kernel: [155955.958561] BTRFS info (device dm-0):
>> relocating block group 2078923554816 flags data
>> Jan 10 08:15:07 sopa kernel: [155956.991293] BTRFS info (device dm-0):
>> relocating block group 2077849812992 flags data
>> Jan 10 08:15:10 sopa kernel: [155960.357846] BTRFS info (device dm-0):
>> relocating block group 2076776071168 flags data
>> Jan 10 08:15:13 sopa kernel: [155962.772534] BTRFS info (device dm-0):
>> relocating block group 2075702329344 flags data
>> Jan 10 08:15:14 sopa kernel: [155964.195237] BTRFS info (device dm-0):
>> relocating block group 2074628587520 flags data
>> Jan 10 08:15:45 sopa kernel: [155994.546695] BTRFS info (device dm-0):
>> relocating block group 2062817427456 flags data
>> Jan 10 08:15:52 sopa kernel: [156001.952247] BTRFS info (device dm-0):
>> relocating block group 2059596201984 flags data
>> Jan 10 08:15:58 sopa kernel: [156007.787071] BTRFS info (device dm-0):
>> relocating block group 2057448718336 flags data
>> Jan 10 08:16:00 sopa kernel: [156010.094565] BTRFS info (device dm-0):
>> relocating block group 2056374976512 flags data
>> Jan 10 08:16:06 sopa kernel: [156015.585343] BTRFS info (device dm-0):
>> relocating block group 2054227492864 flags data
>> Jan 10 08:16:12 sopa kernel: [156022.305629] BTRFS info (device dm-0):
>> relocating block group 2051006267392 flags data
>> Jan 10 08:16:23 sopa kernel: [156033.373144] BTRFS info (device dm-0):
>> found 75 extents
>> Jan 10 08:16:29 sopa kernel: [156038.666672] BTRFS info (device dm-0):
>> found 75 extents
>> Jan 10 08:16:36 sopa kernel: [156045.909270] BTRFS info (device dm-0):
>> found 75 extents
>> Jan 10 08:16:42 sopa kernel: [156052.292789] BTRFS info (device dm-0):
>> found 75 extents
>> Jan 10 08:16:46 sopa kernel: [156055.643452] BTRFS info (device dm-0):
>> found 75 extents
>> Jan 10 08:16:54 sopa kernel: [156063.608344] BTRFS info (device dm-0):
>> found 75 extents

This repeating output looks like a bug in recent kernels.

And due to another bug in balance canceling, we can't cancel it.

For the unable-to-cancel part, there are patches to address it; they
should land upstream in the v5.6 release.


For the repeating-line bug, I am still investigating. The ETA would be
v5.7 if we're lucky.
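
A rough way to quantify the looping from the logs (a sketch, not part of
the original report): count consecutive repeats of identical "found N
extents" messages; a healthy balance prints each stage once per block
group, so a long run of the same line points at the relocation stage
restarting.

  # count adjacent duplicate "found N extents" kernel messages
  dmesg | grep -o 'found [0-9]* extents' | uniq -c | sort -rn | head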

Thanks,
Qu

> after 6 hours of processing with 0% progress reported by balance
> status, I decided to cancel it, free up more space, and rerun the
> balance with some filters (see the sketch after the command below):
> 
> btrfs balance cancel /data
> 
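For the filtered rerun mentioned above, a sketch of what such filters
could look like (the exact filters intended are not given in the report;
"soft" skips chunks that already have the target profile, and "limit"
caps how many chunks a single run touches):

  # resume the conversion, skipping chunks already in raid1
  btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /data
  # or bound a single run to 100 data chunks
  btrfs balance start -dconvert=raid1,soft,limit=100 /data
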
>> Jan 10 14:38:11 sopa kernel: [178941.189217] BTRFS info (device dm-0):
>> found 68 extents
>> Jan 10 14:38:14 sopa kernel: [178943.619787] BTRFS info (device dm-0):
>> found 68 extents
>> Jan 10 14:38:20 sopa kernel: [178950.275334] BTRFS info (device dm-0):
>> found 68 extents
>> Jan 10 14:38:24 sopa kernel: [178954.018770] INFO: task btrfs:30196
>> blocked for more than 845 seconds.
>> Jan 10 14:38:24 sopa kernel: [178954.018844] 
>> btrfs_cancel_balance+0xf8/0x170 [btrfs]
>> Jan 10 14:38:24 sopa kernel: [178954.018878] 
>> btrfs_ioctl+0x13af/0x20d0 [btrfs]
>> Jan 10 14:38:28 sopa kernel: [178957.999108] BTRFS info (device dm-0):
>> found 68 extents
>> Jan 10 14:38:29 sopa kernel: [178958.837674] BTRFS info (device dm-0):
>> found 68 extents
>> Jan 10 14:38:30 sopa kernel: [178959.835118] BTRFS info (device dm-0):
>> found 68 extents
>> Jan 10 14:38:31 sopa kernel: [178960.915305] BTRFS info (device dm-0):
>> found 68 extents
>> Jan 10 14:40:25 sopa kernel: [179074.851376] 
>> btrfs_cancel_balance+0xf8/0x170 [btrfs]
>> Jan 10 14:40:25 sopa kernel: [179074.851408] 
>> btrfs_ioctl+0x13af/0x20d0 [btrfs]
> 
> now, nearly 4 days later (and after some data was deleted), both the
> balance start and balance cancel processes are still running, and the
> system reports:
> 
>> root@sopa:/var/log# btrfs balance status /data/
>> Balance on '/data/' is running, cancel requested
>> 0 out of about 1900 chunks balanced (29 considered), 100% left
> 
>> root@sopa:~# uname -a
>> Linux sopa 5.4.8-050408-generic #202001041436 SMP Sat Jan 4 19:40:55
>> UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
>>
>> root@sopa:~#   btrfs --version
>> btrfs-progs v4.15.1
>>
>> root@sopa:~#   btrfs fi show
>> Label: 'SOPADATA'  uuid: 37b8a62c-68e8-44e4-a3b2-eb572385c3e8
>>     Total devices 2 FS bytes used 1.04TiB
>>     devid    1 size 1.86TiB used 1.86TiB path /dev/mapper/sopa-data
>>     devid    2 size 1.86TiB used 0.00B path /dev/sdb3
>>
>> root@sopa:~# btrfs subvolume list /data
>> ID 1021 gen 7564583 top level 5 path nfs
>> ID 1022 gen 7564590 top level 5 path motion
> 
>> root@sopa:~#   btrfs fi df /data
>> Data, single: total=1.84TiB, used=1.04TiB
>> System, DUP: total=8.00MiB, used=224.00KiB
>> System, single: total=4.00MiB, used=0.00B
>> Metadata, DUP: total=6.50GiB, used=2.99GiB
>> Metadata, single: total=8.00MiB, used=0.00B
>> GlobalReserve, single: total=512.00MiB, used=0.00B
>>
> is it normal that it has written nearly 5 TB of data to the original
> disk (see the rough arithmetic after the stats below)?
> 
>> root@sopa:/var/log# ps ax | grep balance
>> 16014 ?        D    21114928:30 btrfs balance start -dconvert raid1
>> -mconvert raid1 /data/
>> 30196 ?        D      0:00 btrfs balance cancel /data
> 
>> root@sopa:/var/log# cat /proc/16014/io | grep bytes
>> read_bytes: 1150357504
>> write_bytes: 5812039966720
>> root@sopa:/sys/block# cat  /sys/block/sdb/sdb3/stat
>>      404        0    39352      956  4999199     1016 40001720
>> 71701953        0 14622628 67496136        0        0        0        0
> 
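As a rough cross-check of those numbers (an illustration, not from the
original report): a write_bytes of 5812039966720 is about 5.3 TiB,
roughly five times the 1.04 TiB of data on the filesystem, which is
consistent with the balance rewriting the same data over and over.

  # convert the cumulative write_bytes of PID 16014 to TiB
  awk '/^write_bytes/ {printf "%.2f TiB written\n", $2 / 2^40}' /proc/16014/io
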
>> [520398.089952] btrfs(16014): WRITE block 131072 on sdb3 (8 sectors)
>> [520398.089975] btrfs(16014): WRITE block 536870912 on sdb3 (8 sectors)
>> [520398.089995] btrfs(16014): WRITE block 128 on dm-0 (8 sectors)
>> [520398.090021] btrfs(16014): WRITE block 131072 on dm-0 (8 sectors)
>> [520398.090040] btrfs(16014): WRITE block 536870912 on dm-0 (8 sectors)
>> [520398.154382] btrfs(16014): WRITE block 14629168 on dm-0 (512 sectors)
>> [520398.155017] btrfs(16014): WRITE block 17748832 on dm-0 (512 sectors)
>> [520398.155545] btrfs(16014): WRITE block 17909352 on dm-0 (512 sectors)
>> [520398.156091] btrfs(16014): WRITE block 20534680 on dm-0 (512 sectors)
>>
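Those per-request WRITE lines look like output of the old vm.block_dump
facility (an assumption; the report does not say how they were
captured). On kernels that still carry it, such as the 5.4 kernel used
here, it can be toggled like this:

  # enable per-request block I/O logging to the kernel log (very noisy)
  echo 1 > /proc/sys/vm/block_dump
  # ... observe dmesg, then disable it again
  echo 0 > /proc/sys/vm/block_dump
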
> regards
> 
> jn
> 
> 

