* Balance: invalid convert data profile raid10
@ 2018-11-27 21:16 Mikko Merikivi
From: Mikko Merikivi @ 2018-11-27 21:16 UTC (permalink / raw)
To: linux-btrfs
I seem unable to convert an existing btrfs device array to RAID 10.
Since it's essentially RAID 0 and RAID 1 combined, and RAID 5 and 6 are
still unstable, it's the profile I would like to use.
After I had tried this with 4.19.2-arch1-1-ARCH and btrfs-progs v4.19,
I updated my system and tried btrfs balance again, with this system
information:
[mikko@localhost lvdata]$ uname -a
Linux localhost 4.19.4-arch1-1-ARCH #1 SMP PREEMPT Fri Nov 23 09:06:58
UTC 2018 x86_64 GNU/Linux
[mikko@localhost lvdata]$ btrfs --version
btrfs-progs v4.19
[mikko@localhost lvdata]$ sudo btrfs fi show
Label: 'main1' uuid: c7cbb9c3-8c55-45f1-b03c-48992efe2f11
Total devices 1 FS bytes used 2.90TiB
devid 1 size 3.64TiB used 2.91TiB path /dev/mapper/main
Label: 'red' uuid: f3c781a8-0f3e-4019-acbf-0b783cf566d0
Total devices 2 FS bytes used 640.00KiB
devid 1 size 931.51GiB used 2.03GiB path /dev/mapper/red1
devid 2 size 931.51GiB used 2.03GiB path /dev/mapper/red2
[mikko@localhost lvdata]$ btrfs fi df /mnt/red/
Data, RAID1: total=1.00GiB, used=512.00KiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=1.00GiB, used=112.00KiB
GlobalReserve, single: total=16.00MiB, used=0.00B
---
Here are the steps I originally used:
[mikko@localhost lvdata]$ sudo cryptsetup luksFormat -s 512
--use-random /dev/sdc
[mikko@localhost lvdata]$ sudo cryptsetup luksFormat -s 512
--use-random /dev/sdd
[mikko@localhost lvdata]$ sudo cryptsetup luksOpen /dev/sdc red1
[mikko@localhost lvdata]$ sudo cryptsetup luksOpen /dev/sdd red2
[mikko@localhost lvdata]$ sudo mkfs.btrfs -L red /dev/mapper/red1
btrfs-progs v4.19
See http://btrfs.wiki.kernel.org for more information.
Label: red
UUID: f3c781a8-0f3e-4019-acbf-0b783cf566d0
Node size: 16384
Sector size: 4096
Filesystem size: 931.51GiB
Block group profiles:
Data: single 8.00MiB
Metadata: DUP 1.00GiB
System: DUP 8.00MiB
SSD detected: no
Incompat features: extref, skinny-metadata
Number of devices: 1
Devices:
ID SIZE PATH
1 931.51GiB /dev/mapper/red1
[mikko@localhost lvdata]$ sudo mount -t btrfs -o
defaults,noatime,nodiratime,autodefrag,compress=lzo /dev/mapper/red1
/mnt/red
[mikko@localhost lvdata]$ sudo btrfs device add /dev/mapper/red2 /mnt/red
[mikko@localhost lvdata]$ sudo btrfs balance start -dconvert=raid10
-mconvert=raid10 /mnt/red
ERROR: error during balancing '/mnt/red': Invalid argument
There may be more info in syslog - try dmesg | tail
code 1
[mikko@localhost lvdata]$ dmesg | tail
[12026.263243] BTRFS info (device dm-1): disk space caching is enabled
[12026.263244] BTRFS info (device dm-1): has skinny extents
[12026.263245] BTRFS info (device dm-1): flagging fs with big metadata feature
[12026.275153] BTRFS info (device dm-1): checking UUID tree
[12195.431766] BTRFS info (device dm-1): enabling auto defrag
[12195.431769] BTRFS info (device dm-1): use lzo compression, level 0
[12195.431770] BTRFS info (device dm-1): disk space caching is enabled
[12195.431771] BTRFS info (device dm-1): has skinny extents
[12205.815941] BTRFS info (device dm-1): disk added /dev/mapper/red2
[12744.788747] BTRFS error (device dm-1): balance: invalid convert
data profile raid10
Converting to RAID 1 did work, but what can I do to make it RAID 10?
With the up-to-date system it still says "invalid convert data profile
raid10".
* Re: Balance: invalid convert data profile raid10
From: Qu Wenruo @ 2018-11-28 1:14 UTC (permalink / raw)
To: Mikko Merikivi, linux-btrfs
On 2018-11-28 at 5:16 AM, Mikko Merikivi wrote:
> I seem unable to convert an existing btrfs device array to RAID 10.
> Since it's pretty much RAID 0 and 1 combined, and 5 and 6 are
> unstable, it's what I would like to use.
>
> After I had tried this with 4.19.2-arch1-1-ARCH and btrfs-progs v4.19,
> I updated my system and tried btrfs balance again with this system
> information:
> [mikko@localhost lvdata]$ uname -a
> Linux localhost 4.19.4-arch1-1-ARCH #1 SMP PREEMPT Fri Nov 23 09:06:58
> UTC 2018 x86_64 GNU/Linux
> [mikko@localhost lvdata]$ btrfs --version
> btrfs-progs v4.19
> [mikko@localhost lvdata]$ sudo btrfs fi show
> Label: 'main1' uuid: c7cbb9c3-8c55-45f1-b03c-48992efe2f11
> Total devices 1 FS bytes used 2.90TiB
> devid 1 size 3.64TiB used 2.91TiB path /dev/mapper/main
>
> Label: 'red' uuid: f3c781a8-0f3e-4019-acbf-0b783cf566d0
> Total devices 2 FS bytes used 640.00KiB
> devid 1 size 931.51GiB used 2.03GiB path /dev/mapper/red1
> devid 2 size 931.51GiB used 2.03GiB path /dev/mapper/red2
RAID10 needs at least 4 devices.
Thanks,
Qu
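For reference, once two more devices are attached, the same convert should go through. The sequence below is a sketch only — red3/red4 and their paths are hypothetical, not from this thread:

```shell
# Hypothetical: /dev/mapper/red3 and /dev/mapper/red4 are two further
# LUKS-opened devices, set up the same way as red1/red2 above.
sudo btrfs device add /dev/mapper/red3 /dev/mapper/red4 /mnt/red
# With four devices present, the raid10 convert is no longer rejected:
sudo btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/red
```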
* Re: Balance: invalid convert data profile raid10
From: Mikko Merikivi @ 2018-11-28 7:20 UTC (permalink / raw)
To: quwenruo.btrfs; +Cc: linux-btrfs
Well, excuse me for thinking it would work; with md-raid it did.
https://wiki.archlinux.org/index.php/RAID#RAID_level_comparison
Anyway, the error message is genuinely confusing for someone who doesn't
know about btrfs's implementation. I suppose in md-raid the near layout
is effectively RAID 1, while far2 uses twice as much space.
https://en.wikipedia.org/wiki/Non-standard_RAID_levels#LINUX-MD-RAID-10
Well, I guess I'll go with RAID 1, then. Thanks for clearing up the confusion.
On Wed, 28 Nov 2018 at 3:14, Qu Wenruo (quwenruo.btrfs@gmx.com) wrote:
> RAID10 needs at least 4 devices.
* Re: Balance: invalid convert data profile raid10
From: Qu Wenruo @ 2018-11-28 13:54 UTC (permalink / raw)
To: Mikko Merikivi; +Cc: linux-btrfs
On 2018-11-28 at 3:20 PM, Mikko Merikivi wrote:
> Well, I guess I'll go with RAID 1, then. Thanks for clearing up the confusion.
You should check mkfs.btrfs(8).
It has a pretty good datasheet for all supported profiles under the
PROFILES section, including the minimum and maximum number of devices
for each.
Thanks,
Qu
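To make that datasheet concrete, here is a small pure-shell helper encoding the per-profile minimum device counts as listed in mkfs.btrfs(8). The numbers below are taken from the v4.19-era man page; treat this as a sketch, not an authoritative interface, and the `btrfs filesystem show` pipeline in the usage comment is only an illustration:

```shell
# Minimum device count per btrfs profile, per the mkfs.btrfs(8)
# PROFILES table (btrfs-progs v4.19 era).
profile_min_devices() {
    case "$1" in
        single|dup)        echo 1 ;;
        raid0|raid1|raid5) echo 2 ;;
        raid6)             echo 3 ;;
        raid10)            echo 4 ;;
        *) echo "unknown profile: $1" >&2; return 1 ;;
    esac
}

# Illustrative pre-flight check before attempting a convert, e.g.:
#   [ "$(sudo btrfs filesystem show /mnt/red | grep -c devid)" \
#     -ge "$(profile_min_devices raid10)" ] || echo "too few devices"
```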