* [linux-lvm] ThinPool performance problem with NVMe
@ 2023-06-23 23:22 Anton Kulshenko
2023-07-14 15:07 ` ComputerAdvancedTechnologySYSTEM
2023-07-17 12:26 ` Zdenek Kabelac
0 siblings, 2 replies; 3+ messages in thread
From: Anton Kulshenko @ 2023-06-23 23:22 UTC
To: linux-lvm
[-- Attachment #1.1: Type: text/plain, Size: 1353 bytes --]
Hello.
Please help me figure out what my problem is. No matter how I configure the
system, I can't get high performance, especially on writes.
OS: Oracle Linux 8.6, 5.4.17-2136.311.6.el8uek.x86_64
Platform: Gigabyte R282-Z94 with 2x 7702 64cores AMD EPYC and 2 TB of RAM
Disks: NVMe Samsung PM1733 7.68 TB
What I do:
vgcreate vg1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
lvcreate -n thin_pool_1 -L 20T vg1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 -i 4 -I 4
-i 4 stripes across the four data disks; -I 4 is the stripe size. I also tried 8, 16, 32... In my setup I can't see a big difference.
lvcreate -n pool_meta -L 15G vg1 /dev/nvme4n1
lvconvert --type thin-pool --poolmetadata vg1/pool_meta vg1/thin_pool_1
lvchange -Zn vg1/thin_pool_1
lvcreate -V 15000G --thin -n data vg1/thin_pool_1
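For reference, I check the resulting layout with something like this (a sketch; I'm taking these lvs report-field names from the man page, so treat them as an assumption):
lvs -a -o name,size,stripes,stripe_size,chunk_size,devices vg1   # shows stripe count/size and pool chunk size per LV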
After that I generate load using fio with these parameters:
fio --filename=/dev/mapper/vg1-data --rw=randwrite --bs=4k --name=test --numjobs=32 --iodepth=32 --random_generator=tausworthe64 --numa_cpu_nodes=0 --direct=1
I only get 40k IOPS, while one drive under the same load easily gives 130k IOPS.
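The single-drive figure comes from running essentially the same job against one raw namespace before that disk joins the VG. A sketch, with the device name as an example only (note it destroys data on that disk):
fio --filename=/dev/nvme0n1 --rw=randwrite --bs=4k --name=baseline --numjobs=32 --iodepth=32 --direct=1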
I have tried different block sizes, stripe sizes, etc. with no result. When I look in iostat, I see heavy load on the disk holding the metadata:
80 wMB/s, 12500 wrqm/s, 68 %wrqm
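To pin down which dm device that is, I map the hidden pool LVs to their dm names like this (a sketch; the tmeta/tdata names follow the usual LVM naming convention):
lvs -a vg1                              # lists hidden [thin_pool_1_tmeta] and [thin_pool_1_tdata]
dmsetup ls                              # maps device-mapper names to the dm-N numbers iostat shows
dmsetup status vg1-thin_pool_1-tpool    # thin-pool status, including used/total metadata blocks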
I don't understand what I'm missing when configuring the system.
[-- Attachment #1.2: Type: text/html, Size: 5316 bytes --]
[-- Attachment #2: Type: text/plain, Size: 202 bytes --]
_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
* Re: [linux-lvm] ThinPool performance problem with NVMe
2023-06-23 23:22 [linux-lvm] ThinPool performance problem with NVMe Anton Kulshenko
@ 2023-07-14 15:07 ` ComputerAdvancedTechnologySYSTEM
2023-07-17 12:26 ` Zdenek Kabelac
1 sibling, 0 replies; 3+ messages in thread
From: ComputerAdvancedTechnologySYSTEM @ 2023-07-14 15:07 UTC
To: LVM general discussion and development, shallriseagain
[-- Attachment #1.1: Type: text/plain, Size: 3537 bytes --]
shallriseagain@gmail.com
The problem may be the architecture. The MAINBOARD has one PCIe 4.0 x4 slot, and the SAS side of this system carries dual magnetic SATA disks. Create PV=1, format GPT (create MBR and prefix cache), and check the NVMe cache info with hdparm. Creating one VG over more than one disk creates conflicting IDs:
if
VG = id1 and -L 100%FREE for LV = A
VG = id1 and -L 100%FREE for LV = B
VG = id1 and -L 100%FREE for LV = C
VG = id1 and -L 100%FREE for LV = D
Test the database:
hdparm -Tt /dev/id1/A
and the NVMe slot:
hdparm -Tt /dev/id1/B
Move the NVMe to a PCIe x8 slot (an NVMe x8 PCIe adapter), not SAS, since the magnetic disks slow data transmission.
The OS itself should not use the NVMe or SAS devices: first create a Debian live OS, at init0 create a ram0 partition, copy an 8 GB virtual-disk ISO to ram0, mount the OS ISO, and jump to system init1. That cuts the system off from the SAS and NVMe controllers. Your system then gets
25 Gb/s system speed on PV0 ram0
10 Gb/s database speed on PV1/id1/A
and the disk transfer speed never collides with Debian OS operation.
If you need the init0 configuration script, please pay €500; we will add 100 PDFs for the Linux IT programmer, covering Python, C, and many more service scripts.
Developer London IT europe
Computer.Alarm.Technology.SYSTEM
🏭 2003—2023
📩 service.hofman@gmail.com
📞 +48 883937952
💬 //t.me/s/CATsystem_plan
💷 POUND
PL44124036791789001109272570
💶 EURO
PL41124036791978001109272583
💵 PLN
PL14124036791111001108735292
💸BIC/SWIFT PKOPPLPW
🎫 REG. MicroSoft W936403
🎫 REG. Acrobat MASTER2015
🎫 REG. G.E. MasterATM 13/05/2003
🎫 REG. S.E.P. D1/017/21 30kV
🎫 REG. V.A.T. 572-106-528
🎫 REG. ID06
★safe_construction_2027 ★
★Mobile_Platform_Safety
★Manual_Handling_Safety
★Working_at_Height_Safety
Eryk Hofman
On 10.07.2023 at 8:47 AM, "Anton Kulshenko" <shallriseagain@gmail.com> wrote:
> Hello.
>
> Please help me figure out what my problem is. No matter how I configure
> the system, I can't get high performance, especially on writes.
>
> OS: Oracle Linux 8.6, 5.4.17-2136.311.6.el8uek.x86_64
> Platform: Gigabyte R282-Z94 with 2x 7702 64cores AMD EPYC and 2 TB of RAM
> Disks: NVMe Samsung PM1733 7.68 TB
>
> What I do:
> vgcreate vg1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
> lvcreate -n thin_pool_1 -L 20T vg1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 -i 4 -I 4
>
> -i 4 stripes across the four data disks; -I 4 is the stripe size. I also tried 8, 16, 32... In my setup I can't see a big difference.
>
> lvcreate -n pool_meta -L 15G vg1 /dev/nvme4n1
> lvconvert --type thin-pool --poolmetadata vg1/pool_meta vg1/thin_pool_1
> lvchange -Zn vg1/thin_pool_1
> lvcreate -V 15000G --thin -n data vg1/thin_pool_1
>
> After that I generate load using fio with these parameters:
> fio --filename=/dev/mapper/vg1-data --rw=randwrite --bs=4k --name=test --numjobs=32 --iodepth=32 --random_generator=tausworthe64 --numa_cpu_nodes=0 --direct=1
>
> I only get 40k IOPS, while one drive under the same load easily gives 130k IOPS.
> I have tried different block sizes, stripe sizes, etc. with no result. When I look in iostat, I see heavy load on the disk holding the metadata:
> 80 wMB/s, 12500 wrqm/s, 68 %wrqm
>
> I don't understand what I'm missing when configuring the system.
>
>
>
>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://listman.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>
>
[-- Attachment #1.2: Type: text/html, Size: 8620 bytes --]
[-- Attachment #2: R282-Z94_BlockDiagram.png --]
[-- Type: image/png, Size: 260595 bytes --]
[-- Attachment #3: Type: text/plain, Size: 202 bytes --]
_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
* Re: [linux-lvm] ThinPool performance problem with NVMe
2023-06-23 23:22 [linux-lvm] ThinPool performance problem with NVMe Anton Kulshenko
2023-07-14 15:07 ` ComputerAdvancedTechnologySYSTEM
@ 2023-07-17 12:26 ` Zdenek Kabelac
1 sibling, 0 replies; 3+ messages in thread
From: Zdenek Kabelac @ 2023-07-17 12:26 UTC
To: LVM general discussion and development, Anton Kulshenko
On 24. 06. 23 at 1:22, Anton Kulshenko wrote:
> Hello.
>
> Please help me figure out what my problem is. No matter how I configure the
> system, I can't get high performance, especially on writes.
>
> OS: Oracle Linux 8.6, 5.4.17-2136.311.6.el8uek.x86_64
> Platform: Gigabyte R282-Z94 with 2x 7702 64cores AMD EPYC and 2 TB of RAM
> Disks: NVMe Samsung PM1733 7.68 TB
>
> What I do:
> vgcreate vg1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
> lvcreate -n thin_pool_1 -L 20T vg1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 -i 4 -I 4
>
> -i 4 stripes across the four data disks; -I 4 is the stripe size. I also tried 8, 16, 32... In my setup I can't see a big difference.
>
The stripe size needs to be aligned with the hardware properties. For NVMe, where the write unit for optimal performance is usually 0.5M or more, a 4K stripe basically destroys your performance, since each large write is split into a huge number of small requests.
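For illustration, the same striped data LV with a larger stripe unit could look like this (a sketch only; the values are examples, not a tuned recommendation):
lvcreate -n thin_pool_1 -L 20T -i 4 -I 512k vg1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1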
> I only get 40k IOPS, while one drive under the same load easily gives 130k IOPS.
> I have tried different block sizes, stripe sizes, etc. with no result. When I look in iostat, I see heavy load on the disk holding the metadata:
> 80 wMB/s, 12500 wrqm/s, 68 %wrqm
>
> I don't understand what I'm missing when configuring the system.
>
As mentioned by Mathew, you likely should start with some 'initial' thin-pool size, maybe sitting fully on a single NVMe, and possibly deploy the metadata on a second NVMe for better bus utilization.
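Something along these lines, reusing your names (a sketch; the sizes are placeholders):
lvcreate -n thin_pool_1 -L 5T vg1 /dev/nvme0n1    # pool data fully on one NVMe
lvcreate -n pool_meta -L 15G vg1 /dev/nvme1n1     # metadata on a second NVMe
lvconvert --type thin-pool --poolmetadata vg1/pool_meta vg1/thin_pool_1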
For striping, you would need to go with 512K units at least; then it's a question of how that fits your workload...
Anyway, you now have plenty of things to experiment with and benchmark to figure out what is best on your particular hardware.
One more thing: increasing the chunk size to 256K or 512K may also significantly raise performance, but at the price of reduced sharing when taking a snapshot of a thin volume...
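The chunk size can be set when the pool is created or converted, e.g. (again just a sketch with one of the values above):
lvconvert --type thin-pool --chunksize 512k --poolmetadata vg1/pool_meta vg1/thin_pool_1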
Regards
Zdenek
_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/