* [linux-lvm] what is the IOPS behavior when partitions of single disk are used in an LVM?
@ 2018-10-07  3:01 Sherpa Sherpa
From: Sherpa Sherpa @ 2018-10-07  3:01 UTC (permalink / raw)
  To: linux-lvm


I have an Ubuntu 14.04.1 LTS server using LVM, with a logical volume and
volume group named "dbstore-lv" and "dbstore-vg" built on sdb1, sdb2 and
sdb3, all partitions created from the same sdb disk. The system has 42
cores and about 128G of memory. Although I don't see CPU spikes in htop,
the load average reported by uptime is ~43+, vmstat shows a constant
iowait of 20-40, context switches are constantly around 80,000-150,000
(and even more at peak hours), and CPU idle time hovers around 70-85.
Below is the output of iostat -xp 1, where %util is constantly 100%:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           8.91    0.00    1.31   10.98    0.00   78.80

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00   264.00    0.00   58.00     0.00  1428.00    49.24     0.02    0.28    0.00    0.28   0.21   1.20
sda1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sda2              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sda3              0.00   264.00    0.00   58.00     0.00  1428.00    49.24     0.02    0.28    0.00    0.28   0.21   1.20
sdb               0.00   316.00    4.00   86.00   512.00  1608.00    47.11    36.02    0.27    5.00    0.05  11.11 100.00
sdb1              0.00   312.00    4.00   63.00  3512.00  4500.00    60.06    34.02  100.00    5.00    0.00  14.93 100.00
sdb2              0.00     0.00    0.00  821.00   450.00    84.00     8.00    82.00   99.19    0.00    0.19  47.62 100.00
sdb3              0.00     4.00    0.00    2.00     0.00    24.00    24.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-0              0.00     0.00    0.00    6.00     0.00    24.00     8.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-1              0.00     0.00    4.00  396.00   512.00  1584.00    10.48    36.02 8180.00    5.00 8180.00   2.50 100.00
dm-2              0.00     0.00    0.00  329.00     0.00  3896.00    23.68     0.85    2.58    0.00    2.58   0.05   1.60
dm-3              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00

Similarly, the TPS/IOPS is around 600-1000 most of the time (e.g. the
iostat output below):

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          22.24    0.35    2.56   32.08    0.00   42.77

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda             527.00      3828.00      1536.00       3828       1536
sdb             576.00      8532.00      2804.00       8532       2804
sdc              42.00       280.00       156.00        280        156
dm-0              0.00         0.00         0.00          0          0
dm-1            956.00      8400.00      2804.00       8400       2804
dm-2            569.00      4108.00      1692.00       4108       1692
dm-3              0.00         0.00         0.00          0          0

Below is an excerpt of lsblk showing the LVM volumes associated with the
disks:

sdb                                8:16   0  19.7T  0 disk
├─sdb1                             8:17   0   7.7T  0 part
│ └─dbstore-lv (dm-1)              252:1    0   9.4T  0 lvm  /var/db/st01
├─sdb2                             8:18   0   1.7T  0 part
│ └─dbstore-lv (dm-1)              252:1    0   9.4T  0 lvm  /var/db/st01
└─sdb3                             8:19   0  10.3T  0 part
  └─archive--archivedbstore--lv (dm-0)  252:0    0  10.3T  0 lvm  /opt/archive/
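
For completeness, the same mapping can also be checked from the LVM side.
A minimal sketch using standard pvs/lvs field options, with the VG name
taken from above:

  # which PVs (here: partitions of sdb) belong to which VG
  pvs -o pv_name,vg_name,pv_size

  # which devices/segments back each LV in dbstore-vg
  lvs -o lv_name,vg_name,lv_size,segtype,devices dbstore-vg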

I am assuming this is due to a disk seek problem, since partitions of the
same disk are used in the same LVM setup, or maybe it is due to saturation
of the disk (I don't have the vendor-provided IOPS data for this disk
yet). As initial tuning I have set vm.dirty_ratio to 5 and
vm.dirty_background_ratio to 2, and tried the deadline scheduler
(currently noop), but this doesn't seem to help reduce the iowait. Any
suggestions please? Is this high iowait and high load average (40+)
possibly due to multiple PVs (made from partitions of the same sdb disk)
in the same volume group and the same LV?
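
For reference, the tuning mentioned above amounts to roughly the
following; a sketch that assumes root privileges, that sdb is the busy
device, and that the legacy single-queue sysfs scheduler path applies on
this kernel:

  # writeback tuning described above
  sysctl -w vm.dirty_ratio=5
  sysctl -w vm.dirty_background_ratio=2

  # check and switch the I/O scheduler for sdb (currently noop)
  cat /sys/block/sdb/queue/scheduler
  echo deadline > /sys/block/sdb/queue/scheduler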
Warm Regards
Sherpa
