From: Ryan Wagoner <rswagoner@gmail.com>
To: linux-raid@vger.kernel.org
Subject: Fwd: High IO Wait with RAID 1
Date: Fri, 13 Mar 2009 07:21:31 -0500	[thread overview]
Message-ID: <7d86ddb90903130521s454b386eo1ec00eec17bdaae7@mail.gmail.com> (raw)
In-Reply-To: <7d86ddb90903130519p4268dc33vc8ad42b53aefa2e2@mail.gmail.com>

I tried rolling back the kernel and have the same issue. Here is an
example of the dstat output when writing with bonnie++ on RAID 1. As
soon as the write buffer fills up, the wait climbs while it waits to
write to the disk. The output looks the same on both systems.

usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
 0   0 100   0   0   0|   0     0 | 586B 1682B|   0     0 |1015   110
 0   0 100   0   0   0|   0     0 |  64B  412B|   0     0 |1022    96
 4   1  96   1   0   0|  40k    0 | 238B  664B|   0     0 |1011   124
43   6  50   1   0   0|4096B    0 | 375B  428B|   0     0 |1026    90
43   7  50   0   0   0|   0     0 |  64B  412B|   0     0 |1005    60
43   8  50   0   0   0|   0     0 |  64B  412B|   0     0 |1023    91
43   6  50   0   0   0|4096B    0 |  64B  412B|   0     0 |1006    77
40  14  44   0   0   1|   0    62M| 158B  396B|   0     0 |1194   160
40  10   0  46   0   3|   0   145M| 158B  522B|   0     0 |1297   128
38   8   0  52   0   3|4096B  127M|  64B  412B|   0     0 |1276   147
41   9   1  48   0   3|4096B  120M| 174B  366B|   0     0 |1252   129
43   8   3  45   0   0|   0    16k| 158B  412B|   0     0 |1012   113
40  16   6  36   0   1|4096B   41M|  64B  318B|   0     0 |1142   188
42  11   0  45   0   2|   0   130M|  64B  675B|   0     0 |1327   276
43   9   0  44   0   4|   0   138M|  64B  412B|   0     0 |1280   130
34   9  16  38   0   2|4096B  107M|  64B  412B|   0     0 |1229   120
44   9   4  44   0   0|   0  8192B|  64B  412B|   0     0 |1024   175
41  17   0  41   0   0|   0    33M| 192B  366B|   0     0 |1096   193
37   9   1  51   0   3|4096B  126M|  64B  428B|   0     0 |1288   173
44   8   0  44   0   3|   0   142M|  64B  412B|   0     0 |1289   164
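For anyone wanting to reproduce a capture like the one above, here is a
minimal sketch. It assumes dstat and bonnie++ are installed; the mount
point /mnt/md0, the 8g file size, and the header-row offset in dstat's
CSV output are placeholders/assumptions that may need adjusting for your
system and dstat version.

```shell
# Terminal 1: log system stats once per second to a CSV file
dstat --output dstat.csv 1 &

# Terminal 2: sequential write benchmark against the array's mount point
# (/mnt/md0 is a placeholder; -s should be roughly 2x RAM to defeat caching)
bonnie++ -d /mnt/md0 -s 8g -u nobody

# Afterwards, average the iowait column from the CSV. In this sketch the
# "wai" value is assumed to be field 4 and the first 7 lines are assumed
# to be headers; check your dstat version's CSV layout before relying on it.
awk -F, 'NR>7 {sum+=$4; n++} END {if (n) printf "avg wai: %.1f%%\n", sum/n}' dstat.csv
```

Watching the "wai" column live while bonnie++ runs (as in the output
above) shows the same pattern without any post-processing.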


Here is the dstat output with the same bonnie++ command on the RAID 5
volume. This machine has two VMware guests running, so the system
wasn't idle when grabbing the output.

usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
  1  14  83   2   0   0| 120k  537k|   0     0 |   0   0.2 |3084    10k
 18  11  68   1   0   0|  40k 8192B|9180B 1192B|   0     0 |3074    12k
 36  18  43   2   0   0| 272k   24M| 123k 4865B|   0     0 |3858    14k
 30  26  42   1   1   2| 808k   67M|  21k 1418B|   0     0 |5253    18k
 39  19  43   0   0   0|4096B    0 |4600B  692B|   0     0 |3079    11k
 36  19  29  17   0   0| 116k 1104k|3024B 2464B|   0     0 |3221    10k
 40  17  14  30   0   1| 136k  400k|  86k 5828B|   0     0 |3189    10k
 37  21  17  23   0   0| 380k   35M|  30k 1708B|   0     0 |4223    16k
 30  29  37   2   2   2|1160k  115M| 390B  550B|   0     0 |6647    24k
 31  29  37   1   1   2|1112k  127M| 664B  314B|   0     0 |6745    24k
 33  26  28  11   0   1| 728k   71M|3074B  526B|   0     0 |4608    16k
 37  24   2  37   0   0|   0    16k|1616B   14k|   0     0 |3086    10k
 34  21  11  33   1   1| 388k   33M|  26k 1280B|   0     0 |3939    13k
 30  32  36   1   1   1|1304k  111M|  60B  420B|   0     0 |5083    19k
 31  35  30   2   1   2|1296k  125M|  19k 2051B|   0     0 |5987    20k
 38  22  19  22   0   1| 692k   28M|3084B 2480B|   0     0 |3744    11k
 41  17  38   3   0   0| 736k 2184k| 120B  298B|   0     0 |3785    11k
 34  30  21  14   0   0| 360k   35M|  48k 2862B|   0     0 |4178    12k
 37  26  35   1   1   1|1056k  136M|  13k 1394B|   0     0 |4331    11k
 34  28  33   2   0   1|1228k  134M|  30k 1658B|   0     0 |4132    11k
 36  21  28  14   0   0| 332k   23M| 151k 5798B|   0     0 |3368  9166
 37  18  18  28   0   0|  16k   88k|  13k  990B|   0     0 |3092  8403
 38  23  23  16   1   0| 316k   39M|  30k 1920B|   0     0 |3635  9723
 32  33  33   2   0   1|1180k  132M| 295B  404B|   0     0 |3907  9935
 31  31  35   2   1   1|1120k  123M|  43k 2424B|   0     0 |4746    14k
 32  29  37   2   1   1|1380k   71M|3084B 2440B|   0     0 |5341    19k
 37  24  36   1   0   0| 700k   53M| 459B  496B|   0     0 |4402    20k
 35  20  29  14   1   1|1808k   61M|4596B  500B|   0     0 |5551    19k
 30  30  35   2   1   3|1076k  107M| 246B  620B|   0     0 |6769    24k
 36  25  30   7   1   2|1088k   66M| 165k   10k|   0     0 |5093    17k

On Fri, Mar 13, 2009 at 5:17 AM, Alain Williams <addw@phcomp.co.uk> wrote:
> On Thu, Mar 12, 2009 at 10:21:28PM -0500, Ryan Wagoner wrote:
>> I'm glad I'm not the only one experiencing the issue. Luckily the
>> issues on both my systems aren't as bad. I don't have any errors
>> showing in /var/log/messages on either system. I've been trying to
>> track down this issue for about a year now. I just recently made the
>> connection with RAID 1 and mdadm when copying data on the second
>> system.
>
> Did you have the problem straight from install, or perhaps when a new
> kernel started being used ?
>
> My system worked well for some months, there was no kernel update and
> it started to go wrong a couple of weeks ago. I also see errors
> when I run 'badblocks' -- which makes it smell of a hardware issue,
> but the disks were tested, on return, by the hardware supplier and
> they did not find any problem with them.
>
> Given that you are not seeing anything in /var/log/messages makes me
> think that I do have some other problem -- perhaps in addition to
> what you have.
>
> --
> Alain Williams
> Linux/GNU Consultant - Mail systems, Web sites, Networking, Programmer, IT Lecturer.
> +44 (0) 787 668 0256  http://www.phcomp.co.uk/
> Parliament Hill Computers Ltd. Registration Information: http://www.phcomp.co.uk/contact.php
> Past chairman of UKUUG: http://www.ukuug.org/
> #include <std_disclaimer.h>
>


Thread overview: 11+ messages
2009-03-12 23:46 High IO Wait with RAID 1 Ryan Wagoner
2009-03-13  0:48 ` Alain Williams
2009-03-13  3:21   ` Ryan Wagoner
2009-03-13  9:39     ` Robin Hill
2009-03-13 10:17     ` Alain Williams
     [not found]       ` <7d86ddb90903130519p4268dc33vc8ad42b53aefa2e2@mail.gmail.com>
2009-03-13 12:21         ` Ryan Wagoner [this message]
2009-03-13 16:22     ` Bill Davidsen
2009-03-13 17:42       ` Ryan Wagoner
2009-03-13 18:37         ` David Rees
2009-03-13 18:42   ` David Rees
2009-03-13 14:48 ` John Robinson
