* frequent disk activity with mdadm-3.3
@ 2014-09-11 22:05 Marco Schindler
  2014-09-11 22:24 ` NeilBrown
  0 siblings, 1 reply; 10+ messages in thread
From: Marco Schindler @ 2014-09-11 22:05 UTC (permalink / raw)
  To: linux-raid; +Cc: neilb

Hello,

I'm seeing frequent disk activity on all RAID drives from mdadm since upgrading from 3.2 to 3.3.1/3.3.2.
It keeps the drives from sleeping (disk access every ~15 minutes). Is this intentional?

I reported a similar issue for udev a few weeks ago: https://bugs.gentoo.org/show_bug.cgi?id=518748

Marco


* Re: frequent disk activity with mdadm-3.3
  2014-09-11 22:05 frequent disk activity with mdadm-3.3 Marco Schindler
@ 2014-09-11 22:24 ` NeilBrown
  2014-09-11 22:45   ` Marco Schindler
  0 siblings, 1 reply; 10+ messages in thread
From: NeilBrown @ 2014-09-11 22:24 UTC (permalink / raw)
  To: Marco Schindler; +Cc: linux-raid


On Fri, 12 Sep 2014 00:05:08 +0200 Marco Schindler
<marco.schindler@gmail.com> wrote:

> Hello,
> 
> I'm seeing frequent disk activity on all RAID drives from mdadm since upgrading from 3.2 to 3.3.1/3.3.2.
> It keeps the drives from sleeping (disk access every ~15 minutes). Is this intentional?

No.

> 
> I reported a similar issue for udev a few weeks ago: https://bugs.gentoo.org/show_bug.cgi?id=518748

In that bug report you mention upgrading udev.  Here you mention upgrading
mdadm... a bit confusing.

Can you use "blktrace" to gather details on exactly what is being read and
when, and  hopefully which process is doing it?

Is "mdadm --monitor" (or "-F") running?  If you kill it does the disk
activity go away?
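
For example, something along these lines (just a sketch; /dev/sda stands in for whichever member device you watch, and blktrace/blkparse need to be installed):

# trace block I/O on one array member live; blkparse's default output
# includes the PID and command name for each request
blktrace -d /dev/sda -o - | blkparse -i -

# check whether the monitor daemon is running, and stop it temporarily
pgrep -af 'mdadm --monitor'
kill <pid>    # the PID printed by pgrep above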

NeilBrown



* Re: frequent disk activity with mdadm-3.3
  2014-09-11 22:24 ` NeilBrown
@ 2014-09-11 22:45   ` Marco Schindler
  2014-09-12 12:15     ` Marco Schindler
  0 siblings, 1 reply; 10+ messages in thread
From: Marco Schindler @ 2014-09-11 22:45 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

On 12.09.2014, at 00:24, NeilBrown <neilb@suse.de> wrote:

> On Fri, 12 Sep 2014 00:05:08 +0200 Marco Schindler
> <marco.schindler@gmail.com> wrote:
> 
> In that bug report you mention upgrading udev.  Here you mention upgrading
> mdadm... a bit confusing.

I'm totally unsure whether this is related; it may be a coincidence.
I just found it odd to have a similar issue with two different processes in such a short timeframe.

I have blocked >sys-fs/udev-212 and >sys-fs/mdadm-3.2 for now to give the drives some rest.
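
For reference, a version pin like that on Gentoo is typically a package.mask entry; a sketch, assuming the standard /etc/portage layout:

echo '>sys-fs/udev-212'  >> /etc/portage/package.mask
echo '>sys-fs/mdadm-3.2' >> /etc/portage/package.mask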

> Can you use "blktrace" to gather details on exactly what is being read and
> when, and  hopefully which process is doing it?
> 
> Is "mdadm --monitor" (or "-F") running?  If you kill it does the disk
> activity go away?
> 
> NeilBrown

Yes, I have (and have been) running "mdadm --monitor --scan --daemonise".
I have /proc/sys/vm/block_dump enabled, and it shows the mdadm process accessing the drives every 15 minutes.
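
block_dump is a long-standing kernel debug knob (it may be missing or deprecated on newer kernels); for reference, a minimal sketch of how it is used:

echo 1 > /proc/sys/vm/block_dump   # log block I/O with PID and process name to the kernel log
dmesg | grep sda                   # or watch syslog, as in the excerpts later in this thread
echo 0 > /proc/sys/vm/block_dump   # switch it off again when done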

I'm not aware of mdadm being invoked anywhere else, so I presume it won't happen when the daemon is not running.
I'll see if I can get more info with blktrace and report back.





* Re: frequent disk activity with mdadm-3.3
  2014-09-11 22:45   ` Marco Schindler
@ 2014-09-12 12:15     ` Marco Schindler
  2014-09-15  0:18       ` NeilBrown
  0 siblings, 1 reply; 10+ messages in thread
From: Marco Schindler @ 2014-09-12 12:15 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

OK, that's interesting.
The process ID is _not_ that of the mdadm daemon, and it looks like the spindown itself is triggering the process.

ps ax | grep mdadm
28956 ?        Ss     0:00 mdadm --monitor --scan --daemonise --pid-file /var/run/mdadm.pid --syslog

grep sda /var/log/messages
Sep 12 13:55:04 alina spindown: sda is now inactive.
Sep 12 13:55:09 alina kernel: mdadm(29737): READ block 3907028992 on sda (8 sectors)
Sep 12 13:55:17 alina kernel: mdadm(29737): READ block 3907029152 on sda (8 sectors)
Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 0 on sda (8 sectors)
Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 8 on sda (8 sectors)
Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 8 on sda (8 sectors)
Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 3907029167 on sda (1 sectors)
Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 3907029166 on sda (1 sectors)
Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 0 on sda (1 sectors)
Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 0 on sda (1 sectors)
Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 3907029152 on sda (8 sectors)
Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 0 on sda (8 sectors)
Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 8 on sda (8 sectors)
Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 8 on sda (8 sectors)
Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 8 on sda (8 sectors)
Sep 12 13:56:12 alina spindown: sda is now active.

here’s the output of blktrace -d /dev/sda during that time.
https://dl.dropboxusercontent.com/u/3464720/blktrace.tar.bz2
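
blktrace writes per-CPU files named sda.blktrace.<cpu>; as a sketch, they can be read back with blkparse, whose default output already shows the PID and command name per request:

blkparse -i sda | less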



* Re: frequent disk activity with mdadm-3.3
  2014-09-12 12:15     ` Marco Schindler
@ 2014-09-15  0:18       ` NeilBrown
  2014-09-15 10:52         ` Marco Schindler
  0 siblings, 1 reply; 10+ messages in thread
From: NeilBrown @ 2014-09-15  0:18 UTC (permalink / raw)
  To: Marco Schindler; +Cc: linux-raid


On Fri, 12 Sep 2014 14:15:35 +0200 Marco Schindler
<marco.schindler@gmail.com> wrote:

> OK, that's interesting.
> The process ID is _not_ that of the mdadm daemon, and it looks like the spindown itself is triggering the process.
> 
> ps ax | grep mdadm
> 28956 ?        Ss     0:00 mdadm --monitor --scan --daemonise --pid-file /var/run/mdadm.pid --syslog
> 
> grep sda /var/log/messages
> Sep 12 13:55:04 alina spindown: sda is now inactive.
> Sep 12 13:55:09 alina kernel: mdadm(29737): READ block 3907028992 on sda (8 sectors)
> Sep 12 13:55:17 alina kernel: mdadm(29737): READ block 3907029152 on sda (8 sectors)
> Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 0 on sda (8 sectors)
> Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 8 on sda (8 sectors)
> Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 8 on sda (8 sectors)
> Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 3907029167 on sda (1 sectors)
> Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 3907029166 on sda (1 sectors)
> Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 0 on sda (1 sectors)
> Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 0 on sda (1 sectors)
> Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 3907029152 on sda (8 sectors)
> Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 0 on sda (8 sectors)
> Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 8 on sda (8 sectors)
> Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 8 on sda (8 sectors)
> Sep 12 13:55:26 alina kernel: mdadm(29737): READ block 8 on sda (8 sectors)
> Sep 12 13:56:12 alina spindown: sda is now active.

It would help to get "udevadm monitor" info to correlate with this.
Presumably some uevent is generated when the spindown happens.  udev might
respond to this by reading from the device, which defeats the purpose...
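
For example, something like this should capture both sides while keeping the noise down (a sketch):

udevadm monitor --kernel --udev --subsystem-match=block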


> 
> here’s the output of blktrace -d /dev/sda during that time.
> https://dl.dropboxusercontent.com/u/3464720/blktrace.tar.bz2

That suggests that something is reading the metadata from the device almost
constantly, mostly a 'kworker' thread.  I don't know what would cause that.

Let's look at the 'udevadm monitor' trace first and see what that shows.

NeilBrown



* Re: frequent disk activity with mdadm-3.3
  2014-09-15  0:18       ` NeilBrown
@ 2014-09-15 10:52         ` Marco Schindler
  2014-09-18 10:03           ` NeilBrown
  0 siblings, 1 reply; 10+ messages in thread
From: Marco Schindler @ 2014-09-15 10:52 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid


On 15.09.2014, at 02:18, NeilBrown <neilb@suse.de> wrote:

> It would help to get "udevadm monitor" info to correlate with this.
> Presumably some uevent is generated when the spindown happens.  udev might
> respond to this by reading from the device, which defeats the purpose...
> 
> 
>> 
>> here’s the output of blktrace -d /dev/sda during that time.
>> https://dl.dropboxusercontent.com/u/3464720/blktrace.tar.bz2
> 
> That suggests that something is reading the metadata from the device almost
> constantly, mostly a 'kworker' thread.  I don't know what would cause that.
> 
> Let's look at the 'udevadm monitor' trace first and see what that shows.

here’s the output of udevadm monitor during the spindown cycle while mdadm-3.3 is installed.

monitor will print the received events for:
UDEV - the event which udev sends out after rule processing
KERNEL - the kernel uevent

KERNEL[299719.336261] change   /devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:1/end_device-0:1/target0:0:1/0:0:1:0/block/sdb (block)
UDEV  [299720.646760] change   /devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:1/end_device-0:1/target0:0:1/0:0:1:0/block/sdb (block)
KERNEL[299780.202901] change   /devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:0/end_device-0:0/target0:0:0/0:0:0:0/block/sda (block)
UDEV  [299781.567308] change   /devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:0/end_device-0:0/target0:0:0/0:0:0:0/block/sda (block)
KERNEL[299841.090818] change   /devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:1/end_device-0:1/target0:0:1/0:0:1:0/block/sdb (block)
UDEV  [299842.407035] change   /devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:1/end_device-0:1/target0:0:1/0:0:1:0/block/sdb (block)

please note that the issue immediately disappears when downgrading to mdadm-3.2 without touching anything else.
I see the udev rules have been updated in mdadm-3.3..



* Re: frequent disk activity with mdadm-3.3
  2014-09-15 10:52         ` Marco Schindler
@ 2014-09-18 10:03           ` NeilBrown
  2014-09-18 10:38             ` Marco Schindler
  0 siblings, 1 reply; 10+ messages in thread
From: NeilBrown @ 2014-09-18 10:03 UTC (permalink / raw)
  To: Marco Schindler; +Cc: linux-raid


On Mon, 15 Sep 2014 12:52:07 +0200 Marco Schindler
<marco.schindler@gmail.com> wrote:

> 
> On 15.09.2014, at 02:18, NeilBrown <neilb@suse.de> wrote:
> 
> > It would help to get "udevadm monitor" info to correlate with this.
> > Presumably some uevent is generated when the spindown happens.  udev might
> > respond to this by reading from the device, which defeats the purpose...
> > 
> > 
> >> 
> >> here’s the output of blktrace -d /dev/sda during that time.
> >> https://dl.dropboxusercontent.com/u/3464720/blktrace.tar.bz2
> > 
> > That suggests that something is reading the metadata from the device almost
> > constantly, mostly a 'kworker' thread.  I don't know what would cause that.
> > 
> > Let's look at the 'udevadm monitor' trace first and see what that shows.
> 
> here’s the output of udevadm monitor during the spindown cycle while mdadm-3.3 is installed.
> 
> monitor will print the received events for:
> UDEV - the event which udev sends out after rule processing
> KERNEL - the kernel uevent
> 
> KERNEL[299719.336261] change   /devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:1/end_device-0:1/target0:0:1/0:0:1:0/block/sdb (block)
> UDEV  [299720.646760] change   /devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:1/end_device-0:1/target0:0:1/0:0:1:0/block/sdb (block)
> KERNEL[299780.202901] change   /devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:0/end_device-0:0/target0:0:0/0:0:0:0/block/sda (block)
> UDEV  [299781.567308] change   /devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:0/end_device-0:0/target0:0:0/0:0:0:0/block/sda (block)
> KERNEL[299841.090818] change   /devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:1/end_device-0:1/target0:0:1/0:0:1:0/block/sdb (block)
> UDEV  [299842.407035] change   /devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:1/end_device-0:1/target0:0:1/0:0:1:0/block/sdb (block)
> 
> please note that the issue immediately disappears when downgrading to mdadm-3.2 without touching anything else.
> I see the udev rules have been updated in mdadm-3.3..

Getting a "change" event on spindown is causing the problem, I suspect.
A change in 3.3.1 causes "mdadm -I" to be run on a device when it 'changes'.
That will read from the device which will wake it up.
(commit 25392f5fc59f96fb76 - revert it and the symptom will probably go away).
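
To see the rule in question on an installed system, something like this should show the add|change match (a sketch only; the rules path and file names vary by distribution):

grep -n 'ACTION' /lib/udev/rules.d/*md*.rules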

I really think the "bug" here is that the change event is emitted on
'spindown', but maybe the bug is that the exact meaning of 'change' isn't
well documented.

I can probably get "mdadm -I" to use O_EXCL which will fail on devices
already in an array, but I'm not sure that is a complete solution.  You could
still get wakeups on other devices.

Can you run 'udevadm monitor' again, but this time with '--property'?
Maybe there is some property associated with spindown events which we can use
to ignore them.
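
Something like this should do it (sketch):

udevadm monitor --property --subsystem-match=block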

NeilBrown



* Re: frequent disk activity with mdadm-3.3
  2014-09-18 10:03           ` NeilBrown
@ 2014-09-18 10:38             ` Marco Schindler
  2014-09-18 11:09               ` NeilBrown
  0 siblings, 1 reply; 10+ messages in thread
From: Marco Schindler @ 2014-09-18 10:38 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid


On 18.09.2014, at 12:03, NeilBrown <neilb@suse.de> wrote:

> Getting a "change" event on spindown is causing the problem, I suspect.
> A change in 3.3.1 causes "mdadm -I" to be run on a device when it 'changes'.
> That will read from the device which will wake it up.
> (commit 25392f5fc59f96fb76 - revert it and the symptom will probably go away).
> 
> I really think the "bug" here is that the change event is emitted on
> 'spindown', but maybe the bug is that the exact meaning of 'change' isn't
> well documented.
> 
> I can probably get "mdadm -I" to use O_EXCL which will fail on devices
> already in an array, but I'm not sure that is a complete solution.  You could
> still get wakeups on other devices.
> 
> Can you run 'udevadm monitor' again, but this time with '--property'?
> Maybe there is some property associated with spindown events which we can use
> to ignore them.
> 
> NeilBrown

sure. I also took separate logs for standby and wakeup.

change events only occur when the drive goes standby (see below).
strangely enough, there are no events when the drive wakes up.

monitor will print the received events for:
UDEV - the event which udev sends out after rule processing
KERNEL - the kernel uevent

KERNEL[558130.868218] change   /devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:0/end_device-0:0/target0:0:0/0:0:0:0/block/sda (block)
ACTION=change
DEVNAME=/dev/sda
DEVPATH=/devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:0/end_device-0:0/target0:0:0/0:0:0:0/block/sda
DEVTYPE=disk
MAJOR=8
MINOR=0
SEQNUM=3239
SUBSYSTEM=block

KERNEL[558132.028485] change   /devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:1/end_device-0:1/target0:0:1/0:0:1:0/block/sdb (block)
ACTION=change
DEVNAME=/dev/sdb
DEVPATH=/devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:1/end_device-0:1/target0:0:1/0:0:1:0/block/sdb
DEVTYPE=disk
MAJOR=8
MINOR=16
SEQNUM=3240
SUBSYSTEM=block

UDEV  [558139.263973] change   /devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:0/end_device-0:0/target0:0:0/0:0:0:0/block/sda (block)
ACTION=change
DEVLINKS=/dev/disk/by-id/ata-WDC_WD20EARS-00S8B1_WD-WCAVY1872131 /dev/disk/by-id/wwn-0x50014ee203dca984 /dev/disk/by-path/pci-0000:09:00.0-sas-0x4433221103000000-lun-0
DEVNAME=/dev/sda
DEVPATH=/devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:0/end_device-0:0/target0:0:0/0:0:0:0/block/sda
DEVTYPE=disk
ID_ATA=1
ID_ATA_DOWNLOAD_MICROCODE=1
ID_ATA_FEATURE_SET_AAM=1
ID_ATA_FEATURE_SET_AAM_CURRENT_VALUE=254
ID_ATA_FEATURE_SET_AAM_ENABLED=0
ID_ATA_FEATURE_SET_AAM_VENDOR_RECOMMENDED_VALUE=128
ID_ATA_FEATURE_SET_HPA=1
ID_ATA_FEATURE_SET_HPA_ENABLED=1
ID_ATA_FEATURE_SET_PM=1
ID_ATA_FEATURE_SET_PM_ENABLED=1
ID_ATA_FEATURE_SET_PUIS=1
ID_ATA_FEATURE_SET_PUIS_ENABLED=0
ID_ATA_FEATURE_SET_SECURITY=1
ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=408
ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=408
ID_ATA_FEATURE_SET_SMART=1
ID_ATA_FEATURE_SET_SMART_ENABLED=1
ID_ATA_SATA=1
ID_ATA_SATA_SIGNAL_RATE_GEN1=1
ID_ATA_SATA_SIGNAL_RATE_GEN2=1
ID_ATA_WRITE_CACHE=1
ID_ATA_WRITE_CACHE_ENABLED=1
ID_BUS=ata
ID_FS_LABEL=alina.o81.5:media3
ID_FS_LABEL_ENC=alina.o81.5:media3
ID_FS_TYPE=linux_raid_member
ID_FS_USAGE=raid
ID_FS_UUID=058bd7b0-455c-0d7e-6de0-a845ea05ee38
ID_FS_UUID_ENC=058bd7b0-455c-0d7e-6de0-a845ea05ee38
ID_FS_UUID_SUB=da1c1cad-6661-174c-b0b9-4bc1a712902d
ID_FS_UUID_SUB_ENC=da1c1cad-6661-174c-b0b9-4bc1a712902d
ID_FS_VERSION=1.2
ID_MODEL=WDC_WD20EARS-00S8B1
ID_MODEL_ENC=WDC\x20WD20EARS-00S8B1\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
ID_PATH=pci-0000:09:00.0-sas-0x4433221103000000-lun-0
ID_PATH_TAG=pci-0000_09_00_0-sas-0x4433221103000000-lun-0
ID_REVISION=80.00A80
ID_SERIAL=WDC_WD20EARS-00S8B1_WD-WCAVY1872131
ID_SERIAL_SHORT=WD-WCAVY1872131
ID_TYPE=disk
ID_WWN=0x50014ee203dca984
ID_WWN_WITH_EXTENSION=0x50014ee203dca984
MAJOR=8
MINOR=0
SEQNUM=3239
SUBSYSTEM=block
USEC_INITIALIZED=2003

UDEV  [558139.293742] change   /devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:1/end_device-0:1/target0:0:1/0:0:1:0/block/sdb (block)
ACTION=change
DEVLINKS=/dev/disk/by-id/ata-WDC_WD20EARS-00S8B1_WD-WCAVY1879365 /dev/disk/by-id/wwn-0x50014ee25931e63a /dev/disk/by-path/pci-0000:09:00.0-sas-0x4433221102000000-lun-0
DEVNAME=/dev/sdb
DEVPATH=/devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:1/end_device-0:1/target0:0:1/0:0:1:0/block/sdb
DEVTYPE=disk
ID_ATA=1
ID_ATA_DOWNLOAD_MICROCODE=1
ID_ATA_FEATURE_SET_AAM=1
ID_ATA_FEATURE_SET_AAM_CURRENT_VALUE=254
ID_ATA_FEATURE_SET_AAM_ENABLED=0
ID_ATA_FEATURE_SET_AAM_VENDOR_RECOMMENDED_VALUE=128
ID_ATA_FEATURE_SET_HPA=1
ID_ATA_FEATURE_SET_HPA_ENABLED=1
ID_ATA_FEATURE_SET_PM=1
ID_ATA_FEATURE_SET_PM_ENABLED=1
ID_ATA_FEATURE_SET_PUIS=1
ID_ATA_FEATURE_SET_PUIS_ENABLED=0
ID_ATA_FEATURE_SET_SECURITY=1
ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=408
ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=408
ID_ATA_FEATURE_SET_SMART=1
ID_ATA_FEATURE_SET_SMART_ENABLED=1
ID_ATA_SATA=1
ID_ATA_SATA_SIGNAL_RATE_GEN1=1
ID_ATA_SATA_SIGNAL_RATE_GEN2=1
ID_ATA_WRITE_CACHE=1
ID_ATA_WRITE_CACHE_ENABLED=1
ID_BUS=ata
ID_FS_LABEL=alina.o81.5:media3
ID_FS_LABEL_ENC=alina.o81.5:media3
ID_FS_TYPE=linux_raid_member
ID_FS_USAGE=raid
ID_FS_UUID=058bd7b0-455c-0d7e-6de0-a845ea05ee38
ID_FS_UUID_ENC=058bd7b0-455c-0d7e-6de0-a845ea05ee38
ID_FS_UUID_SUB=ade4cbb9-501c-cd2c-00b2-607f1699133b
ID_FS_UUID_SUB_ENC=ade4cbb9-501c-cd2c-00b2-607f1699133b
ID_FS_VERSION=1.2
ID_MODEL=WDC_WD20EARS-00S8B1
ID_MODEL_ENC=WDC\x20WD20EARS-00S8B1\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
ID_PATH=pci-0000:09:00.0-sas-0x4433221102000000-lun-0
ID_PATH_TAG=pci-0000_09_00_0-sas-0x4433221102000000-lun-0
ID_REVISION=80.00A80
ID_SERIAL=WDC_WD20EARS-00S8B1_WD-WCAVY1879365
ID_SERIAL_SHORT=WD-WCAVY1879365
ID_TYPE=disk
ID_WWN=0x50014ee25931e63a
ID_WWN_WITH_EXTENSION=0x50014ee25931e63a
MAJOR=8
MINOR=16
SEQNUM=3240
SUBSYSTEM=block
USEC_INITIALIZED=2361


* Re: frequent disk activity with mdadm-3.3
  2014-09-18 10:38             ` Marco Schindler
@ 2014-09-18 11:09               ` NeilBrown
  2014-09-18 11:22                 ` Marco Schindler
  0 siblings, 1 reply; 10+ messages in thread
From: NeilBrown @ 2014-09-18 11:09 UTC (permalink / raw)
  To: Marco Schindler; +Cc: linux-raid


On Thu, 18 Sep 2014 12:38:55 +0200 Marco Schindler
<marco.schindler@gmail.com> wrote:

> 
> On 18.09.2014, at 12:03, NeilBrown <neilb@suse.de> wrote:
> 
> > Getting a "change" event on spindown is causing the problem, I suspect.
> > A change in 3.3.1 causes "mdadm -I" to be run on a device when it 'changes'.
> > That will read from the device which will wake it up.
> > (commit 25392f5fc59f96fb76 - revert it and the symptom will probably go away).
> > 
> > I really think the "bug" here is that the change event is emitted on
> > 'spindown', but maybe the bug is that the exact meaning of 'change' isn't
> > well documented.
> > 
> > I can probably get "mdadm -I" to use O_EXCL which will fail on devices
> > already in an array, but I'm not sure that is a complete solution.  You could
> > still get wakeups on other devices.
> > 
> > Can you run 'udevadm monitor' again, but this time with '--property'?
> > Maybe there is some property associated with spindown events which we can use
> > to ignore them.
> > 
> > NeilBrown
> 
> sure. I also took separate logs for standby and wakeup.
> 
> change events only occur when the drive goes standby (see below).
> strangely enough, there are no events when the drive wakes up.
> 
> monitor will print the received events for:
> UDEV - the event which udev sends out after rule processing
> KERNEL - the kernel uevent
...


Thanks.  There is nothing there which points to the device being spun down.
I tried spinning down disks on a couple of machines and no udev events were
created.  So I'm a bit suspicious that there is something I'm missing.

How exactly do you spin down the devices?
I use "hdparm -Y /dev/sda" or "hdparm -S 1 /dev/sda".

NeilBrown

> 
> KERNEL[558130.868218] change   /devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:0/end_device-0:0/target0:0:0/0:0:0:0/block/sda (block)
> ACTION=change
> DEVNAME=/dev/sda
> DEVPATH=/devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:0/end_device-0:0/target0:0:0/0:0:0:0/block/sda
> DEVTYPE=disk
> MAJOR=8
> MINOR=0
> SEQNUM=3239
> SUBSYSTEM=block
> 
> KERNEL[558132.028485] change   /devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:1/end_device-0:1/target0:0:1/0:0:1:0/block/sdb (block)
> ACTION=change
> DEVNAME=/dev/sdb
> DEVPATH=/devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:1/end_device-0:1/target0:0:1/0:0:1:0/block/sdb
> DEVTYPE=disk
> MAJOR=8
> MINOR=16
> SEQNUM=3240
> SUBSYSTEM=block
> 
> UDEV  [558139.263973] change   /devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:0/end_device-0:0/target0:0:0/0:0:0:0/block/sda (block)
> ACTION=change
> DEVLINKS=/dev/disk/by-id/ata-WDC_WD20EARS-00S8B1_WD-WCAVY1872131 /dev/disk/by-id/wwn-0x50014ee203dca984 /dev/disk/by-path/pci-0000:09:00.0-sas-0x4433221103000000-lun-0
> DEVNAME=/dev/sda
> DEVPATH=/devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:0/end_device-0:0/target0:0:0/0:0:0:0/block/sda
> DEVTYPE=disk
> ID_ATA=1
> ID_ATA_DOWNLOAD_MICROCODE=1
> ID_ATA_FEATURE_SET_AAM=1
> ID_ATA_FEATURE_SET_AAM_CURRENT_VALUE=254
> ID_ATA_FEATURE_SET_AAM_ENABLED=0
> ID_ATA_FEATURE_SET_AAM_VENDOR_RECOMMENDED_VALUE=128
> ID_ATA_FEATURE_SET_HPA=1
> ID_ATA_FEATURE_SET_HPA_ENABLED=1
> ID_ATA_FEATURE_SET_PM=1
> ID_ATA_FEATURE_SET_PM_ENABLED=1
> ID_ATA_FEATURE_SET_PUIS=1
> ID_ATA_FEATURE_SET_PUIS_ENABLED=0
> ID_ATA_FEATURE_SET_SECURITY=1
> ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
> ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=408
> ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=408
> ID_ATA_FEATURE_SET_SMART=1
> ID_ATA_FEATURE_SET_SMART_ENABLED=1
> ID_ATA_SATA=1
> ID_ATA_SATA_SIGNAL_RATE_GEN1=1
> ID_ATA_SATA_SIGNAL_RATE_GEN2=1
> ID_ATA_WRITE_CACHE=1
> ID_ATA_WRITE_CACHE_ENABLED=1
> ID_BUS=ata
> ID_FS_LABEL=alina.o81.5:media3
> ID_FS_LABEL_ENC=alina.o81.5:media3
> ID_FS_TYPE=linux_raid_member
> ID_FS_USAGE=raid
> ID_FS_UUID=058bd7b0-455c-0d7e-6de0-a845ea05ee38
> ID_FS_UUID_ENC=058bd7b0-455c-0d7e-6de0-a845ea05ee38
> ID_FS_UUID_SUB=da1c1cad-6661-174c-b0b9-4bc1a712902d
> ID_FS_UUID_SUB_ENC=da1c1cad-6661-174c-b0b9-4bc1a712902d
> ID_FS_VERSION=1.2
> ID_MODEL=WDC_WD20EARS-00S8B1
> ID_MODEL_ENC=WDC\x20WD20EARS-00S8B1\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
> ID_PATH=pci-0000:09:00.0-sas-0x4433221103000000-lun-0
> ID_PATH_TAG=pci-0000_09_00_0-sas-0x4433221103000000-lun-0
> ID_REVISION=80.00A80
> ID_SERIAL=WDC_WD20EARS-00S8B1_WD-WCAVY1872131
> ID_SERIAL_SHORT=WD-WCAVY1872131
> ID_TYPE=disk
> ID_WWN=0x50014ee203dca984
> ID_WWN_WITH_EXTENSION=0x50014ee203dca984
> MAJOR=8
> MINOR=0
> SEQNUM=3239
> SUBSYSTEM=block
> USEC_INITIALIZED=2003
> 
> UDEV  [558139.293742] change   /devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:1/end_device-0:1/target0:0:1/0:0:1:0/block/sdb (block)
> ACTION=change
> DEVLINKS=/dev/disk/by-id/ata-WDC_WD20EARS-00S8B1_WD-WCAVY1879365 /dev/disk/by-id/wwn-0x50014ee25931e63a /dev/disk/by-path/pci-0000:09:00.0-sas-0x4433221102000000-lun-0
> DEVNAME=/dev/sdb
> DEVPATH=/devices/pci0000:00/0000:00:06.0/0000:09:00.0/host0/port-0:1/end_device-0:1/target0:0:1/0:0:1:0/block/sdb
> DEVTYPE=disk
> ID_ATA=1
> ID_ATA_DOWNLOAD_MICROCODE=1
> ID_ATA_FEATURE_SET_AAM=1
> ID_ATA_FEATURE_SET_AAM_CURRENT_VALUE=254
> ID_ATA_FEATURE_SET_AAM_ENABLED=0
> ID_ATA_FEATURE_SET_AAM_VENDOR_RECOMMENDED_VALUE=128
> ID_ATA_FEATURE_SET_HPA=1
> ID_ATA_FEATURE_SET_HPA_ENABLED=1
> ID_ATA_FEATURE_SET_PM=1
> ID_ATA_FEATURE_SET_PM_ENABLED=1
> ID_ATA_FEATURE_SET_PUIS=1
> ID_ATA_FEATURE_SET_PUIS_ENABLED=0
> ID_ATA_FEATURE_SET_SECURITY=1
> ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
> ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=408
> ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=408
> ID_ATA_FEATURE_SET_SMART=1
> ID_ATA_FEATURE_SET_SMART_ENABLED=1
> ID_ATA_SATA=1
> ID_ATA_SATA_SIGNAL_RATE_GEN1=1
> ID_ATA_SATA_SIGNAL_RATE_GEN2=1
> ID_ATA_WRITE_CACHE=1
> ID_ATA_WRITE_CACHE_ENABLED=1
> ID_BUS=ata
> ID_FS_LABEL=alina.o81.5:media3
> ID_FS_LABEL_ENC=alina.o81.5:media3
> ID_FS_TYPE=linux_raid_member
> ID_FS_USAGE=raid
> ID_FS_UUID=058bd7b0-455c-0d7e-6de0-a845ea05ee38
> ID_FS_UUID_ENC=058bd7b0-455c-0d7e-6de0-a845ea05ee38
> ID_FS_UUID_SUB=ade4cbb9-501c-cd2c-00b2-607f1699133b
> ID_FS_UUID_SUB_ENC=ade4cbb9-501c-cd2c-00b2-607f1699133b
> ID_FS_VERSION=1.2
> ID_MODEL=WDC_WD20EARS-00S8B1
> ID_MODEL_ENC=WDC\x20WD20EARS-00S8B1\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
> ID_PATH=pci-0000:09:00.0-sas-0x4433221102000000-lun-0
> ID_PATH_TAG=pci-0000_09_00_0-sas-0x4433221102000000-lun-0
> ID_REVISION=80.00A80
> ID_SERIAL=WDC_WD20EARS-00S8B1_WD-WCAVY1879365
> ID_SERIAL_SHORT=WD-WCAVY1879365
> ID_TYPE=disk
> ID_WWN=0x50014ee25931e63a
> ID_WWN_WITH_EXTENSION=0x50014ee25931e63a
> MAJOR=8
> MINOR=16
> SEQNUM=3240
> SUBSYSTEM=block
> USEC_INITIALIZED=2361



* Re: frequent disk activity with mdadm-3.3
  2014-09-18 11:09               ` NeilBrown
@ 2014-09-18 11:22                 ` Marco Schindler
  0 siblings, 0 replies; 10+ messages in thread
From: Marco Schindler @ 2014-09-18 11:22 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid


On 18.09.2014, at 13:09, NeilBrown <neilb@suse.de> wrote:

> Thanks.  There is nothing there which points to the device being spun down.
> I tried spinning down disks on a couple of machines and no udev events were
> created.  So I'm a bit suspicious that there is something I'm missing.
> 
> How exactly do you spin down the devices?
> I use "hdparm -Y /dev/sda" or "hdparm -S 1 /dev/sda".

I'm using spindown (https://code.google.com/p/spindown), which in turn uses sg3_utils (http://sg.danny.cz/sg/sg3_utils.html), issuing "sg_start --stop DEVICE".
I can confirm there are no change events with hdparm -Y here either, but with sg_start --stop there are.
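
For anyone who wants to reproduce the difference, the two paths compared here are roughly (a sketch; /dev/sda as the example, and the uevent behaviour is just what I observe on this box):

hdparm -Y /dev/sda          # ATA-level sleep: no change uevent seen here
sg_start --stop /dev/sda    # SCSI START STOP UNIT via sg3_utils: emits a change uevent here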




Thread overview: 10+ messages
2014-09-11 22:05 frequent disk activity with mdadm-3.3 Marco Schindler
2014-09-11 22:24 ` NeilBrown
2014-09-11 22:45   ` Marco Schindler
2014-09-12 12:15     ` Marco Schindler
2014-09-15  0:18       ` NeilBrown
2014-09-15 10:52         ` Marco Schindler
2014-09-18 10:03           ` NeilBrown
2014-09-18 10:38             ` Marco Schindler
2014-09-18 11:09               ` NeilBrown
2014-09-18 11:22                 ` Marco Schindler
