All of lore.kernel.org
* Newly-created arrays don't auto-assemble - related to hostname change?
@ 2016-11-17  3:52 Andy Smith
  2016-11-17  6:09 ` NeilBrown
  2016-11-17 23:22 ` Peter Sangas
  0 siblings, 2 replies; 19+ messages in thread
From: Andy Smith @ 2016-11-17  3:52 UTC (permalink / raw)
  To: linux-raid

Hi,

I feel I am missing something very simple here, as I usually don't
have this issue, but here goes…

I've got a Debian jessie host on which I created four arrays during
install (md{0,1,2,3}), using the Debian installer and partman. These
auto-assemble fine.

After install the name of the server was changed from "tbd" to
"jfd". Another array was then created (md5), added to
/etc/mdadm/mdadm.conf and the initramfs was rebuilt
(update-initramfs -u).

md5 does not auto-assemble. It can be started fine after boot with:

    # mdadm --assemble /dev/md5

or:

    # mdadm --incremental /dev/sdc
    # mdadm --incremental /dev/sdd

/etc/mdadm/mdadm.conf:

    DEVICE /dev/sd*
    CREATE owner=root group=disk mode=0660 auto=yes
    HOMEHOST <ignore>
    MAILADDR root
    ARRAY /dev/md/0  metadata=1.2 UUID=400bac1d:e2c5d6ef:fea3b8c8:bcb70f8f
    ARRAY /dev/md/1  metadata=1.2 UUID=e29c8b89:705f0116:d888f77e:2b6e32f5
    ARRAY /dev/md/2  metadata=1.2 UUID=039b3427:4be5157a:6e2d53bd:fe898803
    ARRAY /dev/md/3  metadata=1.2 UUID=30f745ce:7ed41b53:4df72181:7406ea1d
    ARRAY /dev/md/5  metadata=1.2 UUID=957030cf:c09f023d:ceaebb27:e546f095

I've unpacked the initramfs and looked at the mdadm.conf in there
and it is identical.
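For reference, one way to make that comparison (a sketch; the image path and a plain gzip-compressed initramfs, as on jessie, are assumptions):

```shell
# Unpack the initramfs into a scratch directory and compare its
# mdadm.conf against the system copy.
tmp=$(mktemp -d) && cd "$tmp"
zcat /boot/initrd.img-"$(uname -r)" | cpio -id --quiet
diff -u etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf \
    && echo "initramfs copy is identical"
```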

Initially HOMEHOST was set to <system> (the default), but I noticed
when looking at --detail that md5 has:

           Name : jfd:5  (local to host jfd)

whereas the others have:

           Name : tbd:0

…so I changed it to <ignore> to see if that would help. It didn't.

So, I'd really appreciate any hints as to what I've missed here!

Here follows --detail and --examine of md5 and its members, then the
contents of /proc/mdstat after I have manually assembled md5.

$ sudo mdadm --detail /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Thu Nov 17 02:35:15 2016
     Raid Level : raid10
     Array Size : 1875243008 (1788.37 GiB 1920.25 GB)
  Used Dev Size : 1875243008 (1788.37 GiB 1920.25 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Thu Nov 17 02:35:15 2016
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : far=2
     Chunk Size : 512K

           Name : jfd:5  (local to host jfd)
           UUID : 957030cf:c09f023d:ceaebb27:e546f095
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       1       8       32        1      active sync   /dev/sdc

$ sudo mdadm --examine /dev/sd{c,d}
/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 957030cf:c09f023d:ceaebb27:e546f095
           Name : jfd:5  (local to host jfd)
  Creation Time : Thu Nov 17 02:35:15 2016
     Raid Level : raid10
   Raid Devices : 2

 Avail Dev Size : 3750486704 (1788.37 GiB 1920.25 GB)
     Array Size : 1875243008 (1788.37 GiB 1920.25 GB)
  Used Dev Size : 3750486016 (1788.37 GiB 1920.25 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=688 sectors
          State : clean
    Device UUID : 4ac82c29:2d109465:7fff9b22:8c411c1e

Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Nov 17 02:35:15 2016
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 96d669f1 - correct
         Events : 0

         Layout : far=2
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 957030cf:c09f023d:ceaebb27:e546f095
           Name : jfd:5  (local to host jfd)
  Creation Time : Thu Nov 17 02:35:15 2016
     Raid Level : raid10
   Raid Devices : 2
 Avail Dev Size : 3750486704 (1788.37 GiB 1920.25 GB)
     Array Size : 1875243008 (1788.37 GiB 1920.25 GB)
  Used Dev Size : 3750486016 (1788.37 GiB 1920.25 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=688 sectors
          State : clean
    Device UUID : 3a067652:6e88afae:82722342:0036bae0

Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Nov 17 02:35:15 2016
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : eb608799 - correct
         Events : 0

         Layout : far=2
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

$ cat /proc/mdstat 
Personalities : [raid1] [raid10] 
md5 : active (auto-read-only) raid10 sdd[0] sdc[1]
      1875243008 blocks super 1.2 512K chunks 2 far-copies [2/2] [UU]
      bitmap: 0/14 pages [0KB], 65536KB chunk

md3 : active raid10 sda5[0] sdb5[1]
      12199936 blocks super 1.2 512K chunks 2 far-copies [2/2] [UU]
      
md2 : active (auto-read-only) raid10 sda3[0] sdb3[1]
      975872 blocks super 1.2 512K chunks 2 far-copies [2/2] [UU]
      
md1 : active raid10 sda2[0] sdb2[1]
      1951744 blocks super 1.2 512K chunks 2 far-copies [2/2] [UU]
      
md0 : active raid1 sda1[0] sdb1[1]
      498368 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

Cheers,
Andy

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Newly-created arrays don't auto-assemble - related to hostname change?
  2016-11-17  3:52 Newly-created arrays don't auto-assemble - related to hostname change? Andy Smith
@ 2016-11-17  6:09 ` NeilBrown
  2016-11-17 15:09   ` Andy Smith
  2016-11-17 23:22 ` Peter Sangas
  1 sibling, 1 reply; 19+ messages in thread
From: NeilBrown @ 2016-11-17  6:09 UTC (permalink / raw)
  To: Andy Smith, linux-raid


On Thu, Nov 17 2016, Andy Smith wrote:

> Hi,
>
> I feel I am missing something very simple here, as I usually don't
> have this issue, but here goes…
>
> I've got a Debian jessie host on which I created four arrays during
> install (md{0,1,2,3}), using the Debian installer and partman. These
> auto-assemble fine.
>
> After install the name of the server was changed from "tbd" to
> "jfd". Another array was then created (md5), added to
> /etc/mdadm/mdadm.conf and the initramfs was rebuilt
> (update-initramfs -u).
>
> md5 does not auto-assemble. It can be started fine after boot with:
>
>     # mdadm --assemble /dev/md5
>
> or:
>
>     # mdadm --incremental /dev/sdc
>     # mdadm --incremental /dev/sdd

This is almost exactly what udev does when the devices are discovered,
so if it works here, it should work when udev does it.

My only guess is that maybe the "DEVICE /dev/sd*" line in the mdadm.conf
is causing confusion.  udev might be using a different name, though that
would be odd.

Can you try removing that line and see if it makes a difference?

NeilBrown


> [...]



* Re: Newly-created arrays don't auto-assemble - related to hostname change?
  2016-11-17  6:09 ` NeilBrown
@ 2016-11-17 15:09   ` Andy Smith
  2016-11-17 22:43     ` NeilBrown
  0 siblings, 1 reply; 19+ messages in thread
From: Andy Smith @ 2016-11-17 15:09 UTC (permalink / raw)
  To: linux-raid

Hi Neil,

On Thu, Nov 17, 2016 at 05:09:28PM +1100, NeilBrown wrote:
> On Thu, Nov 17 2016, Andy Smith wrote:
> > After install the name of the server was changed from "tbd" to
> > "jfd". Another array was then created (md5), added to
> > /etc/mdadm/mdadm.conf and the initramfs was rebuilt
> > (update-initramfs -u).
> >
> > md5 does not auto-assemble. It can be started fine after boot with:
> >
> >     # mdadm --assemble /dev/md5
> >
> > or:
> >
> >     # mdadm --incremental /dev/sdc
> >     # mdadm --incremental /dev/sdd
> 
> This is almost exactly what udev does when the devices are discovered,
> so if it works here, it should work when udev does it.

Indeed. So confusing. :(

> My only guess is that maybe the "DEVICE /dev/sd*" line in the mdadm.conf
> is causing confusion.  udev might be using a different name, though that
> would be odd.
> 
> Can you try removing that line and see if it makes a difference?

I've now tried that and it hasn't made a difference.

I don't know anything about udev but I guess that assembly is
handled by /lib/udev/rules.d/64-md-raid-assembly.rules whose only
relevant ACTION lines are:

# remember you can limit what gets auto/incrementally assembled by
# mdadm.conf(5)'s 'AUTO' and selectively whitelist using 'ARRAY'
ACTION=="add|change", IMPORT{program}="/sbin/mdadm --incremental --export $tempnode --offroot ${DEVLINKS}"
ACTION=="add|change", ENV{MD_STARTED}=="*unsafe*", ENV{MD_FOREIGN}=="no", ENV{SYSTEMD_WANTS}+="mdadm-last-resort@$env{MD_DEVICE}.timer"

…but I can't work out why they wouldn't be working here.

Time for a Debian bug report?

Cheers,
Andy


* Re: Newly-created arrays don't auto-assemble - related to hostname change?
  2016-11-17 15:09   ` Andy Smith
@ 2016-11-17 22:43     ` NeilBrown
  2016-11-18  2:31       ` Andy Smith
  0 siblings, 1 reply; 19+ messages in thread
From: NeilBrown @ 2016-11-17 22:43 UTC (permalink / raw)
  To: Andy Smith, linux-raid


On Fri, Nov 18 2016, Andy Smith wrote:

> Hi Neil,
>
> On Thu, Nov 17, 2016 at 05:09:28PM +1100, NeilBrown wrote:
>> On Thu, Nov 17 2016, Andy Smith wrote:
>> > After install the name of the server was changed from "tbd" to
>> > "jfd". Another array was then created (md5), added to
>> > /etc/mdadm/mdadm.conf and the initramfs was rebuilt
>> > (update-initramfs -u).
>> >
>> > md5 does not auto-assemble. It can be started fine after boot with:
>> >
>> >     # mdadm --assemble /dev/md5
>> >
>> > or:
>> >
>> >     # mdadm --incremental /dev/sdc
>> >     # mdadm --incremental /dev/sdd
>> 
>> This is almost exactly what udev does when the devices are discovered,
>> so if it works here, it should work when udev does it.
>
> Indeed. So confusing. :(
>
>> My only guess is that maybe the "DEVICE /dev/sd*" line in the mdadm.conf
>> is causing confusion.  udev might be using a different name, though that
>> would be odd.
>> 
>> Can you try removing that line and see if it makes a difference?
>
> I've now tried that and it hasn't made a difference.
>
> I don't know anything about udev but I guess that assembly is
> handled by /lib/udev/rules.d/64-md-raid-assembly.rules whose only
> relevant ACTION lines are:
>
> # remember you can limit what gets auto/incrementally assembled by
> # mdadm.conf(5)'s 'AUTO' and selectively whitelist using 'ARRAY'
> ACTION=="add|change", IMPORT{program}="/sbin/mdadm --incremental --export $tempnode --offroot ${DEVLINKS}"
> ACTION=="add|change", ENV{MD_STARTED}=="*unsafe*", ENV{MD_FOREIGN}=="no", ENV{SYSTEMD_WANTS}+="mdadm-last-resort@$env{MD_DEVICE}.timer"
>
> …but I can't work out why they wouldn't be working here.
>
> Time for a Debian bug report?

Maybe, though as they are using *exactly* the upstream mdadm-udev files
it probably isn't something they've broken.
Something you could try, after boot and while the arrays are still not
assembled, is

  echo change > /sys/block/sdc/uevent
  echo change > /sys/block/sdd/uevent

That should cause udev to assemble the array.
If you had "udevadm monitor" running at the same time, you would even
see the events.

Alternately you could use "udevadm trigger" instead of the "echo"
commands. That effectively sends 'change' to all devices.

If that doesn't work, then looking over the udev logs, and possibly
turning on extra udev logging, might lead to an answer.
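One way to get that extra logging, sketched (assumes systemd's udevd and that /dev/sdc is one of the members):

```shell
# Raise udev's log level, replay the change event for one member,
# then read the daemon's log before restoring the default level.
udevadm control --log-priority=debug
echo change > /sys/block/sdc/uevent
journalctl -u systemd-udevd -e        # or check /var/log/syslog
udevadm control --log-priority=info
```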

If it does work, then we need to work out why it doesn't work earlier in
boot.
/etc/init.d/udev runs "udevadm trigger --action=add" which should pick
up anything that the initrd missed.  Maybe adding some tracing around
that would help.

NeilBrown



* RE: Newly-created arrays don't auto-assemble - related to hostname change?
  2016-11-17  3:52 Newly-created arrays don't auto-assemble - related to hostname change? Andy Smith
  2016-11-17  6:09 ` NeilBrown
@ 2016-11-17 23:22 ` Peter Sangas
  2016-11-18  2:03   ` Glenn Enright
  1 sibling, 1 reply; 19+ messages in thread
From: Peter Sangas @ 2016-11-17 23:22 UTC (permalink / raw)
  To: 'Andy Smith', linux-raid

Andy, your question has prompted me to think about the following: I'm using Ubuntu 16 and have a running system with RAID1. If I change the hostname of the system, do I need to make any changes to /etc/mdadm/mdadm.conf, and if so, how do I do that?

I see the hostname is recorded at the end of the ARRAY lines in /etc/mdadm/mdadm.conf (name=hostname:0, for example).
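From a skim of mdadm(8) my guess is something along these lines, though I haven't tried it (md0 is just an example, and the array has to be stopped first):

```shell
# Untested sketch: rewrite the hostname recorded in the superblock,
# then regenerate the config and initramfs so boot-time assembly
# matches the new name.
mdadm --stop /dev/md0
mdadm --assemble /dev/md0 --update=homehost --homehost="$(hostname -s)"
/usr/share/mdadm/mkconf        # compare its output with /etc/mdadm/mdadm.conf
update-initramfs -u
```

Is that the right idea?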

Thank you,
Pete


-----Original Message-----
From: Andy Smith [mailto:andy@strugglers.net] 
Sent: Wednesday, November 16, 2016 7:53 PM
To: linux-raid@vger.kernel.org
Subject: Newly-created arrays don't auto-assemble - related to hostname change?

[...]



* Re: Newly-created arrays don't auto-assemble - related to hostname change?
  2016-11-17 23:22 ` Peter Sangas
@ 2016-11-18  2:03   ` Glenn Enright
  0 siblings, 0 replies; 19+ messages in thread
From: Glenn Enright @ 2016-11-18  2:03 UTC (permalink / raw)
  To: mdraid

Could the /usr/share/mdadm/mkconf script help? It can be used to print
out the running equivalent of mdadm.conf

It might be a good thing to use for comparison; I use it a fair bit
in my scripting with good results.
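Something like this makes the comparison directly (path as shipped by Debian/Ubuntu's mdadm package):

```shell
# Diff the config mdadm would generate for the running arrays
# against the installed file; an ARRAY line missing from
# /etc/mdadm/mdadm.conf shows up as a hunk.
/usr/share/mdadm/mkconf | diff -u /etc/mdadm/mdadm.conf -
```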

Regards, Glenn

On 18 November 2016 at 12:22, Peter Sangas <pete@wnsdev.com> wrote:
> Andy, your question has prompted me to think about the following:   I'm using Ubuntu 16 and have a running system with RAID1.   If I change the hostname of the system do I need to make any changes to /etc/mdadm/mdadm.conf file and if so how do I do that?
>
> I see the host name is listed at the end of /etc/mdadm/mdadm.conf (name=hostname:0) for example.
>
> Thank you,
> Pete
> [...]


* Re: Newly-created arrays don't auto-assemble - related to hostname change?
  2016-11-17 22:43     ` NeilBrown
@ 2016-11-18  2:31       ` Andy Smith
  2016-11-18  3:02         ` NeilBrown
  0 siblings, 1 reply; 19+ messages in thread
From: Andy Smith @ 2016-11-18  2:31 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

Hi Neil,

On Fri, Nov 18, 2016 at 09:43:44AM +1100, NeilBrown wrote:
> Something you could try, after boot and while the arrays are still not
> assembled, is
> 
>   echo change > /sys/block/sdc/uevent
>   echo change > /sys/block/sdd/uevent
> 
> That should cause udev to assemble the array.

Nothing appeared to happen. There was nothing interesting in syslog
or systemd's journal. Here's the output from "udevadm monitor":

$ sudo udevadm monitor
monitor will print the received events for:
UDEV - the event which udev sends out after rule processing
KERNEL - the kernel uevent

KERNEL[12376.719100] change   /devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:0/end_device-10:0/target10:0:0/10:0:0:0/block/sdc (block)
UDEV  [12376.722996] change   /devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:0/end_device-10:0/target10:0:0/10:0:0:0/block/sdc (block)
KERNEL[12388.329566] change   /devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:1/end_device-10:1/target10:0:1/10:0:1:0/block/sdd (block)
UDEV  [12388.333441] change   /devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:1/end_device-10:1/target10:0:1/10:0:1:0/block/sdd (block)

> If that doesn't work, then looking over the udev logs, and possibly
> turning on extra udev logging, might lead to an answer.

I don't know how to do that, but I'll look into it - thanks.

Cheers,
Andy


* Re: Newly-created arrays don't auto-assemble - related to hostname change?
  2016-11-18  2:31       ` Andy Smith
@ 2016-11-18  3:02         ` NeilBrown
  2016-11-18  3:47           ` Andy Smith
  0 siblings, 1 reply; 19+ messages in thread
From: NeilBrown @ 2016-11-18  3:02 UTC (permalink / raw)
  To: Andy Smith; +Cc: linux-raid


On Fri, Nov 18 2016, Andy Smith wrote:

> Hi Neil,
>
> On Fri, Nov 18, 2016 at 09:43:44AM +1100, NeilBrown wrote:
>> Something you could try, after boot and while the arrays are still not
>> assembled, is
>> 
>>   echo change > /sys/block/sdc/uevent
>>   echo change > /sys/block/sdd/uevent
>> 
>> That should cause udev to assemble the array.
>
> Nothing appeared to happen. There was nothing interesting in syslog
> or systemd's journal. Here's the output from "udevadm monitor":

OK, we are getting closer.  That is good.

>
> $ sudo udevadm monitor
> monitor will print the received events for:
> UDEV - the event which udev sends out after rule processing
> KERNEL - the kernel uevent
>
> KERNEL[12376.719100] change   /devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:0/end_device-10:0/target10:0:0/10:0:0:0/block/sdc (block)
> UDEV  [12376.722996] change   /devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:0/end_device-10:0/target10:0:0/10:0:0:0/block/sdc (block)
> KERNEL[12388.329566] change   /devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:1/end_device-10:1/target10:0:1/10:0:1:0/block/sdd (block)
> UDEV  [12388.333441] change   /devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:1/end_device-10:1/target10:0:1/10:0:1:0/block/sdd (block)
>
>> If that doesn't work, then looking over the udev logs, and possibly
>> turning on extra udev logging, might lead to an answer.
>
> I don't know how to do that, but I'll look into it - thanks.

 udevadm control -l debug

might help.

NeilBrown

>
> Cheers,
> Andy



* Re: Newly-created arrays don't auto-assemble - related to hostname change?
  2016-11-18  3:02         ` NeilBrown
@ 2016-11-18  3:47           ` Andy Smith
  2016-11-18  4:08             ` NeilBrown
  0 siblings, 1 reply; 19+ messages in thread
From: Andy Smith @ 2016-11-18  3:47 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

Hi Neil,

I've managed to work around the issue.

In scrolling back on the serial console to view the early boot
messages and debug logging from udev, I noticed that only sd{a,b}
drives were detected during the initramfs stage. sd{c,d,e,f} were
absent.

This server has two internal flash devices, then the rest of the
drives are on a SAS controller. Evidently the initramfs does not
contain the driver for the SAS controller (mpt3sas). Possibly the
initramfs script correctly deduced that the system could be booted
with only sd{a,b} and did not bother including mpt3sas.

Anyway, I added mpt3sas to /etc/initramfs-tools/modules and rebuilt
the initramfs, and now all drives appear at boot time and all arrays
are assembled:

Begin: Loading essential drivers ... done.
Begin: Running /scripts/init-premount ... done.
Begin: Mounting root file system ... Begin: Running /scripts/local-top ... Begin: Assembling all MD arrays ... [   40.158317] random: nonblocking pool is initialized
[   40.161795] md: bind<sdf1>
[   40.162161] md: bind<sde1>
[   40.163226] md: raid1 personality registered for level 1
[   40.163654] md/raid1:md0: active with 2 out of 2 mirrors
[   40.163745] md0: detected capacity change from 0 to 510328832
[   40.164259]  md0: unknown partition table
mdadm: /dev/md/0 has been started with 2 drives.
[   40.176662] md: bind<sdf2>
[   40.177235] md: bind<sde2>
[   40.178332] md: raid10 personality registered for level 10
[   40.178656] md/raid10:md1: active with 2 out of 2 devices
[   40.178746] md1: detected capacity change from 0 to 1998585856
[   40.179170]  md1: unknown partition table
mdadm: /dev/md/1 has been started with 2 drives.
[   40.189887] md: md2 stopped.
[   40.191292] md: bind<sdf3>
[   40.191498] md: bind<sde3>
[   40.192705] md/raid10:md2: active with 2 out of 2 devices
[   40.192797] md2: detected capacity change from 0 to 999292928
[   40.193128]  md2: unknown partition table
mdadm: /dev/md/2 has been started with 2 drives.
[   40.204234] md: md3 stopped.
[   40.205278] md: bind<sdf5>
[   40.205695] md: bind<sde5>
[   40.206613] md/raid10:md3: active with 2 out of 2 devices
[   40.206704] md3: detected capacity change from 0 to 12492734464
[   40.207094]  md3: unknown partition table
mdadm: /dev/md/3 has been started with 2 drives.
[   40.218963] md: md5 stopped.
[   40.223807]  sdb: unknown partition table
[   40.228841]  sda: unknown partition table
[   40.229044] md: bind<sda>
[   40.229613] md: bind<sdb>
[   40.234024]  sdb: unknown partition table
[   40.243686] md/raid10:md5: active with 2 out of 2 devices
[   40.243867] created bitmap (14 pages) for device md5
[   40.244684] md5: bitmap initialized from disk: read 1 pages, set 0 of 28614 bits
[   40.245376] md5: detected capacity change from 0 to 1920248840192
[   40.248331]  md5: unknown partition table
mdadm: /dev/md/5 has been started with 2 drives.
Success: assembled all arrays.
done.
[   40.260413] device-mapper: uevent: version 1.0.3
[   40.260540] device-mapper: ioctl: 4.27.0-ioctl (2013-10-30) initialised: dm-devel@redhat.com
done.
Begin: Running /scripts/local-premount ... [   40.265067] PM: Starting manual resume from disk
done.
Begin: Will now check root file system ... fsck from util-linux 2.25.2
[/sbin/fsck.ext4 (1) -- /dev/md1] fsck.ext4 -a -C0 /dev/md1
root: clean, 44775/122160 files, 265568/487936 blocks
done.
[   40.296866] EXT4-fs (md1): mounted filesystem with ordered data mode. Opts: (null)
done.
Begin: Running /scripts/local-bottom ... done.
Begin: Running /scripts/init-bottom ... done.
[   40.350701] systemd[1]: systemd 215 running in system mode. (+PAM +AUDIT +SELINUX +IMA +SYSVINIT +LIBCRYPTSETUP +GCRYPT +ACL +XZ -SECCOMP -APPARMOR)
[   40.350843] systemd[1]: Detected virtualization 'xen'.
[   40.350920] systemd[1]: Detected architecture 'x86-64'.

Welcome to Debian GNU/Linux 8 (jessie)!

Am I right in thinking this is not incremental assembly by udev, but
kernel auto-assembly?

Also, in the previous configuration, mpt3sas was being loaded once
the root filesystem had been mounted. All the drives on the SAS
controller were then available, so udev should have assembled md5 at
that point, right? So I think there is still a problem here, just
one which I have worked around in some other way.

If that is the case, would you like me to continue debugging udev?

Cheers,
Andy


* Re: Newly-created arrays don't auto-assemble - related to hostname change?
  2016-11-18  3:47           ` Andy Smith
@ 2016-11-18  4:08             ` NeilBrown
  2016-11-18  4:17               ` Andy Smith
  0 siblings, 1 reply; 19+ messages in thread
From: NeilBrown @ 2016-11-18  4:08 UTC (permalink / raw)
  To: Andy Smith; +Cc: linux-raid


On Fri, Nov 18 2016, Andy Smith wrote:

>
> Am I right in thinking this is not incremental assembly by udev, but
> kernel auto-assembly?

That isn't kernel auto-assembly, but it doesn't look like incremental
assembly either.

 /usr/share/initramfs-tools/scripts/local-block/mdadm

runs
  mdadm -q --assemble --scan --no-degraded
to start all the arrays that the initramfs can find.  So it is
mdadm-based auto-assembly.
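As a rough sketch (my assumption about the jessie-era initramfs-tools hook; the real script does more than this one call), the heart of that script is a single scan assembly. Shown as a dry run so it is safe to execute anywhere:

```shell
# Dry-run sketch of the initramfs assembly step (assumption: behaviour of
# the local-block/mdadm hook on Debian jessie). Printing the command instead
# of executing it keeps this safe without root or mdadm installed.
assemble_cmd="mdadm -q --assemble --scan --no-degraded"
echo "would run: $assemble_cmd"
```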

>
> Also, in the previous configuration, mpt3sas was being loaded once
> the root filesystem had been mounted. All the drives on the SAS
> controller were then available, so udev should have assembled md5 at
> that point, right? So I think there is still a problem here, just
> one which I have worked around in some other way.

Yes, I think there is still a problem. When mpt3sas is loaded, udev
should trigger assembly of the arrays.

>
> If that is the case, would you like me to continue debugging udev?

Up to you, but I have an idea.
The udev rules file depends on 'blkid' having been run.
  /lib/udev/rules.d/60-persistent-storage.rules
does this, but not for
  KERNEL=="fd*|mtd*|nbd*|gnbd*|btibm*|dm-*|md*|zram*|mmcblk[0-9]*rpmb"

... though that wouldn't apply to you.
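For readers following along, the two rule fragments being discussed have roughly this shape (a simplified illustration, not a verbatim copy of Debian's files; consult 60-persistent-storage.rules and the mdadm rules file on the target system):

```
# simplified sketch of the blkid import rule
SUBSYSTEM=="block", KERNEL!="fd*|mtd*|nbd*|gnbd*|dm-*|md*|zram*", \
  IMPORT{builtin}="blkid"
# simplified sketch of the incremental-assembly rule keyed on its result
SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="linux_raid_member", \
  IMPORT{program}="/sbin/mdadm --incremental --export $devnode --offroot $env{DEVLINKS}"
```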

what does
  udevadm info /dev/sdc

report?

NeilBrown




* Re: Newly-created arrays don't auto-assemble - related to hostname change?
  2016-11-18  4:08             ` NeilBrown
@ 2016-11-18  4:17               ` Andy Smith
  2016-11-21  4:32                 ` NeilBrown
  0 siblings, 1 reply; 19+ messages in thread
From: Andy Smith @ 2016-11-18  4:17 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

Hi Neil,

On Fri, Nov 18, 2016 at 03:08:23PM +1100, NeilBrown wrote:
> Up to you, but I have an idea.
> The udev rules file depends on 'blkid' having been run.
>   /lib/udev/rules.d/60-persistent-storage.rules
> does this, but not for
>   KERNEL=="fd*|mtd*|nbd*|gnbd*|btibm*|dm-*|md*|zram*|mmcblk[0-9]*rpmb"
> 
> ... though that wouldn't apply to you.
> 
> what does
>   udevadm info /dev/sdc

(Since mpt3sas got loaded early, the device identifiers have all
changed; what was sd{a,b} has now shifted to the end as sd{e,f}, so
the two members of md5 are now sd{a,b})

$ sudo udevadm info /dev/sda
P: /devices/pci0000:00/0000:00:01.0/0000:01:00.0/host0/port-0:0/end_device-0:0/target0:0:0/0:0:0:0/block/sda
N: sda
S: disk/by-id/ata-SAMSUNG_MZ7KM1T9HAJM-00005_S2HNNAAH200633
S: disk/by-id/wwn-0x5002538c0007e7a8
S: disk/by-path/pci-0000:01:00.0-sas-0x4433221100000000-lun-0
E: DEVLINKS=/dev/disk/by-id/ata-SAMSUNG_MZ7KM1T9HAJM-00005_S2HNNAAH200633 /dev/disk/by-id/wwn-0x5002538c0007e7a8 /dev/disk/by-path/pci-0000:01:00.0-sas-0x4433221100000000-lun-0
E: DEVNAME=/dev/sda
E: DEVPATH=/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host0/port-0:0/end_device-0:0/target0:0:0/0:0:0:0/block/sda
E: DEVTYPE=disk
E: ID_ATA=1
E: ID_ATA_DOWNLOAD_MICROCODE=1
E: ID_ATA_FEATURE_SET_HPA=1
E: ID_ATA_FEATURE_SET_HPA_ENABLED=1
E: ID_ATA_FEATURE_SET_PM=1
E: ID_ATA_FEATURE_SET_PM_ENABLED=1
E: ID_ATA_FEATURE_SET_SECURITY=1
E: ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
E: ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=32
E: ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=32
E: ID_ATA_FEATURE_SET_SMART=1
E: ID_ATA_FEATURE_SET_SMART_ENABLED=1
E: ID_ATA_ROTATION_RATE_RPM=0
E: ID_ATA_SATA=1
E: ID_ATA_SATA_SIGNAL_RATE_GEN1=1
E: ID_ATA_SATA_SIGNAL_RATE_GEN2=1
E: ID_ATA_WRITE_CACHE=1
E: ID_ATA_WRITE_CACHE_ENABLED=1
E: ID_BUS=ata
E: ID_FS_LABEL=tbd:5
E: ID_FS_LABEL_ENC=tbd:5
E: ID_FS_TYPE=linux_raid_member
E: ID_FS_USAGE=raid
E: ID_FS_UUID=957030cf-c09f-023d-ceae-bb27e546f095
E: ID_FS_UUID_ENC=957030cf-c09f-023d-ceae-bb27e546f095
E: ID_FS_UUID_SUB=4ac82c29-2d10-9465-7fff-9b228c411c1e
E: ID_FS_UUID_SUB_ENC=4ac82c29-2d10-9465-7fff-9b228c411c1e
E: ID_FS_VERSION=1.2
E: ID_MODEL=SAMSUNG_MZ7KM1T9HAJM-00005
E: ID_MODEL_ENC=SAMSUNG\x20MZ7KM1T9HAJM-00005\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
E: ID_PATH=pci-0000:01:00.0-sas-0x4433221100000000-lun-0
E: ID_PATH_TAG=pci-0000_01_00_0-sas-0x4433221100000000-lun-0
E: ID_REVISION=GXM1003Q
E: ID_SERIAL=SAMSUNG_MZ7KM1T9HAJM-00005_S2HNNAAH200633
E: ID_SERIAL_SHORT=S2HNNAAH200633
E: ID_TYPE=disk
E: ID_WWN=0x5002538c0007e7a8
E: ID_WWN_WITH_EXTENSION=0x5002538c0007e7a8
E: MAJOR=8
E: MINOR=0
E: SUBSYSTEM=block
E: TAGS=:systemd:
E: UDEV_LOG=7
E: USEC_INITIALIZED=38597

Cheers,
Andy


* Re: Newly-created arrays don't auto-assemble - related to hostname change?
  2016-11-18  4:17               ` Andy Smith
@ 2016-11-21  4:32                 ` NeilBrown
  2016-11-21  6:02                   ` Andy Smith
  0 siblings, 1 reply; 19+ messages in thread
From: NeilBrown @ 2016-11-21  4:32 UTC (permalink / raw)
  To: Andy Smith; +Cc: linux-raid


On Fri, Nov 18 2016, Andy Smith wrote:

> Hi Neil,
>
> On Fri, Nov 18, 2016 at 03:08:23PM +1100, NeilBrown wrote:
>> Up to you, but I have an idea.
>> The udev rules file depends on 'blkid' having been run.
>>   /lib/udev/rules.d/60-persistent-storage.rules
>> does this, but not for
>>   KERNEL=="fd*|mtd*|nbd*|gnbd*|btibm*|dm-*|md*|zram*|mmcblk[0-9]*rpmb"
>> 
>> ... though that wouldn't apply to you.
>> 
>> what does
>>   udevadm info /dev/sdc
>
> (Since mpt3sas got loaded early the device identifiers have all
> changed; what was sd{a,b} have now shifted to the end as sd{e,f}, so
> the two members of md5 are now sd{a,b})
>
> $ sudo udevadm info /dev/sda
> P: /devices/pci0000:00/0000:00:01.0/0000:01:00.0/host0/port-0:0/end_device-0:0/target0:0:0/0:0:0:0/block/sda
> N: sda
> S: disk/by-id/ata-SAMSUNG_MZ7KM1T9HAJM-00005_S2HNNAAH200633
> S: disk/by-id/wwn-0x5002538c0007e7a8
> S: disk/by-path/pci-0000:01:00.0-sas-0x4433221100000000-lun-0
> E: DEVLINKS=/dev/disk/by-id/ata-SAMSUNG_MZ7KM1T9HAJM-00005_S2HNNAAH200633 /dev/disk/by-id/wwn-0x5002538c0007e7a8 /dev/disk/by-path/pci-0000:01:00.0-sas-0x4433221100000000-lun-0
> E: DEVNAME=/dev/sda
> E: DEVPATH=/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host0/port-0:0/end_device-0:0/target0:0:0/0:0:0:0/block/sda
> E: DEVTYPE=disk
> E: ID_ATA=1
> E: ID_ATA_DOWNLOAD_MICROCODE=1
> E: ID_ATA_FEATURE_SET_HPA=1
> E: ID_ATA_FEATURE_SET_HPA_ENABLED=1
> E: ID_ATA_FEATURE_SET_PM=1
> E: ID_ATA_FEATURE_SET_PM_ENABLED=1
> E: ID_ATA_FEATURE_SET_SECURITY=1
> E: ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
> E: ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=32
> E: ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=32
> E: ID_ATA_FEATURE_SET_SMART=1
> E: ID_ATA_FEATURE_SET_SMART_ENABLED=1
> E: ID_ATA_ROTATION_RATE_RPM=0
> E: ID_ATA_SATA=1
> E: ID_ATA_SATA_SIGNAL_RATE_GEN1=1
> E: ID_ATA_SATA_SIGNAL_RATE_GEN2=1
> E: ID_ATA_WRITE_CACHE=1
> E: ID_ATA_WRITE_CACHE_ENABLED=1
> E: ID_BUS=ata
> E: ID_FS_LABEL=tbd:5
> E: ID_FS_LABEL_ENC=tbd:5
> E: ID_FS_TYPE=linux_raid_member

This is encouraging.  It means that blkid ran and udev knows that this
is part of an md array.

However there are no "MD_*" properties ... I guess that is normal if the latest udev
event happened after the array was assembled.
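(As a cross-check, udev's dashed ID_FS_UUID and the colon-separated UUID in mdadm.conf are the same value in different groupings; this small shell helper, purely illustrative and not part of mdadm or udev, converts one to the other:)

```shell
# Illustrative only: regroup udev's dashed ID_FS_UUID into the
# 8-hex-digit colon groups used by ARRAY lines in mdadm.conf.
udev_uuid="957030cf-c09f-023d-ceae-bb27e546f095"
mdadm_uuid=$(printf '%s' "$udev_uuid" | tr -d '-' | sed 's/.\{8\}/&:/g; s/:$//')
echo "$mdadm_uuid"   # 957030cf:c09f023d:ceaebb27:e546f095
```

which matches the UUID on the /dev/md/5 ARRAY line earlier in the thread.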

If you still want to get to the bottom of this, you might need to revert
your work-around, then try "udevadm monitor", "udevadm info" and "udevadm
trigger" while the array is not assembled.

You could possibly try stopping the array, then running "udevadm
trigger".
If that works, you can revert the recent change to module loading.
If it doesn't result in the array being assembled, then it would be a good
time to try "udevadm info" again.

NeilBrown


> E: ID_FS_USAGE=raid
> E: ID_FS_UUID=957030cf-c09f-023d-ceae-bb27e546f095
> E: ID_FS_UUID_ENC=957030cf-c09f-023d-ceae-bb27e546f095
> E: ID_FS_UUID_SUB=4ac82c29-2d10-9465-7fff-9b228c411c1e
> E: ID_FS_UUID_SUB_ENC=4ac82c29-2d10-9465-7fff-9b228c411c1e
> E: ID_FS_VERSION=1.2
> E: ID_MODEL=SAMSUNG_MZ7KM1T9HAJM-00005
> E: ID_MODEL_ENC=SAMSUNG\x20MZ7KM1T9HAJM-00005\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
> E: ID_PATH=pci-0000:01:00.0-sas-0x4433221100000000-lun-0
> E: ID_PATH_TAG=pci-0000_01_00_0-sas-0x4433221100000000-lun-0
> E: ID_REVISION=GXM1003Q
> E: ID_SERIAL=SAMSUNG_MZ7KM1T9HAJM-00005_S2HNNAAH200633
> E: ID_SERIAL_SHORT=S2HNNAAH200633
> E: ID_TYPE=disk
> E: ID_WWN=0x5002538c0007e7a8
> E: ID_WWN_WITH_EXTENSION=0x5002538c0007e7a8
> E: MAJOR=8
> E: MINOR=0
> E: SUBSYSTEM=block
> E: TAGS=:systemd:
> E: UDEV_LOG=7
> E: USEC_INITIALIZED=38597
>
> Cheers,
> Andy



* Re: Newly-created arrays don't auto-assemble - related to hostname change?
  2016-11-21  4:32                 ` NeilBrown
@ 2016-11-21  6:02                   ` Andy Smith
  2016-11-21 22:56                     ` NeilBrown
  0 siblings, 1 reply; 19+ messages in thread
From: Andy Smith @ 2016-11-21  6:02 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

Hi Neil,

On Mon, Nov 21, 2016 at 03:32:42PM +1100, NeilBrown wrote:
> If you still want to get to the bottom of this, you might need to revert
> your work-around, then try "udevadm monitor", "udevadm info" and "udevadm
> trigger" while the array is not assembled.

I have removed my addition of "mpt3sas" from
/etc/initramfs-tools/modules and rebooted, so that md5 is again not
assembled.

Result of

    udevadm info /dev/sdc

P: /devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:0/end_device-10:0/target10:0:0/10:0:0:0/block/sdc
N: sdc
S: disk/by-id/ata-SAMSUNG_MZ7KM1T9HAJM-00005_S2HNNAAH200633
S: disk/by-id/wwn-0x5002538c0007e7a8
S: disk/by-path/pci-0000:01:00.0-sas-0x4433221100000000-lun-0
E: DEVLINKS=/dev/disk/by-id/ata-SAMSUNG_MZ7KM1T9HAJM-00005_S2HNNAAH200633 /dev/disk/by-id/wwn-0x5002538c0007e7a8 /dev/disk/by-path/pci-0000:01:00.0-sas-0x4433221100000000-lun-0
E: DEVNAME=/dev/sdc
E: DEVPATH=/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:0/end_device-10:0/target10:0:0/10:0:0:0/block/sdc 
E: DEVTYPE=disk
E: ID_ATA=1
E: ID_ATA_DOWNLOAD_MICROCODE=1
E: ID_ATA_FEATURE_SET_HPA=1
E: ID_ATA_FEATURE_SET_HPA_ENABLED=1
E: ID_ATA_FEATURE_SET_PM=1
E: ID_ATA_FEATURE_SET_PM_ENABLED=1
E: ID_ATA_FEATURE_SET_SECURITY=1
E: ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
E: ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=32
E: ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=32
E: ID_ATA_FEATURE_SET_SMART=1
E: ID_ATA_FEATURE_SET_SMART_ENABLED=1
E: ID_ATA_ROTATION_RATE_RPM=0
E: ID_ATA_SATA=1
E: ID_ATA_SATA_SIGNAL_RATE_GEN1=1
E: ID_ATA_SATA_SIGNAL_RATE_GEN2=1
E: ID_ATA_WRITE_CACHE=1
E: ID_ATA_WRITE_CACHE_ENABLED=1
E: ID_BUS=ata
E: ID_FS_LABEL=tbd:5
E: ID_FS_LABEL_ENC=tbd:5
E: ID_FS_TYPE=linux_raid_member
E: ID_FS_USAGE=raid
E: ID_FS_UUID=957030cf-c09f-023d-ceae-bb27e546f095
E: ID_FS_UUID_ENC=957030cf-c09f-023d-ceae-bb27e546f095
E: ID_FS_UUID_SUB=4ac82c29-2d10-9465-7fff-9b228c411c1e
E: ID_FS_VERSION=1.2
E: ID_MODEL=SAMSUNG_MZ7KM1T9HAJM-00005
E: ID_MODEL_ENC=SAMSUNG\x20MZ7KM1T9HAJM-00005\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
E: ID_PATH=pci-0000:01:00.0-sas-0x4433221100000000-lun-0
E: ID_PATH_TAG=pci-0000_01_00_0-sas-0x4433221100000000-lun-0
E: ID_REVISION=GXM1003Q
E: ID_SERIAL=SAMSUNG_MZ7KM1T9HAJM-00005_S2HNNAAH200633
E: ID_SERIAL_SHORT=S2HNNAAH200633
E: ID_TYPE=disk
E: ID_WWN=0x5002538c0007e7a8
E: ID_WWN_WITH_EXTENSION=0x5002538c0007e7a8
E: MAJOR=8
E: MINOR=32
E: SUBSYSTEM=block
E: TAGS=:systemd:
E: USEC_INITIALIZED=43226

I then issued

    echo change > /sys/block/sdc/uevent

and

    echo change > /sys/block/sdd/uevent

which resulted in the monitor showing:

KERNEL[572.956390] change   /devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:0/end_device-10:0/target10:0:0/10:0:0:0/block/sdc (block)
UDEV  [572.960256] change   /devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:0/end_device-10:0/target10:0:0/10:0:0:0/block/sdc (block)
KERNEL[593.140178] change   /devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:1/end_device-10:1/target10:0:1/10:0:1:0/block/sdd (block)
UDEV  [593.143824] change   /devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:1/end_device-10:1/target10:0:1/10:0:1:0/block/sdd (block)

But no assembly of md5.
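(One way to script the "did it assemble?" check is to grep mdstat output for an active line. Here is a small illustrative helper, not from the thread, run against a captured sample so it works anywhere; on a live system it would read /proc/mdstat directly:)

```shell
# Illustrative helper: does a named md device show up as active?
# mdstat_sample stands in for /proc/mdstat so this runs on any machine.
mdstat_sample="Personalities : [raid1] [raid10]
md3 : active raid10 sda5[0] sdb5[1]
md0 : active raid1 sda1[0] sdb1[1]"

is_assembled() {
    printf '%s\n' "$mdstat_sample" | grep -q "^$1 : active"
}

is_assembled md0 && echo "md0 assembled"
is_assembled md5 || echo "md5 not assembled"
```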

Afterwards,

    udevadm info /dev/sdc

showed:

P: /devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:0/end_device-10:0/target10:0:0/10:0:0:0/block/sdc
N: sdc
S: disk/by-id/ata-SAMSUNG_MZ7KM1T9HAJM-00005_S2HNNAAH200633
S: disk/by-id/wwn-0x5002538c0007e7a8
S: disk/by-path/pci-0000:01:00.0-sas-0x4433221100000000-lun-0
E: DEVLINKS=/dev/disk/by-id/ata-SAMSUNG_MZ7KM1T9HAJM-00005_S2HNNAAH200633 /dev/disk/by-id/wwn-0x5002538c0007e7a8 /dev/disk/by-path/pci-0000:01:00.0-sas-0x4433221100000000-lun-0
E: DEVNAME=/dev/sdc
E: DEVPATH=/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:0/end_device-10:0/target10:0:0/10:0:0:0/block/sdc
E: DEVTYPE=disk
E: ID_ATA=1
E: ID_ATA_DOWNLOAD_MICROCODE=1
E: ID_ATA_FEATURE_SET_HPA=1
E: ID_ATA_FEATURE_SET_HPA_ENABLED=1
E: ID_ATA_FEATURE_SET_PM=1
E: ID_ATA_FEATURE_SET_PM_ENABLED=1
E: ID_ATA_FEATURE_SET_SECURITY=1
E: ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
E: ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=32
E: ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=32
E: ID_ATA_FEATURE_SET_SMART=1
E: ID_ATA_FEATURE_SET_SMART_ENABLED=1
E: ID_ATA_ROTATION_RATE_RPM=0
E: ID_ATA_SATA=1
E: ID_ATA_SATA_SIGNAL_RATE_GEN1=1
E: ID_ATA_SATA_SIGNAL_RATE_GEN2=1
E: ID_ATA_WRITE_CACHE=1
E: ID_ATA_WRITE_CACHE_ENABLED=1
E: ID_BUS=ata
E: ID_FS_LABEL=tbd:5
E: ID_FS_LABEL_ENC=tbd:5
E: ID_FS_TYPE=linux_raid_member
E: ID_FS_USAGE=raid
E: ID_FS_UUID=957030cf-c09f-023d-ceae-bb27e546f095
E: ID_FS_UUID_ENC=957030cf-c09f-023d-ceae-bb27e546f095
E: ID_FS_UUID_SUB=4ac82c29-2d10-9465-7fff-9b228c411c1e
E: ID_FS_UUID_SUB_ENC=4ac82c29-2d10-9465-7fff-9b228c411c1e
E: ID_FS_VERSION=1.2
E: ID_MODEL=SAMSUNG_MZ7KM1T9HAJM-00005
E: ID_MODEL_ENC=SAMSUNG\x20MZ7KM1T9HAJM-00005\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
E: ID_PATH=pci-0000:01:00.0-sas-0x4433221100000000-lun-0
E: ID_PATH_TAG=pci-0000_01_00_0-sas-0x4433221100000000-lun-0
E: ID_REVISION=GXM1003Q
E: ID_SERIAL=SAMSUNG_MZ7KM1T9HAJM-00005_S2HNNAAH200633
E: ID_SERIAL_SHORT=S2HNNAAH200633
E: ID_TYPE=disk
E: ID_WWN=0x5002538c0007e7a8
E: ID_WWN_WITH_EXTENSION=0x5002538c0007e7a8
E: MAJOR=8
E: MINOR=32
E: SUBSYSTEM=block
E: TAGS=:systemd:
E: USEC_INITIALIZED=43226

Thanks,
Andy


* Re: Newly-created arrays don't auto-assemble - related to hostname change?
  2016-11-21  6:02                   ` Andy Smith
@ 2016-11-21 22:56                     ` NeilBrown
  2016-11-22  6:01                       ` Andy Smith
  0 siblings, 1 reply; 19+ messages in thread
From: NeilBrown @ 2016-11-21 22:56 UTC (permalink / raw)
  To: Andy Smith; +Cc: linux-raid


On Mon, Nov 21 2016, Andy Smith wrote:

> Hi Neil,
>
> On Mon, Nov 21, 2016 at 03:32:42PM +1100, NeilBrown wrote:
>> If you still want to get to the bottom of this, you might need to revert
>> your work-around, then try "udevadm monitor", "udevadm info" and "udevadm
>> trigger" while the array is not assembled.
>
> I have removed my addition of "mpt3sas" from
> /etc/initramfs-tools/modules and rebooted, so that md5 is again not
> assembled.

Thanks.  Sorry this is taking a lot of back-and-forth...
Still getting
> E: ID_FS_TYPE=linux_raid_member
which is good.  Not getting any MD_*, which is bad.

I would:
 - check that md5 definitely isn't running (mdadm -S /dev/md5)
 - run mdadm -I just like udev does.

   /sbin/mdadm --incremental --export /dev/sdc --offroot /dev/disk/by-id/ata-SAMSUNG_MZ7KM1T9HAJM-00005_S2HNNAAH200633 /dev/disk/by-id/wwn-0x5002538c0007e7a8 /dev/disk/by-path/pci-0000:01:00.0-sas-0x4433221100000000-lun-0

(the string of paths is from the "DEVLINKS" field).

That *should* produce several lines like "MD_NAME=tbd:5" etc.
My guess is that it is producing an error.  Knowing that error message
would help.

If it doesn't produce an error, but does produce some MD_* lines, then
the problem must be that udev isn't doing quite the same thing.
So stop md5 again (mdadm -S /dev/md5), enable udev debugging
  udevadm control -l debug

and re-issue the 'change'
     echo change > /sys/block/sdc/uevent

That should put lots of stuff in the journal.  If you could extract
that and post it I might be able to find something of interest.
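(Background on what the udev side does with mdadm's output: IMPORT{program} reads KEY=value lines from the program's stdout and records each one as a device property. A minimal illustrative parser, using sample values from this thread, mimics that step:)

```shell
# Illustrative sketch (not udev's actual code) of how IMPORT{program}
# turns KEY=value stdout lines, such as those printed by
# "mdadm --incremental --export", into device properties.
sample_output="MD_NAME=tbd:5
MD_STARTED=unsafe"

props=""
while IFS='=' read -r key value; do
    props="${props}${key}=${value} "   # each line becomes one property
done <<EOF
$sample_output
EOF

echo "imported: $props"
```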

Thanks,
NeilBrown



* Re: Newly-created arrays don't auto-assemble - related to hostname change?
  2016-11-21 22:56                     ` NeilBrown
@ 2016-11-22  6:01                       ` Andy Smith
  2016-11-23  2:34                         ` NeilBrown
  0 siblings, 1 reply; 19+ messages in thread
From: Andy Smith @ 2016-11-22  6:01 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

Hi Neil,

On Tue, Nov 22, 2016 at 09:56:28AM +1100, NeilBrown wrote:
> Thanks.  Sorry this is taking a lot of back-and-forth...

No worries. This is very interesting to me and I'd also like to know
what is going wrong even if I have a work-around.

> I would:
>  - check that md5 definitely isn't running (mdadm -S /dev/md5)
>  - run mdadm -I just like udev does.
> 
>    /sbin/mdadm --incremental --export /dev/sdc --offroot /dev/disk/by-id/ata-SAMSUNG_MZ7KM1T9HAJM-00005_S2HNNAAH200633 /dev/disk/by-id/wwn-0x5002538c0007e7a8 /dev/disk/by-path/pci-0000:01:00.0-sas-0x4433221100000000-lun-0
> 
> (the string of paths is from the "DEVLINKS" field).
> 
> That *should* produce several lines like "MD_NAME=tbd:5" etc.

I got:

$ sudo /sbin/mdadm --incremental --export /dev/sdc --offroot /dev/disk/by-id/ata-SAMSUNG_MZ7KM1T9HAJM-00005_S2HNNAAH200633 /dev/disk/by-id/wwn-0x5002538c0007e7a8 /dev/disk/by-path/pci-0000:01:00.0-sas-0x4433221100000000-lun-0
MD_DEVICE=md5
MD_DEVNAME=5
MD_FOREIGN=no
MD_STARTED=unsafe

Over in the "udevadm monitor" window I see:

KERNEL[86682.101420] add      /devices/virtual/bdi/9:5 (bdi)
KERNEL[86682.101866] add      /devices/virtual/block/md5 (block)
UDEV  [86682.102211] add      /devices/virtual/bdi/9:5 (bdi)
UDEV  [86682.103376] add      /devices/virtual/block/md5 (block)

and cat /proc/mdstat:

Personalities : [raid1] [raid10]
md5 : inactive sdc[1](S)
      1875243352 blocks super 1.2
…etc…

> If it doesn't produce an error, but does produce some MD_* lines, then
> the problem must be that udev isn't doing quite the same thing.
> So stop md5 again (mdadm -S /dev/md5), enable udev debugging
>   udevadm control -l debug
> 
> and re-issue the 'change'
>      echo change > /sys/block/sdc/uevent
> 
> That should put lots of stuff in the journal.  If you could extract
> that and post it I might be able to find something of interest.

Nov 22 05:50:15 jfd systemd-udevd[218]: validate module index
Nov 22 05:50:15 jfd systemd-udevd[218]: Check if link configuration needs reloading.
Nov 22 05:50:15 jfd systemd-udevd[218]: seq 5382 queued, 'change' 'block'
Nov 22 05:50:15 jfd systemd-udevd[218]: seq 5382 forked new worker [29004]
Nov 22 05:50:15 jfd systemd-udevd[29004]: seq 5382 running
Nov 22 05:50:15 jfd systemd-udevd[29004]: device 0x7f26dc1cf950 filled with db file data
Nov 22 05:50:15 jfd systemd-udevd[29004]: removing watch on '/dev/sdc'
Nov 22 05:50:15 jfd systemd-udevd[29004]: device 0x7f26dc1d1a50 has devpath '/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:0/end_device-10:0/target10:0:0/10:0:0:0'
Nov 22 05:50:15 jfd systemd-udevd[29004]: IMPORT 'ata_id --export /dev/sdc' /lib/udev/rules.d/60-persistent-storage.rules:31
Nov 22 05:50:15 jfd systemd-udevd[29005]: starting 'ata_id --export /dev/sdc'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_ATA=1'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_TYPE=disk'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_BUS=ata'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_MODEL=SAMSUNG_MZ7KM1T9HAJM-00005'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_MODEL_ENC=SAMSUNG\x20MZ7KM1T9HAJM-00005\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_REVISION=GXM1003Q'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_SERIAL=SAMSUNG_MZ7KM1T9HAJM-00005_S2HNNAAH200633'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_SERIAL_SHORT=S2HNNAAH200633'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_ATA_WRITE_CACHE=1'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_ATA_WRITE_CACHE_ENABLED=1'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_ATA_FEATURE_SET_HPA=1'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_ATA_FEATURE_SET_HPA_ENABLED=1'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_ATA_FEATURE_SET_PM=1'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_ATA_FEATURE_SET_PM_ENABLED=1'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_ATA_FEATURE_SET_SECURITY=1'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_ATA_FEATURE_SET_SECURITY_ENABLED=0'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=32'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=32'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_ATA_FEATURE_SET_SMART=1'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_ATA_FEATURE_SET_SMART_ENABLED=1'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_ATA_DOWNLOAD_MICROCODE=1'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_ATA_SATA=1'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_ATA_SATA_SIGNAL_RATE_GEN2=1'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_ATA_SATA_SIGNAL_RATE_GEN1=1'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_ATA_ROTATION_RATE_RPM=0'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_WWN=0x5002538c0007e7a8'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc'(out) 'ID_WWN_WITH_EXTENSION=0x5002538c0007e7a8'
Nov 22 05:50:15 jfd systemd-udevd[29004]: 'ata_id --export /dev/sdc' [29005] exit with return code 0
Nov 22 05:50:15 jfd systemd-udevd[29004]: LINK 'disk/by-id/ata-SAMSUNG_MZ7KM1T9HAJM-00005_S2HNNAAH200633' /lib/udev/rules.d/60-persistent-storage.rules:43
Nov 22 05:50:15 jfd systemd-udevd[29004]: device 0x7f26dc1dde20 has devpath '/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:0/end_device-10:0/target10:0:0'
Nov 22 05:50:15 jfd systemd-udevd[29004]: device 0x7f26dc1de650 has devpath '/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:0/end_device-10:0'
Nov 22 05:50:15 jfd systemd-udevd[29004]: device 0x7f26dc1dee60 has devpath '/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:0'
Nov 22 05:50:15 jfd systemd-udevd[29004]: device 0x7f26dc1df650 has devpath '/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10'
Nov 22 05:50:15 jfd systemd-udevd[29004]: device 0x7f26dc1dfe30 has devpath '/devices/pci0000:00/0000:00:01.0/0000:01:00.0'
Nov 22 05:50:15 jfd systemd-udevd[29004]: device 0x7f26dc1e0600 has devpath '/devices/pci0000:00/0000:00:01.0'
Nov 22 05:50:15 jfd systemd-udevd[29004]: device 0x7f26dc1e0db0 has devpath '/devices/pci0000:00'
Nov 22 05:50:15 jfd systemd-udevd[29004]: IMPORT builtin 'path_id' /lib/udev/rules.d/60-persistent-storage.rules:56
Nov 22 05:50:15 jfd systemd-udevd[29004]: device 0x7f26dc1e17c0 has devpath '/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:0/end_device-10:0/sas_device/end_device-10:0'
Nov 22 05:50:15 jfd systemd-udevd[29004]: LINK 'disk/by-path/pci-0000:01:00.0-sas-0x4433221100000000-lun-0' /lib/udev/rules.d/60-persistent-storage.rules:57
Nov 22 05:50:15 jfd systemd-udevd[29004]: IMPORT builtin 'blkid' /lib/udev/rules.d/60-persistent-storage.rules:71
Nov 22 05:50:15 jfd systemd-udevd[29004]: probe /dev/sdc raid offset=0
Nov 22 05:50:15 jfd systemd-udevd[29004]: LINK 'disk/by-id/wwn-0x5002538c0007e7a8' /lib/udev/rules.d/60-persistent-storage.rules:81
Nov 22 05:50:15 jfd systemd-udevd[29004]: no db file to read /run/udev/data/+scsi:10:0:0:0: No such file or directory
Nov 22 05:50:15 jfd systemd-udevd[29004]: handling device node '/dev/sdc', devnum=b8:32, mode=0600, uid=0, gid=0
Nov 22 05:50:15 jfd systemd-udevd[29004]: preserve already existing symlink '/dev/block/8:32' to '../sdc'
Nov 22 05:50:15 jfd systemd-udevd[29004]: found 'b8:32' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fata-SAMSUNG_MZ7KM1T9HAJM-00005_S2HNNAAH200633'
Nov 22 05:50:15 jfd systemd-udevd[29004]: creating link '/dev/disk/by-id/ata-SAMSUNG_MZ7KM1T9HAJM-00005_S2HNNAAH200633' to '/dev/sdc'
Nov 22 05:50:15 jfd systemd-udevd[29004]: preserve already existing symlink '/dev/disk/by-id/ata-SAMSUNG_MZ7KM1T9HAJM-00005_S2HNNAAH200633' to '../../sdc'
Nov 22 05:50:15 jfd systemd-udevd[29004]: found 'b8:32' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fwwn-0x5002538c0007e7a8'
Nov 22 05:50:15 jfd systemd-udevd[29004]: creating link '/dev/disk/by-id/wwn-0x5002538c0007e7a8' to '/dev/sdc'
Nov 22 05:50:15 jfd systemd-udevd[29004]: preserve already existing symlink '/dev/disk/by-id/wwn-0x5002538c0007e7a8' to '../../sdc'
Nov 22 05:50:15 jfd systemd-udevd[29004]: found 'b8:32' claiming '/run/udev/links/\x2fdisk\x2fby-path\x2fpci-0000:01:00.0-sas-0x4433221100000000-lun-0'
Nov 22 05:50:15 jfd systemd-udevd[29004]: creating link '/dev/disk/by-path/pci-0000:01:00.0-sas-0x4433221100000000-lun-0' to '/dev/sdc'
Nov 22 05:50:15 jfd systemd-udevd[29004]: preserve already existing symlink '/dev/disk/by-path/pci-0000:01:00.0-sas-0x4433221100000000-lun-0' to '../../sdc'
Nov 22 05:50:15 jfd systemd-udevd[29004]: created db file '/run/udev/data/b8:32' for '/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:0/end_device-10:0/target10:0:0/10:0:0:0/block/sdc'
Nov 22 05:50:15 jfd systemd-udevd[29004]: adding watch on '/dev/sdc'
Nov 22 05:50:15 jfd systemd-udevd[29004]: created db file '/run/udev/data/b8:32' for '/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:0/end_device-10:0/target10:0:0/10:0:0:0/block/sdc'
Nov 22 05:50:15 jfd systemd-udevd[29004]: passed -1 bytes to netlink monitor 0x7f26dc1d2470
Nov 22 05:50:15 jfd systemd-udevd[29004]: seq 5382 processed with 0
Nov 22 05:50:15 jfd ata_id[29005]: custom logging function 0x7fc30e86abd0 registered
Nov 22 05:50:18 jfd systemd-udevd[218]: cleanup idle workers
Nov 22 05:50:18 jfd systemd-udevd[218]: validate module index
Nov 22 05:50:18 jfd systemd-udevd[218]: Check if link configuration needs reloading.
Nov 22 05:50:18 jfd systemd-udevd[29004]: unload module index
Nov 22 05:50:18 jfd systemd-udevd[29004]: Unloaded link configuration context.
Nov 22 05:50:18 jfd systemd-udevd[218]: worker [29004] exit
Nov 22 05:50:18 jfd systemd-udevd[218]: worker [29004] cleaned up

Cheers,
Andy


* Re: Newly-created arrays don't auto-assemble - related to hostname change?
  2016-11-22  6:01                       ` Andy Smith
@ 2016-11-23  2:34                         ` NeilBrown
  2016-11-23  9:03                           ` Bug#784070: " Michael Tokarev
  2016-11-23  9:09                           ` SOUBEYRAND Yann - externe
  0 siblings, 2 replies; 19+ messages in thread
From: NeilBrown @ 2016-11-23  2:34 UTC (permalink / raw)
  To: Andy Smith; +Cc: linux-raid, 784070

[-- Attachment #1: Type: text/plain, Size: 817 bytes --]

On Tue, Nov 22 2016, Andy Smith wrote:

> Hi Neil,
>
> On Tue, Nov 22, 2016 at 09:56:28AM +1100, NeilBrown wrote:
>> Thanks.  Sorry this is taking a lot of back-and-forth...
>
> No worries. This is very interesting to me and I'd also like to know
> what is going wrong even if I have a work-around.

Thanks.
I tried this on a scratch Debian VM I had lying around, and found I
could exactly reproduce your symptoms.
I found that, unlike on the first Debian system I looked at, there is a
line in  /lib/udev/rules.d/64-md-raid-assembly.rules 

  # Disable incremental assembly to fix Debian bug #784070
  GOTO="md_inc_end"

Remove that and the problem goes away.  Arrgg...
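[Editor's note: for readers hitting this on jessie, the edit Neil describes can be made as a local override instead of modifying the packaged file, since udev gives files in /etc/udev/rules.d precedence over same-named files in /lib/udev/rules.d. The sketch below demonstrates the edit on a scratch directory so it is self-contained; the rule file contents are recreated from the two lines quoted above. On a real system, copy the packaged file from /lib/udev/rules.d into /etc/udev/rules.d instead, verify the paths first, and rebuild the initramfs afterwards with `update-initramfs -u`.]

```shell
# Sketch: re-enable incremental assembly via a local udev rule override.
# Demonstrated on a scratch directory standing in for /etc/udev/rules.d.
RULES_DIR=/tmp/etc-udev-rules.d          # stands in for /etc/udev/rules.d
mkdir -p "$RULES_DIR"

# Recreate the two lines quoted from the packaged rule file:
printf '%s\n' \
    '# Disable incremental assembly to fix Debian bug #784070' \
    'GOTO="md_inc_end"' > "$RULES_DIR/64-md-raid-assembly.rules"

# Comment the GOTO out rather than deleting it, so the Debian context
# comment survives and the change is easy to spot and revert:
sed -i 's/^GOTO="md_inc_end"$/# &/' "$RULES_DIR/64-md-raid-assembly.rules"

cat "$RULES_DIR/64-md-raid-assembly.rules"
```

Commenting the line out (instead of deleting it) keeps the bug reference in place, which matters if the packaged file changes in a later update.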

I wish package maintainers would *ask* upstream when they don't
understand something, rather than just breaking things.

NeilBrown

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 800 bytes --]


* Bug#784070: Newly-created arrays don't auto-assemble - related to hostname change?
  2016-11-23  2:34                         ` NeilBrown
@ 2016-11-23  9:03                           ` Michael Tokarev
  2016-11-24  1:24                             ` Andy Smith
  2016-11-23  9:09                           ` SOUBEYRAND Yann - externe
  1 sibling, 1 reply; 19+ messages in thread
From: Michael Tokarev @ 2016-11-23  9:03 UTC (permalink / raw)
  To: NeilBrown, 784070, Andy Smith; +Cc: linux-raid

23.11.2016 05:34, NeilBrown wrote:
> On Tue, Nov 22 2016, Andy Smith wrote:
> 
>> Hi Neil,
>>
>> On Tue, Nov 22, 2016 at 09:56:28AM +1100, NeilBrown wrote:
>>> Thanks.  Sorry this is taking a lot of back-and-forth...
>>
>> No worries. This is very interesting to me and I'd also like to know
>> what is going wrong even if I have a work-around.
> 
> Thanks.
> I tried this on a scratch Debian VM I had lying around, and found I
> could exactly reproduce your symptoms.
> I found that, unlike on the first Debian system I looked at, there is a
> line in  /lib/udev/rules.d/64-md-raid-assembly.rules 
> 
>   # Disable incremental assembly to fix Debian bug #784070
>   GOTO="md_inc_end"
> 
> Remove that and the problem goes away.  Arrgg...
> 
> I wish package maintainers would *ask* upstream when they don't
> understand something, rather than just breaking things.

Neil, with all my respect, this is a bit of an over-reaction :)

It was long ago that we disabled incremental assembly, back when you
turned it on by default upstream: we kept the old static way of
assembling arrays because neither our initrd nor regular userspace
was ready for it. By the time jessie came out we had not had enough
time to sort it out, so we kept it that way for jessie too.

After restoring your (upstream) rules, things WILL break in other
places. Someone already tried that and had to revert back to what we
have now. The initrd needs some basic infrastructure for event-based
work before this will be possible.

I wanted to fix all of this for stretch, but after a conflict with
the d-i team I no longer work on mdadm (or any other package
touching d-i).

Thanks,

/mjt


* Re: Bug#784070: Newly-created arrays don't auto-assemble - related to hostname change?
  2016-11-23  2:34                         ` NeilBrown
  2016-11-23  9:03                           ` Bug#784070: " Michael Tokarev
@ 2016-11-23  9:09                           ` SOUBEYRAND Yann - externe
  1 sibling, 0 replies; 19+ messages in thread
From: SOUBEYRAND Yann - externe @ 2016-11-23  9:09 UTC (permalink / raw)
  To: neilb, 784070; +Cc: andy, linux-raid

Le mercredi 23 novembre 2016 à 13:34 +1100, neilb@suse.com a écrit :
> On Tue, Nov 22 2016, Andy Smith wrote:
> 
> > Hi Neil,
> >
> > On Tue, Nov 22, 2016 at 09:56:28AM +1100, NeilBrown wrote:
> >> Thanks.  Sorry this is taking a lot of back-and-forth...
> >
> > No worries. This is very interesting to me and I'd also like to know
> > what is going wrong even if I have a work-around.
> 
> Thanks.
> I tried this on a scratch Debian VM I had lying around, and found I
> could exactly reproduce your symptoms.
> I found that, unlike on the first Debian system I looked at, there is a
> line in  /lib/udev/rules.d/64-md-raid-assembly.rules 
> 
>   # Disable incremental assembly to fix Debian bug #784070
>   GOTO="md_inc_end"
> 
> Remove that and the problem goes away.  Arrgg...
> 
> I wish package maintainers would *ask* upstream when they don't
> understand something, rather than just breaking things.
> 
> NeilBrown

Hi Neil,

I encourage you to have a look at the full history of this bug to see
why this line was introduced. You will then see that it has been removed
in Stretch.

Regards

Yann




* Re: Bug#784070: Newly-created arrays don't auto-assemble - related to hostname change?
  2016-11-23  9:03                           ` Bug#784070: " Michael Tokarev
@ 2016-11-24  1:24                             ` Andy Smith
  0 siblings, 0 replies; 19+ messages in thread
From: Andy Smith @ 2016-11-24  1:24 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: NeilBrown, 784070, linux-raid

Hi,

On Wed, Nov 23, 2016 at 12:03:49PM +0300, Michael Tokarev wrote:
> It was long ago when we disabled incremental assembly when
> you turned it on by default, and kept old static way to
> assemble arrays, because neither our initrd nor regular
> userpsace weren't ready for that.

Okay, so on Debian jessie it is expected that md arrays on devices
that only become present after the initramfs has finished will not
be automatically (incrementally) started?

I saw Yann mention that in stretch the GOTO="md_inc_end" has been
removed again. Does that mean that incremental assembly on device
change events is expected to work again in stretch? (I have not
tested it, and most likely will not have time to do so with this
hardware.)
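[Editor's note: a quick way to check which behaviour a given system will exhibit is to look for the short-circuiting GOTO in the packaged rule file. The sketch below recreates the rule line in a scratch file so it is self-contained and runnable anywhere; on a real system, point RULES at /lib/udev/rules.d/64-md-raid-assembly.rules, the path quoted earlier in this thread.]

```shell
# Sketch: report whether an md assembly rules file short-circuits
# incremental assembly. A scratch file simulates the jessie rule here;
# on a real system set RULES=/lib/udev/rules.d/64-md-raid-assembly.rules.
RULES=/tmp/64-md-raid-assembly.rules
printf 'GOTO="md_inc_end"\n' > "$RULES"    # simulate the jessie rule file

if grep -q '^GOTO="md_inc_end"' "$RULES"; then
    echo "incremental assembly disabled by distro rule in $RULES"
else
    echo "incremental assembly rule chain active in $RULES"
fi
```

Note that this only checks the packaged file; an override in /etc/udev/rules.d, if present, takes precedence and should be checked the same way.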

Thanks,
Andy


end of thread, other threads:[~2016-11-24  1:24 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-11-17  3:52 Newly-created arrays don't auto-assemble - related to hostname change? Andy Smith
2016-11-17  6:09 ` NeilBrown
2016-11-17 15:09   ` Andy Smith
2016-11-17 22:43     ` NeilBrown
2016-11-18  2:31       ` Andy Smith
2016-11-18  3:02         ` NeilBrown
2016-11-18  3:47           ` Andy Smith
2016-11-18  4:08             ` NeilBrown
2016-11-18  4:17               ` Andy Smith
2016-11-21  4:32                 ` NeilBrown
2016-11-21  6:02                   ` Andy Smith
2016-11-21 22:56                     ` NeilBrown
2016-11-22  6:01                       ` Andy Smith
2016-11-23  2:34                         ` NeilBrown
2016-11-23  9:03                           ` Bug#784070: " Michael Tokarev
2016-11-24  1:24                             ` Andy Smith
2016-11-23  9:09                           ` SOUBEYRAND Yann - externe
2016-11-17 23:22 ` Peter Sangas
2016-11-18  2:03   ` Glenn Enright
