* [linux-lvm]  auto_activation_volume_list in lvm.conf not honored
@ 2016-12-02  7:07 Stefan Bauer
  0 siblings, 0 replies; 10+ messages in thread
From: Stefan Bauer @ 2016-12-02  7:07 UTC (permalink / raw)
  To: LVM general discussion and development

[-- Attachment #1: Type: text/plain, Size: 3978 bytes --]

Tried to set it up with clvmd now, but it fails with:

Dec  2 07:09:46 vm1 LVM(vg)[1278]: ERROR: connect() failed on local socket: No such file or directory Internal cluster locking initialisation failed. WARNING: Falling back to local file-based locking. Volume Groups with the clustered attribute will be inaccessible. Reading all physical volumes. This may take a while... Found volume group "vm1-vg" using metadata type lvm2 Skipping clustered volume group vg
Dec  2 07:09:46 vm1 LVM(vg)[1278]: ERROR: connect() failed on local socket: No such file or directory Internal cluster locking initialisation failed. WARNING: Falling back to local file-based locking. Volume Groups with the clustered attribute will be inaccessible. Skipping clustered volume group vg
Dec  2 07:09:46 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation vg_start_0 (call=22, rc=1, cib-update=26, confirmed=true) unknown error
Dec  2 07:09:46 vm1 crmd[1075]:  warning: status_from_rc: Action 23 (vg_start_0) on vm1 failed (target: 0 vs. rc: 1): Error
Dec  2 07:09:46 vm1 crmd[1075]:  warning: update_failcount: Updating failcount for vg on vm1 after failed start: rc=1 (update=INFINITY, time=1480658986)
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-vg (INFINITY)

If I do a cleanup of the resource, it is started.
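
From the syslog, vg_start_0 is initiated in the same transition as the dlm start, before clvmd is running, which would explain the socket error above. Ordering/colocation constraints along these lines should avoid that - only a minimal sketch in crmsh syntax, assuming the resource IDs dlm, clvmd and vg as they appear in the log (clone resource names may differ in the real CIB):

  # start dlm before clvmd, and clvmd before the LVM resource
  order o_dlm_before_clvmd inf: dlm clvmd
  order o_clvmd_before_vg inf: clvmd vg
  # keep the LVM resource on a node where clvmd runs
  colocation c_vg_with_clvmd inf: vg clvmd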

syslog attached.
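
Regarding the lvmlockd suggestion in the mail quoted below, the configuration difference as I understand it - a sketch only, option names from lvm.conf(5)/lvmlockd(8); converting this particular VG is an assumption on my side:

  # lvm.conf for the clvmd-based locking the log shows being attempted:
  global {
      locking_type = 3
  }

  # lvm.conf for lvmlockd instead:
  global {
      use_lvmlockd = 1
  }
  # then, with lvmlockd and dlm_controld running, switch the VG's lock type:
  # vgchange --lock-type dlm vg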


Any help is greatly appreciated.

Thank you.

Stefan 
-----Original Message-----
> From: David Teigland <teigland@redhat.com>
> Sent: Mon 28 November 2016 23:14
> To: Stefan Bauer <sb@plzk.de>
> CC: linux-lvm@redhat.com
> Subject: Re: [linux-lvm] auto_activation_volume_list in lvm.conf not honored
> 
> On Fri, Nov 25, 2016 at 10:30:39AM +0100, Zdenek Kabelac wrote:
> > On 25.11.2016 at 10:17, Stefan Bauer wrote:
> > > Hi Peter,
> > > 
> > > as I said, we have a master/slave setup _without_ concurrent write/read. So I do not see a reason why I should take care of locking, as only one node is activating the volume group at a time.
> > > 
> > > That should be fine - right?
> > 
> > Nope, it's not.
> > 
> > Every activation, for example, DOES validation of all resources and takes
> > ACTION when something is wrong.
> > 
> > Sorry, but there is NO way to do this properly without a locking manager.
> > 
> > Many lvm2 users do try to be 'innovative' and try to use it in a
> > lock-less way - this seems to work most of the time, until the moment some
> > disaster happens - and then lvm2 gets blamed for the data loss...
> > 
> > Interestingly, they never stop to think about why we invested so much time
> > into the locking manager when there is such an 'easy fix' in their eyes...
> > 
> > IMHO lvmlockd is a relatively 'low-resource/overhead' solution worth
> > exploring if you don't like clvmd...
> 
> Stefan, as Zdenek points out, even reading VGs on shared storage is not
> entirely safe, because lvm may attempt to fix/repair things on disk while
> it is reading (this becomes more likely if one machine reads while another
> is making changes).  Using some kind of locking or clustering (lvmlockd or
> clvm) is a solution.
> 
> Another fairly new option is to use "system ID", which assigns one host as
> the owner of the VG.  This avoids the problems mentioned above with
> reading->fixing.  But, system ID on its own cannot be used dynamically.
> If you want to fail-over the VG between hosts, the system ID needs to be
> changed, and this needs to be done carefully (e.g. by a resource manager
> or something that takes fencing into account,
> https://bugzilla.redhat.com/show_bug.cgi?id=1336346#c2)
> 
> Also https://www.redhat.com/archives/linux-lvm/2016-November/msg00022.html
> 
> Dave
> 
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
> 
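
For the system ID approach mentioned above, the pieces involved as far as I can tell - a sketch only, option and command names taken from lvmsystemid(7); the VG name vg and the peer hostname vm2 are placeholders:

  # lvm.conf on each host:
  global {
      system_id_source = "uname"
  }
  # The VG is then owned by exactly one host. To fail it over, the current
  # owner (or the resource manager, with fencing taken into account) hands
  # it to the peer:
  # vgchange --systemid vm2 vg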

[-- Attachment #2: syslog.txt --]
[-- Type: text/plain, Size: 120167 bytes --]

Dec  2 07:09:23 vm1 corosync[1020]:   [TOTEM ] The network interface [192.168.122.175] is now up.
Dec  2 07:09:23 vm1 corosync[1020]:   [SERV  ] Service engine loaded: corosync configuration map access [0]
Dec  2 07:09:23 vm1 corosync[1020]:   [QB    ] server name: cmap
Dec  2 07:09:23 vm1 corosync[1020]:   [SERV  ] Service engine loaded: corosync configuration service [1]
Dec  2 07:09:23 vm1 corosync[1020]:   [QB    ] server name: cfg
Dec  2 07:09:23 vm1 corosync[1020]:   [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Dec  2 07:09:23 vm1 corosync[1020]:   [QB    ] server name: cpg
Dec  2 07:09:23 vm1 corosync[1020]:   [SERV  ] Service engine loaded: corosync profile loading service [4]
Dec  2 07:09:23 vm1 corosync[1020]:   [WD    ] No Watchdog, try modprobe <a watchdog>
Dec  2 07:09:23 vm1 corosync[1020]:   [WD    ] no resources configured.
Dec  2 07:09:23 vm1 corosync[1020]:   [SERV  ] Service engine loaded: corosync watchdog service [7]
Dec  2 07:09:23 vm1 corosync[1020]:   [QUORUM] Using quorum provider corosync_votequorum
Dec  2 07:09:23 vm1 corosync[1020]:   [QUORUM] This node is within the primary component and will provide service.
Dec  2 07:09:23 vm1 corosync[1020]:   [QUORUM] Members[0]:
Dec  2 07:09:23 vm1 corosync[1020]:   [SERV  ] Service engine loaded: corosync vote quorum service v1.0 [5]
Dec  2 07:09:23 vm1 corosync[1020]:   [QB    ] server name: votequorum
Dec  2 07:09:23 vm1 corosync[1020]:   [SERV  ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Dec  2 07:09:23 vm1 corosync[1020]:   [QB    ] server name: quorum
Dec  2 07:09:23 vm1 corosync[1020]:   [TOTEM ] A new membership (192.168.122.175:168) was formed. Members joined: 1084783279
Dec  2 07:09:23 vm1 corosync[1020]:   [QUORUM] Members[1]: 1084783279
Dec  2 07:09:23 vm1 corosync[1020]:   [MAIN  ] Completed service synchronization, ready to provide service.
Dec  2 07:09:23 vm1 pacemakerd[1062]:   notice: mcp_read_config: Configured corosync to accept connections from group 113: OK (1)
Dec  2 07:09:23 vm1 pacemakerd[1062]:   notice: main: Starting Pacemaker 1.1.10 (Build: 42f2063):  generated-manpages agent-manpages ncurses libqb-logging libqb-ipc lha-fencing upstart nagios  heartbeat corosync-native snmp libesmtp
Dec  2 07:09:23 vm1 pacemakerd[1062]:   notice: cluster_connect_quorum: Quorum acquired
Dec  2 07:09:23 vm1 pacemakerd[1062]:   notice: corosync_node_name: Unable to get node name for nodeid 1084783279
Dec  2 07:09:23 vm1 pacemakerd[1062]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Dec  2 07:09:23 vm1 pacemakerd[1062]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node vm1[1084783279] - state is now member (was (null))
Dec  2 07:09:23 vm1 attrd[1073]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Dec  2 07:09:23 vm1 stonith-ng[1071]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Dec  2 07:09:23 vm1 crmd[1075]:   notice: main: CRM Git Version: 42f2063
Dec  2 07:09:23 vm1 attrd[1073]:   notice: corosync_node_name: Unable to get node name for nodeid 1084783279
Dec  2 07:09:23 vm1 attrd[1073]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Dec  2 07:09:23 vm1 attrd[1073]:   notice: main: Starting mainloop...
Dec  2 07:09:23 vm1 stonith-ng[1071]:   notice: corosync_node_name: Unable to get node name for nodeid 1084783279
Dec  2 07:09:23 vm1 stonith-ng[1071]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Dec  2 07:09:23 vm1 cib[1070]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Dec  2 07:09:23 vm1 cib[1070]:   notice: corosync_node_name: Unable to get node name for nodeid 1084783279
Dec  2 07:09:23 vm1 cib[1070]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Dec  2 07:09:24 vm1 crmd[1075]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Dec  2 07:09:24 vm1 stonith-ng[1071]:   notice: setup_cib: Watching for stonith topology changes
Dec  2 07:09:24 vm1 stonith-ng[1071]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Dec  2 07:09:24 vm1 crmd[1075]:   notice: corosync_node_name: Unable to get node name for nodeid 1084783279
Dec  2 07:09:24 vm1 crmd[1075]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Dec  2 07:09:24 vm1 crmd[1075]:   notice: cluster_connect_quorum: Quorum acquired
Dec  2 07:09:24 vm1 crmd[1075]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node vm1[1084783279] - state is now member (was (null))
Dec  2 07:09:25 vm1 crmd[1075]:   notice: corosync_node_name: Unable to get node name for nodeid 1084783279
Dec  2 07:09:25 vm1 crmd[1075]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Dec  2 07:09:25 vm1 crmd[1075]:   notice: do_started: The local CRM is operational
Dec  2 07:09:25 vm1 crmd[1075]:   notice: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Dec  2 07:09:33 vm1 ntpdate[620]: adjust time server 188.166.163.79 offset 0.412702 sec
Dec  2 07:09:33 vm1 ntpd[1131]: ntpd 4.2.6p5@1.2349-o Wed Oct  5 12:35:25 UTC 2016 (1)
Dec  2 07:09:33 vm1 ntpd[1132]: proto: precision = 0.103 usec
Dec  2 07:09:33 vm1 ntpd[1132]: ntp_io: estimated max descriptors: 1024, initial socket boundary: 16
Dec  2 07:09:33 vm1 ntpd[1132]: Listen and drop on 0 v4wildcard 0.0.0.0 UDP 123
Dec  2 07:09:33 vm1 ntpd[1132]: Listen and drop on 1 v6wildcard :: UDP 123
Dec  2 07:09:33 vm1 ntpd[1132]: Listen normally on 2 lo 127.0.0.1 UDP 123
Dec  2 07:09:33 vm1 ntpd[1132]: Listen normally on 3 eth0 192.168.122.175 UDP 123
Dec  2 07:09:33 vm1 ntpd[1132]: Listen normally on 4 lo ::1 UDP 123
Dec  2 07:09:33 vm1 ntpd[1132]: Listen normally on 5 eth0 fe80::5054:ff:fe74:8cc7 UDP 123
Dec  2 07:09:33 vm1 ntpd[1132]: peers refreshed
Dec  2 07:09:33 vm1 ntpd[1132]: Listening on routing socket on fd #22 for interface updates
Dec  2 07:09:38 vm1 ntpd[1132]: ntpd exiting on signal 15
Dec  2 07:09:46 vm1 crmd[1075]:  warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Dec  2 07:09:46 vm1 crmd[1075]:   notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Dec  2 07:09:46 vm1 cib[1070]:   notice: corosync_node_name: Unable to get node name for nodeid 1084783279
Dec  2 07:09:46 vm1 cib[1070]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_local_callback: Sending full refresh (origin=crmd)
Dec  2 07:09:46 vm1 pengine[1074]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Dec  2 07:09:46 vm1 pengine[1074]:   notice: LogActions: Start   dlm:0#011(vm1)
Dec  2 07:09:46 vm1 pengine[1074]:   notice: LogActions: Start   clvmd:0#011(vm1)
Dec  2 07:09:46 vm1 pengine[1074]:   notice: LogActions: Start   vg#011(vm1)
Dec  2 07:09:46 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 4: monitor dlm:0_monitor_0 on vm1 (local)
Dec  2 07:09:46 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 5: monitor clvmd:0_monitor_0 on vm1 (local)
Dec  2 07:09:46 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 6: monitor vg_monitor_0 on vm1 (local)
Dec  2 07:09:46 vm1 pengine[1074]:   notice: process_pe_message: Calculated Transition 0: /var/lib/pacemaker/pengine/pe-input-141.bz2
Dec  2 07:09:46 vm1 LVM(vg)[1235]: WARNING: LVM Volume vg is not available (stopped)
Dec  2 07:09:46 vm1 LVM(vg)[1235]: INFO: LVM Volume vg is offline
Dec  2 07:09:46 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation clvmd_monitor_0 (call=11, rc=7, cib-update=23, confirmed=true) not running
Dec  2 07:09:46 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation dlm_monitor_0 (call=6, rc=7, cib-update=24, confirmed=true) not running
Dec  2 07:09:46 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation vg_monitor_0 (call=15, rc=7, cib-update=25, confirmed=true) not running
Dec  2 07:09:46 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 3: probe_complete probe_complete on vm1 (local) - no waiting
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_perform_update: Sent update 4: probe_complete=true
Dec  2 07:09:46 vm1 attrd[1073]:   notice: corosync_node_name: Unable to get node name for nodeid 1084783279
Dec  2 07:09:46 vm1 attrd[1073]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Dec  2 07:09:46 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 7: start dlm:0_start_0 on vm1 (local)
Dec  2 07:09:46 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 23: start vg_start_0 on vm1 (local)
Dec  2 07:09:46 vm1 dlm_controld[1285]: 25 dlm_controld 4.0.1 started
Dec  2 07:09:46 vm1 LVM(vg)[1278]: INFO: Activating volume group vg
Dec  2 07:09:46 vm1 kernel: [   25.367705] dlm_controld wrote to rmem_default when file position was not 0!
Dec  2 07:09:46 vm1 kernel: [   25.367705] This will not be supported in the future. To silence this
Dec  2 07:09:46 vm1 kernel: [   25.367705] warning, set kernel.sysctl_writes_strict = -1
Dec  2 07:09:46 vm1 LVM(vg)[1278]: ERROR: connect() failed on local socket: No such file or directory Internal cluster locking initialisation failed. WARNING: Falling back to local file-based locking. Volume Groups with the clustered attribute will be inaccessible. Reading all physical volumes. This may take a while... Found volume group "vm1-vg" using metadata type lvm2 Skipping clustered volume group vg
Dec  2 07:09:46 vm1 LVM(vg)[1278]: ERROR: connect() failed on local socket: No such file or directory Internal cluster locking initialisation failed. WARNING: Falling back to local file-based locking. Volume Groups with the clustered attribute will be inaccessible. Skipping clustered volume group vg
Dec  2 07:09:46 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation vg_start_0 (call=22, rc=1, cib-update=26, confirmed=true) unknown error
Dec  2 07:09:46 vm1 crmd[1075]:  warning: status_from_rc: Action 23 (vg_start_0) on vm1 failed (target: 0 vs. rc: 1): Error
Dec  2 07:09:46 vm1 crmd[1075]:  warning: update_failcount: Updating failcount for vg on vm1 after failed start: rc=1 (update=INFINITY, time=1480658986)
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-vg (INFINITY)
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_perform_update: Sent update 7: fail-count-vg=INFINITY
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-vg (1480658986)
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_perform_update: Sent update 10: last-failure-vg=1480658986
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-vg (INFINITY)
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_perform_update: Sent update 12: fail-count-vg=INFINITY
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-vg (1480658986)
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_perform_update: Sent update 14: last-failure-vg=1480658986
Dec  2 07:09:46 vm1 crmd[1075]:  warning: update_failcount: Updating failcount for vg on vm1 after failed start: rc=1 (update=INFINITY, time=1480658986)
Dec  2 07:09:47 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation dlm_start_0 (call=20, rc=0, cib-update=27, confirmed=true) ok
Dec  2 07:09:47 vm1 crmd[1075]:   notice: run_graph: Transition 0 (Complete=9, Pending=0, Fired=0, Skipped=5, Incomplete=1, Source=/var/lib/pacemaker/pengine/pe-input-141.bz2): Stopped
Dec  2 07:09:47 vm1 pengine[1074]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Dec  2 07:09:47 vm1 pengine[1074]:  warning: unpack_rsc_op: Processing failed op start for vg on vm1: unknown error (1)
Dec  2 07:09:47 vm1 pengine[1074]:  warning: common_apply_stickiness: Forcing vg away from vm1 after 1000000 failures (max=1000000)
Dec  2 07:09:47 vm1 pengine[1074]:   notice: LogActions: Start   clvmd:0#011(vm1)
Dec  2 07:09:47 vm1 pengine[1074]:   notice: LogActions: Stop    vg#011(vm1)
Dec  2 07:09:47 vm1 pengine[1074]:   notice: process_pe_message: Calculated Transition 1: /var/lib/pacemaker/pengine/pe-input-142.bz2
Dec  2 07:09:47 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 7: monitor dlm_monitor_60000 on vm1 (local)
Dec  2 07:09:47 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 1: stop vg_stop_0 on vm1 (local)
Dec  2 07:09:47 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 8: start clvmd_start_0 on vm1 (local)
Dec  2 07:09:47 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation dlm_monitor_60000 (call=26, rc=0, cib-update=29, confirmed=false) ok
Dec  2 07:09:47 vm1 clvmd(clvmd)[1309]: INFO: Starting clvmd
Dec  2 07:09:47 vm1 clvmd[1332]: CLVMD started
Dec  2 07:09:47 vm1 kernel: [   26.403628] dlm: Using SCTP for communications
Dec  2 07:09:47 vm1 kernel: [   26.413789] sctp: Hash tables configured (established 256 bind 256)
Dec  2 07:09:47 vm1 kernel: [   26.415925] dlm: clvmd: joining the lockspace group...
Dec  2 07:09:47 vm1 kernel: [   26.418602] dlm: clvmd: group event done 0 0
Dec  2 07:09:47 vm1 kernel: [   26.419054] dlm: clvmd: dlm_recover 1
Dec  2 07:09:47 vm1 kernel: [   26.419065] dlm: clvmd: add member 1084783279
Dec  2 07:09:47 vm1 kernel: [   26.419067] dlm: clvmd: dlm_recover_members 1 nodes
Dec  2 07:09:47 vm1 kernel: [   26.419068] dlm: clvmd: join complete
Dec  2 07:09:47 vm1 kernel: [   26.419557] dlm: clvmd: generation 1 slots 1 1:1084783279
Dec  2 07:09:47 vm1 kernel: [   26.419558] dlm: clvmd: dlm_recover_directory
Dec  2 07:09:47 vm1 kernel: [   26.419559] dlm: clvmd: dlm_recover_directory 0 in 0 new
Dec  2 07:09:47 vm1 kernel: [   26.419560] dlm: clvmd: dlm_recover_directory 0 out 0 messages
Dec  2 07:09:47 vm1 kernel: [   26.419566] dlm: clvmd: dlm_recover 1 generation 1 done: 0 ms
Dec  2 07:09:47 vm1 LVM(vg)[1308]: INFO: Deactivating volume group vg
Dec  2 07:09:48 vm1 clvmd[1332]: Created DLM lockspace for CLVMD.
Dec  2 07:09:48 vm1 clvmd[1332]: DLM initialisation complete
Dec  2 07:09:48 vm1 clvmd[1332]: Our local node id is 1084783279
Dec  2 07:09:48 vm1 clvmd[1332]: Connected to Corosync
Dec  2 07:09:48 vm1 clvmd[1332]: Cluster LVM daemon started - connected to Corosync
Dec  2 07:09:48 vm1 clvmd[1332]: Cluster ready, doing some more initialisation
Dec  2 07:09:48 vm1 clvmd[1332]: starting LVM thread
Dec  2 07:09:48 vm1 clvmd[1332]: LVM thread function started
Dec  2 07:09:48 vm1 lvm[1332]: clvmd ready for work
Dec  2 07:09:48 vm1 lvm[1332]: Using timeout of 60 seconds
Dec  2 07:09:48 vm1 lvm[1332]: confchg callback. 1 joined, 0 left, 1 members
Dec  2 07:09:48 vm1 lvm[1332]: Got new connection on fd 5
Dec  2 07:09:48 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:09:48 vm1 lvm[1332]: creating pipe, [13, 14]
Dec  2 07:09:48 vm1 lvm[1332]: Creating pre&post thread
Dec  2 07:09:48 vm1 lvm[1332]: Created pre&post thread, state = 0
Dec  2 07:09:48 vm1 lvm[1332]: Sub thread ready for work.
Dec  2 07:09:48 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:09:48 vm1 lvm[1332]: in sub thread: client = 0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 1 (client=0x8517f0)
Dec  2 07:09:48 vm1 lvm[1332]: lock_resource 'V_vg', flags=0, mode=3
Dec  2 07:09:48 vm1 lvm[1332]: lock_resource returning 0, lock_id=1
Dec  2 07:09:48 vm1 lvm[1332]: Writing status 0 down pipe 14
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:09:48 vm1 lvm[1332]: distribute command: XID = 0, flags=0x1 (LOCAL)
Dec  2 07:09:48 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851b60. client=0x8517f0, msg=0x851900, len=24, csid=(nil), xid=0
Dec  2 07:09:48 vm1 lvm[1332]: process_work_item: local
Dec  2 07:09:48 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851ba0, msglen =24, client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:09:48 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:09:48 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:09:48 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:09:48 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:09:48 vm1 lvm[1332]: Got post command condition...
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:09:48 vm1 lvm[1332]: Send local reply
Dec  2 07:09:48 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:09:48 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:09:48 vm1 lvm[1332]: Writing status 0 down pipe 14
Dec  2 07:09:48 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: distribute command: XID = 1, flags=0x1 (LOCAL)
Dec  2 07:09:48 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851b90. client=0x8517f0, msg=0x851b60, len=31, csid=(nil), xid=1
Dec  2 07:09:48 vm1 lvm[1332]: process_work_item: local
Dec  2 07:09:48 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851bd0, msglen =31, client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: Syncing device names
Dec  2 07:09:48 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:09:48 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:09:48 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:09:48 vm1 lvm[1332]: Got post command condition...
Dec  2 07:09:48 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: Send local reply
Dec  2 07:09:48 vm1 lvm[1332]: Read on local socket 5, len = 84
Dec  2 07:09:48 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:09:48 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:09:48 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:09:48 vm1 lvm[1332]: Writing status 0 down pipe 14
Dec  2 07:09:48 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: distribute command: XID = 2, flags=0x0 ()
Dec  2 07:09:48 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:09:48 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851e50. client=0x8517f0, msg=0x851b90, len=84, csid=(nil), xid=2
Dec  2 07:09:48 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:09:48 vm1 lvm[1332]: process_work_item: local
Dec  2 07:09:48 vm1 lvm[1332]: process_local_command: LOCK_QUERY (0x34) msg=0x851bf0, msglen =84, client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: do_lock_query: resource 'Y9s0qu8doBG5RzmdlanzhUeWenRLq30OwuDvJhYLEnPWcqlIqnearqNGu0l0OcNi', mode -1 (?)
Dec  2 07:09:48 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:09:48 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:09:48 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:09:48 vm1 lvm[1332]: Got post command condition...
Dec  2 07:09:48 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 84
Dec  2 07:09:48 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: Send local reply
Dec  2 07:09:48 vm1 lvm[1332]: Read on local socket 5, len = 84
Dec  2 07:09:48 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:09:48 vm1 lvm[1332]: Writing status 0 down pipe 14
Dec  2 07:09:48 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: distribute command: XID = 3, flags=0x1 (LOCAL)
Dec  2 07:09:48 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851bf0. client=0x8517f0, msg=0x851b90, len=84, csid=(nil), xid=3
Dec  2 07:09:48 vm1 lvm[1332]: process_work_item: local
Dec  2 07:09:48 vm1 lvm[1332]: process_local_command: LOCK_LV (0x32) msg=0x851e50, msglen =84, client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: do_lock_lv: resource 'Y9s0qu8doBG5RzmdlanzhUeWenRLq30OwuDvJhYLEnPWcqlIqnearqNGu0l0OcNi', cmd = 0x98 LCK_LV_DEACTIVATE (NULL|LV|NONBLOCK|CLUSTER_VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:09:48 vm1 lvm[1332]: do_deactivate_lock, lock not already held
Dec  2 07:09:48 vm1 lvm[1332]: Command return is 0, critical_section is 0
Dec  2 07:09:48 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:09:48 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:09:48 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:09:48 vm1 lvm[1332]: Got post command condition...
Dec  2 07:09:48 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: Send local reply
Dec  2 07:09:48 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:09:48 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:09:48 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:09:48 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:09:48 vm1 lvm[1332]: Writing status 0 down pipe 14
Dec  2 07:09:48 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: distribute command: XID = 4, flags=0x0 ()
Dec  2 07:09:48 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:09:48 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851df0. client=0x8517f0, msg=0x851b60, len=31, csid=(nil), xid=4
Dec  2 07:09:48 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:09:48 vm1 lvm[1332]: process_work_item: local
Dec  2 07:09:48 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851b90, msglen =31, client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: Syncing device names
Dec  2 07:09:48 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:09:48 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:09:48 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:09:48 vm1 lvm[1332]: Got post command condition...
Dec  2 07:09:48 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: Send local reply
Dec  2 07:09:48 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:09:48 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:09:48 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 6 (client=0x8517f0)
Dec  2 07:09:48 vm1 lvm[1332]: unlock_resource: V_vg lockid: 1
Dec  2 07:09:48 vm1 lvm[1332]: Writing status 0 down pipe 14
Dec  2 07:09:48 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: distribute command: XID = 5, flags=0x1 (LOCAL)
Dec  2 07:09:48 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851b60. client=0x8517f0, msg=0x851900, len=24, csid=(nil), xid=5
Dec  2 07:09:48 vm1 lvm[1332]: process_work_item: local
Dec  2 07:09:48 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851ba0, msglen =24, client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:09:48 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:09:48 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:09:48 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:09:48 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:09:48 vm1 lvm[1332]: Got post command condition...
Dec  2 07:09:48 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:09:48 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 31
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: Send local reply
Dec  2 07:09:48 vm1 LVM(vg)[1308]: INFO: 0 logical volume(s) in volume group "vg" now active
Dec  2 07:09:48 vm1 lvm[1332]: Read on local socket 5, len = 0
Dec  2 07:09:48 vm1 lvm[1332]: EOF on local socket: inprogress=0
Dec  2 07:09:48 vm1 lvm[1332]: Waiting for child thread
Dec  2 07:09:48 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:09:48 vm1 lvm[1332]: Subthread finished
Dec  2 07:09:48 vm1 lvm[1332]: Joined child thread
Dec  2 07:09:48 vm1 lvm[1332]: ret == 0, errno = 0. removing client
Dec  2 07:09:48 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851900. client=0x8517f0, msg=(nil), len=0, csid=(nil), xid=5
Dec  2 07:09:48 vm1 lvm[1332]: process_work_item: free fd -1
Dec  2 07:09:48 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:09:48 vm1 LVM(vg)[1308]: INFO: LVM Volume vg is not available (stopped)
Dec  2 07:09:48 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation vg_stop_0 (call=28, rc=0, cib-update=30, confirmed=true) ok
Dec  2 07:09:48 vm1 ntpdate[1229]: adjust time server 37.120.191.245 offset 0.411105 sec
Dec  2 07:09:48 vm1 ntpd[1388]: ntpd 4.2.6p5@1.2349-o Wed Oct  5 12:35:25 UTC 2016 (1)
Dec  2 07:09:48 vm1 ntpd[1389]: proto: precision = 0.134 usec
Dec  2 07:09:48 vm1 ntpd[1389]: ntp_io: estimated max descriptors: 1024, initial socket boundary: 16
Dec  2 07:09:48 vm1 ntpd[1389]: Listen and drop on 0 v4wildcard 0.0.0.0 UDP 123
Dec  2 07:09:48 vm1 ntpd[1389]: Listen and drop on 1 v6wildcard :: UDP 123
Dec  2 07:09:48 vm1 ntpd[1389]: Listen normally on 2 lo 127.0.0.1 UDP 123
Dec  2 07:09:48 vm1 ntpd[1389]: Listen normally on 3 eth0 192.168.122.175 UDP 123
Dec  2 07:09:48 vm1 ntpd[1389]: Listen normally on 4 lo ::1 UDP 123
Dec  2 07:09:48 vm1 ntpd[1389]: Listen normally on 5 eth0 fe80::5054:ff:fe74:8cc7 UDP 123
Dec  2 07:09:48 vm1 ntpd[1389]: peers refreshed
Dec  2 07:09:48 vm1 ntpd[1389]: Listening on routing socket on fd #22 for interface updates
Dec  2 07:09:50 vm1 lrmd[1072]:   notice: operation_finished: clvmd_start_0:1309:stderr [   local socket: connect failed: No such file or directory ]
Dec  2 07:09:50 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation clvmd_start_0 (call=30, rc=0, cib-update=31, confirmed=true) ok
Dec  2 07:09:50 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 9: monitor clvmd_monitor_40000 on vm1 (local)
Dec  2 07:09:50 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation clvmd_monitor_40000 (call=35, rc=0, cib-update=32, confirmed=false) ok
Dec  2 07:09:50 vm1 crmd[1075]:   notice: run_graph: Transition 1 (Complete=9, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-142.bz2): Complete
Dec  2 07:09:50 vm1 crmd[1075]:   notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Dec  2 07:17:01 vm1 CRON[1566]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Dec  2 07:24:50 vm1 crmd[1075]:   notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Dec  2 07:24:50 vm1 pengine[1074]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Dec  2 07:24:50 vm1 pengine[1074]:  warning: unpack_rsc_op: Processing failed op start for vg on vm1: unknown error (1)
Dec  2 07:24:50 vm1 pengine[1074]:  warning: common_apply_stickiness: Forcing vg away from vm1 after 1000000 failures (max=1000000)
Dec  2 07:24:50 vm1 crmd[1075]:   notice: run_graph: Transition 2 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-143.bz2): Complete
Dec  2 07:24:50 vm1 crmd[1075]:   notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Dec  2 07:24:50 vm1 pengine[1074]:   notice: process_pe_message: Calculated Transition 2: /var/lib/pacemaker/pengine/pe-input-143.bz2
Dec  2 07:33:57 vm1 dhclient: DHCPREQUEST of 192.168.122.175 on eth0 to 192.168.122.1 port 67 (xid=0x1068268f)
Dec  2 07:33:57 vm1 dhclient: DHCPACK of 192.168.122.175 from 192.168.122.1
Dec  2 07:33:57 vm1 dhclient: bound to 192.168.122.175 -- renewal in 1616 seconds.
Dec  2 07:39:50 vm1 crmd[1075]:   notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Dec  2 07:39:50 vm1 pengine[1074]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Dec  2 07:39:50 vm1 pengine[1074]:  warning: unpack_rsc_op: Processing failed op start for vg on vm1: unknown error (1)
Dec  2 07:39:50 vm1 pengine[1074]:  warning: common_apply_stickiness: Forcing vg away from vm1 after 1000000 failures (max=1000000)
Dec  2 07:39:50 vm1 crmd[1075]:   notice: run_graph: Transition 3 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-143.bz2): Complete
Dec  2 07:39:50 vm1 crmd[1075]:   notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Dec  2 07:39:50 vm1 pengine[1074]:   notice: process_pe_message: Calculated Transition 3: /var/lib/pacemaker/pengine/pe-input-143.bz2
Dec  2 07:43:08 vm1 lvm[1332]: Got new connection on fd 5
Dec  2 07:43:08 vm1 lvm[1332]: Read on local socket 5, len = 28
Dec  2 07:43:08 vm1 lvm[1332]: creating pipe, [12, 13]
Dec  2 07:43:08 vm1 lvm[1332]: Creating pre&post thread
Dec  2 07:43:08 vm1 lvm[1332]: Created pre&post thread, state = 0
Dec  2 07:43:08 vm1 lvm[1332]: in sub thread: client = 0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vm1-vg' at 1 (client=0x8517f0)
Dec  2 07:43:08 vm1 lvm[1332]: lock_resource 'V_vm1-vg', flags=0, mode=3
Dec  2 07:43:08 vm1 lvm[1332]: lock_resource returning 0, lock_id=1
Dec  2 07:43:08 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:08 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: distribute command: XID = 6, flags=0x1 (LOCAL)
Dec  2 07:43:08 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, msg=0x851900, len=28, csid=(nil), xid=6
Dec  2 07:43:08 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:08 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851970, msglen =28, client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: do_lock_vg: resource 'V_vm1-vg', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:08 vm1 lvm[1332]: Invalidating cached metadata for VG vm1-vg
Dec  2 07:43:08 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:08 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:08 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:08 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Send local reply
Dec  2 07:43:08 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:08 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:08 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: distribute command: XID = 7, flags=0x1 (LOCAL)
Dec  2 07:43:08 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, msg=0x851900, len=31, csid=(nil), xid=7
Dec  2 07:43:08 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:08 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851970, msglen =31, client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:08 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:08 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:08 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:08 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Send local reply
Dec  2 07:43:08 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:08 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:08 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: distribute command: XID = 8, flags=0x1 (LOCAL)
Dec  2 07:43:08 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, msg=0x851900, len=31, csid=(nil), xid=8
Dec  2 07:43:08 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:08 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851970, msglen =31, client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:08 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:08 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:08 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:08 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Send local reply
Dec  2 07:43:08 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:08 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:08 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:08 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:08 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: distribute command: XID = 9, flags=0x0 ()
Dec  2 07:43:08 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:08 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, msg=0x851900, len=31, csid=(nil), xid=9
Dec  2 07:43:08 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:08 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851930, msglen =31, client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:08 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:08 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:08 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:08 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:08 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:08 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 31
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Send local reply
Dec  2 07:43:08 vm1 lvm[1332]: Read on local socket 5, len = 28
Dec  2 07:43:08 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:08 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vm1-vg' at 6 (client=0x8517f0)
Dec  2 07:43:08 vm1 lvm[1332]: unlock_resource: V_vm1-vg lockid: 1
Dec  2 07:43:08 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:08 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: distribute command: XID = 10, flags=0x1 (LOCAL)
Dec  2 07:43:08 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, msg=0x851900, len=28, csid=(nil), xid=10
Dec  2 07:43:08 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:08 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851970, msglen =28, client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: do_lock_vg: resource 'V_vm1-vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:08 vm1 lvm[1332]: Invalidating cached metadata for VG vm1-vg
Dec  2 07:43:08 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:08 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:08 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:08 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Send local reply
Dec  2 07:43:08 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:43:08 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:08 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 1 (client=0x8517f0)
Dec  2 07:43:08 vm1 lvm[1332]: lock_resource 'V_vg', flags=0, mode=3
Dec  2 07:43:08 vm1 lvm[1332]: lock_resource returning 0, lock_id=1
Dec  2 07:43:08 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:08 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: distribute command: XID = 11, flags=0x1 (LOCAL)
Dec  2 07:43:08 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x8517f0, msg=0x851930, len=24, csid=(nil), xid=11
Dec  2 07:43:08 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:08 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851990, msglen =24, client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:08 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:43:08 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:08 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:08 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:08 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Send local reply
Dec  2 07:43:08 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:08 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:08 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: distribute command: XID = 12, flags=0x1 (LOCAL)
Dec  2 07:43:08 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, msg=0x851900, len=31, csid=(nil), xid=12
Dec  2 07:43:08 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:08 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851970, msglen =31, client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:08 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:08 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:08 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:08 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Send local reply
Dec  2 07:43:08 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:08 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:08 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:08 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:08 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: distribute command: XID = 13, flags=0x0 ()
Dec  2 07:43:08 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:08 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, msg=0x851900, len=31, csid=(nil), xid=13
Dec  2 07:43:08 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:08 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851930, msglen =31, client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:08 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:08 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:08 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:08 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:08 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:08 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 31
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Send local reply
Dec  2 07:43:08 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:43:08 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:08 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 6 (client=0x8517f0)
Dec  2 07:43:08 vm1 lvm[1332]: unlock_resource: V_vg lockid: 1
Dec  2 07:43:08 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:08 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: distribute command: XID = 14, flags=0x1 (LOCAL)
Dec  2 07:43:08 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x8517f0, msg=0x851930, len=24, csid=(nil), xid=14
Dec  2 07:43:08 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:08 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851990, msglen =24, client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:08 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:43:08 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:08 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:08 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:08 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Send local reply
Dec  2 07:43:08 vm1 lvm[1332]: Read on local socket 5, len = 0
Dec  2 07:43:08 vm1 lvm[1332]: EOF on local socket: inprogress=0
Dec  2 07:43:08 vm1 lvm[1332]: Waiting for child thread
Dec  2 07:43:08 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Subthread finished
Dec  2 07:43:08 vm1 lvm[1332]: Joined child thread
Dec  2 07:43:08 vm1 lvm[1332]: ret == 0, errno = 0. removing client
Dec  2 07:43:08 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851900. client=0x8517f0, msg=(nil), len=0, csid=(nil), xid=14
Dec  2 07:43:08 vm1 lvm[1332]: process_work_item: free fd -1
Dec  2 07:43:08 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:09 vm1 lvm[1332]: Got new connection on fd 5
Dec  2 07:43:09 vm1 lvm[1332]: Read on local socket 5, len = 29
Dec  2 07:43:09 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:09 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:09 vm1 lvm[1332]: creating pipe, [12, 13]
Dec  2 07:43:09 vm1 lvm[1332]: Creating pre&post thread
Dec  2 07:43:09 vm1 lvm[1332]: Created pre&post thread, state = 0
Dec  2 07:43:09 vm1 lvm[1332]: in sub thread: client = 0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: doing PRE command LOCK_VG 'P_#global' at 1 (client=0x8517f0)
Dec  2 07:43:09 vm1 lvm[1332]: lock_resource 'P_#global', flags=0, mode=3
Dec  2 07:43:09 vm1 lvm[1332]: lock_resource returning 0, lock_id=1
Dec  2 07:43:09 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:09 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: distribute command: XID = 15, flags=0x0 ()
Dec  2 07:43:09 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:09 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, msg=0x851900, len=29, csid=(nil), xid=15
Dec  2 07:43:09 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:09 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851930, msglen =29, client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: do_lock_vg: resource 'P_#global', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:09 vm1 lvm[1332]: Refreshing context
Dec  2 07:43:09 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:09 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 29
Dec  2 07:43:09 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:09 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:09 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:09 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:09 vm1 lvm[1332]: Send local reply
Dec  2 07:43:09 vm1 lvm[1332]: Read on local socket 5, len = 28
Dec  2 07:43:09 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:09 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vm1-vg' at 1 (client=0x8517f0)
Dec  2 07:43:09 vm1 lvm[1332]: lock_resource 'V_vm1-vg', flags=0, mode=3
Dec  2 07:43:09 vm1 lvm[1332]: lock_resource returning 0, lock_id=2
Dec  2 07:43:09 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:09 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: distribute command: XID = 16, flags=0x1 (LOCAL)
Dec  2 07:43:09 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, msg=0x851900, len=28, csid=(nil), xid=16
Dec  2 07:43:09 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:09 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851970, msglen =28, client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: do_lock_vg: resource 'V_vm1-vg', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:09 vm1 lvm[1332]: Invalidating cached metadata for VG vm1-vg
Dec  2 07:43:09 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:09 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:09 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:09 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:09 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: Send local reply
Dec  2 07:43:09 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:09 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:09 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:09 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:09 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:09 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: distribute command: XID = 17, flags=0x0 ()
Dec  2 07:43:09 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:09 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, msg=0x851900, len=31, csid=(nil), xid=17
Dec  2 07:43:09 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:09 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851930, msglen =31, client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:09 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:09 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:09 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:09 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:09 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:09 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:09 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 31
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: Send local reply
Dec  2 07:43:09 vm1 lvm[1332]: Read on local socket 5, len = 28
Dec  2 07:43:09 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:09 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vm1-vg' at 6 (client=0x8517f0)
Dec  2 07:43:09 vm1 lvm[1332]: unlock_resource: V_vm1-vg lockid: 2
Dec  2 07:43:09 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:09 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: distribute command: XID = 18, flags=0x1 (LOCAL)
Dec  2 07:43:09 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, msg=0x851900, len=28, csid=(nil), xid=18
Dec  2 07:43:09 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:09 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851970, msglen =28, client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: do_lock_vg: resource 'V_vm1-vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:09 vm1 lvm[1332]: Invalidating cached metadata for VG vm1-vg
Dec  2 07:43:09 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:09 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:09 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:09 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:09 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: Send local reply
Dec  2 07:43:09 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:43:09 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:09 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 1 (client=0x8517f0)
Dec  2 07:43:09 vm1 lvm[1332]: lock_resource 'V_vg', flags=0, mode=3
Dec  2 07:43:09 vm1 lvm[1332]: lock_resource returning 0, lock_id=2
Dec  2 07:43:09 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:09 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: distribute command: XID = 19, flags=0x1 (LOCAL)
Dec  2 07:43:09 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x8517f0, msg=0x851930, len=24, csid=(nil), xid=19
Dec  2 07:43:09 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:09 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851990, msglen =24, client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:09 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:43:09 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:09 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:09 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:09 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:09 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: Send local reply
Dec  2 07:43:09 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:09 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:09 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:09 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:09 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:09 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: distribute command: XID = 20, flags=0x0 ()
Dec  2 07:43:09 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:09 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, msg=0x851900, len=31, csid=(nil), xid=20
Dec  2 07:43:09 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:09 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851930, msglen =31, client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:09 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:09 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:09 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:09 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:09 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:09 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:09 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 31
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: Send local reply
Dec  2 07:43:09 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:43:09 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:09 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 6 (client=0x8517f0)
Dec  2 07:43:09 vm1 lvm[1332]: unlock_resource: V_vg lockid: 2
Dec  2 07:43:09 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:09 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: distribute command: XID = 21, flags=0x1 (LOCAL)
Dec  2 07:43:09 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x8517f0, msg=0x851930, len=24, csid=(nil), xid=21
Dec  2 07:43:09 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:09 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851990, msglen =24, client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:09 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:43:09 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:09 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:09 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:09 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:09 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: Send local reply
Dec  2 07:43:09 vm1 lvm[1332]: Read on local socket 5, len = 29
Dec  2 07:43:09 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:09 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:09 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:09 vm1 lvm[1332]: doing PRE command LOCK_VG 'P_#global' at 6 (client=0x8517f0)
Dec  2 07:43:09 vm1 lvm[1332]: unlock_resource: P_#global lockid: 1
Dec  2 07:43:09 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:09 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: distribute command: XID = 22, flags=0x0 ()
Dec  2 07:43:09 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:09 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, msg=0x851900, len=29, csid=(nil), xid=22
Dec  2 07:43:09 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:09 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851930, msglen =29, client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: do_lock_vg: resource 'P_#global', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:09 vm1 lvm[1332]: Refreshing context
Dec  2 07:43:09 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:09 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 29
Dec  2 07:43:09 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:09 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:09 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:09 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:09 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: Send local reply
Dec  2 07:43:09 vm1 lvm[1332]: Read on local socket 5, len = 0
Dec  2 07:43:09 vm1 lvm[1332]: EOF on local socket: inprogress=0
Dec  2 07:43:09 vm1 lvm[1332]: Waiting for child thread
Dec  2 07:43:09 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:09 vm1 lvm[1332]: Subthread finished
Dec  2 07:43:09 vm1 lvm[1332]: Joined child thread
Dec  2 07:43:09 vm1 lvm[1332]: ret == 0, errno = 0. removing client
Dec  2 07:43:09 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851900. client=0x8517f0, msg=(nil), len=0, csid=(nil), xid=22
Dec  2 07:43:09 vm1 lvm[1332]: process_work_item: free fd -1
Dec  2 07:43:09 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:16 vm1 lvm[1332]: Got new connection on fd 5
Dec  2 07:43:16 vm1 lvm[1332]: Read on local socket 5, len = 28
Dec  2 07:43:16 vm1 lvm[1332]: creating pipe, [12, 13]
Dec  2 07:43:16 vm1 lvm[1332]: Creating pre&post thread
Dec  2 07:43:16 vm1 lvm[1332]: Created pre&post thread, state = 0
Dec  2 07:43:16 vm1 lvm[1332]: in sub thread: client = 0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vm1-vg' at 1 (client=0x8517f0)
Dec  2 07:43:16 vm1 lvm[1332]: lock_resource 'V_vm1-vg', flags=0, mode=3
Dec  2 07:43:16 vm1 lvm[1332]: lock_resource returning 0, lock_id=1
Dec  2 07:43:16 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:16 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: distribute command: XID = 23, flags=0x1 (LOCAL)
Dec  2 07:43:16 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, msg=0x851900, len=28, csid=(nil), xid=23
Dec  2 07:43:16 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:16 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851970, msglen =28, client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: do_lock_vg: resource 'V_vm1-vg', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:16 vm1 lvm[1332]: Invalidating cached metadata for VG vm1-vg
Dec  2 07:43:16 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:16 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:16 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:16 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:16 vm1 lvm[1332]: Send local reply
Dec  2 07:43:16 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:16 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:16 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: distribute command: XID = 24, flags=0x1 (LOCAL)
Dec  2 07:43:16 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, msg=0x851900, len=31, csid=(nil), xid=24
Dec  2 07:43:16 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:16 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851970, msglen =31, client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:16 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:16 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:16 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:16 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Send local reply
Dec  2 07:43:16 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:16 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:16 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: distribute command: XID = 25, flags=0x1 (LOCAL)
Dec  2 07:43:16 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, msg=0x851900, len=31, csid=(nil), xid=25
Dec  2 07:43:16 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:16 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851970, msglen =31, client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:16 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:16 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:16 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:16 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Send local reply
Dec  2 07:43:16 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:16 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:16 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:16 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:16 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: distribute command: XID = 26, flags=0x0 ()
Dec  2 07:43:16 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:16 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, msg=0x851900, len=31, csid=(nil), xid=26
Dec  2 07:43:16 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:16 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851930, msglen =31, client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:16 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:16 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:16 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:16 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:16 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:16 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 31
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Send local reply
Dec  2 07:43:16 vm1 lvm[1332]: Read on local socket 5, len = 28
Dec  2 07:43:16 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:16 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vm1-vg' at 6 (client=0x8517f0)
Dec  2 07:43:16 vm1 lvm[1332]: unlock_resource: V_vm1-vg lockid: 1
Dec  2 07:43:16 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:16 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: distribute command: XID = 27, flags=0x1 (LOCAL)
Dec  2 07:43:16 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, msg=0x851900, len=28, csid=(nil), xid=27
Dec  2 07:43:16 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:16 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851970, msglen =28, client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: do_lock_vg: resource 'V_vm1-vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:16 vm1 lvm[1332]: Invalidating cached metadata for VG vm1-vg
Dec  2 07:43:16 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:16 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:16 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:16 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Send local reply
Dec  2 07:43:16 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:43:16 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:16 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 1 (client=0x8517f0)
Dec  2 07:43:16 vm1 lvm[1332]: lock_resource 'V_vg', flags=0, mode=3
Dec  2 07:43:16 vm1 lvm[1332]: lock_resource returning 0, lock_id=1
Dec  2 07:43:16 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:16 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: distribute command: XID = 28, flags=0x1 (LOCAL)
Dec  2 07:43:16 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x8517f0, msg=0x851930, len=24, csid=(nil), xid=28
Dec  2 07:43:16 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:16 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851990, msglen =24, client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:16 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:43:16 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:16 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:16 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:16 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Send local reply
Dec  2 07:43:16 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:16 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:16 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: distribute command: XID = 29, flags=0x1 (LOCAL)
Dec  2 07:43:16 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, msg=0x851900, len=31, csid=(nil), xid=29
Dec  2 07:43:16 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:16 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851970, msglen =31, client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:16 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:16 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:16 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:16 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Send local reply
Dec  2 07:43:16 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:16 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:16 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:16 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:16 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: distribute command: XID = 30, flags=0x0 ()
Dec  2 07:43:16 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:16 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, msg=0x851900, len=31, csid=(nil), xid=30
Dec  2 07:43:16 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:16 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851930, msglen =31, client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:16 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:16 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:16 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:16 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:16 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:16 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 31
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Send local reply
Dec  2 07:43:16 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:43:16 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:16 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 6 (client=0x8517f0)
Dec  2 07:43:16 vm1 lvm[1332]: unlock_resource: V_vg lockid: 1
Dec  2 07:43:16 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:16 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: distribute command: XID = 31, flags=0x1 (LOCAL)
Dec  2 07:43:16 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x8517f0, msg=0x851930, len=24, csid=(nil), xid=31
Dec  2 07:43:16 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:16 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851990, msglen =24, client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:16 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:43:16 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:16 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:16 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:16 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Send local reply
Dec  2 07:43:16 vm1 lvm[1332]: Read on local socket 5, len = 0
Dec  2 07:43:16 vm1 lvm[1332]: EOF on local socket: inprogress=0
Dec  2 07:43:16 vm1 lvm[1332]: Waiting for child thread
Dec  2 07:43:16 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Subthread finished
Dec  2 07:43:16 vm1 lvm[1332]: Joined child thread
Dec  2 07:43:16 vm1 lvm[1332]: ret == 0, errno = 0. removing client
Dec  2 07:43:16 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851900. client=0x8517f0, msg=(nil), len=0, csid=(nil), xid=31
Dec  2 07:43:16 vm1 lvm[1332]: process_work_item: free fd -1
Dec  2 07:43:16 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:24 vm1 lvm[1332]: Got new connection on fd 5
Dec  2 07:43:24 vm1 lvm[1332]: Read on local socket 5, len = 29
Dec  2 07:43:24 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:24 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:24 vm1 lvm[1332]: creating pipe, [12, 13]
Dec  2 07:43:24 vm1 lvm[1332]: Creating pre&post thread
Dec  2 07:43:24 vm1 lvm[1332]: Created pre&post thread, state = 0
Dec  2 07:43:24 vm1 lvm[1332]: in sub thread: client = 0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: doing PRE command LOCK_VG 'P_#global' at 1 (client=0x8517f0)
Dec  2 07:43:24 vm1 lvm[1332]: lock_resource 'P_#global', flags=0, mode=3
Dec  2 07:43:24 vm1 lvm[1332]: lock_resource returning 0, lock_id=1
Dec  2 07:43:24 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:24 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: distribute command: XID = 32, flags=0x0 ()
Dec  2 07:43:24 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:24 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, msg=0x851900, len=29, csid=(nil), xid=32
Dec  2 07:43:24 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:24 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851930, msglen =29, client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: do_lock_vg: resource 'P_#global', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:24 vm1 lvm[1332]: Refreshing context
Dec  2 07:43:24 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:24 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 29
Dec  2 07:43:24 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:24 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:24 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:24 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:24 vm1 lvm[1332]: Send local reply
Dec  2 07:43:24 vm1 lvm[1332]: Read on local socket 5, len = 28
Dec  2 07:43:24 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:24 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vm1-vg' at 1 (client=0x8517f0)
Dec  2 07:43:24 vm1 lvm[1332]: lock_resource 'V_vm1-vg', flags=0, mode=3
Dec  2 07:43:24 vm1 lvm[1332]: lock_resource returning 0, lock_id=2
Dec  2 07:43:24 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:24 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: distribute command: XID = 33, flags=0x1 (LOCAL)
Dec  2 07:43:24 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, msg=0x851900, len=28, csid=(nil), xid=33
Dec  2 07:43:24 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:24 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851970, msglen =28, client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: do_lock_vg: resource 'V_vm1-vg', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:24 vm1 lvm[1332]: Invalidating cached metadata for VG vm1-vg
Dec  2 07:43:24 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:24 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:24 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:24 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:24 vm1 lvm[1332]: Send local reply
Dec  2 07:43:24 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:24 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:24 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:24 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:24 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:24 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: distribute command: XID = 34, flags=0x0 ()
Dec  2 07:43:24 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:24 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, msg=0x851900, len=31, csid=(nil), xid=34
Dec  2 07:43:24 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:24 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851930, msglen =31, client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:24 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:24 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:24 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:24 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:24 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:24 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:24 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 31
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: Send local reply
Dec  2 07:43:24 vm1 lvm[1332]: Read on local socket 5, len = 28
Dec  2 07:43:24 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:24 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vm1-vg' at 6 (client=0x8517f0)
Dec  2 07:43:24 vm1 lvm[1332]: unlock_resource: V_vm1-vg lockid: 2
Dec  2 07:43:24 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:24 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: distribute command: XID = 35, flags=0x1 (LOCAL)
Dec  2 07:43:24 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, msg=0x851900, len=28, csid=(nil), xid=35
Dec  2 07:43:24 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:24 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851970, msglen =28, client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: do_lock_vg: resource 'V_vm1-vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:24 vm1 lvm[1332]: Invalidating cached metadata for VG vm1-vg
Dec  2 07:43:24 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:24 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:24 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:24 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:24 vm1 lvm[1332]: Send local reply
Dec  2 07:43:24 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:43:24 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:24 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 1 (client=0x8517f0)
Dec  2 07:43:24 vm1 lvm[1332]: lock_resource 'V_vg', flags=0, mode=3
Dec  2 07:43:24 vm1 lvm[1332]: lock_resource returning 0, lock_id=2
Dec  2 07:43:24 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:24 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: distribute command: XID = 36, flags=0x1 (LOCAL)
Dec  2 07:43:24 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x8517f0, msg=0x851930, len=24, csid=(nil), xid=36
Dec  2 07:43:24 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:24 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851990, msglen =24, client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:24 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:43:24 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:24 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:24 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:24 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:24 vm1 lvm[1332]: Send local reply
Dec  2 07:43:24 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:24 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:24 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:24 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:24 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:24 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: distribute command: XID = 37, flags=0x0 ()
Dec  2 07:43:24 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:24 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, msg=0x851900, len=31, csid=(nil), xid=37
Dec  2 07:43:24 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:24 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851930, msglen =31, client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:24 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:24 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:24 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:24 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:24 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:24 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:24 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 31
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: Send local reply
Dec  2 07:43:24 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:43:24 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:24 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 6 (client=0x8517f0)
Dec  2 07:43:24 vm1 lvm[1332]: unlock_resource: V_vg lockid: 2
Dec  2 07:43:24 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:24 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: distribute command: XID = 38, flags=0x1 (LOCAL)
Dec  2 07:43:24 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x8517f0, msg=0x851930, len=24, csid=(nil), xid=38
Dec  2 07:43:24 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:24 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851990, msglen =24, client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:24 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:43:24 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:24 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:24 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:24 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:24 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: Send local reply
Dec  2 07:43:24 vm1 lvm[1332]: Read on local socket 5, len = 29
Dec  2 07:43:24 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:24 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:24 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:24 vm1 lvm[1332]: doing PRE command LOCK_VG 'P_#global' at 6 (client=0x8517f0)
Dec  2 07:43:24 vm1 lvm[1332]: unlock_resource: P_#global lockid: 1
Dec  2 07:43:24 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:24 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: distribute command: XID = 39, flags=0x0 ()
Dec  2 07:43:24 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:24 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, msg=0x851900, len=29, csid=(nil), xid=39
Dec  2 07:43:24 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:24 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851930, msglen =29, client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: do_lock_vg: resource 'P_#global', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:24 vm1 lvm[1332]: Refreshing context
Dec  2 07:43:24 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:24 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 29
Dec  2 07:43:25 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:25 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:25 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:25 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:25 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:25 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:25 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:25 vm1 lvm[1332]: Send local reply
Dec  2 07:43:25 vm1 lvm[1332]: Read on local socket 5, len = 0
Dec  2 07:43:25 vm1 lvm[1332]: EOF on local socket: inprogress=0
Dec  2 07:43:25 vm1 lvm[1332]: Waiting for child thread
Dec  2 07:43:25 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:25 vm1 lvm[1332]: Subthread finished
Dec  2 07:43:25 vm1 lvm[1332]: Joined child thread
Dec  2 07:43:25 vm1 lvm[1332]: ret == 0, errno = 0. removing client
Dec  2 07:43:25 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851900. client=0x8517f0, msg=(nil), len=0, csid=(nil), xid=39
Dec  2 07:43:25 vm1 lvm[1332]: process_work_item: free fd -1
Dec  2 07:43:25 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:40 vm1 attrd[1073]:   notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-vg (<null>)
Dec  2 07:44:40 vm1 crmd[1075]:   notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Dec  2 07:44:40 vm1 attrd[1073]:   notice: attrd_perform_update: Sent delete 16: node=1084783279, attr=fail-count-vg, id=<n/a>, set=(null), section=status
Dec  2 07:44:40 vm1 cib[1070]:   notice: cib:diff: Diff: --- 0.109.15
Dec  2 07:44:40 vm1 cib[1070]:   notice: cib:diff: Diff: +++ 0.110.1 ca82fa1d5517ed5d58fe06dce9b30d67
Dec  2 07:44:40 vm1 pengine[1074]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Dec  2 07:44:40 vm1 pengine[1074]:   notice: LogActions: Start   vg#011(vm1)
Dec  2 07:44:40 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 6: monitor vg_monitor_0 on vm1 (local)
Dec  2 07:44:40 vm1 pengine[1074]:   notice: process_pe_message: Calculated Transition 4: /var/lib/pacemaker/pengine/pe-input-144.bz2
Dec  2 07:44:40 vm1 LVM(vg)[2032]: WARNING: LVM Volume vg is not available (stopped)
Dec  2 07:44:40 vm1 LVM(vg)[2032]: INFO: LVM Volume vg is offline
Dec  2 07:44:40 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation vg_monitor_0 (call=42, rc=7, cib-update=42, confirmed=true) not running
Dec  2 07:44:40 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 5: probe_complete probe_complete on vm1 (local) - no waiting
Dec  2 07:44:40 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 23: start vg_start_0 on vm1 (local)
Dec  2 07:44:40 vm1 LVM(vg)[2047]: INFO: Activating volume group vg
Dec  2 07:44:40 vm1 lvm[1332]: Got new connection on fd 5
Dec  2 07:44:40 vm1 lvm[1332]: Read on local socket 5, len = 29
Dec  2 07:44:40 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:44:40 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:44:40 vm1 lvm[1332]: creating pipe, [12, 13]
Dec  2 07:44:40 vm1 lvm[1332]: Creating pre&post thread
Dec  2 07:44:40 vm1 lvm[1332]: Created pre&post thread, state = 0
Dec  2 07:44:40 vm1 lvm[1332]: in sub thread: client = 0x8517f0
Dec  2 07:44:40 vm1 lvm[1332]: doing PRE command LOCK_VG 'P_#global' at 4 (client=0x8517f0)
Dec  2 07:44:40 vm1 lvm[1332]: lock_resource 'P_#global', flags=0, mode=4
Dec  2 07:44:40 vm1 lvm[1332]: lock_resource returning 0, lock_id=1
Dec  2 07:44:40 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:44:40 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:40 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:40 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:40 vm1 lvm[1332]: distribute command: XID = 40, flags=0x0 ()
Dec  2 07:44:40 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:44:40 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, msg=0x851900, len=29, csid=(nil), xid=40
Dec  2 07:44:40 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:40 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851930, msglen =29, client=0x8517f0
Dec  2 07:44:40 vm1 lvm[1332]: do_lock_vg: resource 'P_#global', cmd = 0x4 LCK_VG (WRITE|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:44:40 vm1 lvm[1332]: Refreshing context
Dec  2 07:44:40 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:44:40 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 29
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 5, len = 28
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vm1-vg' at 1 (client=0x8517f0)
Dec  2 07:44:41 vm1 lvm[1332]: lock_resource 'V_vm1-vg', flags=0, mode=3
Dec  2 07:44:41 vm1 lvm[1332]: lock_resource returning 0, lock_id=2
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 41, flags=0x1 (LOCAL)
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, msg=0x851900, len=28, csid=(nil), xid=41
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851970, msglen =28, client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: do_lock_vg: resource 'V_vm1-vg', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:44:41 vm1 lvm[1332]: Invalidating cached metadata for VG vm1-vg
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:44:41 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:44:41 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 42, flags=0x0 ()
Dec  2 07:44:41 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, msg=0x851900, len=31, csid=(nil), xid=42
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851930, msglen =31, client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: Syncing device names
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 31
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 5, len = 28
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vm1-vg' at 6 (client=0x8517f0)
Dec  2 07:44:41 vm1 lvm[1332]: unlock_resource: V_vm1-vg lockid: 2
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 43, flags=0x1 (LOCAL)
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, msg=0x851900, len=28, csid=(nil), xid=43
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851970, msglen =28, client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: do_lock_vg: resource 'V_vm1-vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:44:41 vm1 lvm[1332]: Invalidating cached metadata for VG vm1-vg
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 1 (client=0x8517f0)
Dec  2 07:44:41 vm1 lvm[1332]: lock_resource 'V_vg', flags=0, mode=3
Dec  2 07:44:41 vm1 lvm[1332]: lock_resource returning 0, lock_id=2
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 44, flags=0x1 (LOCAL)
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x8517f0, msg=0x851930, len=24, csid=(nil), xid=44
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851990, msglen =24, client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:44:41 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:44:41 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:44:41 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 45, flags=0x0 ()
Dec  2 07:44:41 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, msg=0x851900, len=31, csid=(nil), xid=45
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851930, msglen =31, client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: Syncing device names
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 31
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 6 (client=0x8517f0)
Dec  2 07:44:41 vm1 lvm[1332]: unlock_resource: V_vg lockid: 2
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 46, flags=0x1 (LOCAL)
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x8517f0, msg=0x851930, len=24, csid=(nil), xid=46
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851990, msglen =24, client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:44:41 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 5, len = 29
Dec  2 07:44:41 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:44:41 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: doing PRE command LOCK_VG 'P_#global' at 6 (client=0x8517f0)
Dec  2 07:44:41 vm1 lvm[1332]: unlock_resource: P_#global lockid: 1
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 47, flags=0x0 ()
Dec  2 07:44:41 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, msg=0x851900, len=29, csid=(nil), xid=47
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851930, msglen =29, client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: do_lock_vg: resource 'P_#global', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:44:41 vm1 lvm[1332]: Refreshing context
Dec  2 07:44:41 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:44:41 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 29
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 LVM(vg)[2047]: INFO: Reading all physical volumes. This may take a while... Found volume group "vm1-vg" using metadata type lvm2 Found volume group "vg" using metadata type lvm2
Dec  2 07:44:41 vm1 lvm[1332]: Got new connection on fd 15
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 15, len = 24
Dec  2 07:44:41 vm1 lvm[1332]: creating pipe, [16, 17]
Dec  2 07:44:41 vm1 lvm[1332]: Creating pre&post thread
Dec  2 07:44:41 vm1 lvm[1332]: Created pre&post thread, state = 0
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 5, len = 0
Dec  2 07:44:41 vm1 lvm[1332]: EOF on local socket: inprogress=0
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for child thread
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Subthread finished
Dec  2 07:44:41 vm1 lvm[1332]: Joined child thread
Dec  2 07:44:41 vm1 lvm[1332]: ret == 0, errno = 0. removing client
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x8517f0, msg=(nil), len=0, csid=(nil), xid=47
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: free fd -1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: in sub thread: client = 0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 1 (client=0x851c70)
Dec  2 07:44:41 vm1 lvm[1332]: lock_resource 'V_vg', flags=0, mode=3
Dec  2 07:44:41 vm1 lvm[1332]: lock_resource returning 0, lock_id=1
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 17
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 48, flags=0x1 (LOCAL)
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x851c70, msg=0x851930, len=24, csid=(nil), xid=48
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851990, msglen =24, client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:44:41 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 15, len = 84
Dec  2 07:44:41 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:44:41 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 17
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 49, flags=0x0 ()
Dec  2 07:44:41 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x852220. client=0x851c70, msg=0x851950, len=84, csid=(nil), xid=49
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: LOCK_QUERY (0x34) msg=0x8519b0, msglen =84, client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: do_lock_query: resource 'Y9s0qu8doBG5RzmdlanzhUeWenRLq30OwuDvJhYLEnPWcqlIqnearqNGu0l0OcNi', mode -1 (?)
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 84
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 15, len = 84
Dec  2 07:44:41 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:44:41 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 17
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 50, flags=0x0 ()
Dec  2 07:44:41 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x852220. client=0x851c70, msg=0x851950, len=84, csid=(nil), xid=50
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: LOCK_QUERY (0x34) msg=0x8519b0, msglen =84, client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: do_lock_query: resource 'Y9s0qu8doBG5RzmdlanzhUeWenRLq30OwuDvJhYLEnPWcqlIqnearqNGu0l0OcNi', mode -1 (?)
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 84
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 15, len = 84
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 17
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 51, flags=0x1 (LOCAL)
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x8519b0. client=0x851c70, msg=0x851950, len=84, csid=(nil), xid=51
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: LOCK_LV (0x32) msg=0x852220, msglen =84, client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: do_lock_lv: resource 'Y9s0qu8doBG5RzmdlanzhUeWenRLq30OwuDvJhYLEnPWcqlIqnearqNGu0l0OcNi', cmd = 0x9d LCK_LV_EXCLUSIVE (EXCL|LV|NONBLOCK|CLUSTER_VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:44:41 vm1 lvm[1332]: lock_resource 'Y9s0qu8doBG5RzmdlanzhUeWenRLq30OwuDvJhYLEnPWcqlIqnearqNGu0l0OcNi', flags=1, mode=5
Dec  2 07:44:41 vm1 lvm[1332]: lock_resource returning 0, lock_id=2
Dec  2 07:44:41 vm1 lvm[1332]: Command return is 0, critical_section is 0
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 15, len = 84
Dec  2 07:44:41 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:44:41 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 17
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 52, flags=0x0 ()
Dec  2 07:44:41 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x852220. client=0x851c70, msg=0x851950, len=84, csid=(nil), xid=52
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: LOCK_QUERY (0x34) msg=0x8519b0, msglen =84, client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: do_lock_query: resource 'Y9s0qu8doBG5RzmdlanzhUeWenRLq30OwuDvJhYLEnPWcqlIqnearqNGu0l0OcNi', mode 5 (EX)
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 3 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 84
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 15, len = 31
Dec  2 07:44:41 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:44:41 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 17
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 53, flags=0x0 ()
Dec  2 07:44:41 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x852220. client=0x851c70, msg=0x8519b0, len=31, csid=(nil), xid=53
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x8519e0, msglen =31, client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: Syncing device names
Dec  2 07:44:41 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:44:41 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 31
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 15, len = 24
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 6 (client=0x851c70)
Dec  2 07:44:41 vm1 lvm[1332]: unlock_resource: V_vg lockid: 1
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 17
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 54, flags=0x1 (LOCAL)
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851b60. client=0x851c70, msg=0x84f870, len=24, csid=(nil), xid=54
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851ba0, msglen =24, client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:44:41 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 LVM(vg)[2047]: INFO: 1 logical volume(s) in volume group "vg" now active
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 15, len = 0
Dec  2 07:44:41 vm1 lvm[1332]: EOF on local socket: inprogress=0
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for child thread
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Subthread finished
Dec  2 07:44:41 vm1 lvm[1332]: Joined child thread
Dec  2 07:44:41 vm1 lvm[1332]: ret == 0, errno = 0. removing client
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851b60. client=0x851c70, msg=(nil), len=0, csid=(nil), xid=54
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: free fd -1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation vg_start_0 (call=45, rc=0, cib-update=43, confirmed=true) ok
Dec  2 07:44:41 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 24: monitor vg_monitor_60000 on vm1 (local)
Dec  2 07:44:41 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation vg_monitor_60000 (call=48, rc=0, cib-update=44, confirmed=false) ok
Dec  2 07:44:41 vm1 crmd[1075]:   notice: run_graph: Transition 4 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-144.bz2): Complete
Dec  2 07:44:41 vm1 crmd[1075]:   notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]


* Re: [linux-lvm] auto_activation_volume_list in lvm.conf not honored
  2016-11-25  9:30 ` Zdenek Kabelac
@ 2016-11-28 22:24   ` David Teigland
  0 siblings, 0 replies; 10+ messages in thread
From: David Teigland @ 2016-11-28 22:24 UTC (permalink / raw)
  To: Stefan Bauer; +Cc: linux-lvm

On Fri, Nov 25, 2016 at 10:30:39AM +0100, Zdenek Kabelac wrote:
> On 25.11.2016 at 10:17, Stefan Bauer wrote:
> > Hi Peter,
> > 
> > As I said, we have a master/slave setup _without_ concurrent reads/writes, so I do not see a reason why I should take care of locking, as only one node activates the volume group at a time.
> > 
> > That should be fine - right?
> 
> Nope, it's not.
> 
> Every operation, e.g. activation, DOES validate all resources and takes
> ACTION when something is wrong.
> 
> Sorry, but there is NO way to do this properly without a locking manager.
> 
> Many lvm2 users try to be 'innovative' and use lvm2 in a lock-less way -
> this seems to work most of the time, until the moment some disaster
> happens - and then lvm2 gets blamed for the data loss..
> 
> Interestingly, they never stop to think about why we invested so much time
> into a locking manager when there is such an 'easy fix' in their eyes...
> 
> IMHO lvmlockd is a relatively low-resource/overhead solution worth
> exploring if you don't like clvmd...

Stefan, as Zdenek points out, even reading VGs on shared storage is not
entirely safe, because lvm may attempt to fix/repair things on disk while
it is reading (this becomes more likely if one machine reads while another
is making changes).  Using some kind of locking or clustering (lvmlockd or
clvm) is a solution.

Another fairly new option is to use "system ID", which assigns one host as
the owner of the VG.  This avoids the problems mentioned above with
reading->fixing.  But system ID on its own cannot be used dynamically.
If you want to fail over the VG between hosts, the system ID needs to be
changed, and this needs to be done carefully (e.g. by a resource manager
or something that takes fencing into account,
https://bugzilla.redhat.com/show_bug.cgi?id=1336346#c2)
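
As a rough illustration only (the VG and host names below are made up, and
the full takeover procedure is described in lvmsystemid(7), not here), a
system-ID setup might look something like this:

    # /etc/lvm/lvm.conf on every host: derive the system ID from the hostname
    global {
        system_id_source = "uname"
    }

    # create the VG on the host that should own it; the owner's ID is recorded
    vgcreate vg /dev/sdb

    # planned fail-over: the current owner hands the VG to the standby host
    # ("standby" is assumed to be the standby host's uname -n)
    vgchange --systemid standby vg

Taking over a VG from a host that has died still has to be driven by the
resource manager, with fencing, as mentioned above.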

Also https://www.redhat.com/archives/linux-lvm/2016-November/msg00022.html

Dave


* Re: [linux-lvm] auto_activation_volume_list in lvm.conf not honored
  2016-11-25  9:17 Stefan Bauer
@ 2016-11-25  9:30 ` Zdenek Kabelac
  2016-11-28 22:24   ` David Teigland
  0 siblings, 1 reply; 10+ messages in thread
From: Zdenek Kabelac @ 2016-11-25  9:30 UTC (permalink / raw)
  To: LVM general discussion and development

On 25.11.2016 at 10:17, Stefan Bauer wrote:
> Hi Peter,
>
> As I said, we have a master/slave setup _without_ concurrent reads/writes, so I do not see a reason why I should take care of locking, as only one node activates the volume group at a time.
>
> That should be fine - right?

Nope, it's not.

Every operation, e.g. activation, DOES validate all resources and takes ACTION
when something is wrong.

Sorry, but there is NO way to do this properly without a locking manager.

Many lvm2 users try to be 'innovative' and use lvm2 in a lock-less way - this
seems to work most of the time, until the moment some disaster happens - and
then lvm2 gets blamed for the data loss..

Interestingly, they never stop to think about why we invested so much time into
a locking manager when there is such an 'easy fix' in their eyes...

IMHO lvmlockd is a relatively low-resource/overhead solution worth exploring
if you don't like clvmd...
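
To give a rough idea of what that means in practice (treat this as a sketch
based on lvmlockd(8); the VG/PV names, service names and option spellings are
assumptions and can differ per distribution and version):

    # /etc/lvm/lvm.conf on every host
    global {
        use_lvmlockd = 1
    }

    # on every host: start the lock manager daemons
    systemctl start wdmd sanlock lvmlockd

    # on one host: create the shared VG; sanlock keeps its leases on the PVs
    vgcreate --shared vg /dev/mapper/shared-lun

    # on each host that uses the VG: start its lock space
    vgchange --lock-start vg

    # activation is then arbitrated by lvmlockd, e.g. exclusive activation
    lvchange -aey vg/lv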

Regards

Zdenek


* Re: [linux-lvm] auto_activation_volume_list in lvm.conf not honored
@ 2016-11-25  9:17 Stefan Bauer
  2016-11-25  9:30 ` Zdenek Kabelac
  0 siblings, 1 reply; 10+ messages in thread
From: Stefan Bauer @ 2016-11-25  9:17 UTC (permalink / raw)
  To: LVM general discussion and development, Peter Rajnoha

Hi Peter,

As I said, we have a master/slave setup _without_ concurrent reads/writes, so I do not see a reason why I should take care of locking, as only one node activates the volume group at a time.

That should be fine - right?

Stefan
-----Original Message-----
> From: Zdenek Kabelac <zkabelac@redhat.com>
> 
> If you are working with 'shared' storage and manipulating the same VG from
> multiple nodes (i.e. activation) - it's not so easy to go without a really
> good locking manager.
> 
> If you don't like clvmd - maybe you could take a look at lvmlockd & sanlock.
> 
> But you should be aware that there are NOT many people who are able to ensure
> correct locking of lvm2 commands - it's really not just 'master/slave'.
> There are a large number of error cases which need proper locking across
> the whole cluster.


* Re: [linux-lvm] auto_activation_volume_list in lvm.conf not honored
  2016-11-24 14:02 Stefan Bauer
@ 2016-11-25  8:48 ` Zdenek Kabelac
  0 siblings, 0 replies; 10+ messages in thread
From: Zdenek Kabelac @ 2016-11-25  8:48 UTC (permalink / raw)
  To: LVM general discussion and development, Peter Rajnoha

On 24.11.2016 at 15:02, Stefan Bauer wrote:
> Hi Peter,
>
> now it all makes sense. On this Ubuntu machine, upstart with udev is taking care of vgchange.
>
> After some digging, /lib/udev/rules.d/85-lvm2.rules shows that vgchange is only executed with "-a y".
>
> We will test this at the weekend, but I'm now certain that this was the problem.
>
> We wanted to keep things simple and not use clvmd. We have a master/slave setup without concurrent reads/writes.

Hi

Wondering what you are trying to do when you say 'not use clvmd'.

If you are working with 'shared' storage and manipulating the same VG from
multiple nodes (i.e. activation) - it's not so easy to go without a really
good locking manager.

If you don't like clvmd - maybe you could take a look at lvmlockd & sanlock.

But you should be aware that there are NOT many people who are able to ensure
correct locking of lvm2 commands - it's really not just 'master/slave'.
There are a large number of error cases which need proper locking across
the whole cluster.


Regards

Zdenek


* [linux-lvm]  auto_activation_volume_list in lvm.conf not honored
@ 2016-11-24 14:20 Stefan Bauer
  0 siblings, 0 replies; 10+ messages in thread
From: Stefan Bauer @ 2016-11-24 14:20 UTC (permalink / raw)
  To: LVM general discussion and development

It's already gone in newer releases of Ubuntu. They replaced upstart with systemd. The systemd scripts look sane and use -aay.

Stefan
 
-----Original Message-----
> From: Peter Rajnoha <prajnoha@redhat.com>
> SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}="lvm*|LVM*",
> RUN+="watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y'"
> 
> So that "watershed" call should use vgchange -aay. Please report this to
> Ubuntu directly so they can fix it (as the "watershed" helper binary is
> specific to Debian/Ubuntu only).


* Re: [linux-lvm] auto_activation_volume_list in lvm.conf not honored
@ 2016-11-24 14:02 Stefan Bauer
  2016-11-25  8:48 ` Zdenek Kabelac
  0 siblings, 1 reply; 10+ messages in thread
From: Stefan Bauer @ 2016-11-24 14:02 UTC (permalink / raw)
  To: Peter Rajnoha, linux-lvm

Hi Peter,

now it all makes sense. On this Ubuntu machine, upstart with udev is taking care of vgchange.

After some digging, /lib/udev/rules.d/85-lvm2.rules shows that vgchange is only executed with "-a y".

We will test this at the weekend, but I'm now certain that this was the problem.

We wanted to keep things simple and not use clvmd. We have a master/slave setup without concurrent reads/writes.

So according to your documentation ;) it should work without clvmd.

Thank you!

Stefan

 
-----Original Message-----
> From: Peter Rajnoha <prajnoha@redhat.com>
> Sent: Thu 24 November 2016 14:39
> To: linux-lvm@redhat.com
> CC: sb@plzk.de
> Subject: Re: [linux-lvm] auto_activation_volume_list in lvm.conf not honored
> 
> On 11/24/2016 01:21 PM, Stefan Bauer wrote:
> > Hi folks, 
> > 
> > How do we avoid pvscan initializing LVM volume groups on startup (it's for a cluster setup)? auto_activation_volume_list was filled with the remaining VGs we want to set up. Run manually, it does what it should: 
> > 
> > /sbin/lvm pvscan --config 'activation { auto_activation_volume_list = "vg2" }' only activates vg2 but keeps vg1 "untouched".
> > 
> > But on system startup, all VGs are activated. (Ubuntu 14.04.5 LTS)
> > 
> > We also updated the ramdisk and verified that the changes are also in the ramdisk's lvm.conf.
> > 
> > However, vg1 is still activated.
> > 
> > Any help is greatly appreciated.
> 
> It's important that all scripts which handle LVM activation at boot call
> vgchange with "-aay", which honours auto_activation_volume_list
> (so not "vgchange -ay").
> 
> All init scripts and systemd units which upstream LVM2 provides already
> use "-aay".
> 
> You mentioned a cluster setup - so are your VGs clustered and are you
> using clvmd? If that's the case, the clustered VGs are activated either
> by the clvmd init script/systemd unit or by an external cluster resource
> agent (e.g. pacemaker with the clvm OCF file) which calls vgchange to
> activate the clustered VGs - that one needs to use "-aay" too.
> 
> -- 
> Peter
> 


* Re: [linux-lvm] auto_activation_volume_list in lvm.conf not honored
  2016-11-24 13:38 ` Peter Rajnoha
@ 2016-11-24 13:55   ` Peter Rajnoha
  0 siblings, 0 replies; 10+ messages in thread
From: Peter Rajnoha @ 2016-11-24 13:55 UTC (permalink / raw)
  To: linux-lvm

On 11/24/2016 02:38 PM, Peter Rajnoha wrote:
> On 11/24/2016 01:21 PM, Stefan Bauer wrote:
>> Hi folks, 
>>
>> How do we avoid pvscan initializing LVM volume groups on startup (it's for a cluster setup)? auto_activation_volume_list was filled with the remaining VGs we want to set up. Run manually, it does what it should: 
>>
>> /sbin/lvm pvscan --config 'activation { auto_activation_volume_list = "vg2" }' only activates vg2 but keeps vg1 "untouched".
>>
>> But on system startup, all VGs are activated. (Ubuntu 14.04.5 LTS)
>>
>> We also updated the ramdisk and verified that the changes are also in the ramdisk's lvm.conf.
>>
>> However, vg1 is still activated.
>>
>> Any help is greatly appreciated.
> 
> It's important that all scripts which handle LVM activation at boot call
> vgchange with "-aay", which honours auto_activation_volume_list
> (so not "vgchange -ay").
> 
> All init scripts and systemd units which upstream LVM2 provides already
> use "-aay".
> 
> You mentioned a cluster setup - so are your VGs clustered and are you
> using clvmd? If that's the case, the clustered VGs are activated either
> by the clvmd init script/systemd unit or by an external cluster resource
> agent (e.g. pacemaker with the clvm OCF file) which calls vgchange to
> activate the clustered VGs - that one needs to use "-aay" too.
> 

I looked at the Ubuntu-specific environment and I can see there's
/lib/udev/rules.d/85-lvm2.rules with:

SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}="lvm*|LVM*",
RUN+="watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y'"

So that "watershed" should use vgchange -aay. Please report this for
Ubuntu directly for them to fix this (as the "watershed" helper binary
is specific to Debian/Ubuntu only).
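
For reference, the corrected rule would presumably end up looking roughly like
this (sketch only - the '==' match for ID_FS_TYPE is written on the assumption
that the single '=' in the rule quoted above is a transcription slip):

    SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="lvm*|LVM*", \
        RUN+="watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -aay'"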

-- 
Peter


* Re: [linux-lvm] auto_activation_volume_list in lvm.conf not honored
  2016-11-24 12:21 Stefan Bauer
@ 2016-11-24 13:38 ` Peter Rajnoha
  2016-11-24 13:55   ` Peter Rajnoha
  0 siblings, 1 reply; 10+ messages in thread
From: Peter Rajnoha @ 2016-11-24 13:38 UTC (permalink / raw)
  To: linux-lvm; +Cc: sb

On 11/24/2016 01:21 PM, Stefan Bauer wrote:
> Hi folks, 
> 
> How do we avoid pvscan initializing LVM volume groups on startup (it's for a cluster setup)? auto_activation_volume_list was filled with the remaining VGs we want to set up. Run manually, it does what it should: 
> 
> /sbin/lvm pvscan --config 'activation { auto_activation_volume_list = "vg2" }' only activates vg2 but keeps vg1 "untouched".
> 
> But on system startup, all VGs are activated. (Ubuntu 14.04.5 LTS)
> 
> We also updated the ramdisk and verified that the changes are also in the ramdisk's lvm.conf.
> 
> However, vg1 is still activated.
> 
> Any help is greatly appreciated.

It's important that all scripts which handle LVM activation at boot call
vgchange with "-aay", which honours auto_activation_volume_list
(so not "vgchange -ay").

All init scripts and systemd units which upstream LVM2 provides already
use "-aay".

You mentioned a cluster setup - so are your VGs clustered and are you
using clvmd? If that's the case, the clustered VGs are activated either
by the clvmd init script/systemd unit or by an external cluster resource
agent (e.g. pacemaker with the clvm OCF file) which calls vgchange to
activate the clustered VGs - that one needs to use "-aay" too.
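
To make the difference concrete, a minimal sketch using the VG names from this
thread (check lvm.conf(5) for the exact syntax on your version):

    # /etc/lvm/lvm.conf
    activation {
        # only vg2 is eligible for autoactivation
        auto_activation_volume_list = [ "vg2" ]
    }

    vgchange -aay   # honours the list above: activates only vg2
    vgchange -ay    # ignores the list: activates every VG it sees, vg1 included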

-- 
Peter


* [linux-lvm] auto_activation_volume_list in lvm.conf not honored
@ 2016-11-24 12:21 Stefan Bauer
  2016-11-24 13:38 ` Peter Rajnoha
  0 siblings, 1 reply; 10+ messages in thread
From: Stefan Bauer @ 2016-11-24 12:21 UTC (permalink / raw)
  To: linux-lvm

Hi folks, 

How do we avoid pvscan initializing LVM volume groups on startup (it's for a cluster setup)? auto_activation_volume_list was filled with the remaining VGs we want to set up. Run manually, it does what it should: 

/sbin/lvm pvscan --config 'activation { auto_activation_volume_list = "vg2" }' only activates vg2 but keeps vg1 "untouched".

But on system startup, all VGs are activated. (Ubuntu 14.04.5 LTS)

We also updated the ramdisk and verified that the changes are also in the ramdisk's lvm.conf.

However, vg1 is still activated.

Any help is greatly appreciated.

Stefan

