* [Ocfs2-devel] dlm domain_map is not consistent after a node was fenced
@ 2016-01-11 11:52 Shichangkuo
  2016-01-12  2:21 ` Junxiao Bi
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Shichangkuo @ 2016-01-11 11:52 UTC (permalink / raw)
  To: ocfs2-devel

Hi,

I have three nodes in one cluster. Node1 lost its TCP connections to Node2 and Node3, and was then fenced by restarting.
When LUNA was re-mounted on Node1, I found a strange issue: Node1 thought it was the only node mounted on LUNA. The kernel log is as follows:
Jan  4 16:46:14 cvk12 kernel: [   99.223321] o2dlm: Joining domain 0400EAB0791F4B4C85F3FCB5AAC76A1D ( 5 ) 1 nodes

I don't think Node1 misread the heartbeat in the other nodes' slots.
The most likely reason is that the other nodes responded JOIN_OK_NO_MAP after receiving the dlm_query_join message from Node1.
There are two ways to respond JOIN_OK_NO_MAP in dlm_query_join_handler:
    1) dlm->dlm_state == DLM_CTXT_LEAVING
    2) dlm->dlm_state == DLM_CTXT_NEW &&
                    dlm->joining_node == DLM_LOCK_RES_OWNER_UNKNOWN
But neither of them applies, as those nodes had already mounted.
I have no idea about it. If anyone else has encountered or is familiar with this issue, please let me know.
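
For reference, here is a condensed sketch of those two paths in dlm_query_join_handler() (based on fs/ocfs2/dlm/dlmdomain.c, heavily simplified for discussion; not the verbatim kernel source):

    /* The response starts out as JOIN_OK_NO_MAP; paths that do not
     * assign anything else fall through and send it back as-is. */
    enum dlm_query_join_response_code response = JOIN_OK_NO_MAP;

    spin_lock(&dlm_domain_lock);
    dlm = __dlm_lookup_domain_full(query->domain, query->name_len);
    if (!dlm)
        goto unlock_respond;  /* no such domain on this node at all */

    /* Case 1: a domain marked LEAVING skips the whole block below,
     * so the initial JOIN_OK_NO_MAP is what gets sent back. */
    if (dlm->dlm_state != DLM_CTXT_LEAVING) {
        if (dlm->dlm_state == DLM_CTXT_NEW &&
            dlm->joining_node == DLM_LOCK_RES_OWNER_UNKNOWN) {
            /* Case 2: brand-new context, our own join has not
             * started yet, so the querying node won the race. */
            response = JOIN_OK_NO_MAP;
        } else if (dlm->joining_node != DLM_LOCK_RES_OWNER_UNKNOWN) {
            response = JOIN_DISALLOWED;  /* no parallel joins */
        } else {
            response = JOIN_OK;  /* we are fully part of this domain */
        }
    }
unlock_respond:
    spin_unlock(&dlm_domain_lock);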

Thanks,
Changkuo.

-------------------------------------------------------------------------------------------------------------------------------------
This e-mail and its attachments contain confidential information from H3C, which is
intended only for the person or entity whose address is listed above. Any use of the
information contained herein in any way (including, but not limited to, total or partial
disclosure, reproduction, or dissemination) by persons other than the intended
recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender
by phone or email immediately and delete it!


* [Ocfs2-devel] dlm domain_map is not consistent after a node was fenced
  2016-01-11 11:52 [Ocfs2-devel] dlm domain_map is not consistent after a node was fenced Shichangkuo
@ 2016-01-12  2:21 ` Junxiao Bi
  2016-01-12  9:27 ` Joseph Qi
  2016-01-13  2:58 ` xuejiufei
  2 siblings, 0 replies; 5+ messages in thread
From: Junxiao Bi @ 2016-01-12  2:21 UTC (permalink / raw)
  To: ocfs2-devel

Hi,

On 01/11/2016 07:52 PM, Shichangkuo wrote:
> Hi,
> 
> I have three nodes in one cluster. Node1 lost its TCP connections to Node2 and Node3, and was then fenced by restarting.
> When LUNA was re-mounted on Node1, I found a strange issue: Node1 thought it was the only node mounted on LUNA. The kernel log is as follows:
> Jan  4 16:46:14 cvk12 kernel: [   99.223321] o2dlm: Joining domain 0400EAB0791F4B4C85F3FCB5AAC76A1D ( 5 ) 1 nodes
> 
> I don't think Node1 misread the heartbeat in the other nodes' slots.
So the TCP connections from Node1 to Node2/Node3 were established?

> The most likely reason is that the other nodes responded JOIN_OK_NO_MAP after receiving the dlm_query_join message from Node1.
> There are two ways to respond JOIN_OK_NO_MAP in dlm_query_join_handler:
>     1) dlm->dlm_state == DLM_CTXT_LEAVING
>     2) dlm->dlm_state == DLM_CTXT_NEW &&
>                     dlm->joining_node == DLM_LOCK_RES_OWNER_UNKNOWN
> But neither of them applies, as those nodes had already mounted.
Does this happen every time? If so, and there is no connection issue, you can
use tcpdump to capture packets and see what happened.
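
For example, something like this on the joining node should show what the peers answer (assuming the default o2net port 7777 and interface eth0; adjust both for your setup):

    # capture the o2net/dlm traffic to a file for later inspection
    tcpdump -i eth0 -w o2net.pcap port 7777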

Thanks,
Junxiao.

> I have no idea about it. If anyone else has encountered or is familiar with this issue, please let me know.
> 
> Thanks,
> Changkuo.
> 


* [Ocfs2-devel] dlm domain_map is not consistent after a node was fenced
  2016-01-11 11:52 [Ocfs2-devel] dlm domain_map is not consistent after a node was fenced Shichangkuo
  2016-01-12  2:21 ` Junxiao Bi
@ 2016-01-12  9:27 ` Joseph Qi
  2016-01-13  2:58 ` xuejiufei
  2 siblings, 0 replies; 5+ messages in thread
From: Joseph Qi @ 2016-01-12  9:27 UTC (permalink / raw)
  To: ocfs2-devel

Maybe Node1 mistakenly cleared the slots of Node2 and Node3 when mounting
after the system restarted.
So what is the output of the slot allocation?
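
For example (assuming the OCFS2 volume is /dev/sdd; substitute your actual device):

    # dump the on-disk slot map of the OCFS2 volume
    debugfs.ocfs2 -R slotmap /dev/sdd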

On 2016/1/11 19:52, Shichangkuo wrote:
> Hi,
> 
> I have three nodes in one cluster. Node1 lost its TCP connections to Node2 and Node3, and was then fenced by restarting.
> When LUNA was re-mounted on Node1, I found a strange issue: Node1 thought it was the only node mounted on LUNA. The kernel log is as follows:
> Jan  4 16:46:14 cvk12 kernel: [   99.223321] o2dlm: Joining domain 0400EAB0791F4B4C85F3FCB5AAC76A1D ( 5 ) 1 nodes
> 
> I don't think Node1 misread the heartbeat in the other nodes' slots.
> The most likely reason is that the other nodes responded JOIN_OK_NO_MAP after receiving the dlm_query_join message from Node1.
> There are two ways to respond JOIN_OK_NO_MAP in dlm_query_join_handler:
>     1) dlm->dlm_state == DLM_CTXT_LEAVING
>     2) dlm->dlm_state == DLM_CTXT_NEW &&
>                     dlm->joining_node == DLM_LOCK_RES_OWNER_UNKNOWN
> But neither of them applies, as those nodes had already mounted.
> I have no idea about it. If anyone else has encountered or is familiar with this issue, please let me know.
> 
> Thanks,
> Changkuo.
> 


* [Ocfs2-devel] dlm domain_map is not consistent after a node was fenced
  2016-01-11 11:52 [Ocfs2-devel] dlm domain_map is not consistent after a node was fenced Shichangkuo
  2016-01-12  2:21 ` Junxiao Bi
  2016-01-12  9:27 ` Joseph Qi
@ 2016-01-13  2:58 ` xuejiufei
  2016-01-13  6:07   ` [Ocfs2-devel] Re: " Shichangkuo
  2 siblings, 1 reply; 5+ messages in thread
From: xuejiufei @ 2016-01-13  2:58 UTC (permalink / raw)
  To: ocfs2-devel

Hi Changkuo,

On 2016/1/11 19:52, Shichangkuo wrote:
> Hi,
> 
> I have three nodes in one cluster. Node1 lost its TCP connections to Node2 and Node3, and was then fenced by restarting.
> When LUNA was re-mounted on Node1, I found a strange issue: Node1 thought it was the only node mounted on LUNA. The kernel log is as follows:
> Jan  4 16:46:14 cvk12 kernel: [   99.223321] o2dlm: Joining domain 0400EAB0791F4B4C85F3FCB5AAC76A1D ( 5 ) 1 nodes
> 
> I don't think Node1 misread the heartbeat in the other nodes' slots.
Maybe Node2 and Node3 were busy writing user data and could not write their heartbeats to disk when Node1 remounted.
So Node1 thought Node2 and Node3 were dead, and the dlm domain_map on Node1 contained only itself.
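
If that is what happened, the relevant window is the o2hb dead threshold: a node whose heartbeat writes are not seen for that many iterations is treated as dead. As a rough check (assuming the o2cb init script is in use; the threshold is set via O2CB_HEARTBEAT_THRESHOLD in the o2cb config file, whose path varies by distro):

    # show the configured cluster timeouts, including the heartbeat
    # dead threshold (by default 31 iterations of about 2 seconds)
    service o2cb status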

> The most likely reason is that the other nodes responded JOIN_OK_NO_MAP after receiving the dlm_query_join message from Node1.
> There are two ways to respond JOIN_OK_NO_MAP in dlm_query_join_handler:
>     1) dlm->dlm_state == DLM_CTXT_LEAVING
>     2) dlm->dlm_state == DLM_CTXT_NEW &&
>                     dlm->joining_node == DLM_LOCK_RES_OWNER_UNKNOWN
> But neither of them applies, as those nodes had already mounted.
> I have no idea about it. If anyone else has encountered or is familiar with this issue, please let me know.
> 
> Thanks,
> Changkuo.
> 


* [Ocfs2-devel] Re: dlm domain_map is not consistent after a node was fenced
  2016-01-13  2:58 ` xuejiufei
@ 2016-01-13  6:07   ` Shichangkuo
  0 siblings, 0 replies; 5+ messages in thread
From: Shichangkuo @ 2016-01-13  6:07 UTC (permalink / raw)
  To: ocfs2-devel

Hi Joseph, Junxiao and Jiufei,
	Thanks for your replies.
	I am sorry for my mistake; after checking the syslog on Node2 again, the disk was indeed unavailable on Node2 and Node3 while Node1 was mounting.
	The slotmap also changed, as Node1 did "recovery" on Node2 and Node3:
root@cvknode1:~# debugfs.ocfs2 -R slotmap /dev/sdd
	Slot#   Node#
	    1       2

When the storage link recovered and another LUN was mounted on Node1, the TCP connections to Node2 and Node3 were established, but the dlm map of LUN1 was still inconsistent.
The cluster then enters a split-brain state. Can this issue fix itself automatically once the heartbeat becomes active again?
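
For reference, each node's view of the domain map can also be read through o2dlm's debugfs interface, which should make the inconsistency directly visible (assuming debugfs is mounted at /sys/kernel/debug):

    # this node's dlm state for the domain, including its domain map
    cat /sys/kernel/debug/o2dlm/0400EAB0791F4B4C85F3FCB5AAC76A1D/dlm_state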

-----------------
From: xuejiufei [mailto:xuejiufei at huawei.com]
Sent: 2016-01-13 10:58
To: shichangkuo 09727 (CCPL); ocfs2-devel at oss.oracle.com
Subject: Re: [Ocfs2-devel] dlm domain_map is not consistent after a node was fenced

Hi Changkuo,

On 2016/1/11 19:52, Shichangkuo wrote:
> Hi,
> 
> I have three nodes in one cluster. Node1 lost its TCP connections to Node2 and Node3, and was then fenced by restarting.
> When LUNA was re-mounted on Node1, I found a strange issue: Node1 thought it was the only node mounted on LUNA. The kernel log is as follows:
> Jan  4 16:46:14 cvk12 kernel: [   99.223321] o2dlm: Joining domain 0400EAB0791F4B4C85F3FCB5AAC76A1D ( 5 ) 1 nodes
> 
> I don't think Node1 misread the heartbeat in the other nodes' slots.
Maybe Node2 and Node3 were busy writing user data and could not write their heartbeats to disk when Node1 remounted.
So Node1 thought Node2 and Node3 were dead, and the dlm domain_map on Node1 contained only itself.

> The most likely reason is that the other nodes responded JOIN_OK_NO_MAP after receiving the dlm_query_join message from Node1.
> There are two ways to respond JOIN_OK_NO_MAP in dlm_query_join_handler:
>     1) dlm->dlm_state == DLM_CTXT_LEAVING
>     2) dlm->dlm_state == DLM_CTXT_NEW &&
>                     dlm->joining_node == DLM_LOCK_RES_OWNER_UNKNOWN 
> But neither of them applies, as those nodes had already mounted.
> I have no idea about it. If anyone else has encountered or is familiar with this issue, please let me know.
> 
> Thanks,
> Changkuo.
> 


end of thread [~2016-01-13  6:07 UTC]

Thread overview: 5+ messages
2016-01-11 11:52 [Ocfs2-devel] dlm domain_map is not consistent after a node was fenced Shichangkuo
2016-01-12  2:21 ` Junxiao Bi
2016-01-12  9:27 ` Joseph Qi
2016-01-13  2:58 ` xuejiufei
2016-01-13  6:07   ` [Ocfs2-devel] Re: " Shichangkuo
