* [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down
@ 2016-01-20  3:13 Junxiao Bi
  2016-01-20  3:13 ` [Ocfs2-devel] [PATCH 1/6] ocfs2: o2hb: add negotiate timer Junxiao Bi
                   ` (8 more replies)
  0 siblings, 9 replies; 34+ messages in thread
From: Junxiao Bi @ 2016-01-20  3:13 UTC (permalink / raw)
  To: ocfs2-devel

Hi,

This series of patches fixes the issue that, when storage goes down,
all nodes fence themselves due to a write timeout.
With this patch set, all nodes will keep going until the storage comes
back online, unless one of the following happens, in which case all
nodes will fence themselves as before:
1. an io error is returned
2. the network between nodes goes down
3. a node panics

Junxiao Bi (6):
      ocfs2: o2hb: add negotiate timer
      ocfs2: o2hb: add NEGO_TIMEOUT message
      ocfs2: o2hb: add NEGOTIATE_APPROVE message
      ocfs2: o2hb: add some user/debug log
      ocfs2: o2hb: don't negotiate if last hb fail
      ocfs2: o2hb: fix hb hung time

 fs/ocfs2/cluster/heartbeat.c |  181 ++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 175 insertions(+), 6 deletions(-)

 Thanks,
 Junxiao.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [Ocfs2-devel] [PATCH 1/6] ocfs2: o2hb: add negotiate timer
  2016-01-20  3:13 [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down Junxiao Bi
@ 2016-01-20  3:13 ` Junxiao Bi
  2016-01-21 23:42   ` Andrew Morton
  2016-01-22  0:56   ` Joseph Qi
  2016-01-20  3:13 ` [Ocfs2-devel] [PATCH 2/6] ocfs2: o2hb: add NEGO_TIMEOUT message Junxiao Bi
                   ` (7 subsequent siblings)
  8 siblings, 2 replies; 34+ messages in thread
From: Junxiao Bi @ 2016-01-20  3:13 UTC (permalink / raw)
  To: ocfs2-devel

When storage goes down, all nodes fence themselves due to a write
timeout. The negotiate timer is designed to avoid this: with it, nodes
will wait until the storage is up again.

The negotiate timer works in the following way:

1. The timer expires before the write timeout timer; its timeout is
currently half of the write timeout. It is re-queued along with the
write timeout timer. When it expires, the node sends a NEGO_TIMEOUT
message to the master node (the live node with the lowest node number).
This message does nothing but set a bit in a bitmap on the master node
recording which nodes are negotiating a timeout.

2. When storage is down, all nodes send this message to the master node.
When the master node finds that its bitmap covers all online nodes, it
sends a NEGO_APPROVE message to each node in turn; this message
re-queues the write timeout timer and the negotiate timer.
Any node that does not receive this message, or that hits an error while
handling it, will be fenced.
If storage comes back up at any time, o2hb_thread runs and re-queues all
the timers, so nothing is affected by these two steps.

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
Reviewed-by: Ryan Ding <ryan.ding@oracle.com>
---
 fs/ocfs2/cluster/heartbeat.c |   52 ++++++++++++++++++++++++++++++++++++++----
 1 file changed, 48 insertions(+), 4 deletions(-)

diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c
index a3cc6d2fc896..b601ee95de50 100644
--- a/fs/ocfs2/cluster/heartbeat.c
+++ b/fs/ocfs2/cluster/heartbeat.c
@@ -272,6 +272,10 @@ struct o2hb_region {
 	struct delayed_work	hr_write_timeout_work;
 	unsigned long		hr_last_timeout_start;
 
+	/* negotiate timer, used to negotiate extending hb timeout. */
+	struct delayed_work	hr_nego_timeout_work;
+	unsigned long		hr_nego_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)];
+
 	/* Used during o2hb_check_slot to hold a copy of the block
 	 * being checked because we temporarily have to zero out the
 	 * crc field. */
@@ -320,7 +324,7 @@ static void o2hb_write_timeout(struct work_struct *work)
 	o2quo_disk_timeout();
 }
 
-static void o2hb_arm_write_timeout(struct o2hb_region *reg)
+static void o2hb_arm_timeout(struct o2hb_region *reg)
 {
 	/* Arm writeout only after thread reaches steady state */
 	if (atomic_read(&reg->hr_steady_iterations) != 0)
@@ -338,11 +342,50 @@ static void o2hb_arm_write_timeout(struct o2hb_region *reg)
 	reg->hr_last_timeout_start = jiffies;
 	schedule_delayed_work(&reg->hr_write_timeout_work,
 			      msecs_to_jiffies(O2HB_MAX_WRITE_TIMEOUT_MS));
+
+	cancel_delayed_work(&reg->hr_nego_timeout_work);
+	/* negotiate timeout must be less than write timeout. */
+	schedule_delayed_work(&reg->hr_nego_timeout_work,
+			      msecs_to_jiffies(O2HB_MAX_WRITE_TIMEOUT_MS)/2);
+	memset(reg->hr_nego_node_bitmap, 0, sizeof(reg->hr_nego_node_bitmap));
 }
 
-static void o2hb_disarm_write_timeout(struct o2hb_region *reg)
+static void o2hb_disarm_timeout(struct o2hb_region *reg)
 {
 	cancel_delayed_work_sync(&reg->hr_write_timeout_work);
+	cancel_delayed_work_sync(&reg->hr_nego_timeout_work);
+}
+
+static void o2hb_nego_timeout(struct work_struct *work)
+{
+	struct o2hb_region *reg =
+		container_of(work, struct o2hb_region,
+			     hr_nego_timeout_work.work);
+	unsigned long live_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)];
+	int master_node;
+
+	o2hb_fill_node_map(live_node_bitmap, sizeof(live_node_bitmap));
+	/* lowest node as master node to make negotiate decision. */
+	master_node = find_next_bit(live_node_bitmap, O2NM_MAX_NODES, 0);
+
+	if (master_node == o2nm_this_node()) {
+		set_bit(master_node, reg->hr_nego_node_bitmap);
+		if (memcmp(reg->hr_nego_node_bitmap, live_node_bitmap,
+				sizeof(reg->hr_nego_node_bitmap))) {
+			/* check negotiate bitmap every second to do timeout
+			 * approve decision.
+			 */
+			schedule_delayed_work(&reg->hr_nego_timeout_work,
+				msecs_to_jiffies(1000));
+
+			return;
+		}
+
+		/* approve negotiate timeout request. */
+	} else {
+		/* negotiate timeout with master node. */
+	}
+
 }
 
 static inline void o2hb_bio_wait_init(struct o2hb_bio_wait_ctxt *wc)
@@ -1033,7 +1076,7 @@ static int o2hb_do_disk_heartbeat(struct o2hb_region *reg)
 	/* Skip disarming the timeout if own slot has stale/bad data */
 	if (own_slot_ok) {
 		o2hb_set_quorum_device(reg);
-		o2hb_arm_write_timeout(reg);
+		o2hb_arm_timeout(reg);
 	}
 
 bail:
@@ -1115,7 +1158,7 @@ static int o2hb_thread(void *data)
 		}
 	}
 
-	o2hb_disarm_write_timeout(reg);
+	o2hb_disarm_timeout(reg);
 
 	/* unclean stop is only used in very bad situation */
 	for(i = 0; !reg->hr_unclean_stop && i < reg->hr_blocks; i++)
@@ -1762,6 +1805,7 @@ static ssize_t o2hb_region_dev_store(struct config_item *item,
 	}
 
 	INIT_DELAYED_WORK(&reg->hr_write_timeout_work, o2hb_write_timeout);
+	INIT_DELAYED_WORK(&reg->hr_nego_timeout_work, o2hb_nego_timeout);
 
 	/*
 	 * A node is considered live after it has beat LIVE_THRESHOLD
-- 
1.7.9.5


* [Ocfs2-devel] [PATCH 2/6] ocfs2: o2hb: add NEGO_TIMEOUT message
  2016-01-20  3:13 [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down Junxiao Bi
  2016-01-20  3:13 ` [Ocfs2-devel] [PATCH 1/6] ocfs2: o2hb: add negotiate timer Junxiao Bi
@ 2016-01-20  3:13 ` Junxiao Bi
  2016-01-21 23:47   ` Andrew Morton
  2016-01-25  3:18   ` Eric Ren
  2016-01-20  3:13 ` [Ocfs2-devel] [PATCH 3/6] ocfs2: o2hb: add NEGOTIATE_APPROVE message Junxiao Bi
                   ` (6 subsequent siblings)
  8 siblings, 2 replies; 34+ messages in thread
From: Junxiao Bi @ 2016-01-20  3:13 UTC (permalink / raw)
  To: ocfs2-devel

This message is sent to the master node when a non-master node's
negotiate timer expires. The master node records these nodes in a
bitmap, which is used to decide whether to re-queue the write timeout
timer.

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
Reviewed-by: Ryan Ding <ryan.ding@oracle.com>
---
 fs/ocfs2/cluster/heartbeat.c |   66 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 65 insertions(+), 1 deletion(-)

diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c
index b601ee95de50..ecf8a5e21c38 100644
--- a/fs/ocfs2/cluster/heartbeat.c
+++ b/fs/ocfs2/cluster/heartbeat.c
@@ -280,6 +280,10 @@ struct o2hb_region {
 	 * being checked because we temporarily have to zero out the
 	 * crc field. */
 	struct o2hb_disk_heartbeat_block *hr_tmp_block;
+
+	/* Message key for negotiate timeout message. */
+	unsigned int		hr_key;
+	struct list_head	hr_handler_list;
 };
 
 struct o2hb_bio_wait_ctxt {
@@ -288,6 +292,14 @@ struct o2hb_bio_wait_ctxt {
 	int               wc_error;
 };
 
+enum {
+	O2HB_NEGO_TIMEOUT_MSG = 1,
+};
+
+struct o2hb_nego_msg {
+	u8 node_num;
+};
+
 static void o2hb_write_timeout(struct work_struct *work)
 {
 	int failed, quorum;
@@ -356,6 +368,24 @@ static void o2hb_disarm_timeout(struct o2hb_region *reg)
 	cancel_delayed_work_sync(&reg->hr_nego_timeout_work);
 }
 
+static int o2hb_send_nego_msg(int key, int type, u8 target)
+{
+	struct o2hb_nego_msg msg;
+	int status, ret;
+
+	msg.node_num = o2nm_this_node();
+again:
+	ret = o2net_send_message(type, key, &msg, sizeof(msg),
+			target, &status);
+
+	if (ret == -EAGAIN || ret == -ENOMEM) {
+		msleep(100);
+		goto again;
+	}
+
+	return ret;
+}
+
 static void o2hb_nego_timeout(struct work_struct *work)
 {
 	struct o2hb_region *reg =
@@ -384,8 +414,24 @@ static void o2hb_nego_timeout(struct work_struct *work)
 		/* approve negotiate timeout request. */
 	} else {
 		/* negotiate timeout with master node. */
+		o2hb_send_nego_msg(reg->hr_key, O2HB_NEGO_TIMEOUT_MSG,
+			master_node);
 	}
+}
+
+static int o2hb_nego_timeout_handler(struct o2net_msg *msg, u32 len, void *data,
+				void **ret_data)
+{
+	struct o2hb_region *reg = (struct o2hb_region *)data;
+	struct o2hb_nego_msg *nego_msg;
 
+	nego_msg = (struct o2hb_nego_msg *)msg->buf;
+	if (nego_msg->node_num < O2NM_MAX_NODES)
+		set_bit(nego_msg->node_num, reg->hr_nego_node_bitmap);
+	else
+		mlog(ML_ERROR, "got nego timeout message from bad node.\n");
+
+	return 0;
 }
 
 static inline void o2hb_bio_wait_init(struct o2hb_bio_wait_ctxt *wc)
@@ -1493,6 +1539,7 @@ static void o2hb_region_release(struct config_item *item)
 	list_del(&reg->hr_all_item);
 	spin_unlock(&o2hb_live_lock);
 
+	o2net_unregister_handler_list(&reg->hr_handler_list);
 	kfree(reg);
 }
 
@@ -2039,13 +2086,30 @@ static struct config_item *o2hb_heartbeat_group_make_item(struct config_group *g
 
 	config_item_init_type_name(&reg->hr_item, name, &o2hb_region_type);
 
+	/* this is the same way to generate msg key as dlm, for local heartbeat,
+	 * name is also the same, so make initial crc value different to avoid
+	 * message key conflict.
+	 */
+	reg->hr_key = crc32_le(reg->hr_region_num + O2NM_MAX_REGIONS,
+		name, strlen(name));
+	INIT_LIST_HEAD(&reg->hr_handler_list);
+	ret = o2net_register_handler(O2HB_NEGO_TIMEOUT_MSG, reg->hr_key,
+			sizeof(struct o2hb_nego_msg),
+			o2hb_nego_timeout_handler,
+			reg, NULL, &reg->hr_handler_list);
+	if (ret)
+		goto free;
+
 	ret = o2hb_debug_region_init(reg, o2hb_debug_dir);
 	if (ret) {
 		config_item_put(&reg->hr_item);
-		goto free;
+		goto free_handler;
 	}
 
 	return &reg->hr_item;
+
+free_handler:
+	o2net_unregister_handler_list(&reg->hr_handler_list);
 free:
 	kfree(reg);
 	return ERR_PTR(ret);
-- 
1.7.9.5


* [Ocfs2-devel] [PATCH 3/6] ocfs2: o2hb: add NEGOTIATE_APPROVE message
  2016-01-20  3:13 [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down Junxiao Bi
  2016-01-20  3:13 ` [Ocfs2-devel] [PATCH 1/6] ocfs2: o2hb: add negotiate timer Junxiao Bi
  2016-01-20  3:13 ` [Ocfs2-devel] [PATCH 2/6] ocfs2: o2hb: add NEGO_TIMEOUT message Junxiao Bi
@ 2016-01-20  3:13 ` Junxiao Bi
  2016-01-20  3:13 ` [Ocfs2-devel] [PATCH 4/6] ocfs2: o2hb: add some user/debug log Junxiao Bi
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Junxiao Bi @ 2016-01-20  3:13 UTC (permalink / raw)
  To: ocfs2-devel

This message is used to re-queue the write timeout timer and the
negotiate timer when all nodes have suffered a write hang to storage;
it keeps nodes from fencing themselves when storage is down.

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
Reviewed-by: Ryan Ding <ryan.ding@oracle.com>
---
 fs/ocfs2/cluster/heartbeat.c |   28 +++++++++++++++++++++++++++-
 1 file changed, 27 insertions(+), 1 deletion(-)

diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c
index ecf8a5e21c38..d5ef8dce08da 100644
--- a/fs/ocfs2/cluster/heartbeat.c
+++ b/fs/ocfs2/cluster/heartbeat.c
@@ -294,6 +294,7 @@ struct o2hb_bio_wait_ctxt {
 
 enum {
 	O2HB_NEGO_TIMEOUT_MSG = 1,
+	O2HB_NEGO_APPROVE_MSG = 2,
 };
 
 struct o2hb_nego_msg {
@@ -392,7 +393,7 @@ static void o2hb_nego_timeout(struct work_struct *work)
 		container_of(work, struct o2hb_region,
 			     hr_nego_timeout_work.work);
 	unsigned long live_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)];
-	int master_node;
+	int master_node, i;
 
 	o2hb_fill_node_map(live_node_bitmap, sizeof(live_node_bitmap));
 	/* lowest node as master node to make negotiate decision. */
@@ -412,6 +413,17 @@ static void o2hb_nego_timeout(struct work_struct *work)
 		}
 
 		/* approve negotiate timeout request. */
+		o2hb_arm_timeout(reg);
+
+		i = -1;
+		while ((i = find_next_bit(live_node_bitmap,
+				O2NM_MAX_NODES, i + 1)) < O2NM_MAX_NODES) {
+			if (i == master_node)
+				continue;
+
+			o2hb_send_nego_msg(reg->hr_key,
+					O2HB_NEGO_APPROVE_MSG, i);
+		}
 	} else {
 		/* negotiate timeout with master node. */
 		o2hb_send_nego_msg(reg->hr_key, O2HB_NEGO_TIMEOUT_MSG,
@@ -434,6 +446,13 @@ static int o2hb_nego_timeout_handler(struct o2net_msg *msg, u32 len, void *data,
 	return 0;
 }
 
+static int o2hb_nego_approve_handler(struct o2net_msg *msg, u32 len, void *data,
+				void **ret_data)
+{
+	o2hb_arm_timeout((struct o2hb_region *)data);
+	return 0;
+}
+
 static inline void o2hb_bio_wait_init(struct o2hb_bio_wait_ctxt *wc)
 {
 	atomic_set(&wc->wc_num_reqs, 1);
@@ -2100,6 +2119,13 @@ static struct config_item *o2hb_heartbeat_group_make_item(struct config_group *g
 	if (ret)
 		goto free;
 
+	ret = o2net_register_handler(O2HB_NEGO_APPROVE_MSG, reg->hr_key,
+			sizeof(struct o2hb_nego_msg),
+			o2hb_nego_approve_handler,
+			reg, NULL, &reg->hr_handler_list);
+	if (ret)
+		goto free_handler;
+
 	ret = o2hb_debug_region_init(reg, o2hb_debug_dir);
 	if (ret) {
 		config_item_put(&reg->hr_item);
-- 
1.7.9.5


* [Ocfs2-devel] [PATCH 4/6] ocfs2: o2hb: add some user/debug log
  2016-01-20  3:13 [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down Junxiao Bi
                   ` (2 preceding siblings ...)
  2016-01-20  3:13 ` [Ocfs2-devel] [PATCH 3/6] ocfs2: o2hb: add NEGOTIATE_APPROVE message Junxiao Bi
@ 2016-01-20  3:13 ` Junxiao Bi
  2016-01-25  3:28   ` Eric Ren
  2016-01-20  3:13 ` [Ocfs2-devel] [PATCH 5/6] ocfs2: o2hb: don't negotiate if last hb fail Junxiao Bi
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 34+ messages in thread
From: Junxiao Bi @ 2016-01-20  3:13 UTC (permalink / raw)
  To: ocfs2-devel

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
Reviewed-by: Ryan Ding <ryan.ding@oracle.com>
---
 fs/ocfs2/cluster/heartbeat.c |   39 ++++++++++++++++++++++++++++++++-------
 1 file changed, 32 insertions(+), 7 deletions(-)

diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c
index d5ef8dce08da..6c57fd21e597 100644
--- a/fs/ocfs2/cluster/heartbeat.c
+++ b/fs/ocfs2/cluster/heartbeat.c
@@ -292,6 +292,8 @@ struct o2hb_bio_wait_ctxt {
 	int               wc_error;
 };
 
+#define O2HB_NEGO_TIMEOUT_MS (O2HB_MAX_WRITE_TIMEOUT_MS/2)
+
 enum {
 	O2HB_NEGO_TIMEOUT_MSG = 1,
 	O2HB_NEGO_APPROVE_MSG = 2,
@@ -359,7 +361,7 @@ static void o2hb_arm_timeout(struct o2hb_region *reg)
 	cancel_delayed_work(&reg->hr_nego_timeout_work);
 	/* negotiate timeout must be less than write timeout. */
 	schedule_delayed_work(&reg->hr_nego_timeout_work,
-			      msecs_to_jiffies(O2HB_MAX_WRITE_TIMEOUT_MS)/2);
+			      msecs_to_jiffies(O2HB_NEGO_TIMEOUT_MS));
 	memset(reg->hr_nego_node_bitmap, 0, sizeof(reg->hr_nego_node_bitmap));
 }
 
@@ -393,14 +395,19 @@ static void o2hb_nego_timeout(struct work_struct *work)
 		container_of(work, struct o2hb_region,
 			     hr_nego_timeout_work.work);
 	unsigned long live_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)];
-	int master_node, i;
+	int master_node, i, ret;
 
 	o2hb_fill_node_map(live_node_bitmap, sizeof(live_node_bitmap));
 	/* lowest node as master node to make negotiate decision. */
 	master_node = find_next_bit(live_node_bitmap, O2NM_MAX_NODES, 0);
 
 	if (master_node == o2nm_this_node()) {
-		set_bit(master_node, reg->hr_nego_node_bitmap);
+		if (!test_bit(master_node, reg->hr_nego_node_bitmap)) {
+			printk(KERN_NOTICE "o2hb: node %d hb write hung for %ds on region %s (%s).\n",
+				o2nm_this_node(), O2HB_NEGO_TIMEOUT_MS/1000,
+				config_item_name(&reg->hr_item), reg->hr_dev_name);
+			set_bit(master_node, reg->hr_nego_node_bitmap);
+		}
 		if (memcmp(reg->hr_nego_node_bitmap, live_node_bitmap,
 				sizeof(reg->hr_nego_node_bitmap))) {
 			/* check negotiate bitmap every second to do timeout
@@ -412,6 +419,8 @@ static void o2hb_nego_timeout(struct work_struct *work)
 			return;
 		}
 
+		printk(KERN_NOTICE "o2hb: all nodes hb write hung, maybe region %s (%s) is down.\n",
+			config_item_name(&reg->hr_item), reg->hr_dev_name);
 		/* approve negotiate timeout request. */
 		o2hb_arm_timeout(reg);
 
@@ -421,13 +430,23 @@ static void o2hb_nego_timeout(struct work_struct *work)
 			if (i == master_node)
 				continue;
 
-			o2hb_send_nego_msg(reg->hr_key,
+			mlog(ML_HEARTBEAT, "send NEGO_APPROVE msg to node %d\n", i);
+			ret = o2hb_send_nego_msg(reg->hr_key,
 					O2HB_NEGO_APPROVE_MSG, i);
+			if (ret)
+				mlog(ML_ERROR, "send NEGO_APPROVE msg to node %d fail %d\n",
+					i, ret);
 		}
 	} else {
 		/* negotiate timeout with master node. */
-		o2hb_send_nego_msg(reg->hr_key, O2HB_NEGO_TIMEOUT_MSG,
-			master_node);
+		printk(KERN_NOTICE "o2hb: node %d hb write hung for %ds on region %s (%s), negotiate timeout with node %d.\n",
+			o2nm_this_node(), O2HB_NEGO_TIMEOUT_MS/1000, config_item_name(&reg->hr_item),
+			reg->hr_dev_name, master_node);
+		ret = o2hb_send_nego_msg(reg->hr_key, O2HB_NEGO_TIMEOUT_MSG,
+				master_node);
+		if (ret)
+			mlog(ML_ERROR, "send NEGO_TIMEOUT msg to node %d fail %d\n",
+				master_node, ret);
 	}
 }
 
@@ -438,6 +457,8 @@ static int o2hb_nego_timeout_handler(struct o2net_msg *msg, u32 len, void *data,
 	struct o2hb_nego_msg *nego_msg;
 
 	nego_msg = (struct o2hb_nego_msg *)msg->buf;
+	printk(KERN_NOTICE "o2hb: receive negotiate timeout message from node %d on region %s (%s).\n",
+		nego_msg->node_num, config_item_name(&reg->hr_item), reg->hr_dev_name);
 	if (nego_msg->node_num < O2NM_MAX_NODES)
 		set_bit(nego_msg->node_num, reg->hr_nego_node_bitmap);
 	else
@@ -449,7 +470,11 @@ static int o2hb_nego_timeout_handler(struct o2net_msg *msg, u32 len, void *data,
 static int o2hb_nego_approve_handler(struct o2net_msg *msg, u32 len, void *data,
 				void **ret_data)
 {
-	o2hb_arm_timeout((struct o2hb_region *)data);
+	struct o2hb_region *reg = (struct o2hb_region *)data;
+
+	printk(KERN_NOTICE "o2hb: negotiate timeout approved by master node on region %s (%s).\n",
+		config_item_name(&reg->hr_item), reg->hr_dev_name);
+	o2hb_arm_timeout(reg);
 	return 0;
 }
 
-- 
1.7.9.5


* [Ocfs2-devel] [PATCH 5/6] ocfs2: o2hb: don't negotiate if last hb fail
  2016-01-20  3:13 [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down Junxiao Bi
                   ` (3 preceding siblings ...)
  2016-01-20  3:13 ` [Ocfs2-devel] [PATCH 4/6] ocfs2: o2hb: add some user/debug log Junxiao Bi
@ 2016-01-20  3:13 ` Junxiao Bi
  2016-01-20  3:13 ` [Ocfs2-devel] [PATCH 6/6] ocfs2: o2hb: fix hb hung time Junxiao Bi
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Junxiao Bi @ 2016-01-20  3:13 UTC (permalink / raw)
  To: ocfs2-devel

Sometimes an io error is returned when storage has been down for a
while. For an iscsi device, for example, the storage is taken offline
when the session times out, and this makes all io return -EIO. In this
case, nodes shouldn't negotiate a timeout but should fence themselves.
So let nodes fence themselves when o2hb_do_disk_heartbeat returns an
error; this is the same behavior as o2hb without the negotiate timer.

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
Reviewed-by: Ryan Ding <ryan.ding@oracle.com>
---
 fs/ocfs2/cluster/heartbeat.c |   10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c
index 6c57fd21e597..cb931381f474 100644
--- a/fs/ocfs2/cluster/heartbeat.c
+++ b/fs/ocfs2/cluster/heartbeat.c
@@ -284,6 +284,9 @@ struct o2hb_region {
 	/* Message key for negotiate timeout message. */
 	unsigned int		hr_key;
 	struct list_head	hr_handler_list;
+
+	/* last hb status, 0 for success, other value for error. */
+	int			hr_last_hb_status;
 };
 
 struct o2hb_bio_wait_ctxt {
@@ -397,6 +400,12 @@ static void o2hb_nego_timeout(struct work_struct *work)
 	unsigned long live_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)];
 	int master_node, i, ret;
 
+	/* don't negotiate timeout if last hb failed since it is very
+	 * possible io failed. Should let write timeout fence self.
+	 */
+	if (reg->hr_last_hb_status)
+		return;
+
 	o2hb_fill_node_map(live_node_bitmap, sizeof(live_node_bitmap));
 	/* lowest node as master node to make negotiate decision. */
 	master_node = find_next_bit(live_node_bitmap, O2NM_MAX_NODES, 0);
@@ -1230,6 +1239,7 @@ static int o2hb_thread(void *data)
 		before_hb = ktime_get_real();
 
 		ret = o2hb_do_disk_heartbeat(reg);
+		reg->hr_last_hb_status = ret;
 
 		after_hb = ktime_get_real();
 
-- 
1.7.9.5


* [Ocfs2-devel] [PATCH 6/6] ocfs2: o2hb: fix hb hung time
  2016-01-20  3:13 [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down Junxiao Bi
                   ` (4 preceding siblings ...)
  2016-01-20  3:13 ` [Ocfs2-devel] [PATCH 5/6] ocfs2: o2hb: don't negotiate if last hb fail Junxiao Bi
@ 2016-01-20  3:13 ` Junxiao Bi
  2016-01-20  6:00 ` [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down Gang He
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 34+ messages in thread
From: Junxiao Bi @ 2016-01-20  3:13 UTC (permalink / raw)
  To: ocfs2-devel

hr_last_timeout_start should be set to the last time at which the
heartbeat was still OK. When a heartbeat write times out, the hung time
will be (jiffies - hr_last_timeout_start).

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
Reviewed-by: Ryan Ding <ryan.ding@oracle.com>
---
 fs/ocfs2/cluster/heartbeat.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c
index cb931381f474..a3ce5a734b7b 100644
--- a/fs/ocfs2/cluster/heartbeat.c
+++ b/fs/ocfs2/cluster/heartbeat.c
@@ -357,7 +357,6 @@ static void o2hb_arm_timeout(struct o2hb_region *reg)
 		spin_unlock(&o2hb_live_lock);
 	}
 	cancel_delayed_work(&reg->hr_write_timeout_work);
-	reg->hr_last_timeout_start = jiffies;
 	schedule_delayed_work(&reg->hr_write_timeout_work,
 			      msecs_to_jiffies(O2HB_MAX_WRITE_TIMEOUT_MS));
 
@@ -1176,6 +1175,7 @@ static int o2hb_do_disk_heartbeat(struct o2hb_region *reg)
 	if (own_slot_ok) {
 		o2hb_set_quorum_device(reg);
 		o2hb_arm_timeout(reg);
+		reg->hr_last_timeout_start = jiffies;
 	}
 
 bail:
-- 
1.7.9.5


* [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down
  2016-01-20  3:13 [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down Junxiao Bi
                   ` (5 preceding siblings ...)
  2016-01-20  3:13 ` [Ocfs2-devel] [PATCH 6/6] ocfs2: o2hb: fix hb hung time Junxiao Bi
@ 2016-01-20  6:00 ` Gang He
  2016-01-20  8:09   ` Junxiao Bi
  2016-01-20  9:18 ` Joseph Qi
  2016-01-21  8:34 ` rwxybh
  8 siblings, 1 reply; 34+ messages in thread
From: Gang He @ 2016-01-20  6:00 UTC (permalink / raw)
  To: ocfs2-devel

Hi Junxiao,

Thanks for your fix.
Just one quick question: this fix only affects the OCFS2 O2CB case, right?
If the user selects pacemaker as the cluster stack, will the OCFS2 file system encounter the same problem?

Thanks
Gang 


>>> 
> Hi,
> 
> This serial of patches is to fix the issue that when storage down,
> all nodes will fence self due to write timeout.
> With this patch set, all nodes will keep going until storage back
> online, except if the following issue happens, then all nodes will
> do as before to fence self.
> 1. io error got
> 2. network between nodes down
> 3. nodes panic
> 
> Junxiao Bi (6):
>       ocfs2: o2hb: add negotiate timer
>       ocfs2: o2hb: add NEGO_TIMEOUT message
>       ocfs2: o2hb: add NEGOTIATE_APPROVE message
>       ocfs2: o2hb: add some user/debug log
>       ocfs2: o2hb: don't negotiate if last hb fail
>       ocfs2: o2hb: fix hb hung time
> 
>  fs/ocfs2/cluster/heartbeat.c |  181 
> ++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 175 insertions(+), 6 deletions(-)
> 
>  Thanks,
>  Junxiao.
> 
> _______________________________________________
> Ocfs2-devel mailing list
> Ocfs2-devel at oss.oracle.com 
> https://oss.oracle.com/mailman/listinfo/ocfs2-devel


* [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down
  2016-01-20  6:00 ` [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down Gang He
@ 2016-01-20  8:09   ` Junxiao Bi
  0 siblings, 0 replies; 34+ messages in thread
From: Junxiao Bi @ 2016-01-20  8:09 UTC (permalink / raw)
  To: ocfs2-devel

Hi Gang,

On 01/20/2016 02:00 PM, Gang He wrote:
> Hi Junxiao,
> 
> Thank for your fix.
> Just one quick question, this fix only effects OCFS2 O2CB case, right?
Right.
> If the user selects pacemaker as cluster stack? OCFS2 file system will encounter the same problem?
Not sure about this; I have no knowledge about pacemaker. You can run a
quick test on such a setup.

Thanks,
Junxiao.
> 
> Thanks
> Gang 
> 
> 
>>>>
>> Hi,
>>
>> This serial of patches is to fix the issue that when storage down,
>> all nodes will fence self due to write timeout.
>> With this patch set, all nodes will keep going until storage back
>> online, except if the following issue happens, then all nodes will
>> do as before to fence self.
>> 1. io error got
>> 2. network between nodes down
>> 3. nodes panic
>>
>> Junxiao Bi (6):
>>       ocfs2: o2hb: add negotiate timer
>>       ocfs2: o2hb: add NEGO_TIMEOUT message
>>       ocfs2: o2hb: add NEGOTIATE_APPROVE message
>>       ocfs2: o2hb: add some user/debug log
>>       ocfs2: o2hb: don't negotiate if last hb fail
>>       ocfs2: o2hb: fix hb hung time
>>
>>  fs/ocfs2/cluster/heartbeat.c |  181 
>> ++++++++++++++++++++++++++++++++++++++++--
>>  1 file changed, 175 insertions(+), 6 deletions(-)
>>
>>  Thanks,
>>  Junxiao.
>>
>> _______________________________________________
>> Ocfs2-devel mailing list
>> Ocfs2-devel at oss.oracle.com 
>> https://oss.oracle.com/mailman/listinfo/ocfs2-devel
> 


* [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down
  2016-01-20  3:13 [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down Junxiao Bi
                   ` (6 preceding siblings ...)
  2016-01-20  6:00 ` [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down Gang He
@ 2016-01-20  9:18 ` Joseph Qi
  2016-01-20 13:27   ` Junxiao Bi
  2016-01-21  8:34 ` rwxybh
  8 siblings, 1 reply; 34+ messages in thread
From: Joseph Qi @ 2016-01-20  9:18 UTC (permalink / raw)
  To: ocfs2-devel

Hi Junxiao,
Thanks for the patch set.
In the case where only one node's storage link is down, if that node
doesn't fence itself, other nodes will still check and mark this node
dead, which will cause cluster membership inconsistency.
In your patch set, I cannot see any logic to handle this. Am I missing
something?

On 2016/1/20 11:13, Junxiao Bi wrote:
> Hi,
> 
> This serial of patches is to fix the issue that when storage down,
> all nodes will fence self due to write timeout.
> With this patch set, all nodes will keep going until storage back
> online, except if the following issue happens, then all nodes will
> do as before to fence self.
> 1. io error got
> 2. network between nodes down
> 3. nodes panic
> 
> Junxiao Bi (6):
>       ocfs2: o2hb: add negotiate timer
>       ocfs2: o2hb: add NEGO_TIMEOUT message
>       ocfs2: o2hb: add NEGOTIATE_APPROVE message
>       ocfs2: o2hb: add some user/debug log
>       ocfs2: o2hb: don't negotiate if last hb fail
>       ocfs2: o2hb: fix hb hung time
> 
>  fs/ocfs2/cluster/heartbeat.c |  181 ++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 175 insertions(+), 6 deletions(-)
> 
>  Thanks,
>  Junxiao.
> 
> _______________________________________________
> Ocfs2-devel mailing list
> Ocfs2-devel at oss.oracle.com
> https://oss.oracle.com/mailman/listinfo/ocfs2-devel
> 
> 


* [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down
  2016-01-20  9:18 ` Joseph Qi
@ 2016-01-20 13:27   ` Junxiao Bi
  2016-01-21  0:46     ` Joseph Qi
  0 siblings, 1 reply; 34+ messages in thread
From: Junxiao Bi @ 2016-01-20 13:27 UTC (permalink / raw)
  To: ocfs2-devel

Hi Joseph,

> On Jan 20, 2016, at 5:18 PM, Joseph Qi <joseph.qi@huawei.com> wrote:
> 
> Hi Junxiao,
> Thanks for the patch set.
> In case only one node storage link down, if this node doesn't fence
> self, other nodes will still check and mark this node dead, which will
> cause cluster membership inconsistency.
> In your patch set, I cannot see any logic to handle this. Am I missing
> something?
No, there is no logic for this. But why didn't the node fence itself when storage was down? What would keep a softirq timer from running, another bug?

Thanks,
Junxiao.
> 
> On 2016/1/20 11:13, Junxiao Bi wrote:
>> Hi,
>> 
>> This serial of patches is to fix the issue that when storage down,
>> all nodes will fence self due to write timeout.
>> With this patch set, all nodes will keep going until storage back
>> online, except if the following issue happens, then all nodes will
>> do as before to fence self.
>> 1. io error got
>> 2. network between nodes down
>> 3. nodes panic
>> 
>> Junxiao Bi (6):
>>      ocfs2: o2hb: add negotiate timer
>>      ocfs2: o2hb: add NEGO_TIMEOUT message
>>      ocfs2: o2hb: add NEGOTIATE_APPROVE message
>>      ocfs2: o2hb: add some user/debug log
>>      ocfs2: o2hb: don't negotiate if last hb fail
>>      ocfs2: o2hb: fix hb hung time
>> 
>> fs/ocfs2/cluster/heartbeat.c |  181 ++++++++++++++++++++++++++++++++++++++++--
>> 1 file changed, 175 insertions(+), 6 deletions(-)
>> 
>> Thanks,
>> Junxiao.
>> 
>> _______________________________________________
>> Ocfs2-devel mailing list
>> Ocfs2-devel at oss.oracle.com
>> https://oss.oracle.com/mailman/listinfo/ocfs2-devel
>> 
>> 
> 
> 


* [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down
  2016-01-20 13:27   ` Junxiao Bi
@ 2016-01-21  0:46     ` Joseph Qi
  2016-01-21  1:48       ` Junxiao Bi
  0 siblings, 1 reply; 34+ messages in thread
From: Joseph Qi @ 2016-01-21  0:46 UTC (permalink / raw)
  To: ocfs2-devel

Hi Junxiao,
So you mean the negotiation you added only happens if all nodes'
storage links are down?

Thanks,
Joseph

On 2016/1/20 21:27, Junxiao Bi wrote:
> Hi Joseph,
> 
>> On Jan 20, 2016, at 5:18 PM, Joseph Qi <joseph.qi@huawei.com> wrote:
>>
>> Hi Junxiao,
>> Thanks for the patch set.
>> In case only one node storage link down, if this node doesn't fence
>> self, other nodes will still check and mark this node dead, which will
>> cause cluster membership inconsistency.
>> In your patch set, I cannot see any logic to handle this. Am I missing
>> something?
> No, there is no logic for this. But why didn't node fence self when storage down? What make a softirq timer can't be run, another bug?
> 
> Thanks,
> Junxiao.
>>
>> On 2016/1/20 11:13, Junxiao Bi wrote:
>>> Hi,
>>>
>>> This series of patches fixes the issue that when storage is down,
>>> all nodes fence themselves due to write timeout.
>>> With this patch set, all nodes will keep going until storage comes back
>>> online, except that if any of the following happens, all nodes will
>>> fence themselves as before:
>>> 1. an I/O error is encountered
>>> 2. the network between the nodes goes down
>>> 3. a node panics
>>>
>>> Junxiao Bi (6):
>>>      ocfs2: o2hb: add negotiate timer
>>>      ocfs2: o2hb: add NEGO_TIMEOUT message
>>>      ocfs2: o2hb: add NEGOTIATE_APPROVE message
>>>      ocfs2: o2hb: add some user/debug log
>>>      ocfs2: o2hb: don't negotiate if last hb fail
>>>      ocfs2: o2hb: fix hb hung time
>>>
>>> fs/ocfs2/cluster/heartbeat.c |  181 ++++++++++++++++++++++++++++++++++++++++--
>>> 1 file changed, 175 insertions(+), 6 deletions(-)
>>>
>>> Thanks,
>>> Junxiao.
>>>
>>> _______________________________________________
>>> Ocfs2-devel mailing list
>>> Ocfs2-devel at oss.oracle.com
>>> https://oss.oracle.com/mailman/listinfo/ocfs2-devel
>>>
>>>
>>
>>
> 
> 
> .
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down
  2016-01-21  0:46     ` Joseph Qi
@ 2016-01-21  1:48       ` Junxiao Bi
  2016-01-22  4:25         ` Joseph Qi
  0 siblings, 1 reply; 34+ messages in thread
From: Junxiao Bi @ 2016-01-21  1:48 UTC (permalink / raw)
  To: ocfs2-devel

On 01/21/2016 08:46 AM, Joseph Qi wrote:
> Hi Junxiao,
> So you mean the negotiation you added only happens if all nodes' storage
> links are down?
Negotiation starts when one node finds its storage link down, but it
succeeds only when all nodes' storage links are down; otherwise the node
keeps the same behavior as before.

Thanks,
Junxiao.
> 
> Thanks,
> Joseph
> 
> On 2016/1/20 21:27, Junxiao Bi wrote:
>> Hi Joseph,
>>
>>> On 2016/1/20, at 5:18 PM, Joseph Qi <joseph.qi@huawei.com> wrote:
>>>
>>> Hi Junxiao,
>>> Thanks for the patch set.
>>> In case only one node's storage link is down, if this node doesn't fence
>>> itself, other nodes will still check and mark this node dead, which will
>>> cause cluster membership inconsistency.
>>> In your patch set, I cannot see any logic to handle this. Am I missing
>>> something?
>> No, there is no logic for this. But why didn't the node fence itself when storage was down? What made a softirq timer unable to run, another bug?
>>
>> Thanks,
>> Junxiao.
>>>
>>> On 2016/1/20 11:13, Junxiao Bi wrote:
>>>> Hi,
>>>>
>>>> This series of patches fixes the issue that when storage is down,
>>>> all nodes fence themselves due to write timeout.
>>>> With this patch set, all nodes will keep going until storage comes back
>>>> online, except that if any of the following happens, all nodes will
>>>> fence themselves as before:
>>>> 1. an I/O error is encountered
>>>> 2. the network between the nodes goes down
>>>> 3. a node panics
>>>>
>>>> Junxiao Bi (6):
>>>>      ocfs2: o2hb: add negotiate timer
>>>>      ocfs2: o2hb: add NEGO_TIMEOUT message
>>>>      ocfs2: o2hb: add NEGOTIATE_APPROVE message
>>>>      ocfs2: o2hb: add some user/debug log
>>>>      ocfs2: o2hb: don't negotiate if last hb fail
>>>>      ocfs2: o2hb: fix hb hung time
>>>>
>>>> fs/ocfs2/cluster/heartbeat.c |  181 ++++++++++++++++++++++++++++++++++++++++--
>>>> 1 file changed, 175 insertions(+), 6 deletions(-)
>>>>
>>>> Thanks,
>>>> Junxiao.
>>>>
>>>> _______________________________________________
>>>> Ocfs2-devel mailing list
>>>> Ocfs2-devel at oss.oracle.com
>>>> https://oss.oracle.com/mailman/listinfo/ocfs2-devel
>>>>
>>>>
>>>
>>>
>>
>>
>> .
>>
> 
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down
  2016-01-20  3:13 [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down Junxiao Bi
                   ` (7 preceding siblings ...)
  2016-01-20  9:18 ` Joseph Qi
@ 2016-01-21  8:34 ` rwxybh
  2016-01-21  8:41   ` Junxiao Bi
  8 siblings, 1 reply; 34+ messages in thread
From: rwxybh @ 2016-01-21  8:34 UTC (permalink / raw)
  To: ocfs2-devel

Hi, junxiao!


We can't find the expected fencing log after a node fences itself.
We know there is a log message like the following in the source code:

printk(KERN_ERR "*** ocfs2 is very sorry to be fencing this "
      "system by restarting ***\n");

But we NEVER found this message in /var/log/messages or the last "dmesg".

Do you mean we can find this message in the local fs log after applying this patch set?

Or is there any way to find this output (without netconsole)? Thanks.



rwxybh
 
From: Junxiao Bi
Date: 2016-01-20 11:13
To: ocfs2-devel
CC: mfasheh
Subject: [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down
Hi,
 
This series of patches fixes the issue that when storage is down,
all nodes fence themselves due to write timeout.
With this patch set, all nodes will keep going until storage comes back
online, except that if any of the following happens, all nodes will
fence themselves as before:
1. an I/O error is encountered
2. the network between the nodes goes down
3. a node panics
 
Junxiao Bi (6):
      ocfs2: o2hb: add negotiate timer
      ocfs2: o2hb: add NEGO_TIMEOUT message
      ocfs2: o2hb: add NEGOTIATE_APPROVE message
      ocfs2: o2hb: add some user/debug log
      ocfs2: o2hb: don't negotiate if last hb fail
      ocfs2: o2hb: fix hb hung time
 
fs/ocfs2/cluster/heartbeat.c |  181 ++++++++++++++++++++++++++++++++++++++++--
1 file changed, 175 insertions(+), 6 deletions(-)
 
Thanks,
Junxiao.
 
_______________________________________________
Ocfs2-devel mailing list
Ocfs2-devel at oss.oracle.com
https://oss.oracle.com/mailman/listinfo/ocfs2-devel

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down
  2016-01-21  8:34 ` rwxybh
@ 2016-01-21  8:41   ` Junxiao Bi
  0 siblings, 0 replies; 34+ messages in thread
From: Junxiao Bi @ 2016-01-21  8:41 UTC (permalink / raw)
  To: ocfs2-devel

On 01/21/2016 04:34 PM, rwxybh wrote:
> Hi, junxiao!
> 
> 
> We can't find the expected fencing log after a node fences itself.
> We know there is a log message like the following in the source code:
> 
> printk(KERN_ERR "*** ocfs2 is very sorry to be fencing this "
>       "system by restarting ***\n");
> 
> But we NEVER found this message in /var/log/messages or the last "dmesg".
> 
> Do you mean we can find this message in the local fs log after applying this
> patch set?
No, this patch set is not targeted at that. It is meant to avoid nodes
fencing themselves when storage is down.
To get that log, I am afraid you need to configure a console, since a
panic follows that printk.

Thanks,
Junxiao.
> 
> Or is there any way to find this output (without netconsole)? Thanks.
> 
> ------------------------------------------------------------------------
> rwxybh
> 
>      
>     *From:* Junxiao Bi <mailto:junxiao.bi@oracle.com>
>     *Date:* 2016-01-20 11:13
>     *To:* ocfs2-devel <mailto:ocfs2-devel@oss.oracle.com>
>     *CC:* mfasheh <mailto:mfasheh@suse.com>
>     *Subject:* [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down
>     Hi,
>      
>     This series of patches fixes the issue that when storage is down,
>     all nodes fence themselves due to write timeout.
>     With this patch set, all nodes will keep going until storage comes back
>     online, except that if any of the following happens, all nodes will
>     fence themselves as before:
>     1. an I/O error is encountered
>     2. the network between the nodes goes down
>     3. a node panics
>      
>     Junxiao Bi (6):
>           ocfs2: o2hb: add negotiate timer
>           ocfs2: o2hb: add NEGO_TIMEOUT message
>           ocfs2: o2hb: add NEGOTIATE_APPROVE message
>           ocfs2: o2hb: add some user/debug log
>           ocfs2: o2hb: don't negotiate if last hb fail
>           ocfs2: o2hb: fix hb hung time
>      
>     fs/ocfs2/cluster/heartbeat.c |  181
>     ++++++++++++++++++++++++++++++++++++++++--
>     1 file changed, 175 insertions(+), 6 deletions(-)
>      
>     Thanks,
>     Junxiao.
>      
>     _______________________________________________
>     Ocfs2-devel mailing list
>     Ocfs2-devel at oss.oracle.com
>     https://oss.oracle.com/mailman/listinfo/ocfs2-devel
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [Ocfs2-devel] [PATCH 1/6] ocfs2: o2hb: add negotiate timer
  2016-01-20  3:13 ` [Ocfs2-devel] [PATCH 1/6] ocfs2: o2hb: add negotiate timer Junxiao Bi
@ 2016-01-21 23:42   ` Andrew Morton
  2016-01-22  3:23     ` Junxiao Bi
  2016-01-22  0:56   ` Joseph Qi
  1 sibling, 1 reply; 34+ messages in thread
From: Andrew Morton @ 2016-01-21 23:42 UTC (permalink / raw)
  To: ocfs2-devel

On Wed, 20 Jan 2016 11:13:34 +0800 Junxiao Bi <junxiao.bi@oracle.com> wrote:

> When storage is down, all nodes fence themselves due to write timeout.
> The negotiate timer is designed to avoid this; with it, a node will
> wait until storage is up again.
> 
> The negotiate timer works in the following way:
> 
> 1. The timer expires before the write timeout timer; its timeout is half
> of the write timeout. It is re-queued along with the write timeout timer.
> If it expires, it sends a NEGO_TIMEOUT message to the master node (the node
> with the lowest node number). This message does nothing but mark a bit in a
> bitmap on the master node recording which nodes are negotiating the timeout.
> 
> 2. If storage is down, nodes will send this message to the master node; when
> the master node finds its bitmap covers all online nodes, it sends a
> NEGOTIATE_APPROVE message to all nodes one by one, and this message re-queues
> the write timeout timer and the negotiate timer.
> Any node that doesn't receive this message, or meets some issue when
> handling it, will be fenced.
> If storage comes up at any time, o2hb_thread will run and re-queue all the
> timers, so nothing will be affected by these two steps.
> 
> ...
>
> +static void o2hb_nego_timeout(struct work_struct *work)
> +{
> +	struct o2hb_region *reg =
> +		container_of(work, struct o2hb_region,
> +			     hr_nego_timeout_work.work);

It's better to just do

	struct o2hb_region *reg;

	reg = container_of(work, struct o2hb_region, hr_nego_timeout_work.work);

and avoid the weird 80-column tricks.

> +	unsigned long live_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)];

the bitmap.h interfaces might be nicer here.  Perhaps.  A little bit.

> +	int master_node;
> +
> +	o2hb_fill_node_map(live_node_bitmap, sizeof(live_node_bitmap));
> +	/* lowest node as master node to make negotiate decision. */
> +	master_node = find_next_bit(live_node_bitmap, O2NM_MAX_NODES, 0);
> +
> +	if (master_node == o2nm_this_node()) {
> +		set_bit(master_node, reg->hr_nego_node_bitmap);
> +		if (memcmp(reg->hr_nego_node_bitmap, live_node_bitmap,
> +				sizeof(reg->hr_nego_node_bitmap))) {
> +			/* check negotiate bitmap every second to do timeout
> +			 * approve decision.
> +			 */
> +			schedule_delayed_work(&reg->hr_nego_timeout_work,
> +				msecs_to_jiffies(1000));

One second is long enough to unmount the fs (and to run `rmmod
ocfs2'!).  Is there anything preventing the work from triggering in
these situations?

> +
> +			return;
> +		}
> +
> +		/* approve negotiate timeout request. */
> +	} else {
> +		/* negotiate timeout with master node. */
> +	}
> +
>  }

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [Ocfs2-devel] [PATCH 2/6] ocfs2: o2hb: add NEGO_TIMEOUT message
  2016-01-20  3:13 ` [Ocfs2-devel] [PATCH 2/6] ocfs2: o2hb: add NEGO_TIMEOUT message Junxiao Bi
@ 2016-01-21 23:47   ` Andrew Morton
  2016-01-22  5:12     ` Junxiao Bi
  2016-01-25  3:18   ` Eric Ren
  1 sibling, 1 reply; 34+ messages in thread
From: Andrew Morton @ 2016-01-21 23:47 UTC (permalink / raw)
  To: ocfs2-devel

On Wed, 20 Jan 2016 11:13:35 +0800 Junxiao Bi <junxiao.bi@oracle.com> wrote:

> This message is sent to the master node when a non-master node's
> negotiate timer expires. The master node records these nodes in
> a bitmap, which is used to make the write timeout timer re-queue
> decision.
> 
> ...
>
> +static int o2hb_nego_timeout_handler(struct o2net_msg *msg, u32 len, void *data,
> +				void **ret_data)
> +{
> +	struct o2hb_region *reg = (struct o2hb_region *)data;

It's best not to typecast a void*.  It's unneeded clutter and the cast
can actually hide bugs - if someone changes `data' to a different type
or if there's a different "data" in scope, etc.

> +	struct o2hb_nego_msg *nego_msg;
>  
> +	nego_msg = (struct o2hb_nego_msg *)msg->buf;
> +	if (nego_msg->node_num < O2NM_MAX_NODES)
> +		set_bit(nego_msg->node_num, reg->hr_nego_node_bitmap);
> +	else
> +		mlog(ML_ERROR, "got nego timeout message from bad node.\n");
> +
> +	return 0;
>  }

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [Ocfs2-devel] [PATCH 1/6] ocfs2: o2hb: add negotiate timer
  2016-01-20  3:13 ` [Ocfs2-devel] [PATCH 1/6] ocfs2: o2hb: add negotiate timer Junxiao Bi
  2016-01-21 23:42   ` Andrew Morton
@ 2016-01-22  0:56   ` Joseph Qi
  2016-01-22  3:19     ` Junxiao Bi
  1 sibling, 1 reply; 34+ messages in thread
From: Joseph Qi @ 2016-01-22  0:56 UTC (permalink / raw)
  To: ocfs2-devel

Hi Junxiao,

On 2016/1/20 11:13, Junxiao Bi wrote:
> When storage is down, all nodes fence themselves due to write timeout.
> The negotiate timer is designed to avoid this; with it, a node will
> wait until storage is up again.
> 
> The negotiate timer works in the following way:
> 
> 1. The timer expires before the write timeout timer; its timeout is half
> of the write timeout. It is re-queued along with the write timeout timer.
> If it expires, it sends a NEGO_TIMEOUT message to the master node (the node
> with the lowest node number). This message does nothing but mark a bit in a
> bitmap on the master node recording which nodes are negotiating the timeout.
> 
> 2. If storage is down, nodes will send this message to the master node; when
> the master node finds its bitmap covers all online nodes, it sends a
> NEGOTIATE_APPROVE message to all nodes one by one, and this message re-queues
> the write timeout timer and the negotiate timer.
> Any node that doesn't receive this message, or meets some issue when
> handling it, will be fenced.
> If storage comes up at any time, o2hb_thread will run and re-queue all the
> timers, so nothing will be affected by these two steps.
> 
> Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
> Reviewed-by: Ryan Ding <ryan.ding@oracle.com>
> ---
>  fs/ocfs2/cluster/heartbeat.c |   52 ++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 48 insertions(+), 4 deletions(-)
> 
> diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c
> index a3cc6d2fc896..b601ee95de50 100644
> --- a/fs/ocfs2/cluster/heartbeat.c
> +++ b/fs/ocfs2/cluster/heartbeat.c
> @@ -272,6 +272,10 @@ struct o2hb_region {
>  	struct delayed_work	hr_write_timeout_work;
>  	unsigned long		hr_last_timeout_start;
>  
> +	/* negotiate timer, used to negotiate extending hb timeout. */
> +	struct delayed_work	hr_nego_timeout_work;
> +	unsigned long		hr_nego_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)];
> +
>  	/* Used during o2hb_check_slot to hold a copy of the block
>  	 * being checked because we temporarily have to zero out the
>  	 * crc field. */
> @@ -320,7 +324,7 @@ static void o2hb_write_timeout(struct work_struct *work)
>  	o2quo_disk_timeout();
>  }
>  
> -static void o2hb_arm_write_timeout(struct o2hb_region *reg)
> +static void o2hb_arm_timeout(struct o2hb_region *reg)
>  {
>  	/* Arm writeout only after thread reaches steady state */
>  	if (atomic_read(&reg->hr_steady_iterations) != 0)
> @@ -338,11 +342,50 @@ static void o2hb_arm_write_timeout(struct o2hb_region *reg)
>  	reg->hr_last_timeout_start = jiffies;
>  	schedule_delayed_work(&reg->hr_write_timeout_work,
>  			      msecs_to_jiffies(O2HB_MAX_WRITE_TIMEOUT_MS));
> +
> +	cancel_delayed_work(&reg->hr_nego_timeout_work);
> +	/* negotiate timeout must be less than write timeout. */
> +	schedule_delayed_work(&reg->hr_nego_timeout_work,
> +			      msecs_to_jiffies(O2HB_MAX_WRITE_TIMEOUT_MS)/2);
> +	memset(reg->hr_nego_node_bitmap, 0, sizeof(reg->hr_nego_node_bitmap));
>  }
>  
> -static void o2hb_disarm_write_timeout(struct o2hb_region *reg)
> +static void o2hb_disarm_timeout(struct o2hb_region *reg)
>  {
>  	cancel_delayed_work_sync(&reg->hr_write_timeout_work);
> +	cancel_delayed_work_sync(&reg->hr_nego_timeout_work);
> +}
> +
> +static void o2hb_nego_timeout(struct work_struct *work)
> +{
> +	struct o2hb_region *reg =
> +		container_of(work, struct o2hb_region,
> +			     hr_nego_timeout_work.work);
> +	unsigned long live_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)];
> +	int master_node;
> +
> +	o2hb_fill_node_map(live_node_bitmap, sizeof(live_node_bitmap));
> +	/* lowest node as master node to make negotiate decision. */
> +	master_node = find_next_bit(live_node_bitmap, O2NM_MAX_NODES, 0);
> +
> +	if (master_node == o2nm_this_node()) {
> +		set_bit(master_node, reg->hr_nego_node_bitmap);
> +		if (memcmp(reg->hr_nego_node_bitmap, live_node_bitmap,
> +				sizeof(reg->hr_nego_node_bitmap))) {
Should the access to hr_nego_node_bitmap be protected, for example,
under o2hb_live_lock?

Thanks,
Joseph

> +			/* check negotiate bitmap every second to do timeout
> +			 * approve decision.
> +			 */
> +			schedule_delayed_work(&reg->hr_nego_timeout_work,
> +				msecs_to_jiffies(1000));
> +
> +			return;
> +		}
> +
> +		/* approve negotiate timeout request. */
> +	} else {
> +		/* negotiate timeout with master node. */
> +	}
> +
>  }
>  
>  static inline void o2hb_bio_wait_init(struct o2hb_bio_wait_ctxt *wc)
> @@ -1033,7 +1076,7 @@ static int o2hb_do_disk_heartbeat(struct o2hb_region *reg)
>  	/* Skip disarming the timeout if own slot has stale/bad data */
>  	if (own_slot_ok) {
>  		o2hb_set_quorum_device(reg);
> -		o2hb_arm_write_timeout(reg);
> +		o2hb_arm_timeout(reg);
>  	}
>  
>  bail:
> @@ -1115,7 +1158,7 @@ static int o2hb_thread(void *data)
>  		}
>  	}
>  
> -	o2hb_disarm_write_timeout(reg);
> +	o2hb_disarm_timeout(reg);
>  
>  	/* unclean stop is only used in very bad situation */
>  	for(i = 0; !reg->hr_unclean_stop && i < reg->hr_blocks; i++)
> @@ -1762,6 +1805,7 @@ static ssize_t o2hb_region_dev_store(struct config_item *item,
>  	}
>  
>  	INIT_DELAYED_WORK(&reg->hr_write_timeout_work, o2hb_write_timeout);
> +	INIT_DELAYED_WORK(&reg->hr_nego_timeout_work, o2hb_nego_timeout);
>  
>  	/*
>  	 * A node is considered live after it has beat LIVE_THRESHOLD
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [Ocfs2-devel] [PATCH 1/6] ocfs2: o2hb: add negotiate timer
  2016-01-22  0:56   ` Joseph Qi
@ 2016-01-22  3:19     ` Junxiao Bi
  0 siblings, 0 replies; 34+ messages in thread
From: Junxiao Bi @ 2016-01-22  3:19 UTC (permalink / raw)
  To: ocfs2-devel

Hi Joseph,

On 01/22/2016 08:56 AM, Joseph Qi wrote:
> Hi Junxiao,
> 
> On 2016/1/20 11:13, Junxiao Bi wrote:
>> When storage is down, all nodes fence themselves due to write timeout.
>> The negotiate timer is designed to avoid this; with it, a node will
>> wait until storage is up again.
>>
>> The negotiate timer works in the following way:
>>
>> 1. The timer expires before the write timeout timer; its timeout is half
>> of the write timeout. It is re-queued along with the write timeout timer.
>> If it expires, it sends a NEGO_TIMEOUT message to the master node (the node
>> with the lowest node number). This message does nothing but mark a bit in a
>> bitmap on the master node recording which nodes are negotiating the timeout.
>>
>> 2. If storage is down, nodes will send this message to the master node; when
>> the master node finds its bitmap covers all online nodes, it sends a
>> NEGOTIATE_APPROVE message to all nodes one by one, and this message re-queues
>> the write timeout timer and the negotiate timer.
>> Any node that doesn't receive this message, or meets some issue when
>> handling it, will be fenced.
>> If storage comes up at any time, o2hb_thread will run and re-queue all the
>> timers, so nothing will be affected by these two steps.
>>
>> Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
>> Reviewed-by: Ryan Ding <ryan.ding@oracle.com>
>> ---
>>  fs/ocfs2/cluster/heartbeat.c |   52 ++++++++++++++++++++++++++++++++++++++----
>>  1 file changed, 48 insertions(+), 4 deletions(-)
>>
>> diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c
>> index a3cc6d2fc896..b601ee95de50 100644
>> --- a/fs/ocfs2/cluster/heartbeat.c
>> +++ b/fs/ocfs2/cluster/heartbeat.c
>> @@ -272,6 +272,10 @@ struct o2hb_region {
>>  	struct delayed_work	hr_write_timeout_work;
>>  	unsigned long		hr_last_timeout_start;
>>  
>> +	/* negotiate timer, used to negotiate extending hb timeout. */
>> +	struct delayed_work	hr_nego_timeout_work;
>> +	unsigned long		hr_nego_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)];
>> +
>>  	/* Used during o2hb_check_slot to hold a copy of the block
>>  	 * being checked because we temporarily have to zero out the
>>  	 * crc field. */
>> @@ -320,7 +324,7 @@ static void o2hb_write_timeout(struct work_struct *work)
>>  	o2quo_disk_timeout();
>>  }
>>  
>> -static void o2hb_arm_write_timeout(struct o2hb_region *reg)
>> +static void o2hb_arm_timeout(struct o2hb_region *reg)
>>  {
>>  	/* Arm writeout only after thread reaches steady state */
>>  	if (atomic_read(&reg->hr_steady_iterations) != 0)
>> @@ -338,11 +342,50 @@ static void o2hb_arm_write_timeout(struct o2hb_region *reg)
>>  	reg->hr_last_timeout_start = jiffies;
>>  	schedule_delayed_work(&reg->hr_write_timeout_work,
>>  			      msecs_to_jiffies(O2HB_MAX_WRITE_TIMEOUT_MS));
>> +
>> +	cancel_delayed_work(&reg->hr_nego_timeout_work);
>> +	/* negotiate timeout must be less than write timeout. */
>> +	schedule_delayed_work(&reg->hr_nego_timeout_work,
>> +			      msecs_to_jiffies(O2HB_MAX_WRITE_TIMEOUT_MS)/2);
>> +	memset(reg->hr_nego_node_bitmap, 0, sizeof(reg->hr_nego_node_bitmap));
>>  }
>>  
>> -static void o2hb_disarm_write_timeout(struct o2hb_region *reg)
>> +static void o2hb_disarm_timeout(struct o2hb_region *reg)
>>  {
>>  	cancel_delayed_work_sync(&reg->hr_write_timeout_work);
>> +	cancel_delayed_work_sync(&reg->hr_nego_timeout_work);
>> +}
>> +
>> +static void o2hb_nego_timeout(struct work_struct *work)
>> +{
>> +	struct o2hb_region *reg =
>> +		container_of(work, struct o2hb_region,
>> +			     hr_nego_timeout_work.work);
>> +	unsigned long live_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)];
>> +	int master_node;
>> +
>> +	o2hb_fill_node_map(live_node_bitmap, sizeof(live_node_bitmap));
>> +	/* lowest node as master node to make negotiate decision. */
>> +	master_node = find_next_bit(live_node_bitmap, O2NM_MAX_NODES, 0);
>> +
>> +	if (master_node == o2nm_this_node()) {
>> +		set_bit(master_node, reg->hr_nego_node_bitmap);
>> +		if (memcmp(reg->hr_nego_node_bitmap, live_node_bitmap,
>> +				sizeof(reg->hr_nego_node_bitmap))) {
> Should the access to hr_nego_node_bitmap be protected, for example,
> under o2hb_live_lock?
I didn't see a need for this. The bitmap is only used by the negotiation
master node; every set operation is serialized by o2net_wq, and the master
checks the bitmap every second to see whether the bits are set.

Thanks,
Junxiao.
> 
> Thanks,
> Joseph
> 
>> +			/* check negotiate bitmap every second to do timeout
>> +			 * approve decision.
>> +			 */
>> +			schedule_delayed_work(&reg->hr_nego_timeout_work,
>> +				msecs_to_jiffies(1000));
>> +
>> +			return;
>> +		}
>> +
>> +		/* approve negotiate timeout request. */
>> +	} else {
>> +		/* negotiate timeout with master node. */
>> +	}
>> +
>>  }
>>  
>>  static inline void o2hb_bio_wait_init(struct o2hb_bio_wait_ctxt *wc)
>> @@ -1033,7 +1076,7 @@ static int o2hb_do_disk_heartbeat(struct o2hb_region *reg)
>>  	/* Skip disarming the timeout if own slot has stale/bad data */
>>  	if (own_slot_ok) {
>>  		o2hb_set_quorum_device(reg);
>> -		o2hb_arm_write_timeout(reg);
>> +		o2hb_arm_timeout(reg);
>>  	}
>>  
>>  bail:
>> @@ -1115,7 +1158,7 @@ static int o2hb_thread(void *data)
>>  		}
>>  	}
>>  
>> -	o2hb_disarm_write_timeout(reg);
>> +	o2hb_disarm_timeout(reg);
>>  
>>  	/* unclean stop is only used in very bad situation */
>>  	for(i = 0; !reg->hr_unclean_stop && i < reg->hr_blocks; i++)
>> @@ -1762,6 +1805,7 @@ static ssize_t o2hb_region_dev_store(struct config_item *item,
>>  	}
>>  
>>  	INIT_DELAYED_WORK(&reg->hr_write_timeout_work, o2hb_write_timeout);
>> +	INIT_DELAYED_WORK(&reg->hr_nego_timeout_work, o2hb_nego_timeout);
>>  
>>  	/*
>>  	 * A node is considered live after it has beat LIVE_THRESHOLD
>>
> 
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [Ocfs2-devel] [PATCH 1/6] ocfs2: o2hb: add negotiate timer
  2016-01-21 23:42   ` Andrew Morton
@ 2016-01-22  3:23     ` Junxiao Bi
  0 siblings, 0 replies; 34+ messages in thread
From: Junxiao Bi @ 2016-01-22  3:23 UTC (permalink / raw)
  To: ocfs2-devel

Hi Andrew,

On 01/22/2016 07:42 AM, Andrew Morton wrote:
> On Wed, 20 Jan 2016 11:13:34 +0800 Junxiao Bi <junxiao.bi@oracle.com> wrote:
> 
>> When storage is down, all nodes fence themselves due to write timeout.
>> The negotiate timer is designed to avoid this; with it, a node will
>> wait until storage is up again.
>>
>> The negotiate timer works in the following way:
>>
>> 1. The timer expires before the write timeout timer; its timeout is half
>> of the write timeout. It is re-queued along with the write timeout timer.
>> If it expires, it sends a NEGO_TIMEOUT message to the master node (the node
>> with the lowest node number). This message does nothing but mark a bit in a
>> bitmap on the master node recording which nodes are negotiating the timeout.
>>
>> 2. If storage is down, nodes will send this message to the master node; when
>> the master node finds its bitmap covers all online nodes, it sends a
>> NEGOTIATE_APPROVE message to all nodes one by one, and this message re-queues
>> the write timeout timer and the negotiate timer.
>> Any node that doesn't receive this message, or meets some issue when
>> handling it, will be fenced.
>> If storage comes up at any time, o2hb_thread will run and re-queue all the
>> timers, so nothing will be affected by these two steps.
>>
>> ...
>>
>> +static void o2hb_nego_timeout(struct work_struct *work)
>> +{
>> +	struct o2hb_region *reg =
>> +		container_of(work, struct o2hb_region,
>> +			     hr_nego_timeout_work.work);
> 
> It's better to just do
> 
> 	struct o2hb_region *reg;
> 
> 	reg = container_of(work, struct o2hb_region, hr_nego_timeout_work.work);
> 
> and avoid the weird 80-column tricks.
OK. Will update this in V2.

> 
>> +	unsigned long live_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)];
> 
> the bitmap.h interfaces might be nicer here.  Perhaps.  A little bit.
Will consider this in v2.

> 
>> +	int master_node;
>> +
>> +	o2hb_fill_node_map(live_node_bitmap, sizeof(live_node_bitmap));
>> +	/* lowest node as master node to make negotiate decision. */
>> +	master_node = find_next_bit(live_node_bitmap, O2NM_MAX_NODES, 0);
>> +
>> +	if (master_node == o2nm_this_node()) {
>> +		set_bit(master_node, reg->hr_nego_node_bitmap);
>> +		if (memcmp(reg->hr_nego_node_bitmap, live_node_bitmap,
>> +				sizeof(reg->hr_nego_node_bitmap))) {
>> +			/* check negotiate bitmap every second to do timeout
>> +			 * approve decision.
>> +			 */
>> +			schedule_delayed_work(&reg->hr_nego_timeout_work,
>> +				msecs_to_jiffies(1000));
> 
> One second is long enough to unmount the fs (and to run `rmmod
> ocfs2'!).  Is there anything preventing the work from triggering in
> these situations?
Yes, this delayed work is cancelled with cancel_delayed_work_sync() before the umount.

Thanks,
Junxiao.
> 
>> +
>> +			return;
>> +		}
>> +
>> +		/* approve negotiate timeout request. */
>> +	} else {
>> +		/* negotiate timeout with master node. */
>> +	}
>> +
>>  }
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down
  2016-01-21  1:48       ` Junxiao Bi
@ 2016-01-22  4:25         ` Joseph Qi
  2016-01-22  5:08           ` Junxiao Bi
  0 siblings, 1 reply; 34+ messages in thread
From: Joseph Qi @ 2016-01-22  4:25 UTC (permalink / raw)
  To: ocfs2-devel

Hi Junxiao,

On 2016/1/21 9:48, Junxiao Bi wrote:
> On 01/21/2016 08:46 AM, Joseph Qi wrote:
>> Hi Junxiao,
>> So you mean the negotiation you added only happens if all nodes' storage
>> links are down?
> Negotiation starts when one node finds its storage link down, but it
> succeeds only when all nodes' storage links are down; otherwise the node
> keeps the same behavior as before.
IC, thanks for your explanation.
IMHO, if storage is down, all business deployed on the storage will be
impacted even if the nodes don't fence.
I have another scenario: only several paths (in a multipath environment)
on several nodes have problems; as a result, ocfs2 will fence these nodes.
So I wonder if we have a better way to resolve this issue.

Thanks,
Joseph

> 
> Thanks,
> Junxiao.
>>
>> Thanks,
>> Joseph
>>
>> On 2016/1/20 21:27, Junxiao Bi wrote:
>>> Hi Joseph,
>>>
>>>> On 2016/1/20, at 5:18 PM, Joseph Qi <joseph.qi@huawei.com> wrote:
>>>>
>>>> Hi Junxiao,
>>>> Thanks for the patch set.
>>>> In case only one node's storage link is down, if this node doesn't fence
>>>> itself, other nodes will still check and mark this node dead, which will
>>>> cause cluster membership inconsistency.
>>>> In your patch set, I cannot see any logic to handle this. Am I missing
>>>> something?
>>> No, there is no logic for this. But why didn't the node fence itself when storage was down? What made a softirq timer unable to run, another bug?
>>>
>>> Thanks,
>>> Junxiao.
>>>>
>>>> On 2016/1/20 11:13, Junxiao Bi wrote:
>>>>> Hi,
>>>>>
>>>>> This series of patches fixes the issue that when storage is down,
>>>>> all nodes fence themselves due to write timeout.
>>>>> With this patch set, all nodes will keep going until storage comes back
>>>>> online, except that if any of the following happens, all nodes will
>>>>> fence themselves as before:
>>>>> 1. an I/O error is encountered
>>>>> 2. the network between the nodes goes down
>>>>> 3. a node panics
>>>>>
>>>>> Junxiao Bi (6):
>>>>>      ocfs2: o2hb: add negotiate timer
>>>>>      ocfs2: o2hb: add NEGO_TIMEOUT message
>>>>>      ocfs2: o2hb: add NEGOTIATE_APPROVE message
>>>>>      ocfs2: o2hb: add some user/debug log
>>>>>      ocfs2: o2hb: don't negotiate if last hb fail
>>>>>      ocfs2: o2hb: fix hb hung time
>>>>>
>>>>> fs/ocfs2/cluster/heartbeat.c |  181 ++++++++++++++++++++++++++++++++++++++++--
>>>>> 1 file changed, 175 insertions(+), 6 deletions(-)
>>>>>
>>>>> Thanks,
>>>>> Junxiao.
>>>>>
>>>>> _______________________________________________
>>>>> Ocfs2-devel mailing list
>>>>> Ocfs2-devel at oss.oracle.com
>>>>> https://oss.oracle.com/mailman/listinfo/ocfs2-devel
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>> .
>>>
>>
>>
> 
> 
> .
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down
  2016-01-22  4:25         ` Joseph Qi
@ 2016-01-22  5:08           ` Junxiao Bi
  0 siblings, 0 replies; 34+ messages in thread
From: Junxiao Bi @ 2016-01-22  5:08 UTC (permalink / raw)
  To: ocfs2-devel

Hi Joseph,

On 01/22/2016 12:25 PM, Joseph Qi wrote:
> Hi Junxiao,
> 
> On 2016/1/21 9:48, Junxiao Bi wrote:
>> On 01/21/2016 08:46 AM, Joseph Qi wrote:
>>> Hi Junxiao,
>>> So you mean the negotiation you added only happens if all nodes' storage
>>> links are down?
>> Negotiation starts when one node finds its storage link down, but it
>> succeeds only when all nodes' storage links are down; otherwise the node
>> keeps the same behavior as before.
> IC, thanks for your explanation.
> IMHO, if storage down, all business deployed on the storage will be
> impacted even nodes won't fence.
Yes, but storage may come back online after a while. This can improve the
system's stability and availability.

> I have another scenario: only several paths (in a multipath environment)
> on several nodes have problems; as a result, ocfs2 will fence these nodes.
> So I wonder if we have a better way to resolve this issue.
That doesn't seem to follow ocfs2's usual policy. Would fencing these
nodes at that time be good for the availability of ocfs2?

Anyway, I am not sure whether it is feasible now. The problem is that we
need to find a way for the good nodes to reach an agreement in an
environment where more errors may be coming, while the good nodes must
not be hurt even if the agreement cannot be made.

Thanks,
Junxiao.
> 
> Thanks,
> Joseph
> 
>>
>> Thanks,
>> Junxiao.
>>>
>>> Thanks,
>>> Joseph
>>>
>>> On 2016/1/20 21:27, Junxiao Bi wrote:
>>>> Hi Joseph,
>>>>
>>>>> On Jan 20, 2016, at 5:18 PM, Joseph Qi <joseph.qi@huawei.com> wrote:
>>>>>
>>>>> Hi Junxiao,
>>>>> Thanks for the patch set.
>>>>> In case only one node storage link down, if this node doesn't fence
>>>>> self, other nodes will still check and mark this node dead, which will
>>>>> cause cluster membership inconsistency.
>>>>> In your patch set, I cannot see any logic to handle this. Am I missing
>>>>> something?
>>>> No, there is no logic for this. But why didn't the node fence itself when the storage went down? What would keep a softirq timer from running, another bug?
>>>>
>>>> Thanks,
>>>> Junxiao.
>>>>>
>>>>> On 2016/1/20 11:13, Junxiao Bi wrote:
>>>>>> Hi,
>>>>>>
>>>>>> This series of patches fixes the issue that when the storage goes
>>>>>> down, all nodes fence themselves due to write timeout.
>>>>>> With this patch set, all nodes will keep going until the storage
>>>>>> comes back online, except when one of the following happens, in
>>>>>> which case all nodes fence themselves as before:
>>>>>> 1. an I/O error is returned
>>>>>> 2. the network between nodes goes down
>>>>>> 3. a node panics
>>>>>>
>>>>>> Junxiao Bi (6):
>>>>>>      ocfs2: o2hb: add negotiate timer
>>>>>>      ocfs2: o2hb: add NEGO_TIMEOUT message
>>>>>>      ocfs2: o2hb: add NEGOTIATE_APPROVE message
>>>>>>      ocfs2: o2hb: add some user/debug log
>>>>>>      ocfs2: o2hb: don't negotiate if last hb fail
>>>>>>      ocfs2: o2hb: fix hb hung time
>>>>>>
>>>>>> fs/ocfs2/cluster/heartbeat.c |  181 ++++++++++++++++++++++++++++++++++++++++--
>>>>>> 1 file changed, 175 insertions(+), 6 deletions(-)
>>>>>>
>>>>>> Thanks,
>>>>>> Junxiao.
>>>>>>


* [Ocfs2-devel] [PATCH 2/6] ocfs2: o2hb: add NEGO_TIMEOUT message
  2016-01-21 23:47   ` Andrew Morton
@ 2016-01-22  5:12     ` Junxiao Bi
  2016-01-22  5:45       ` Andrew Morton
  0 siblings, 1 reply; 34+ messages in thread
From: Junxiao Bi @ 2016-01-22  5:12 UTC (permalink / raw)
  To: ocfs2-devel

On 01/22/2016 07:47 AM, Andrew Morton wrote:
> On Wed, 20 Jan 2016 11:13:35 +0800 Junxiao Bi <junxiao.bi@oracle.com> wrote:
> 
>> This message is sent to the master node when a non-master node's
>> negotiate timer expires. The master node records these nodes in a
>> bitmap, which is used to decide whether to re-queue the write
>> timeout timer.
>>
>> ...
>>
>> +static int o2hb_nego_timeout_handler(struct o2net_msg *msg, u32 len, void *data,
>> +				void **ret_data)
>> +{
>> +	struct o2hb_region *reg = (struct o2hb_region *)data;
> 
> It's best not to typecast a void*.  It's unneeded clutter and the cast
> can actually hide bugs - if someone changes `data' to a different type
> or if there's a different "data" in scope, etc.
There are many kinds of messages in ocfs2, and each one needs a
different type of "data", so it is declared as void *.

Thanks,
Junxiao.
> 
>> +	struct o2hb_nego_msg *nego_msg;
>>  
>> +	nego_msg = (struct o2hb_nego_msg *)msg->buf;
>> +	if (nego_msg->node_num < O2NM_MAX_NODES)
>> +		set_bit(nego_msg->node_num, reg->hr_nego_node_bitmap);
>> +	else
>> +		mlog(ML_ERROR, "got nego timeout message from bad node.\n");
>> +
>> +	return 0;
>>  }
> 


* [Ocfs2-devel] [PATCH 2/6] ocfs2: o2hb: add NEGO_TIMEOUT message
  2016-01-22  5:12     ` Junxiao Bi
@ 2016-01-22  5:45       ` Andrew Morton
  2016-01-22  5:46         ` Junxiao Bi
  0 siblings, 1 reply; 34+ messages in thread
From: Andrew Morton @ 2016-01-22  5:45 UTC (permalink / raw)
  To: ocfs2-devel

On Fri, 22 Jan 2016 13:12:26 +0800 Junxiao Bi <junxiao.bi@oracle.com> wrote:

> On 01/22/2016 07:47 AM, Andrew Morton wrote:
> > On Wed, 20 Jan 2016 11:13:35 +0800 Junxiao Bi <junxiao.bi@oracle.com> wrote:
> > 
> >> This message is sent to the master node when a non-master node's
> >> negotiate timer expires. The master node records these nodes in a
> >> bitmap, which is used to decide whether to re-queue the write
> >> timeout timer.
> >>
> >> ...
> >>
> >> +static int o2hb_nego_timeout_handler(struct o2net_msg *msg, u32 len, void *data,
> >> +				void **ret_data)
> >> +{
> >> +	struct o2hb_region *reg = (struct o2hb_region *)data;
> > 
> > It's best not to typecast a void*.  It's unneeded clutter and the cast
> > can actually hide bugs - if someone changes `data' to a different type
> > or if there's a different "data" in scope, etc.
> There are many kinds of messages in ocfs2, and each one needs a
> different type of "data", so it is declared as void *.

What I mean is to do this:

	struct o2hb_region *reg = data;

and not

	struct o2hb_region *reg = (struct o2hb_region *)data;

Because the typecast is unneeded and is actually harmful.  Imagine if someone
goofed and had `int data;': no warning, runtime failure.
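A user-space illustration of the point (the demo types below are hypothetical stand-ins, not the real ocfs2 structures): a plain assignment from void * needs no cast, and the compiler will complain again the moment `data' stops being a pointer.

```c
/* Hypothetical stand-in for struct o2hb_region in a user-space demo. */
struct demo_region {
	int region_num;
};

/* Recommended style: assign the void * directly.  If `data' were later
 * changed to a non-pointer type, this assignment would trigger a
 * compiler diagnostic, whereas an explicit cast would silence it and
 * defer the failure to run time. */
static int demo_handler(void *data)
{
	struct demo_region *reg = data;	/* no cast needed in C */

	return reg->region_num;
}
```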


* [Ocfs2-devel] [PATCH 2/6] ocfs2: o2hb: add NEGO_TIMEOUT message
  2016-01-22  5:45       ` Andrew Morton
@ 2016-01-22  5:46         ` Junxiao Bi
  0 siblings, 0 replies; 34+ messages in thread
From: Junxiao Bi @ 2016-01-22  5:46 UTC (permalink / raw)
  To: ocfs2-devel

On 01/22/2016 01:45 PM, Andrew Morton wrote:
> On Fri, 22 Jan 2016 13:12:26 +0800 Junxiao Bi <junxiao.bi@oracle.com> wrote:
> 
>> On 01/22/2016 07:47 AM, Andrew Morton wrote:
>>> On Wed, 20 Jan 2016 11:13:35 +0800 Junxiao Bi <junxiao.bi@oracle.com> wrote:
>>>
>>>> This message is sent to the master node when a non-master node's
>>>> negotiate timer expires. The master node records these nodes in a
>>>> bitmap, which is used to decide whether to re-queue the write
>>>> timeout timer.
>>>>
>>>> ...
>>>>
>>>> +static int o2hb_nego_timeout_handler(struct o2net_msg *msg, u32 len, void *data,
>>>> +				void **ret_data)
>>>> +{
>>>> +	struct o2hb_region *reg = (struct o2hb_region *)data;
>>>
>>> It's best not to typecast a void*.  It's unneeded clutter and the cast
>>> can actually hide bugs - if someone changes `data' to a different type
>>> or if there's a different "data" in scope, etc.
>> There are many kinds of messages in ocfs2, and each one needs a
>> different type of "data", so it is declared as void *.
> 
> What I mean is to do this:
> 
> 	struct o2hb_region *reg = data;
> 
> and not
> 
> 	struct o2hb_region *reg = (struct o2hb_region *)data;
> 
> Because the typecast is unneeded and is actually harmful.  Imagine if someone
> goofed and had `int data;': no warning, runtime failure.
Oh, I see. Thank you. Will update this in V2.

Thanks,
Junxiao.
> 


* [Ocfs2-devel] [PATCH 2/6] ocfs2: o2hb: add NEGO_TIMEOUT message
  2016-01-20  3:13 ` [Ocfs2-devel] [PATCH 2/6] ocfs2: o2hb: add NEGO_TIMEOUT message Junxiao Bi
  2016-01-21 23:47   ` Andrew Morton
@ 2016-01-25  3:18   ` Eric Ren
  2016-01-25  4:28     ` Junxiao Bi
  1 sibling, 1 reply; 34+ messages in thread
From: Eric Ren @ 2016-01-25  3:18 UTC (permalink / raw)
  To: ocfs2-devel

On Wed, Jan 20, 2016 at 11:13:35AM +0800, Junxiao Bi wrote: 
> This message is sent to the master node when a non-master node's
> negotiate timer expires. The master node records these nodes in a
> bitmap, which is used to decide whether to re-queue the write
> timeout timer.
> 
> Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
> Reviewed-by: Ryan Ding <ryan.ding@oracle.com>
> ---
>  fs/ocfs2/cluster/heartbeat.c |   66 +++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 65 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c
> index b601ee95de50..ecf8a5e21c38 100644
> --- a/fs/ocfs2/cluster/heartbeat.c
> +++ b/fs/ocfs2/cluster/heartbeat.c
> @@ -280,6 +280,10 @@ struct o2hb_region {
>  	 * being checked because we temporarily have to zero out the
>  	 * crc field. */
>  	struct o2hb_disk_heartbeat_block *hr_tmp_block;
> +
> +	/* Message key for negotiate timeout message. */
> +	unsigned int		hr_key;
> +	struct list_head	hr_handler_list;
>  };
>  
>  struct o2hb_bio_wait_ctxt {
> @@ -288,6 +292,14 @@ struct o2hb_bio_wait_ctxt {
>  	int               wc_error;
>  };
>  
> +enum {
> +	O2HB_NEGO_TIMEOUT_MSG = 1,
> +};
> +
> +struct o2hb_nego_msg {
> +	u8 node_num;
> +};
> +
>  static void o2hb_write_timeout(struct work_struct *work)
>  {
>  	int failed, quorum;
> @@ -356,6 +368,24 @@ static void o2hb_disarm_timeout(struct o2hb_region *reg)
>  	cancel_delayed_work_sync(&reg->hr_nego_timeout_work);
>  }
>  
> +static int o2hb_send_nego_msg(int key, int type, u8 target)
> +{
> +	struct o2hb_nego_msg msg;
> +	int status, ret;
> +
> +	msg.node_num = o2nm_this_node();
> +again:
> +	ret = o2net_send_message(type, key, &msg, sizeof(msg),
> +			target, &status);
> +
> +	if (ret == -EAGAIN || ret == -ENOMEM) {
> +		msleep(100);
> +		goto again;
> +	}
> +
> +	return ret;
> +}
> +
>  static void o2hb_nego_timeout(struct work_struct *work)
>  {
>  	struct o2hb_region *reg =
> @@ -384,8 +414,24 @@ static void o2hb_nego_timeout(struct work_struct *work)
>  		/* approve negotiate timeout request. */
>  	} else {
>  		/* negotiate timeout with master node. */
> +		o2hb_send_nego_msg(reg->hr_key, O2HB_NEGO_TIMEOUT_MSG,
> +			master_node);
>  	}
> +}
> +
> +static int o2hb_nego_timeout_handler(struct o2net_msg *msg, u32 len, void *data,
> +				void **ret_data)
> +{
> +	struct o2hb_region *reg = (struct o2hb_region *)data;
> +	struct o2hb_nego_msg *nego_msg;
>  
> +	nego_msg = (struct o2hb_nego_msg *)msg->buf;
> +	if (nego_msg->node_num < O2NM_MAX_NODES)
> +		set_bit(nego_msg->node_num, reg->hr_nego_node_bitmap);
> +	else
> +		mlog(ML_ERROR, "got nego timeout message from bad node.\n");
> +
> +	return 0;
>  }
>  
>  static inline void o2hb_bio_wait_init(struct o2hb_bio_wait_ctxt *wc)
> @@ -1493,6 +1539,7 @@ static void o2hb_region_release(struct config_item *item)
>  	list_del(&reg->hr_all_item);
>  	spin_unlock(&o2hb_live_lock);
>  
> +	o2net_unregister_handler_list(&reg->hr_handler_list);
>  	kfree(reg);
>  }
>  
> @@ -2039,13 +2086,30 @@ static struct config_item *o2hb_heartbeat_group_make_item(struct config_group *g
>  
>  	config_item_init_type_name(&reg->hr_item, name, &o2hb_region_type);
>  
> +	/* this is the same way to generate msg key as dlm, for local heartbeat,
> +	 * name is also the same, so make initial crc value different to avoid
> +	 * message key conflict.
> +	 */
> +	reg->hr_key = crc32_le(reg->hr_region_num + O2NM_MAX_REGIONS,
> +		name, strlen(name));
> +	INIT_LIST_HEAD(&reg->hr_handler_list);

Looks like there is no need to initialize ->hr_handler_list here?

Thanks,
Eric
> +	ret = o2net_register_handler(O2HB_NEGO_TIMEOUT_MSG, reg->hr_key,
> +			sizeof(struct o2hb_nego_msg),
> +			o2hb_nego_timeout_handler,
> +			reg, NULL, &reg->hr_handler_list);
> +	if (ret)
> +		goto free;
> +
>  	ret = o2hb_debug_region_init(reg, o2hb_debug_dir);
>  	if (ret) {
>  		config_item_put(&reg->hr_item);
> -		goto free;
> +		goto free_handler;
>  	}
>  
>  	return &reg->hr_item;
> +
> +free_handler:
> +	o2net_unregister_handler_list(&reg->hr_handler_list);
>  free:
>  	kfree(reg);
>  	return ERR_PTR(ret);
> -- 
> 1.7.9.5
> 


* [Ocfs2-devel] [PATCH 4/6] ocfs2: o2hb: add some user/debug log
  2016-01-20  3:13 ` [Ocfs2-devel] [PATCH 4/6] ocfs2: o2hb: add some user/debug log Junxiao Bi
@ 2016-01-25  3:28   ` Eric Ren
  2016-01-25  4:29     ` Junxiao Bi
  0 siblings, 1 reply; 34+ messages in thread
From: Eric Ren @ 2016-01-25  3:28 UTC (permalink / raw)
  To: ocfs2-devel

Hi Junxiao,

On Wed, Jan 20, 2016 at 11:13:37AM +0800, Junxiao Bi wrote: 
> Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
> Reviewed-by: Ryan Ding <ryan.ding@oracle.com>
> ---
>  fs/ocfs2/cluster/heartbeat.c |   39 ++++++++++++++++++++++++++++++++-------
>  1 file changed, 32 insertions(+), 7 deletions(-)
> 
> diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c
> index d5ef8dce08da..6c57fd21e597 100644
> --- a/fs/ocfs2/cluster/heartbeat.c
> +++ b/fs/ocfs2/cluster/heartbeat.c
> @@ -292,6 +292,8 @@ struct o2hb_bio_wait_ctxt {
>  	int               wc_error;
>  };
>  
> +#define O2HB_NEGO_TIMEOUT_MS (O2HB_MAX_WRITE_TIMEOUT_MS/2)
> +
>  enum {
>  	O2HB_NEGO_TIMEOUT_MSG = 1,
>  	O2HB_NEGO_APPROVE_MSG = 2,
> @@ -359,7 +361,7 @@ static void o2hb_arm_timeout(struct o2hb_region *reg)
>  	cancel_delayed_work(&reg->hr_nego_timeout_work);
>  	/* negotiate timeout must be less than write timeout. */
>  	schedule_delayed_work(&reg->hr_nego_timeout_work,
> -			      msecs_to_jiffies(O2HB_MAX_WRITE_TIMEOUT_MS)/2);
> +			      msecs_to_jiffies(O2HB_NEGO_TIMEOUT_MS));
>  	memset(reg->hr_nego_node_bitmap, 0, sizeof(reg->hr_nego_node_bitmap));
>  }
>  
> @@ -393,14 +395,19 @@ static void o2hb_nego_timeout(struct work_struct *work)
>  		container_of(work, struct o2hb_region,
>  			     hr_nego_timeout_work.work);
>  	unsigned long live_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)];
> -	int master_node, i;
> +	int master_node, i, ret;
>  
>  	o2hb_fill_node_map(live_node_bitmap, sizeof(live_node_bitmap));
>  	/* lowest node as master node to make negotiate decision. */
>  	master_node = find_next_bit(live_node_bitmap, O2NM_MAX_NODES, 0);
>  
>  	if (master_node == o2nm_this_node()) {
> -		set_bit(master_node, reg->hr_nego_node_bitmap);
> +		if (!test_bit(master_node, reg->hr_nego_node_bitmap)) {
> +			printk(KERN_NOTICE "o2hb: node %d hb write hung for %ds on region %s (%s).\n",
> +				o2nm_this_node(), O2HB_NEGO_TIMEOUT_MS/1000,
> +				config_item_name(&reg->hr_item), reg->hr_dev_name);
> +			set_bit(master_node, reg->hr_nego_node_bitmap);
> +		}
>  		if (memcmp(reg->hr_nego_node_bitmap, live_node_bitmap,
>  				sizeof(reg->hr_nego_node_bitmap))) {
>  			/* check negotiate bitmap every second to do timeout
> @@ -412,6 +419,8 @@ static void o2hb_nego_timeout(struct work_struct *work)
>  			return;
>  		}
>  
> +		printk(KERN_NOTICE "o2hb: all nodes hb write hung, maybe region %s (%s) is down.\n",
> +			config_item_name(&reg->hr_item), reg->hr_dev_name);
>  		/* approve negotiate timeout request. */
>  		o2hb_arm_timeout(reg);
>  
> @@ -421,13 +430,23 @@ static void o2hb_nego_timeout(struct work_struct *work)
>  			if (i == master_node)
>  				continue;
>  
> -			o2hb_send_nego_msg(reg->hr_key,
> +			mlog(ML_HEARTBEAT, "send NEGO_APPROVE msg to node %d\n", i);
> +			ret = o2hb_send_nego_msg(reg->hr_key,
>  					O2HB_NEGO_APPROVE_MSG, i);
> +			if (ret)
> +				mlog(ML_ERROR, "send NEGO_APPROVE msg to node %d fail %d\n",
> +					i, ret);
>  		}
>  	} else {
>  		/* negotiate timeout with master node. */
> -		o2hb_send_nego_msg(reg->hr_key, O2HB_NEGO_TIMEOUT_MSG,
> -			master_node);
> +		printk(KERN_NOTICE "o2hb: node %d hb write hung for %ds on region %s (%s), negotiate timeout with node %d.\n",
> +			o2nm_this_node(), O2HB_NEGO_TIMEOUT_MS/1000, config_item_name(&reg->hr_item),
> +			reg->hr_dev_name, master_node);
> +		ret = o2hb_send_nego_msg(reg->hr_key, O2HB_NEGO_TIMEOUT_MSG,
> +				master_node);
> +		if (ret)
> +			mlog(ML_ERROR, "send NEGO_TIMEOUT msg to node %d fail %d\n",
> +				master_node, ret);
>  	}
>  }
>  
> @@ -438,6 +457,8 @@ static int o2hb_nego_timeout_handler(struct o2net_msg *msg, u32 len, void *data,
>  	struct o2hb_nego_msg *nego_msg;
>  
>  	nego_msg = (struct o2hb_nego_msg *)msg->buf;
> +	printk(KERN_NOTICE "o2hb: receive negotiate timeout message from node %d on region %s (%s).\n",
> +		nego_msg->node_num, config_item_name(&reg->hr_item), reg->hr_dev_name);
>  	if (nego_msg->node_num < O2NM_MAX_NODES)
>  		set_bit(nego_msg->node_num, reg->hr_nego_node_bitmap);
>  	else
> @@ -449,7 +470,11 @@ static int o2hb_nego_timeout_handler(struct o2net_msg *msg, u32 len, void *data,
>  static int o2hb_nego_approve_handler(struct o2net_msg *msg, u32 len, void *data,
>  				void **ret_data)
>  {
> -	o2hb_arm_timeout((struct o2hb_region *)data);
> +	struct o2hb_region *reg = (struct o2hb_region *)data;
> +
> +	printk(KERN_NOTICE "o2hb: negotiate timeout approved by master node on region %s (%s).\n",
> +		config_item_name(&reg->hr_item), reg->hr_dev_name);
> +	o2hb_arm_timeout(reg);

Why mix the use of printk and mlog? Any rules to follow?

Thanks,
Eric

>  	return 0;
>  }
>  
> -- 
> 1.7.9.5
> 
> 


* [Ocfs2-devel] [PATCH 2/6] ocfs2: o2hb: add NEGO_TIMEOUT message
  2016-01-25  3:18   ` Eric Ren
@ 2016-01-25  4:28     ` Junxiao Bi
  2016-01-25  5:59       ` Eric Ren
  0 siblings, 1 reply; 34+ messages in thread
From: Junxiao Bi @ 2016-01-25  4:28 UTC (permalink / raw)
  To: ocfs2-devel

On 01/25/2016 11:18 AM, Eric Ren wrote:
>>  
>> > @@ -2039,13 +2086,30 @@ static struct config_item *o2hb_heartbeat_group_make_item(struct config_group *g
>> >  
>> >  	config_item_init_type_name(&reg->hr_item, name, &o2hb_region_type);
>> >  
>> > +	/* this is the same way to generate msg key as dlm, for local heartbeat,
>> > +	 * name is also the same, so make initial crc value different to avoid
>> > +	 * message key conflict.
>> > +	 */
>> > +	reg->hr_key = crc32_le(reg->hr_region_num + O2NM_MAX_REGIONS,
>> > +		name, strlen(name));
>> > +	INIT_LIST_HEAD(&reg->hr_handler_list);
> Looks no need to initilize ->hr_handler_list here?
Why? It is a list head.

Thanks,
Junxiao.
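For background, a kernel list_head is a circular doubly linked list whose empty state is a node pointing at itself, which is why hr_handler_list must be initialized before a handler is registered on it. A minimal user-space version of the same idea (demo names, not the kernel's list.h):

```c
/* User-space re-creation of the kernel's struct list_head: an empty
 * list is a node whose next and prev point back at itself. */
struct demo_list_head {
	struct demo_list_head *next, *prev;
};

/* Equivalent of INIT_LIST_HEAD(): without this, next/prev hold garbage
 * and the first list operation dereferences a wild pointer. */
static void demo_init_list_head(struct demo_list_head *head)
{
	head->next = head;
	head->prev = head;
}

static int demo_list_empty(const struct demo_list_head *head)
{
	return head->next == head;
}
```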


* [Ocfs2-devel] [PATCH 4/6] ocfs2: o2hb: add some user/debug log
  2016-01-25  3:28   ` Eric Ren
@ 2016-01-25  4:29     ` Junxiao Bi
  2016-01-25  6:00       ` Eric Ren
  0 siblings, 1 reply; 34+ messages in thread
From: Junxiao Bi @ 2016-01-25  4:29 UTC (permalink / raw)
  To: ocfs2-devel

On 01/25/2016 11:28 AM, Eric Ren wrote:
>> @@ -449,7 +470,11 @@ static int o2hb_nego_timeout_handler(struct o2net_msg *msg, u32 len, void *data,
>> >  static int o2hb_nego_approve_handler(struct o2net_msg *msg, u32 len, void *data,
>> >  				void **ret_data)
>> >  {
>> > -	o2hb_arm_timeout((struct o2hb_region *)data);
>> > +	struct o2hb_region *reg = (struct o2hb_region *)data;
>> > +
>> > +	printk(KERN_NOTICE "o2hb: negotiate timeout approved by master node on region %s (%s).\n",
>> > +		config_item_name(&reg->hr_item), reg->hr_dev_name);
>> > +	o2hb_arm_timeout(reg);
> Why mix the use of printk and mlog? Any rules to follow?
printk is the log for users, while mlog is the log for debugging.

Thanks,
Junxiao.


* [Ocfs2-devel] [PATCH 2/6] ocfs2: o2hb: add NEGO_TIMEOUT message
  2016-01-25  4:28     ` Junxiao Bi
@ 2016-01-25  5:59       ` Eric Ren
  0 siblings, 0 replies; 34+ messages in thread
From: Eric Ren @ 2016-01-25  5:59 UTC (permalink / raw)
  To: ocfs2-devel

On Mon, Jan 25, 2016 at 12:28:08PM +0800, Junxiao Bi wrote: 
> On 01/25/2016 11:18 AM, Eric Ren wrote:
> >>  
> >> > @@ -2039,13 +2086,30 @@ static struct config_item *o2hb_heartbeat_group_make_item(struct config_group *g
> >> >  
> >> >  	config_item_init_type_name(&reg->hr_item, name, &o2hb_region_type);
> >> >  
> >> > +	/* this is the same way to generate msg key as dlm, for local heartbeat,
> >> > +	 * name is also the same, so make initial crc value different to avoid
> >> > +	 * message key conflict.
> >> > +	 */
> >> > +	reg->hr_key = crc32_le(reg->hr_region_num + O2NM_MAX_REGIONS,
> >> > +		name, strlen(name));
> >> > +	INIT_LIST_HEAD(&reg->hr_handler_list);
> > Looks like there is no need to initialize ->hr_handler_list here?
> Why? It is list head.
Oh, sorry, you are right, it should be initialized.

Another trivial point: the label name "free_handler" sounds a little
strange. How about just "unregister" or "unregister_handler"?

Thanks,
Eric
> 
> Thanks,
> Junxiao.
> 


* [Ocfs2-devel] [PATCH 4/6] ocfs2: o2hb: add some user/debug log
  2016-01-25  4:29     ` Junxiao Bi
@ 2016-01-25  6:00       ` Eric Ren
  0 siblings, 0 replies; 34+ messages in thread
From: Eric Ren @ 2016-01-25  6:00 UTC (permalink / raw)
  To: ocfs2-devel

Hi Junxiao,

On Mon, Jan 25, 2016 at 12:29:05PM +0800, Junxiao Bi wrote: 
> On 01/25/2016 11:28 AM, Eric Ren wrote:
> >> @@ -449,7 +470,11 @@ static int o2hb_nego_timeout_handler(struct o2net_msg *msg, u32 len, void *data,
> >> >  static int o2hb_nego_approve_handler(struct o2net_msg *msg, u32 len, void *data,
> >> >  				void **ret_data)
> >> >  {
> >> > -	o2hb_arm_timeout((struct o2hb_region *)data);
> >> > +	struct o2hb_region *reg = (struct o2hb_region *)data;
> >> > +
> >> > +	printk(KERN_NOTICE "o2hb: negotiate timeout approved by master node on region %s (%s).\n",
> >> > +		config_item_name(&reg->hr_item), reg->hr_dev_name);
> >> > +	o2hb_arm_timeout(reg);
> > Why mix the use of printk and mlog? Any rules to follow?
> printk is the log for users, while mlog is the log for debugging.

Gotcha, thanks!

Eric

> 
> Thanks,
> Junxiao.
> 


* [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down
  2016-05-23 21:50 ` Andrew Morton
@ 2016-05-23 23:40   ` Mark Fasheh
  0 siblings, 0 replies; 34+ messages in thread
From: Mark Fasheh @ 2016-05-23 23:40 UTC (permalink / raw)
  To: ocfs2-devel

On Mon, May 23, 2016 at 02:50:10PM -0700, Andrew Morton wrote:
> On Wed,  2 Mar 2016 15:56:06 +0800 Junxiao Bi <junxiao.bi@oracle.com> wrote:
> 
> > 
> > Hi Mark,
> > 
> > This series of patches fixes the issue that when the storage goes
> > down, all nodes fence themselves due to write timeout.
> > With this patch set, all nodes will keep going until the storage
> > comes back online, except when one of the following happens, in
> > which case all nodes fence themselves as before:
> > 1. an I/O error is returned
> > 2. the network between nodes goes down
> > 3. a node panics
> 
> Guys, can we please do a quick triple-check on this series?  I'd like
> to unload them this week.  Thanks.
> 
> I'll send out the current version in a few secs.

I'll go through them and give my review.

Thanks,
	--Mark


--
Mark Fasheh


* [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down
  2016-03-02  7:56 Junxiao Bi
@ 2016-05-23 21:50 ` Andrew Morton
  2016-05-23 23:40   ` Mark Fasheh
  0 siblings, 1 reply; 34+ messages in thread
From: Andrew Morton @ 2016-05-23 21:50 UTC (permalink / raw)
  To: ocfs2-devel

On Wed,  2 Mar 2016 15:56:06 +0800 Junxiao Bi <junxiao.bi@oracle.com> wrote:

> 
> Hi Mark,
> 
> This series of patches fixes the issue that when the storage goes
> down, all nodes fence themselves due to write timeout.
> With this patch set, all nodes will keep going until the storage
> comes back online, except when one of the following happens, in
> which case all nodes fence themselves as before:
> 1. an I/O error is returned
> 2. the network between nodes goes down
> 3. a node panics

Guys, can we please do a quick triple-check on this series?  I'd like
to unload them this week.  Thanks.

I'll send out the current version in a few secs.


* [Ocfs2-devel] ocfs2: o2hb: not fence self if storage down
@ 2016-03-02  7:56 Junxiao Bi
  2016-05-23 21:50 ` Andrew Morton
  0 siblings, 1 reply; 34+ messages in thread
From: Junxiao Bi @ 2016-03-02  7:56 UTC (permalink / raw)
  To: ocfs2-devel


Hi Mark,

This series of patches fixes the issue that when the storage goes
down, all nodes fence themselves due to write timeout.
With this patch set, all nodes will keep going until the storage
comes back online, except when one of the following happens, in
which case all nodes fence themselves as before:
1. an I/O error is returned
2. the network between nodes goes down
3. a node panics

---
Changes from V1:
- code style fix.

Junxiao Bi (6):
      ocfs2: o2hb: add negotiate timer
      ocfs2: o2hb: add NEGO_TIMEOUT message
      ocfs2: o2hb: add NEGOTIATE_APPROVE message
      ocfs2: o2hb: add some user/debug log
      ocfs2: o2hb: don't negotiate if last hb fail
      ocfs2: o2hb: fix hb hung time

fs/ocfs2/cluster/heartbeat.c |  180 ++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 174 insertions(+), 6 deletions(-)


Thanks,
Junxiao.

