* [PATCH] Drivers: hv: vmbus: handle various crash scenarios
@ 2016-03-18 12:33 Vitaly Kuznetsov
  2016-03-18 15:20 ` Radim Krcmar
  2016-03-18 18:02 ` KY Srinivasan
  0 siblings, 2 replies; 10+ messages in thread
From: Vitaly Kuznetsov @ 2016-03-18 12:33 UTC (permalink / raw)
  To: devel
  Cc: linux-kernel, K. Y. Srinivasan, Haiyang Zhang, Alex Ng,
	Radim Krcmar, Cathy Avery

Kdump keeps biting. It turns out CHANNELMSG_UNLOAD_RESPONSE is always
delivered to CPU0 regardless of which CPU we send CHANNELMSG_UNLOAD
from. vmbus_wait_for_unload() doesn't account for this: in case we're
crashing on some other CPU while CPU0 is still alive and operational,
CHANNELMSG_UNLOAD_RESPONSE is delivered there and completes
vmbus_connection.unload_event, so our wait on the current CPU never
ends.

Do the following:
1) Check for completion_done() in the loop. In case interrupt handler is
   still alive we'll get the confirmation we need.

2) Always read CPU0's message page as CHANNELMSG_UNLOAD_RESPONSE will be
   delivered there. We can race with still-alive interrupt handler doing
   the same but we don't care as we're checking completion_done() now.

3) Cleanup message pages on all CPUs. This is required (at least for the
   current CPU as we're clearing CPU0 messages now but we may want to bring
   up additional CPUs on crash) as new messages won't be delivered till we
   consume what's pending. On boot we'll place message pages somewhere else
   and we won't be able to read stale messages.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
---
 drivers/hv/channel_mgmt.c | 30 +++++++++++++++++++++++++-----
 1 file changed, 25 insertions(+), 5 deletions(-)

diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
index b10e8f74..5f37057 100644
--- a/drivers/hv/channel_mgmt.c
+++ b/drivers/hv/channel_mgmt.c
@@ -512,14 +512,26 @@ static void init_vp_index(struct vmbus_channel *channel, const uuid_le *type_gui
 
 static void vmbus_wait_for_unload(void)
 {
-	int cpu = smp_processor_id();
-	void *page_addr = hv_context.synic_message_page[cpu];
+	int cpu;
+	void *page_addr = hv_context.synic_message_page[0];
 	struct hv_message *msg = (struct hv_message *)page_addr +
 				  VMBUS_MESSAGE_SINT;
 	struct vmbus_channel_message_header *hdr;
 	bool unloaded = false;
 
-	while (1) {
+	/*
+	 * CHANNELMSG_UNLOAD_RESPONSE is always delivered to CPU0. When we're
+	 * crashing on a different CPU let's hope that IRQ handler on CPU0 is
+	 * still functional and vmbus_unload_response() will complete
+	 * vmbus_connection.unload_event. If not, the last thing we can do is
+	 * read message page for CPU0 regardless of what CPU we're on.
+	 */
+	while (!unloaded) {
+		if (completion_done(&vmbus_connection.unload_event)) {
+			unloaded = true;
+			break;
+		}
+
 		if (READ_ONCE(msg->header.message_type) == HVMSG_NONE) {
 			mdelay(10);
 			continue;
@@ -530,9 +542,17 @@ static void vmbus_wait_for_unload(void)
 			unloaded = true;
 
 		vmbus_signal_eom(msg);
+	}
 
-		if (unloaded)
-			break;
+	/*
+	 * We're crashing and already got the UNLOAD_RESPONSE, cleanup all
+	 * maybe-pending messages on all CPUs to be able to receive new
+	 * messages after we reconnect.
+	 */
+	for_each_online_cpu(cpu) {
+		page_addr = hv_context.synic_message_page[cpu];
+		msg = (struct hv_message *)page_addr + VMBUS_MESSAGE_SINT;
+		msg->header.message_type = HVMSG_NONE;
 	}
 }
 
-- 
2.5.0


* Re: [PATCH] Drivers: hv: vmbus: handle various crash scenarios
  2016-03-18 12:33 [PATCH] Drivers: hv: vmbus: handle various crash scenarios Vitaly Kuznetsov
@ 2016-03-18 15:20 ` Radim Krcmar
  2016-03-18 15:53   ` Vitaly Kuznetsov
  2016-03-18 18:02 ` KY Srinivasan
  1 sibling, 1 reply; 10+ messages in thread
From: Radim Krcmar @ 2016-03-18 15:20 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: devel, linux-kernel, K. Y. Srinivasan, Haiyang Zhang, Alex Ng,
	Cathy Avery

2016-03-18 13:33+0100, Vitaly Kuznetsov:
> Kdump keeps biting. Turns out CHANNELMSG_UNLOAD_RESPONSE is always
> delivered to CPU0 regardless of what CPU we're sending CHANNELMSG_UNLOAD
> from. vmbus_wait_for_unload() doesn't account for the fact that in case
> we're crashing on some other CPU and CPU0 is still alive and operational
> CHANNELMSG_UNLOAD_RESPONSE will be delivered there completing
> vmbus_connection.unload_event, our wait on the current CPU will never
> end.

(Any chance of learning about this behavior from the spec?)

> Do the following:
> 1) Check for completion_done() in the loop. In case interrupt handler is
>    still alive we'll get the confirmation we need.
> 
> 2) Always read CPU0's message page as CHANNELMSG_UNLOAD_RESPONSE will be
>    delivered there. We can race with still-alive interrupt handler doing
>    the same but we don't care as we're checking completion_done() now.

(Yeah, seems better than hv_setup_vmbus_irq(NULL) or other hacks.)

> 3) Cleanup message pages on all CPUs. This is required (at least for the
>    current CPU as we're clearing CPU0 messages now but we may want to bring
>    up additional CPUs on crash) as new messages won't be delivered till we
>    consume what's pending. On boot we'll place message pages somewhere else
>    and we won't be able to read stale messages.

What if HV has already set the pending message bit on the current
message? Do we get any guarantee that clearing once after
UNLOAD_RESPONSE is enough?

> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> ---

I had a question about NULL below.  (Parenthesised rants aren't related
to r-b tag. ;)

>  drivers/hv/channel_mgmt.c | 30 +++++++++++++++++++++++++-----
>  1 file changed, 25 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
> index b10e8f74..5f37057 100644
> --- a/drivers/hv/channel_mgmt.c
> +++ b/drivers/hv/channel_mgmt.c
> @@ -512,14 +512,26 @@ static void init_vp_index(struct vmbus_channel *channel, const uuid_le *type_gui
>  
>  static void vmbus_wait_for_unload(void)
>  {
> -	int cpu = smp_processor_id();
> -	void *page_addr = hv_context.synic_message_page[cpu];
> +	int cpu;
> +	void *page_addr = hv_context.synic_message_page[0];
>  	struct hv_message *msg = (struct hv_message *)page_addr +
>  				  VMBUS_MESSAGE_SINT;
>  	struct vmbus_channel_message_header *hdr;
>  	bool unloaded = false;
>  
> -	while (1) {
> +	/*
> +	 * CHANNELMSG_UNLOAD_RESPONSE is always delivered to CPU0. When we're
> +	 * crashing on a different CPU let's hope that IRQ handler on CPU0 is
> +	 * still functional and vmbus_unload_response() will complete
> +	 * vmbus_connection.unload_event. If not, the last thing we can do is
> +	 * read message page for CPU0 regardless of what CPU we're on.
> +	 */
> +	while (!unloaded) {

(I'd feel a bit safer if this was bounded by some timeout, but all
 scenarios where this would make a difference are implausible ...
 queue_work() not working while the rest is fine is the best one.)

> +		if (completion_done(&vmbus_connection.unload_event)) {
> +			unloaded = true;

(No need to set unloaded when you break.)

> +			break;
> +		}
> +
>  		if (READ_ONCE(msg->header.message_type) == HVMSG_NONE) {
>  			mdelay(10);
>  			continue;
> @@ -530,9 +542,17 @@ static void vmbus_wait_for_unload(void)

(I'm not a huge fan of the unloaded variable; what about remembering the
 header/msgtype here ...

>  			unloaded = true;
>  
>  		vmbus_signal_eom(msg);

 and checking its value here?)

> +	}
>  
> -		if (unloaded)
> -			break;
> +	/*
> +	 * We're crashing and already got the UNLOAD_RESPONSE, cleanup all
> +	 * maybe-pending messages on all CPUs to be able to receive new
> +	 * messages after we reconnect.
> +	 */
> +	for_each_online_cpu(cpu) {
> +		page_addr = hv_context.synic_message_page[cpu];

Can't this be NULL?

> +		msg = (struct hv_message *)page_addr + VMBUS_MESSAGE_SINT;
> +		msg->header.message_type = HVMSG_NONE;
>  	}

(And, this block belongs to a separate function. ;])


* Re: [PATCH] Drivers: hv: vmbus: handle various crash scenarios
  2016-03-18 15:20 ` Radim Krcmar
@ 2016-03-18 15:53   ` Vitaly Kuznetsov
  2016-03-18 16:11     ` Radim Krcmar
  0 siblings, 1 reply; 10+ messages in thread
From: Vitaly Kuznetsov @ 2016-03-18 15:53 UTC (permalink / raw)
  To: Radim Krcmar
  Cc: devel, linux-kernel, K. Y. Srinivasan, Haiyang Zhang, Alex Ng,
	Cathy Avery

Radim Krcmar <rkrcmar@redhat.com> writes:

> 2016-03-18 13:33+0100, Vitaly Kuznetsov:
>> Kdump keeps biting. Turns out CHANNELMSG_UNLOAD_RESPONSE is always
>> delivered to CPU0 regardless of what CPU we're sending CHANNELMSG_UNLOAD
>> from. vmbus_wait_for_unload() doesn't account for the fact that in case
>> we're crashing on some other CPU and CPU0 is still alive and operational
>> CHANNELMSG_UNLOAD_RESPONSE will be delivered there completing
>> vmbus_connection.unload_event, our wait on the current CPU will never
>> end.
>
> (Any chance of learning about this behavior from the spec?)
>
>> Do the following:
>> 1) Check for completion_done() in the loop. In case interrupt handler is
>>    still alive we'll get the confirmation we need.
>> 
>> 2) Always read CPU0's message page as CHANNELMSG_UNLOAD_RESPONSE will be
>>    delivered there. We can race with still-alive interrupt handler doing
>>    the same but we don't care as we're checking completion_done() now.
>
> (Yeah, seems better than hv_setup_vmbus_irq(NULL) or other hacks.)
>
>> 3) Cleanup message pages on all CPUs. This is required (at least for the
>>    current CPU as we're clearing CPU0 messages now but we may want to bring
>>    up additional CPUs on crash) as new messages won't be delivered till we
>>    consume what's pending. On boot we'll place message pages somewhere else
>>    and we won't be able to read stale messages.
>
> What if HV already set the pending message bit on current message,
> do we get any guarantees that clearing once after UNLOAD_RESPONSE is
> enough?

I think so but I'd like to get a confirmation from K.Y./Alex/Haiyang.

>
>> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
>> ---
>
> I had a question about NULL below.  (Parenthesised rants aren't related
> to r-b tag. ;)
>
>>  drivers/hv/channel_mgmt.c | 30 +++++++++++++++++++++++++-----
>>  1 file changed, 25 insertions(+), 5 deletions(-)
>> 
>> diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
>> index b10e8f74..5f37057 100644
>> --- a/drivers/hv/channel_mgmt.c
>> +++ b/drivers/hv/channel_mgmt.c
>> @@ -512,14 +512,26 @@ static void init_vp_index(struct vmbus_channel *channel, const uuid_le *type_gui
>>  
>>  static void vmbus_wait_for_unload(void)
>>  {
>> -	int cpu = smp_processor_id();
>> -	void *page_addr = hv_context.synic_message_page[cpu];
>> +	int cpu;
>> +	void *page_addr = hv_context.synic_message_page[0];
>>  	struct hv_message *msg = (struct hv_message *)page_addr +
>>  				  VMBUS_MESSAGE_SINT;
>>  	struct vmbus_channel_message_header *hdr;
>>  	bool unloaded = false;
>>  
>> -	while (1) {
>> +	/*
>> +	 * CHANNELMSG_UNLOAD_RESPONSE is always delivered to CPU0. When we're
>> +	 * crashing on a different CPU let's hope that IRQ handler on CPU0 is
>> +	 * still functional and vmbus_unload_response() will complete
>> +	 * vmbus_connection.unload_event. If not, the last thing we can do is
>> +	 * read message page for CPU0 regardless of what CPU we're on.
>> +	 */
>> +	while (!unloaded) {
>
> (I'd feel a bit safer if this was bounded by some timeout, but all
>  scenarios where this would make a difference are unplausible ...
>  queue_work() not working while the rest is fine is the best one.)
>
>> +		if (completion_done(&vmbus_connection.unload_event)) {
>> +			unloaded = true;
>
> (No need to set unloaded when you break.)
>
>> +			break;
>> +		}
>> +
>>  		if (READ_ONCE(msg->header.message_type) == HVMSG_NONE) {
>>  			mdelay(10);
>>  			continue;
>> @@ -530,9 +542,17 @@ static void vmbus_wait_for_unload(void)
>
> (I'm not a huge fan of the unloaded variable; what about remembering the
>  header/msgtype here ...
>
>>  			unloaded = true;
>>  
>>  		vmbus_signal_eom(msg);
>
>  and checking its value here?)
>

Sure, but we'll have to use a variable for that ... why would it be
better than 'unloaded'?

>> +	}
>>  
>> -		if (unloaded)
>> -			break;
>> +	/*
>> +	 * We're crashing and already got the UNLOAD_RESPONSE, cleanup all
>> +	 * maybe-pending messages on all CPUs to be able to receive new
>> +	 * messages after we reconnect.
>> +	 */
>> +	for_each_online_cpu(cpu) {
>> +		page_addr = hv_context.synic_message_page[cpu];
>
> Can't this be NULL?

It can't, we allocate it from hv_synic_alloc() (and we don't support cpu
onlining/offlining on WS2012R2+).

>
>> +		msg = (struct hv_message *)page_addr + VMBUS_MESSAGE_SINT;
>> +		msg->header.message_type = HVMSG_NONE;
>>  	}
>
> (And, this block belongs to a separate function. ;])

I thought about moving it to hv_crash_handler() but then I decided to
leave it here as the need for this fixup is rather an artifact of how we
receive the message. But I'm flexible here.)

-- 
  Vitaly


* Re: [PATCH] Drivers: hv: vmbus: handle various crash scenarios
  2016-03-18 15:53   ` Vitaly Kuznetsov
@ 2016-03-18 16:11     ` Radim Krcmar
  0 siblings, 0 replies; 10+ messages in thread
From: Radim Krcmar @ 2016-03-18 16:11 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: devel, linux-kernel, K. Y. Srinivasan, Haiyang Zhang, Alex Ng,
	Cathy Avery

2016-03-18 16:53+0100, Vitaly Kuznetsov:
> Radim Krcmar <rkrcmar@redhat.com> writes:
>> 2016-03-18 13:33+0100, Vitaly Kuznetsov:
>>> @@ -530,9 +542,17 @@ static void vmbus_wait_for_unload(void)
>>
>> (I'm not a huge fan of the unloaded variable; what about remembering the
>>  header/msgtype here ...
>>
>>>  			unloaded = true;
>>>  
>>>  		vmbus_signal_eom(msg);
>>
>>  and checking its value here?)
>>
> 
> Sure, but we'll have to use a variable for that ... why would it be
> better than 'unloaded'?

It's easier to understand IMO,

  x = mem       |  x = mem
  if *x == sth  |  z = *x
    u = true    |
  eoi()         |  eoi()
  if u          |  if z == sth
    break       |   break

And you can replace msg with the new variable,

 z = *mem
 eoi()
 if z == sth
   break

>> Can't this be NULL?
> 
> It can't, we allocate it from hv_synic_alloc() (and we don't support cpu
> onlining/offlining on WS2012R2+).

Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>

Thanks.

>>> +		msg = (struct hv_message *)page_addr + VMBUS_MESSAGE_SINT;
>>> +		msg->header.message_type = HVMSG_NONE;
>>>  	}
>>
>> (And, this block belongs to a separate function. ;])
> 
> I thought about moving it to hv_crash_handler() but then I decided to
> leave it here as the need for this fixup is rather an artifact of how we
> receive the message. But I'm flexible here.)

Ok, clearing all VCPUs made me think that it would be generally useful.


* RE: [PATCH] Drivers: hv: vmbus: handle various crash scenarios
  2016-03-18 12:33 [PATCH] Drivers: hv: vmbus: handle various crash scenarios Vitaly Kuznetsov
  2016-03-18 15:20 ` Radim Krcmar
@ 2016-03-18 18:02 ` KY Srinivasan
  2016-03-21  7:51   ` Vitaly Kuznetsov
  1 sibling, 1 reply; 10+ messages in thread
From: KY Srinivasan @ 2016-03-18 18:02 UTC (permalink / raw)
  To: Vitaly Kuznetsov, devel
  Cc: linux-kernel, Haiyang Zhang, Alex Ng (LIS), Radim Krcmar, Cathy Avery



> -----Original Message-----
> From: Vitaly Kuznetsov [mailto:vkuznets@redhat.com]
> Sent: Friday, March 18, 2016 5:33 AM
> To: devel@linuxdriverproject.org
> Cc: linux-kernel@vger.kernel.org; KY Srinivasan <kys@microsoft.com>;
> Haiyang Zhang <haiyangz@microsoft.com>; Alex Ng (LIS)
> <alexng@microsoft.com>; Radim Krcmar <rkrcmar@redhat.com>; Cathy
> Avery <cavery@redhat.com>
> Subject: [PATCH] Drivers: hv: vmbus: handle various crash scenarios
> 
> Kdump keeps biting. Turns out CHANNELMSG_UNLOAD_RESPONSE is always
> delivered to CPU0 regardless of what CPU we're sending CHANNELMSG_UNLOAD
> from. vmbus_wait_for_unload() doesn't account for the fact that in case
> we're crashing on some other CPU and CPU0 is still alive and operational
> CHANNELMSG_UNLOAD_RESPONSE will be delivered there completing
> vmbus_connection.unload_event, our wait on the current CPU will never
> end.

What was the host you were testing on?

K. Y
> 
> Do the following:
> 1) Check for completion_done() in the loop. In case interrupt handler is
>    still alive we'll get the confirmation we need.
> 
> 2) Always read CPU0's message page as CHANNELMSG_UNLOAD_RESPONSE will be
>    delivered there. We can race with still-alive interrupt handler doing
>    the same but we don't care as we're checking completion_done() now.
> 
> 3) Cleanup message pages on all CPUs. This is required (at least for the
>    current CPU as we're clearing CPU0 messages now but we may want to bring
>    up additional CPUs on crash) as new messages won't be delivered till we
>    consume what's pending. On boot we'll place message pages somewhere else
>    and we won't be able to read stale messages.
> 
> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> ---
>  drivers/hv/channel_mgmt.c | 30 +++++++++++++++++++++++++-----
>  1 file changed, 25 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
> index b10e8f74..5f37057 100644
> --- a/drivers/hv/channel_mgmt.c
> +++ b/drivers/hv/channel_mgmt.c
> @@ -512,14 +512,26 @@ static void init_vp_index(struct vmbus_channel *channel, const uuid_le *type_gui
> 
>  static void vmbus_wait_for_unload(void)
>  {
> -	int cpu = smp_processor_id();
> -	void *page_addr = hv_context.synic_message_page[cpu];
> +	int cpu;
> +	void *page_addr = hv_context.synic_message_page[0];
>  	struct hv_message *msg = (struct hv_message *)page_addr +
>  				  VMBUS_MESSAGE_SINT;
>  	struct vmbus_channel_message_header *hdr;
>  	bool unloaded = false;
> 
> -	while (1) {
> +	/*
> +	 * CHANNELMSG_UNLOAD_RESPONSE is always delivered to CPU0. When we're
> +	 * crashing on a different CPU let's hope that IRQ handler on CPU0 is
> +	 * still functional and vmbus_unload_response() will complete
> +	 * vmbus_connection.unload_event. If not, the last thing we can do is
> +	 * read message page for CPU0 regardless of what CPU we're on.
> +	 */
> +	while (!unloaded) {
> +		if (completion_done(&vmbus_connection.unload_event)) {
> +			unloaded = true;
> +			break;
> +		}
> +
>  		if (READ_ONCE(msg->header.message_type) == HVMSG_NONE) {
>  			mdelay(10);
>  			continue;
> @@ -530,9 +542,17 @@ static void vmbus_wait_for_unload(void)
>  			unloaded = true;
> 
>  		vmbus_signal_eom(msg);
> +	}
> 
> -		if (unloaded)
> -			break;
> +	/*
> +	 * We're crashing and already got the UNLOAD_RESPONSE, cleanup all
> +	 * maybe-pending messages on all CPUs to be able to receive new
> +	 * messages after we reconnect.
> +	 */
> +	for_each_online_cpu(cpu) {
> +		page_addr = hv_context.synic_message_page[cpu];
> +		msg = (struct hv_message *)page_addr + VMBUS_MESSAGE_SINT;
> +		msg->header.message_type = HVMSG_NONE;
>  	}
>  }
> 
> --
> 2.5.0


* Re: [PATCH] Drivers: hv: vmbus: handle various crash scenarios
  2016-03-18 18:02 ` KY Srinivasan
@ 2016-03-21  7:51   ` Vitaly Kuznetsov
  2016-03-21 22:44     ` KY Srinivasan
  0 siblings, 1 reply; 10+ messages in thread
From: Vitaly Kuznetsov @ 2016-03-21  7:51 UTC (permalink / raw)
  To: KY Srinivasan
  Cc: devel, linux-kernel, Haiyang Zhang, Alex Ng (LIS),
	Radim Krcmar, Cathy Avery

KY Srinivasan <kys@microsoft.com> writes:

>> -----Original Message-----
>> From: Vitaly Kuznetsov [mailto:vkuznets@redhat.com]
>> Sent: Friday, March 18, 2016 5:33 AM
>> To: devel@linuxdriverproject.org
>> Cc: linux-kernel@vger.kernel.org; KY Srinivasan <kys@microsoft.com>;
>> Haiyang Zhang <haiyangz@microsoft.com>; Alex Ng (LIS)
>> <alexng@microsoft.com>; Radim Krcmar <rkrcmar@redhat.com>; Cathy
>> Avery <cavery@redhat.com>
>> Subject: [PATCH] Drivers: hv: vmbus: handle various crash scenarios
>> 
>> Kdump keeps biting. Turns out CHANNELMSG_UNLOAD_RESPONSE is always
>> delivered to CPU0 regardless of what CPU we're sending CHANNELMSG_UNLOAD
>> from. vmbus_wait_for_unload() doesn't account for the fact that in case
>> we're crashing on some other CPU and CPU0 is still alive and operational
>> CHANNELMSG_UNLOAD_RESPONSE will be delivered there completing
>> vmbus_connection.unload_event, our wait on the current CPU will never
>> end.
>
> What was the host you were testing on?
>

I was testing on both 2012R2 and 2016TP4. The bug is easily reproducible
by forcing a crash on a secondary CPU, e.g.:

# cat crash.sh
#! /bin/sh
echo c > /proc/sysrq-trigger

# taskset -c 1 ./crash.sh

>> 
>> Do the following:
>> 1) Check for completion_done() in the loop. In case interrupt handler is
>>    still alive we'll get the confirmation we need.
>> 
>> 2) Always read CPU0's message page as CHANNELMSG_UNLOAD_RESPONSE will be
>>    delivered there. We can race with still-alive interrupt handler doing
>>    the same but we don't care as we're checking completion_done() now.
>> 
>> 3) Cleanup message pages on all CPUs. This is required (at least for the
>>    current CPU as we're clearing CPU0 messages now but we may want to bring
>>    up additional CPUs on crash) as new messages won't be delivered till we
>>    consume what's pending. On boot we'll place message pages somewhere else
>>    and we won't be able to read stale messages.
>> 
>> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
>> ---
>>  drivers/hv/channel_mgmt.c | 30 +++++++++++++++++++++++++-----
>>  1 file changed, 25 insertions(+), 5 deletions(-)
>> 
>> diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
>> index b10e8f74..5f37057 100644
>> --- a/drivers/hv/channel_mgmt.c
>> +++ b/drivers/hv/channel_mgmt.c
>> @@ -512,14 +512,26 @@ static void init_vp_index(struct vmbus_channel *channel, const uuid_le *type_gui
>> 
>>  static void vmbus_wait_for_unload(void)
>>  {
>> -	int cpu = smp_processor_id();
>> -	void *page_addr = hv_context.synic_message_page[cpu];
>> +	int cpu;
>> +	void *page_addr = hv_context.synic_message_page[0];
>>  	struct hv_message *msg = (struct hv_message *)page_addr +
>>  				  VMBUS_MESSAGE_SINT;
>>  	struct vmbus_channel_message_header *hdr;
>>  	bool unloaded = false;
>> 
>> -	while (1) {
>> +	/*
>> +	 * CHANNELMSG_UNLOAD_RESPONSE is always delivered to CPU0. When we're
>> +	 * crashing on a different CPU let's hope that IRQ handler on CPU0 is
>> +	 * still functional and vmbus_unload_response() will complete
>> +	 * vmbus_connection.unload_event. If not, the last thing we can do is
>> +	 * read message page for CPU0 regardless of what CPU we're on.
>> +	 */
>> +	while (!unloaded) {
>> +		if (completion_done(&vmbus_connection.unload_event)) {
>> +			unloaded = true;
>> +			break;
>> +		}
>> +
>>  		if (READ_ONCE(msg->header.message_type) == HVMSG_NONE) {
>>  			mdelay(10);
>>  			continue;
>> @@ -530,9 +542,17 @@ static void vmbus_wait_for_unload(void)
>>  			unloaded = true;
>> 
>>  		vmbus_signal_eom(msg);
>> +	}
>> 
>> -		if (unloaded)
>> -			break;
>> +	/*
>> +	 * We're crashing and already got the UNLOAD_RESPONSE, cleanup all
>> +	 * maybe-pending messages on all CPUs to be able to receive new
>> +	 * messages after we reconnect.
>> +	 */
>> +	for_each_online_cpu(cpu) {
>> +		page_addr = hv_context.synic_message_page[cpu];
>> +		msg = (struct hv_message *)page_addr + VMBUS_MESSAGE_SINT;
>> +		msg->header.message_type = HVMSG_NONE;
>>  	}
>>  }
>> 
>> --
>> 2.5.0

-- 
  Vitaly


* RE: [PATCH] Drivers: hv: vmbus: handle various crash scenarios
  2016-03-21  7:51   ` Vitaly Kuznetsov
@ 2016-03-21 22:44     ` KY Srinivasan
  2016-03-22  9:47       ` Vitaly Kuznetsov
  2016-03-22 14:00       ` Vitaly Kuznetsov
  0 siblings, 2 replies; 10+ messages in thread
From: KY Srinivasan @ 2016-03-21 22:44 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: devel, linux-kernel, Haiyang Zhang, Alex Ng (LIS),
	Radim Krcmar, Cathy Avery



> -----Original Message-----
> From: Vitaly Kuznetsov [mailto:vkuznets@redhat.com]
> Sent: Monday, March 21, 2016 12:52 AM
> To: KY Srinivasan <kys@microsoft.com>
> Cc: devel@linuxdriverproject.org; linux-kernel@vger.kernel.org; Haiyang
> Zhang <haiyangz@microsoft.com>; Alex Ng (LIS) <alexng@microsoft.com>;
> Radim Krcmar <rkrcmar@redhat.com>; Cathy Avery <cavery@redhat.com>
> Subject: Re: [PATCH] Drivers: hv: vmbus: handle various crash scenarios
> 
> KY Srinivasan <kys@microsoft.com> writes:
> 
> >> -----Original Message-----
> >> From: Vitaly Kuznetsov [mailto:vkuznets@redhat.com]
> >> Sent: Friday, March 18, 2016 5:33 AM
> >> To: devel@linuxdriverproject.org
> >> Cc: linux-kernel@vger.kernel.org; KY Srinivasan <kys@microsoft.com>;
> >> Haiyang Zhang <haiyangz@microsoft.com>; Alex Ng (LIS)
> >> <alexng@microsoft.com>; Radim Krcmar <rkrcmar@redhat.com>; Cathy
> >> Avery <cavery@redhat.com>
> >> Subject: [PATCH] Drivers: hv: vmbus: handle various crash scenarios
> >>
> >> Kdump keeps biting. Turns out CHANNELMSG_UNLOAD_RESPONSE is always
> >> delivered to CPU0 regardless of what CPU we're sending CHANNELMSG_UNLOAD
> >> from. vmbus_wait_for_unload() doesn't account for the fact that in case
> >> we're crashing on some other CPU and CPU0 is still alive and operational
> >> CHANNELMSG_UNLOAD_RESPONSE will be delivered there completing
> >> vmbus_connection.unload_event, our wait on the current CPU will never
> >> end.
> >
> > What was the host you were testing on?
> >
> 
> I was testing on both 2012R2 and 2016TP4. The bug is easily reproducible
> by forcing crash on a secondary CPU, e.g.:

Prior to 2012 R2, all messages would be delivered on CPU0, and this includes CHANNELMSG_UNLOAD_RESPONSE;
for this reason we don't support kexec on pre-2012 R2 hosts. From 2012 R2 on, all vmbus
messages (responses) will be delivered on the CPU that we initially set up - look at the code in
vmbus_negotiate_version(). So on post-2012 R2 hosts, the CHANNELMSG_UNLOAD_RESPONSE
will be delivered on the CPU where we initiated contact with the host - the CHANNELMSG_INITIATE_CONTACT message.
So maybe we can stash away the CPU on which we made the initial contact and poll the state on that CPU
to make forward progress in the case of crash.

Regards,

K. Y

 


* Re: [PATCH] Drivers: hv: vmbus: handle various crash scenarios
  2016-03-21 22:44     ` KY Srinivasan
@ 2016-03-22  9:47       ` Vitaly Kuznetsov
  2016-03-22 14:00       ` Vitaly Kuznetsov
  1 sibling, 0 replies; 10+ messages in thread
From: Vitaly Kuznetsov @ 2016-03-22  9:47 UTC (permalink / raw)
  To: KY Srinivasan
  Cc: devel, linux-kernel, Haiyang Zhang, Alex Ng (LIS),
	Radim Krcmar, Cathy Avery

KY Srinivasan <kys@microsoft.com> writes:

>> -----Original Message-----
>> From: Vitaly Kuznetsov [mailto:vkuznets@redhat.com]
>> Sent: Monday, March 21, 2016 12:52 AM
>> To: KY Srinivasan <kys@microsoft.com>
>> Cc: devel@linuxdriverproject.org; linux-kernel@vger.kernel.org; Haiyang
>> Zhang <haiyangz@microsoft.com>; Alex Ng (LIS) <alexng@microsoft.com>;
>> Radim Krcmar <rkrcmar@redhat.com>; Cathy Avery <cavery@redhat.com>
>> Subject: Re: [PATCH] Drivers: hv: vmbus: handle various crash scenarios
>> 
>> KY Srinivasan <kys@microsoft.com> writes:
>> 
>> >> -----Original Message-----
>> >> From: Vitaly Kuznetsov [mailto:vkuznets@redhat.com]
>> >> Sent: Friday, March 18, 2016 5:33 AM
>> >> To: devel@linuxdriverproject.org
>> >> Cc: linux-kernel@vger.kernel.org; KY Srinivasan <kys@microsoft.com>;
>> >> Haiyang Zhang <haiyangz@microsoft.com>; Alex Ng (LIS)
>> >> <alexng@microsoft.com>; Radim Krcmar <rkrcmar@redhat.com>; Cathy
>> >> Avery <cavery@redhat.com>
>> >> Subject: [PATCH] Drivers: hv: vmbus: handle various crash scenarios
>> >>
>> >> Kdump keeps biting. Turns out CHANNELMSG_UNLOAD_RESPONSE is always
>> >> delivered to CPU0 regardless of what CPU we're sending CHANNELMSG_UNLOAD
>> >> from. vmbus_wait_for_unload() doesn't account for the fact that in case
>> >> we're crashing on some other CPU and CPU0 is still alive and operational
>> >> CHANNELMSG_UNLOAD_RESPONSE will be delivered there completing
>> >> vmbus_connection.unload_event, our wait on the current CPU will never
>> >> end.
>> >
>> > What was the host you were testing on?
>> >
>> 
>> I was testing on both 2012R2 and 2016TP4. The bug is easily reproducible
>> by forcing crash on a secondary CPU, e.g.:
>
> Prior to 2012 R2, all messages would be delivered on CPU0, and this includes CHANNELMSG_UNLOAD_RESPONSE.
> For this reason we don't support kexec on pre-2012 R2 hosts. From 2012 R2 on, all vmbus
> messages (responses) will be delivered on the CPU that we initially set up - look at the code in
> vmbus_negotiate_version().

Ok, missed that. In that case we need to remember which CPU it was --
I'll add this in v2.

> So on post 2012 R2 hosts, the response to CHANNELMSG_UNLOAD_RESPONSE
> will be delivered on the CPU where we initiate the contact with the host - CHANNELMSG_INITIATE_CONTACT message.
> So, maybe we can stash away the CPU on which we made the initial contact and poll the state on that CPU
> to make forward progress in the case of crash.

Yes, we can't have any expectations about other CPUs on crash as they can
be in any state (also crashing, hanging on some mutex/spinlock/...), so
we need to use the current CPU only. I'll fix and resend.

Thanks!

-- 
  Vitaly


* Re: [PATCH] Drivers: hv: vmbus: handle various crash scenarios
  2016-03-21 22:44     ` KY Srinivasan
  2016-03-22  9:47       ` Vitaly Kuznetsov
@ 2016-03-22 14:00       ` Vitaly Kuznetsov
  2016-03-22 14:18         ` KY Srinivasan
  1 sibling, 1 reply; 10+ messages in thread
From: Vitaly Kuznetsov @ 2016-03-22 14:00 UTC (permalink / raw)
  To: KY Srinivasan
  Cc: devel, linux-kernel, Haiyang Zhang, Alex Ng (LIS),
	Radim Krcmar, Cathy Avery

[-- Attachment #1: Type: text/plain, Size: 2905 bytes --]

KY Srinivasan <kys@microsoft.com> writes:

>> -----Original Message-----
>> From: Vitaly Kuznetsov [mailto:vkuznets@redhat.com]
>> Sent: Monday, March 21, 2016 12:52 AM
>> To: KY Srinivasan <kys@microsoft.com>
>> Cc: devel@linuxdriverproject.org; linux-kernel@vger.kernel.org; Haiyang
>> Zhang <haiyangz@microsoft.com>; Alex Ng (LIS) <alexng@microsoft.com>;
>> Radim Krcmar <rkrcmar@redhat.com>; Cathy Avery <cavery@redhat.com>
>> Subject: Re: [PATCH] Drivers: hv: vmbus: handle various crash scenarios
>> 
>> KY Srinivasan <kys@microsoft.com> writes:
>> 
>> >> -----Original Message-----
>> >> From: Vitaly Kuznetsov [mailto:vkuznets@redhat.com]
>> >> Sent: Friday, March 18, 2016 5:33 AM
>> >> To: devel@linuxdriverproject.org
>> >> Cc: linux-kernel@vger.kernel.org; KY Srinivasan <kys@microsoft.com>;
>> >> Haiyang Zhang <haiyangz@microsoft.com>; Alex Ng (LIS)
>> >> <alexng@microsoft.com>; Radim Krcmar <rkrcmar@redhat.com>; Cathy
>> >> Avery <cavery@redhat.com>
>> >> Subject: [PATCH] Drivers: hv: vmbus: handle various crash scenarios
>> >>
>> >> Kdump keeps biting. Turns out CHANNELMSG_UNLOAD_RESPONSE is always
>> >> delivered to CPU0 regardless of what CPU we're sending CHANNELMSG_UNLOAD
>> >> from. vmbus_wait_for_unload() doesn't account for the fact that in case
>> >> we're crashing on some other CPU and CPU0 is still alive and operational
>> >> CHANNELMSG_UNLOAD_RESPONSE will be delivered there completing
>> >> vmbus_connection.unload_event, our wait on the current CPU will never
>> >> end.
>> >
>> > What was the host you were testing on?
>> >
>> 
>> I was testing on both 2012R2 and 2016TP4. The bug is easily reproducible
>> by forcing crash on a secondary CPU, e.g.:
>
> Prior to 2012 R2, all messages would be delivered on CPU0, and this includes
> CHANNELMSG_UNLOAD_RESPONSE. For this reason we don't support kexec on
> pre-2012 R2 hosts. From 2012 R2 on, all vmbus messages (responses) will be
> delivered on the CPU that we initially set up - look at the code in
> vmbus_negotiate_version(). So on post-2012 R2 hosts, the response to
> CHANNELMSG_UNLOAD will be delivered on the CPU where we initiate the
> contact with the host - the CPU that sends the
> CHANNELMSG_INITIATE_CONTACT message.

Unfortunately there is a discrepancy between WS2012R2 and WS2016TP4. On
WS2012R2 what you're saying is true and all messages, including
CHANNELMSG_UNLOAD_RESPONSE, are delivered to the CPU we used for initial
contact. On WS2016TP4, CHANNELMSG_UNLOAD_RESPONSE seems to be a special
case and is always delivered to CPU0, no matter which CPU we used for
initial contact. This may be a host bug. You can use the attached patch
to see the issue.

For now I can suggest we check message pages for all CPUs from
vmbus_wait_for_unload(). We can race with other CPUs again but we don't
care as we're checking for completion_done() in the loop as well. I'll
try this approach.
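To make the intended ordering explicit, here is a minimal userspace sketch
(plain C, not kernel code) of that approach: check the completion first,
then poll the initiating CPU's message slot, and finally clear every CPU's
slot so fresh messages can be delivered after reconnect. The names
(synic_message_page, unload_event_done, wait_for_unload) are illustrative
stand-ins, not the real driver symbols:

```c
#include <stdbool.h>

#define NR_CPUS                     4
#define HVMSG_NONE                  0
#define CHANNELMSG_UNLOAD_RESPONSE 17

/* One message slot per CPU, standing in for the SynIC message pages. */
static int  synic_message_page[NR_CPUS];
/* Stands in for completion_done(&vmbus_connection.unload_event). */
static bool unload_event_done;

static void wait_for_unload(int init_cpu)
{
	int cpu;

	for (;;) {
		/* A still-alive IRQ handler may have completed the event. */
		if (unload_event_done)
			break;

		if (synic_message_page[init_cpu] == HVMSG_NONE)
			continue; /* the kernel would mdelay(10) here */

		int msgtype = synic_message_page[init_cpu];
		synic_message_page[init_cpu] = HVMSG_NONE; /* "signal EOM" */

		if (msgtype == CHANNELMSG_UNLOAD_RESPONSE)
			break;
	}

	/* Clear all pages so new messages can arrive after reconnect. */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		synic_message_page[cpu] = HVMSG_NONE;
}
```

Reading the slot before testing the message type mirrors consuming the
message and signalling EOM in one step, so a racing IRQ handler on another
CPU can't leave us waiting on a message it already consumed: the
completion check at the top of the loop catches that case.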

-- 
  Vitaly


[-- Attachment #2: 0001-Drivers-hv-vmbus-handle-various-crash-scenarios.patch --]
[-- Type: text/x-patch, Size: 6176 bytes --]

From 27170c1bb8f21f7b20c1716c1df65e4812b421f8 Mon Sep 17 00:00:00 2001
From: Vitaly Kuznetsov <vkuznets@redhat.com>
Date: Thu, 17 Mar 2016 14:41:07 +0100
Subject: [PATCH] Drivers: hv: vmbus: handle various crash scenarios

Kdump keeps biting. Turns out CHANNELMSG_UNLOAD_RESPONSE is always
delivered to the CPU which was used to initiate contact, regardless of
which CPU we send CHANNELMSG_UNLOAD from. vmbus_wait_for_unload() doesn't
account for the fact that if we're crashing on some other CPU while the
CPU which was used to initiate contact is still alive and operational,
CHANNELMSG_UNLOAD_RESPONSE will be delivered there, completing
vmbus_connection.unload_event, and our wait on the current CPU will never
end.

Do the following:
1) Remember the CPU we used to initiate contact in vmbus_connection.

2) Check for completion_done() in the loop. In case the interrupt handler
   is still alive we'll get the confirmation we need.

3) Always read the init_cpu's message page, as CHANNELMSG_UNLOAD_RESPONSE
   will be delivered there. We can race with a still-alive interrupt
   handler doing the same, but we don't care as we're checking
   completion_done() now.

4) Clean up message pages on all CPUs. This is required (at least for the
   current CPU, as we're now clearing some other CPU's messages, but we
   may also want to bring up additional CPUs on crash) because new
   messages won't be delivered until we consume what's pending. On boot
   we'll place message pages somewhere else and won't be able to read the
   stale messages.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
---
Changes since v1:
- Use init_cpu instead of CPU0 [K. Y. Srinivasan]
- Style changes in vmbus_wait_for_unload [Radim Krcmar]
---
 drivers/hv/channel_mgmt.c | 39 ++++++++++++++++++++++++++++++++-------
 drivers/hv/connection.c   | 10 +++++++---
 drivers/hv/hyperv_vmbus.h |  3 +++
 drivers/hv/vmbus_drv.c    |  1 +
 4 files changed, 43 insertions(+), 10 deletions(-)

diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
index 38b682ba..2fa526d 100644
--- a/drivers/hv/channel_mgmt.c
+++ b/drivers/hv/channel_mgmt.c
@@ -597,28 +597,53 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
 
 static void vmbus_wait_for_unload(void)
 {
-	int cpu = smp_processor_id();
-	void *page_addr = hv_context.synic_message_page[cpu];
+	int cpu;
+	void *page_addr =
+		hv_context.synic_message_page[vmbus_connection.init_cpu];
 	struct hv_message *msg = (struct hv_message *)page_addr +
 				  VMBUS_MESSAGE_SINT;
 	struct vmbus_channel_message_header *hdr;
-	bool unloaded = false;
+	enum vmbus_channel_message_type msgtype;
+
+	printk("vmbus_wait_for_unload: %d (%d)\n", vmbus_connection.init_cpu, smp_processor_id());
 
+	/*
+	 * CHANNELMSG_UNLOAD_RESPONSE is always delivered to the CPU which was
+	 * used to initiate contact (see vmbus_negotiate_version()). When we're
+	 * crashing on a different CPU let's hope that IRQ handler on that CPU
+	 * is still functional and vmbus_unload_response() will complete
+	 * vmbus_connection.unload_event. If not, the last thing we can do is
+	 * read message page for that CPU regardless of what CPU we're on.
+	 */
 	while (1) {
+		if (completion_done(&vmbus_connection.unload_event))
+			break;
+
 		if (READ_ONCE(msg->header.message_type) == HVMSG_NONE) {
 			mdelay(10);
 			continue;
 		}
 
 		hdr = (struct vmbus_channel_message_header *)msg->u.payload;
-		if (hdr->msgtype == CHANNELMSG_UNLOAD_RESPONSE)
-			unloaded = true;
-
+		msgtype = hdr->msgtype;
 		vmbus_signal_eom(msg);
 
-		if (unloaded)
+		if (msgtype == CHANNELMSG_UNLOAD_RESPONSE)
 			break;
 	}
+
+	/*
+	 * We're crashing and already got the UNLOAD_RESPONSE, cleanup all
+	 * maybe-pending messages on all CPUs to be able to receive new
+	 * messages after we reconnect.
+	 */
+	for_each_online_cpu(cpu) {
+		page_addr = hv_context.synic_message_page[cpu];
+		msg = (struct hv_message *)page_addr + VMBUS_MESSAGE_SINT;
+		msg->header.message_type = HVMSG_NONE;
+	}
+
+	printk("vmbus_wait_for_unload done: %d (%d)\n", vmbus_connection.init_cpu, smp_processor_id());
 }
 
 /*
diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
index d02f137..4ab91b8 100644
--- a/drivers/hv/connection.c
+++ b/drivers/hv/connection.c
@@ -70,7 +70,7 @@ static __u32 vmbus_get_next_version(__u32 current_version)
 static int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo,
 					__u32 version)
 {
-	int ret = 0;
+	int ret = 0, cpu = smp_processor_id();
 	struct vmbus_channel_initiate_contact *msg;
 	unsigned long flags;
 
@@ -91,12 +91,16 @@ static int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo,
 	 * For post win8 hosts, we support receiving channel messagges on
 	 * all the CPUs. This is needed for kexec to work correctly where
 	 * the CPU attempting to connect may not be CPU 0.
+	 * We need to remember the CPU we use here as in case of unload
+	 * CHANNELMSG_UNLOAD_RESPONSE will be delivered to this CPU.
 	 */
 	if (version >= VERSION_WIN8_1) {
-		msg->target_vcpu = hv_context.vp_index[get_cpu()];
-		put_cpu();
+		printk("vmbus_negotiate_version: %d %d\n", cpu, hv_context.vp_index[cpu]);
+		msg->target_vcpu = hv_context.vp_index[cpu];
+		vmbus_connection.init_cpu = cpu;
 	} else {
 		msg->target_vcpu = 0;
+		vmbus_connection.init_cpu = 0;
 	}
 
 	/*
diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h
index 12321b9..3adf30b 100644
--- a/drivers/hv/hyperv_vmbus.h
+++ b/drivers/hv/hyperv_vmbus.h
@@ -563,6 +563,9 @@ struct vmbus_connection {
 
 	atomic_t next_gpadl_handle;
 
+	/* CPU which was used to initiate contact */
+	int init_cpu;
+
 	struct completion  unload_event;
 	/*
 	 * Represents channel interrupts. Each bit position represents a
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index 64713ff..570dd639 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -732,6 +732,7 @@ void vmbus_on_msg_dpc(unsigned long data)
 		goto msg_handled;
 	}
 
+	printk("vmbus_on_msg_dpc: %d on %d\n", hdr->msgtype, cpu);
 	entry = &channel_message_table[hdr->msgtype];
 	if (entry->handler_type	== VMHT_BLOCKING) {
 		ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
-- 
2.5.5



* RE: [PATCH] Drivers: hv: vmbus: handle various crash scenarios
  2016-03-22 14:00       ` Vitaly Kuznetsov
@ 2016-03-22 14:18         ` KY Srinivasan
  0 siblings, 0 replies; 10+ messages in thread
From: KY Srinivasan @ 2016-03-22 14:18 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: devel, linux-kernel, Haiyang Zhang, Alex Ng (LIS),
	Radim Krcmar, Cathy Avery



> -----Original Message-----
> From: Vitaly Kuznetsov [mailto:vkuznets@redhat.com]
> Sent: Tuesday, March 22, 2016 7:01 AM
> To: KY Srinivasan <kys@microsoft.com>
> Cc: devel@linuxdriverproject.org; linux-kernel@vger.kernel.org; Haiyang
> Zhang <haiyangz@microsoft.com>; Alex Ng (LIS) <alexng@microsoft.com>;
> Radim Krcmar <rkrcmar@redhat.com>; Cathy Avery <cavery@redhat.com>
> Subject: Re: [PATCH] Drivers: hv: vmbus: handle various crash scenarios
> 
> KY Srinivasan <kys@microsoft.com> writes:
> 
> >> -----Original Message-----
> >> From: Vitaly Kuznetsov [mailto:vkuznets@redhat.com]
> >> Sent: Monday, March 21, 2016 12:52 AM
> >> To: KY Srinivasan <kys@microsoft.com>
> >> Cc: devel@linuxdriverproject.org; linux-kernel@vger.kernel.org; Haiyang
> >> Zhang <haiyangz@microsoft.com>; Alex Ng (LIS)
> <alexng@microsoft.com>;
> >> Radim Krcmar <rkrcmar@redhat.com>; Cathy Avery
> <cavery@redhat.com>
> >> Subject: Re: [PATCH] Drivers: hv: vmbus: handle various crash scenarios
> >>
> >> KY Srinivasan <kys@microsoft.com> writes:
> >>
> >> >> -----Original Message-----
> >> >> From: Vitaly Kuznetsov [mailto:vkuznets@redhat.com]
> >> >> Sent: Friday, March 18, 2016 5:33 AM
> >> >> To: devel@linuxdriverproject.org
> >> >> Cc: linux-kernel@vger.kernel.org; KY Srinivasan <kys@microsoft.com>;
> >> >> Haiyang Zhang <haiyangz@microsoft.com>; Alex Ng (LIS)
> >> >> <alexng@microsoft.com>; Radim Krcmar <rkrcmar@redhat.com>;
> Cathy
> >> >> Avery <cavery@redhat.com>
> >> >> Subject: [PATCH] Drivers: hv: vmbus: handle various crash scenarios
> >> >>
> >> >> Kdump keeps biting. Turns out CHANNELMSG_UNLOAD_RESPONSE is
> >> always
> >> >> delivered to CPU0 regardless of what CPU we're sending
> >> >> CHANNELMSG_UNLOAD
> >> >> from. vmbus_wait_for_unload() doesn't account for the fact that in
> case
> >> >> we're crashing on some other CPU and CPU0 is still alive and
> operational
> >> >> CHANNELMSG_UNLOAD_RESPONSE will be delivered there
> completing
> >> >> vmbus_connection.unload_event, our wait on the current CPU will
> never
> >> >> end.
> >> >
> >> > What was the host you were testing on?
> >> >
> >>
> >> I was testing on both 2012R2 and 2016TP4. The bug is easily reproducible
> >> by forcing crash on a secondary CPU, e.g.:
> >
> > Prior to 2012 R2, all messages would be delivered on CPU0, and this
> > includes CHANNELMSG_UNLOAD_RESPONSE. For this reason we don't support
> > kexec on pre-2012 R2 hosts. From 2012 R2 on, all vmbus messages
> > (responses) will be delivered on the CPU that we initially set up -
> > look at the code in vmbus_negotiate_version(). So on post-2012 R2
> > hosts, the response to CHANNELMSG_UNLOAD will be delivered on the CPU
> > where we initiate the contact with the host - the CPU that sends the
> > CHANNELMSG_INITIATE_CONTACT message.
> 
> Unfortunately there is a discrepancy between WS2012R2 and WS2016TP4. On
> WS2012R2 what you're saying is true and all messages, including
> CHANNELMSG_UNLOAD_RESPONSE, are delivered to the CPU we used for
> initial contact. On WS2016TP4, CHANNELMSG_UNLOAD_RESPONSE seems to be
> a special case and is always delivered to CPU0, no matter which CPU we
> used for initial contact. This may be a host bug. You can use the
> attached patch to see the issue.

This looks like a host bug and I will try to get it addressed before
WS2016 ships.
> 
> For now I can suggest we check message pages for all CPUs from
> vmbus_wait_for_unload(). We can race with other CPUs again but we don't
> care as we're checking for completion_done() in the loop as well. I'll
> try this approach.
Thank you.

K. Y

> 
> --
>   Vitaly


end of thread, other threads:[~2016-03-22 14:18 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-03-18 12:33 [PATCH] Drivers: hv: vmbus: handle various crash scenarios Vitaly Kuznetsov
2016-03-18 15:20 ` Radim Krcmar
2016-03-18 15:53   ` Vitaly Kuznetsov
2016-03-18 16:11     ` Radim Krcmar
2016-03-18 18:02 ` KY Srinivasan
2016-03-21  7:51   ` Vitaly Kuznetsov
2016-03-21 22:44     ` KY Srinivasan
2016-03-22  9:47       ` Vitaly Kuznetsov
2016-03-22 14:00       ` Vitaly Kuznetsov
2016-03-22 14:18         ` KY Srinivasan
