From: Vitaly Kuznetsov <vkuznets@redhat.com>
To: Michael Kelley <mikelley@microsoft.com>
Cc: mikelley@microsoft.com, kys@microsoft.com,
haiyangz@microsoft.com, sthemmin@microsoft.com,
wei.liu@kernel.org, linux-kernel@vger.kernel.org,
linux-hyperv@vger.kernel.org, decui@microsoft.com
Subject: Re: ** POTENTIAL FRAUD ALERT - RED HAT ** [PATCH v2 1/1] Drivers: hv: vmbus: Increase wait time for VMbus unload
Date: Tue, 20 Apr 2021 11:31:54 +0200 [thread overview]
Message-ID: <87tuo1i9o5.fsf@vitty.brq.redhat.com> (raw)
In-Reply-To: <1618894089-126662-1-git-send-email-mikelley@microsoft.com>
Michael Kelley <mikelley@microsoft.com> writes:
> When running in Azure, disks may be connected to a Linux VM with
> read/write caching enabled. If a VM panics and issues a VMbus
> UNLOAD request to Hyper-V, the response is delayed until all dirty
> data in the disk cache is flushed. In extreme cases, this flushing
> can take 10's of seconds, depending on the disk speed and the amount
> of dirty data. If kdump is configured for the VM, the current 10 second
> timeout in vmbus_wait_for_unload() may be exceeded, and the UNLOAD
> complete message may arrive well after the kdump kernel is already
> running, causing problems. Note that no problem occurs if kdump is
> not enabled because Hyper-V waits for the cache flush before doing
> a reboot through the BIOS/UEFI code.
>
> Fix this problem by increasing the timeout in vmbus_wait_for_unload()
> to 100 seconds. Also output periodic messages so that if anyone is
> watching the serial console, they won't think the VM is completely
> hung.
>
> Fixes: 911e1987efc8 ("Drivers: hv: vmbus: Add timeout to vmbus_wait_for_unload")
> Signed-off-by: Michael Kelley <mikelley@microsoft.com>
> ---
>
> Changed in v2: Fixed silly error in the argument to mdelay()
>
> ---
> drivers/hv/channel_mgmt.c | 30 +++++++++++++++++++++++++-----
> 1 file changed, 25 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
> index f3cf4af..ef4685c 100644
> --- a/drivers/hv/channel_mgmt.c
> +++ b/drivers/hv/channel_mgmt.c
> @@ -755,6 +755,12 @@ static void init_vp_index(struct vmbus_channel *channel)
> free_cpumask_var(available_mask);
> }
>
> +#define UNLOAD_DELAY_UNIT_MS 10 /* 10 milliseconds */
> +#define UNLOAD_WAIT_MS (100*1000) /* 100 seconds */
> +#define UNLOAD_WAIT_LOOPS (UNLOAD_WAIT_MS/UNLOAD_DELAY_UNIT_MS)
> +#define UNLOAD_MSG_MS (5*1000) /* Every 5 seconds */
> +#define UNLOAD_MSG_LOOPS (UNLOAD_MSG_MS/UNLOAD_DELAY_UNIT_MS)
> +
> static void vmbus_wait_for_unload(void)
> {
> int cpu;
> @@ -772,12 +778,17 @@ static void vmbus_wait_for_unload(void)
> * vmbus_connection.unload_event. If not, the last thing we can do is
> * read message pages for all CPUs directly.
> *
> - * Wait no more than 10 seconds so that the panic path can't get
> - * hung forever in case the response message isn't seen.
> + * Wait up to 100 seconds since an Azure host must writeback any dirty
> + * data in its disk cache before the VMbus UNLOAD request will
> + * complete. This flushing has been empirically observed to take up
> + * to 50 seconds in cases with a lot of dirty data, so allow additional
> + * leeway and for inaccuracies in mdelay(). But eventually time out so
> + * that the panic path can't get hung forever in case the response
> + * message isn't seen.
I vaguely remember debugging cases where CHANNELMSG_UNLOAD_RESPONSE never
arrived; it was kind of pointless to proceed to kexec as attempts to
reconnect VMbus devices were failing (no devices were offered after
CHANNELMSG_REQUESTOFFERS, AFAIR). Would it maybe make sense to just do
an emergency reboot instead of proceeding to kexec when this happens? Just
wondering.
> */
> - for (i = 0; i < 1000; i++) {
> + for (i = 1; i <= UNLOAD_WAIT_LOOPS; i++) {
> if (completion_done(&vmbus_connection.unload_event))
> - break;
> + goto completed;
>
> for_each_online_cpu(cpu) {
> struct hv_per_cpu_context *hv_cpu
> @@ -800,9 +811,18 @@ static void vmbus_wait_for_unload(void)
> vmbus_signal_eom(msg, message_type);
> }
>
> - mdelay(10);
> + /*
> + * Give a notice periodically so someone watching the
> + * serial output won't think it is completely hung.
> + */
> + if (!(i % UNLOAD_MSG_LOOPS))
> + pr_notice("Waiting for VMBus UNLOAD to complete\n");
> +
> + mdelay(UNLOAD_DELAY_UNIT_MS);
> }
> + pr_err("Continuing even though VMBus UNLOAD did not complete\n");
>
> +completed:
> /*
> * We're crashing and already got the UNLOAD_RESPONSE, cleanup all
> * maybe-pending messages on all CPUs to be able to receive new
This is definitely an improvement,
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
--
Vitaly
Thread overview: 5+ messages
2021-04-20 4:48 [PATCH v2 1/1] Drivers: hv: vmbus: Increase wait time for VMbus unload Michael Kelley
2021-04-20 9:31 ` Vitaly Kuznetsov [this message]
2021-04-20 19:46 ` ** POTENTIAL FRAUD ALERT - RED HAT ** " Wei Liu
2021-04-20 21:23 ` Michael Kelley
2021-04-21 8:24 ` Vitaly Kuznetsov