* [RFC RESEND PATCH v3 00/12] Enable PM hibernation on guest VMs
@ 2020-02-14 23:21 Anchal Agarwal
  2020-02-14 23:22 ` [RFC PATCH v3 01/12] xen/manage: keep track of the on-going suspend mode Anchal Agarwal
                   ` (11 more replies)
  0 siblings, 12 replies; 37+ messages in thread
From: Anchal Agarwal @ 2020-02-14 23:21 UTC (permalink / raw)
  To: tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, roger.pau, axboe,
	davem, rjw, len.brown, pavel, peterz, eduval, sblbir, anchalag,
	xen-devel, vkuznets, netdev, linux-kernel, dwmw, fllinden, benh

Resending this in a more threaded format.
  
Hello,
I am sending out v3 of the patch series that implements guest PM
hibernation. These guests run on the Xen hypervisor. The patches have
been tested against the mainline kernel. The EC2 instance hibernation
feature is provided to AWS EC2 customers. PM hibernation uses swap space
carved out within the guest (or a separate partition), where the
hibernation image is stored and restored from.

Guest hibernation does not involve any support from the hypervisor, so
the guest has complete control over its state. Infrastructure
restrictions on saving guest state can be overcome by guest-initiated
hibernation.

This series includes some improvements over the RFC series sent last year:
https://lists.xenproject.org/archives/html/xen-devel/2018-06/msg00823.html

Changelog v3:
1. Addressed feedback from v2
2. Introduced 2 new patches for the Xen sched clock offset fix
3. Fixed PIRQ shutdown/restore in the generic IRQ subsystem
4. Split the save/restore steal clock patch into 2 for better readability

Changelog v2:
1. Removed the timeout and the check for requests present on the ring in
xen-blkfront during blkfront freeze
2. Fixed restoring of PIRQs, which apparently worked on 4.9 kernels but not on
newer kernels. [Legacy IRQs were no longer restored after hibernation; introduced
with commit "020db9d3c1dc0"]
3. Merged a couple of related patches to make the code more coherent and readable
4. Code refactoring
5. Sched clock fix when the hibernating guest is under heavy CPU load
Note: Under very rare circumstances we see resume failures, only on Xen instances
with KASLR enabled. We are roughly seeing 3% failures [>1000 runs] when testing with
various instance sizes and some workload running on each instance. I am currently
investigating the issue to confirm whether it is a Xen issue or a kernel issue.
However, it should not hold back anyone from reviewing/accepting these patches.
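If it helps while reproducing or debugging those failures, KASLR can be disabled
for a test boot with the standard nokaslr kernel parameter, e.g. via grub:
GRUB_CMDLINE_LINUX_DEFAULT="... nokaslr"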

Testing done:
All testing was done over multiple hibernation cycles with a 5.4 kernel on EC2.

Testing How to:
---------------
Example:
Set up a file-backed swap space. The swap file size should be >= total memory on the system.
sudo dd if=/dev/zero of=/swap bs=$(( 1024 * 1024 )) count=4096 # 4096MiB
sudo chmod 600 /swap
sudo mkswap /swap
sudo swapon /swap
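Optionally verify that the swap area is active before hibernating:
swapon --show    # or: cat /proc/swaps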

Update the resume device/resume offset in grub if using a swap file:
resume=/dev/xvda1 resume_offset=200704
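For a swap file, resume_offset is the physical block offset of the file's first
block (200704 above is just an example). It can be computed with the small FIBMAP
script further below, or with filefrag; the following one-liner is a sketch and
assumes the usual filefrag -v output layout:
sudo filefrag -v /swap | awk '$1=="0:" {print substr($4, 1, length($4)-2)}'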

Execute:
--------
sudo pm-hibernate
OR
echo reboot > /sys/power/disk && echo disk > /sys/power/state
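After resume, one way to sanity-check that the kernel picked up the resume
parameters and restored from the image (a sketch; exact log messages vary by
kernel version):
cat /sys/power/resume /sys/power/resume_offset
dmesg | grep -i hibernat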

Compute resume offset code (run it as root against the swap file):
"
#!/usr/bin/env python3
import sys
import array
import fcntl

# Buffer holding logical block 0 of the swap file; the FIBMAP ioctl
# overwrites it with the corresponding physical block number, which is
# the value to pass as resume_offset=.
f = open(sys.argv[1], 'r')
buf = array.array('L', [0])

# FIBMAP (0x01): logical-to-physical block mapping
ret = fcntl.ioctl(f.fileno(), 0x01, buf)
print(buf[0])
"

Aleksei Besogonov (1):
  PM / hibernate: update the resume offset on SNAPSHOT_SET_SWAP_AREA

Anchal Agarwal (4):
  x86/xen: Introduce new function to map HYPERVISOR_shared_info on
    Resume
  genirq: Shutdown irq chips in suspend/resume during hibernation
  xen: Introduce wrapper for save/restore sched clock offset
  xen: Update sched clock offset to avoid system instability in
    hibernation

Munehisa Kamata (7):
  xen/manage: keep track of the on-going suspend mode
  xenbus: add freeze/thaw/restore callbacks support
  x86/xen: add system core suspend and resume callbacks
  xen-netfront: add callbacks for PM suspend and hibernation support
  xen-blkfront: add callbacks for PM suspend and hibernation
  xen/time: introduce xen_{save,restore}_steal_clock
  x86/xen: save and restore steal clock

 arch/x86/xen/enlighten_hvm.c      |   8 ++
 arch/x86/xen/suspend.c            |  72 ++++++++++++++++++
 arch/x86/xen/time.c               |  18 ++++-
 arch/x86/xen/xen-ops.h            |   3 +
 drivers/block/xen-blkfront.c      | 119 ++++++++++++++++++++++++++++--
 drivers/net/xen-netfront.c        |  98 +++++++++++++++++++++++-
 drivers/xen/events/events_base.c  |   1 +
 drivers/xen/manage.c              |  73 ++++++++++++++++++
 drivers/xen/time.c                |  29 +++++++-
 drivers/xen/xenbus/xenbus_probe.c |  99 ++++++++++++++++++++-----
 include/linux/irq.h               |   2 +
 include/xen/xen-ops.h             |   8 ++
 include/xen/xenbus.h              |   3 +
 kernel/irq/chip.c                 |   2 +-
 kernel/irq/internals.h            |   1 +
 kernel/irq/pm.c                   |  31 +++++---
 kernel/power/user.c               |   6 +-
 17 files changed, 533 insertions(+), 40 deletions(-)

-- 
2.24.1.AMZN


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [RFC PATCH v3 01/12] xen/manage: keep track of the on-going suspend mode
  2020-02-14 23:21 [RFC RESEND PATCH v3 00/12] Enable PM hibernation on guest VMs Anchal Agarwal
@ 2020-02-14 23:22 ` Anchal Agarwal
  2020-02-14 23:23 ` [RFC PATCH v3 02/12] xenbus: add freeze/thaw/restore callbacks support Anchal Agarwal
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 37+ messages in thread
From: Anchal Agarwal @ 2020-02-14 23:22 UTC (permalink / raw)
  To: tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, roger.pau, axboe,
	davem, rjw, len.brown, pavel, peterz, eduval, sblbir, anchalag,
	xen-devel, vkuznets, netdev, linux-kernel, dwmw, fllinden, benh

From: Munehisa Kamata <kamatam@amazon.com>

Guest hibernation is different from Xen suspend/resume/live migration.
Xen save/restore does not use pm_ops as is needed by guest hibernation.
Hibernation in the guest follows the ACPI path and is guest-initiated;
the hibernation image is saved within the guest, as opposed to the other
modes, which are Xen toolstack assisted and where image creation/storage
is under the control of the hypervisor/host machine.
To differentiate between Xen suspend and PM hibernation, keep track
of the on-going suspend mode, mainly by using a new PM notifier.
Introduce simple functions which help to know the on-going suspend mode
so that other Xen-related code can behave differently according to the
current suspend mode.
Since Xen suspend doesn't have a corresponding PM event, its main logic
is modified to acquire pm_mutex and set the current mode.

Though acquiring pm_mutex is still the right thing to do, we may
see a deadlock if PM hibernation is interrupted by Xen suspend.
PM hibernation depends on the xenwatch thread to process xenbus state
transactions, but the thread will sleep waiting for pm_mutex, which is
already held by the PM hibernation context in that scenario. Xen shutdown
code may need some changes to avoid the issue.

[Anchal Changelog: Merged patch "xen/manage: introduce helper function
to know the on-going suspend mode" into this one for better readability]
Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
---
 drivers/xen/manage.c  | 73 +++++++++++++++++++++++++++++++++++++++++++
 include/xen/xen-ops.h |  3 ++
 2 files changed, 76 insertions(+)

diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
index cd046684e0d1..0b30ab522b77 100644
--- a/drivers/xen/manage.c
+++ b/drivers/xen/manage.c
@@ -14,6 +14,7 @@
 #include <linux/freezer.h>
 #include <linux/syscore_ops.h>
 #include <linux/export.h>
+#include <linux/suspend.h>
 
 #include <xen/xen.h>
 #include <xen/xenbus.h>
@@ -40,6 +41,31 @@ enum shutdown_state {
 /* Ignore multiple shutdown requests. */
 static enum shutdown_state shutting_down = SHUTDOWN_INVALID;
 
+enum suspend_modes {
+	NO_SUSPEND = 0,
+	XEN_SUSPEND,
+	PM_SUSPEND,
+	PM_HIBERNATION,
+};
+
+/* Protected by pm_mutex */
+static enum suspend_modes suspend_mode = NO_SUSPEND;
+
+bool xen_suspend_mode_is_xen_suspend(void)
+{
+	return suspend_mode == XEN_SUSPEND;
+}
+
+bool xen_suspend_mode_is_pm_suspend(void)
+{
+	return suspend_mode == PM_SUSPEND;
+}
+
+bool xen_suspend_mode_is_pm_hibernation(void)
+{
+	return suspend_mode == PM_HIBERNATION;
+}
+
 struct suspend_info {
 	int cancelled;
 };
@@ -99,6 +125,10 @@ static void do_suspend(void)
 	int err;
 	struct suspend_info si;
 
+	lock_system_sleep();
+
+	suspend_mode = XEN_SUSPEND;
+
 	shutting_down = SHUTDOWN_SUSPEND;
 
 	err = freeze_processes();
@@ -162,6 +192,10 @@ static void do_suspend(void)
 	thaw_processes();
 out:
 	shutting_down = SHUTDOWN_INVALID;
+
+	suspend_mode = NO_SUSPEND;
+
+	unlock_system_sleep();
 }
 #endif	/* CONFIG_HIBERNATE_CALLBACKS */
 
@@ -387,3 +421,42 @@ int xen_setup_shutdown_event(void)
 EXPORT_SYMBOL_GPL(xen_setup_shutdown_event);
 
 subsys_initcall(xen_setup_shutdown_event);
+
+static int xen_pm_notifier(struct notifier_block *notifier,
+			   unsigned long pm_event, void *unused)
+{
+	switch (pm_event) {
+	case PM_SUSPEND_PREPARE:
+		suspend_mode = PM_SUSPEND;
+		break;
+	case PM_HIBERNATION_PREPARE:
+	case PM_RESTORE_PREPARE:
+		suspend_mode = PM_HIBERNATION;
+		break;
+	case PM_POST_SUSPEND:
+	case PM_POST_RESTORE:
+	case PM_POST_HIBERNATION:
+		/* Set back to the default */
+		suspend_mode = NO_SUSPEND;
+		break;
+	default:
+		pr_warn("Receive unknown PM event 0x%lx\n", pm_event);
+		return -EINVAL;
+	}
+
+	return 0;
+};
+
+static struct notifier_block xen_pm_notifier_block = {
+	.notifier_call = xen_pm_notifier
+};
+
+static int xen_setup_pm_notifier(void)
+{
+	if (!xen_hvm_domain())
+		return -ENODEV;
+
+	return register_pm_notifier(&xen_pm_notifier_block);
+}
+
+subsys_initcall(xen_setup_pm_notifier);
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index d89969aa9942..6c36e161dfd1 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -40,6 +40,9 @@ u64 xen_steal_clock(int cpu);
 
 int xen_setup_shutdown_event(void);
 
+bool xen_suspend_mode_is_xen_suspend(void);
+bool xen_suspend_mode_is_pm_suspend(void);
+bool xen_suspend_mode_is_pm_hibernation(void);
 extern unsigned long *xen_contiguous_bitmap;
 
 #if defined(CONFIG_XEN_PV) || defined(CONFIG_ARM) || defined(CONFIG_ARM64)
-- 
2.24.1.AMZN


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [RFC PATCH v3 02/12] xenbus: add freeze/thaw/restore callbacks support
  2020-02-14 23:21 [RFC RESEND PATCH v3 00/12] Enable PM hibernation on guest VMs Anchal Agarwal
  2020-02-14 23:22 ` [RFC PATCH v3 01/12] xen/manage: keep track of the on-going suspend mode Anchal Agarwal
@ 2020-02-14 23:23 ` Anchal Agarwal
  2020-02-14 23:23 ` [RFC PATCH v3 03/12] x86/xen: Introduce new function to map HYPERVISOR_shared_info on Resume Anchal Agarwal
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 37+ messages in thread
From: Anchal Agarwal @ 2020-02-14 23:23 UTC (permalink / raw)
  To: tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, roger.pau, axboe,
	davem, rjw, len.brown, pavel, peterz, eduval, sblbir, anchalag,
	xen-devel, vkuznets, netdev, linux-kernel, dwmw, fllinden, benh

From: Munehisa Kamata <kamatam@amazon.com>

Since commit b3e96c0c7562 ("xen: use freeze/restore/thaw PM events for
suspend/resume/chkpt"), xenbus uses PMSG_FREEZE, PMSG_THAW and
PMSG_RESTORE events for Xen suspend. However, they're actually assigned
to xenbus_dev_suspend(), xenbus_dev_cancel() and xenbus_dev_resume()
respectively, and only suspend and resume callbacks are supported at
driver level. To support PM suspend and PM hibernation, modify the bus
level PM callbacks to invoke not only device driver's suspend/resume but
also freeze/thaw/restore.

Note that we'll use the freeze/restore callbacks even for PM suspend, whereas
suspend/resume callbacks are normally used in that case, because the
existing xenbus device drivers already have suspend/resume callbacks
specifically designed for Xen suspend. So we can allow the device
drivers to keep the existing callbacks without modification.

[Anchal Changelog: Refactored the callbacks code]
Signed-off-by: Agarwal Anchal <anchalag@amazon.com>
Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
---
 drivers/xen/xenbus/xenbus_probe.c | 99 +++++++++++++++++++++++++------
 include/xen/xenbus.h              |  3 +
 2 files changed, 84 insertions(+), 18 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
index 5b471889d723..0fa8eeee68c2 100644
--- a/drivers/xen/xenbus/xenbus_probe.c
+++ b/drivers/xen/xenbus/xenbus_probe.c
@@ -49,6 +49,7 @@
 #include <linux/io.h>
 #include <linux/slab.h>
 #include <linux/module.h>
+#include <linux/suspend.h>
 
 #include <asm/page.h>
 #include <asm/pgtable.h>
@@ -597,27 +598,44 @@ int xenbus_dev_suspend(struct device *dev)
 	struct xenbus_driver *drv;
 	struct xenbus_device *xdev
 		= container_of(dev, struct xenbus_device, dev);
-
+	bool xen_suspend = xen_suspend_mode_is_xen_suspend();
 	DPRINTK("%s", xdev->nodename);
 
 	if (dev->driver == NULL)
 		return 0;
 	drv = to_xenbus_driver(dev->driver);
-	if (drv->suspend)
-		err = drv->suspend(xdev);
-	if (err)
-		pr_warn("suspend %s failed: %i\n", dev_name(dev), err);
+
+	if (xen_suspend) {
+		if (drv->suspend)
+			err = drv->suspend(xdev);
+	} else {
+		if (drv->freeze) {
+			err = drv->freeze(xdev);
+			if (!err) {
+				free_otherend_watch(xdev);
+				free_otherend_details(xdev);
+				return 0;
+			}
+		}
+	}
+
+	if (err) {
+		pr_warn("%s %s failed: %i\n", xen_suspend ?
+			"suspend" : "freeze", dev_name(dev), err);
+		return err;
+	}
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(xenbus_dev_suspend);
 
 int xenbus_dev_resume(struct device *dev)
 {
-	int err;
+	int err = 0;
 	struct xenbus_driver *drv;
 	struct xenbus_device *xdev
 		= container_of(dev, struct xenbus_device, dev);
-
+	bool xen_suspend = xen_suspend_mode_is_xen_suspend();
 	DPRINTK("%s", xdev->nodename);
 
 	if (dev->driver == NULL)
@@ -625,24 +643,32 @@ int xenbus_dev_resume(struct device *dev)
 	drv = to_xenbus_driver(dev->driver);
 	err = talk_to_otherend(xdev);
 	if (err) {
-		pr_warn("resume (talk_to_otherend) %s failed: %i\n",
+		pr_warn("%s (talk_to_otherend) %s failed: %i\n",
+			xen_suspend ? "resume" : "restore",
 			dev_name(dev), err);
 		return err;
 	}
 
-	xdev->state = XenbusStateInitialising;
+	if (xen_suspend) {
+		xdev->state = XenbusStateInitialising;
+		if (drv->resume)
+			err = drv->resume(xdev);
+	} else {
+		if (drv->restore)
+			err = drv->restore(xdev);
+	}
 
-	if (drv->resume) {
-		err = drv->resume(xdev);
-		if (err) {
-			pr_warn("resume %s failed: %i\n", dev_name(dev), err);
-			return err;
-		}
+	if (err) {
+		pr_warn("%s %s failed: %i\n",
+			xen_suspend ? "resume" : "restore",
+			dev_name(dev), err);
+		return err;
 	}
 
 	err = watch_otherend(xdev);
 	if (err) {
-		pr_warn("resume (watch_otherend) %s failed: %d.\n",
+		pr_warn("%s (watch_otherend) %s failed: %d.\n",
+			xen_suspend ? "resume" : "restore",
 			dev_name(dev), err);
 		return err;
 	}
@@ -653,8 +679,45 @@ EXPORT_SYMBOL_GPL(xenbus_dev_resume);
 
 int xenbus_dev_cancel(struct device *dev)
 {
-	/* Do nothing */
-	DPRINTK("cancel");
+	int err = 0;
+	struct xenbus_driver *drv;
+	struct xenbus_device *xdev
+		= container_of(dev, struct xenbus_device, dev);
+	bool xen_suspend = xen_suspend_mode_is_xen_suspend();
+
+	if (xen_suspend) {
+		/* Do nothing */
+		DPRINTK("cancel");
+		return 0;
+	}
+
+	DPRINTK("%s", xdev->nodename);
+
+	if (dev->driver == NULL)
+		return 0;
+	drv = to_xenbus_driver(dev->driver);
+	err = talk_to_otherend(xdev);
+	if (err) {
+		pr_warn("thaw (talk_to_otherend) %s failed: %d.\n",
+			dev_name(dev), err);
+		return err;
+	}
+
+	if (drv->thaw) {
+		err = drv->thaw(xdev);
+		if (err) {
+			pr_warn("thaw %s failed: %i\n", dev_name(dev), err);
+			return err;
+		}
+	}
+
+	err = watch_otherend(xdev);
+	if (err) {
+		pr_warn("thaw (watch_otherend) %s failed: %d.\n",
+			dev_name(dev), err);
+		return err;
+	}
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(xenbus_dev_cancel);
diff --git a/include/xen/xenbus.h b/include/xen/xenbus.h
index 869c816d5f8c..20261d5f4e78 100644
--- a/include/xen/xenbus.h
+++ b/include/xen/xenbus.h
@@ -100,6 +100,9 @@ struct xenbus_driver {
 	int (*remove)(struct xenbus_device *dev);
 	int (*suspend)(struct xenbus_device *dev);
 	int (*resume)(struct xenbus_device *dev);
+	int (*freeze)(struct xenbus_device *dev);
+	int (*thaw)(struct xenbus_device *dev);
+	int (*restore)(struct xenbus_device *dev);
 	int (*uevent)(struct xenbus_device *, struct kobj_uevent_env *);
 	struct device_driver driver;
 	int (*read_otherend_details)(struct xenbus_device *dev);
-- 
2.24.1.AMZN


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [RFC PATCH v3 03/12] x86/xen: Introduce new function to map HYPERVISOR_shared_info on Resume
  2020-02-14 23:21 [RFC RESEND PATCH v3 00/12] Enable PM hibernation on guest VMs Anchal Agarwal
  2020-02-14 23:22 ` [RFC PATCH v3 01/12] xen/manage: keep track of the on-going suspend mode Anchal Agarwal
  2020-02-14 23:23 ` [RFC PATCH v3 02/12] xenbus: add freeze/thaw/restore callbacks support Anchal Agarwal
@ 2020-02-14 23:23 ` Anchal Agarwal
  2020-02-14 23:24 ` [RFC PATCH v3 04/12] x86/xen: add system core suspend and resume callbacks Anchal Agarwal
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 37+ messages in thread
From: Anchal Agarwal @ 2020-02-14 23:23 UTC (permalink / raw)
  To: tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, roger.pau, axboe,
	davem, rjw, len.brown, pavel, peterz, eduval, sblbir, anchalag,
	xen-devel, vkuznets, netdev, linux-kernel, dwmw, fllinden, benh

Introduce a small function which re-uses the shared page's PA allocated
during guest initialization time in reserve_shared_info(), instead of
allocating a new page during the resume flow.
It also maps shared_info_page by calling xen_hvm_init_shared_info().

Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
---
 arch/x86/xen/enlighten_hvm.c | 7 +++++++
 arch/x86/xen/xen-ops.h       | 1 +
 2 files changed, 8 insertions(+)

diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
index e138f7de52d2..75b1ec7a0fcd 100644
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -27,6 +27,13 @@
 
 static unsigned long shared_info_pfn;
 
+void xen_hvm_map_shared_info(void)
+{
+	xen_hvm_init_shared_info();
+	if (shared_info_pfn)
+		HYPERVISOR_shared_info = __va(PFN_PHYS(shared_info_pfn));
+}
+
 void xen_hvm_init_shared_info(void)
 {
 	struct xen_add_to_physmap xatp;
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 45a441c33d6d..d84c357994bd 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -56,6 +56,7 @@ void xen_enable_syscall(void);
 void xen_vcpu_restore(void);
 
 void xen_callback_vector(void);
+void xen_hvm_map_shared_info(void);
 void xen_hvm_init_shared_info(void);
 void xen_unplug_emulated_devices(void);
 
-- 
2.24.1.AMZN


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [RFC PATCH v3 04/12] x86/xen: add system core suspend and resume callbacks
  2020-02-14 23:21 [RFC RESEND PATCH v3 00/12] Enable PM hibernation on guest VMs Anchal Agarwal
                   ` (2 preceding siblings ...)
  2020-02-14 23:23 ` [RFC PATCH v3 03/12] x86/xen: Introduce new function to map HYPERVISOR_shared_info on Resume Anchal Agarwal
@ 2020-02-14 23:24 ` Anchal Agarwal
  2020-02-14 23:24 ` [RFC PATCH v3 05/12] xen-netfront: add callbacks for PM suspend and hibernation support Anchal Agarwal
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 37+ messages in thread
From: Anchal Agarwal @ 2020-02-14 23:24 UTC (permalink / raw)
  To: tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, roger.pau, axboe,
	davem, rjw, len.brown, pavel, peterz, eduval, sblbir, anchalag,
	xen-devel, vkuznets, netdev, linux-kernel, dwmw, fllinden, benh

From: Munehisa Kamata <kamatam@amazon.com>

Add Xen PVHVM specific system core callbacks for PM suspend and
hibernation support. The callbacks suspend and resume Xen
primitives, like shared_info, pvclock and the grant table. Note that
Xen suspend handles them in a different manner, but the system
core callbacks are called from that context as well. So if the callbacks
are called from the Xen suspend context, return immediately.

Signed-off-by: Agarwal Anchal <anchalag@amazon.com>
Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
---
 arch/x86/xen/enlighten_hvm.c |  1 +
 arch/x86/xen/suspend.c       | 53 ++++++++++++++++++++++++++++++++++++
 include/xen/xen-ops.h        |  3 ++
 3 files changed, 57 insertions(+)

diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
index 75b1ec7a0fcd..138e71786e03 100644
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -204,6 +204,7 @@ static void __init xen_hvm_guest_init(void)
 	if (xen_feature(XENFEAT_hvm_callback_vector))
 		xen_have_vector_callback = 1;
 
+	xen_setup_syscore_ops();
 	xen_hvm_smp_init();
 	WARN_ON(xen_cpuhp_setup(xen_cpu_up_prepare_hvm, xen_cpu_dead_hvm));
 	xen_unplug_emulated_devices();
diff --git a/arch/x86/xen/suspend.c b/arch/x86/xen/suspend.c
index 1d83152c761b..784c4484100b 100644
--- a/arch/x86/xen/suspend.c
+++ b/arch/x86/xen/suspend.c
@@ -2,17 +2,22 @@
 #include <linux/types.h>
 #include <linux/tick.h>
 #include <linux/percpu-defs.h>
+#include <linux/syscore_ops.h>
+#include <linux/kernel_stat.h>
 
 #include <xen/xen.h>
 #include <xen/interface/xen.h>
+#include <xen/interface/memory.h>
 #include <xen/grant_table.h>
 #include <xen/events.h>
+#include <xen/xen-ops.h>
 
 #include <asm/cpufeatures.h>
 #include <asm/msr-index.h>
 #include <asm/xen/hypercall.h>
 #include <asm/xen/page.h>
 #include <asm/fixmap.h>
+#include <asm/pvclock.h>
 
 #include "xen-ops.h"
 #include "mmu.h"
@@ -82,3 +87,51 @@ void xen_arch_suspend(void)
 
 	on_each_cpu(xen_vcpu_notify_suspend, NULL, 1);
 }
+
+static int xen_syscore_suspend(void)
+{
+	struct xen_remove_from_physmap xrfp;
+	int ret;
+
+	/* Xen suspend does similar stuffs in its own logic */
+	if (xen_suspend_mode_is_xen_suspend())
+		return 0;
+
+	xrfp.domid = DOMID_SELF;
+	xrfp.gpfn = __pa(HYPERVISOR_shared_info) >> PAGE_SHIFT;
+
+	ret = HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &xrfp);
+	if (!ret)
+		HYPERVISOR_shared_info = &xen_dummy_shared_info;
+
+	return ret;
+}
+
+static void xen_syscore_resume(void)
+{
+	/* Xen suspend does similar stuffs in its own logic */
+	if (xen_suspend_mode_is_xen_suspend())
+		return;
+
+	/* No need to setup vcpu_info as it's already moved off */
+	xen_hvm_map_shared_info();
+
+	pvclock_resume();
+
+	gnttab_resume();
+}
+
+/*
+ * These callbacks will be called with interrupts disabled and when having only
+ * one CPU online.
+ */
+static struct syscore_ops xen_hvm_syscore_ops = {
+	.suspend = xen_syscore_suspend,
+	.resume = xen_syscore_resume
+};
+
+void __init xen_setup_syscore_ops(void)
+{
+	if (xen_hvm_domain())
+		register_syscore_ops(&xen_hvm_syscore_ops);
+}
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 6c36e161dfd1..3b3992b5b0c2 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -43,6 +43,9 @@ int xen_setup_shutdown_event(void);
 bool xen_suspend_mode_is_xen_suspend(void);
 bool xen_suspend_mode_is_pm_suspend(void);
 bool xen_suspend_mode_is_pm_hibernation(void);
+
+void xen_setup_syscore_ops(void);
+
 extern unsigned long *xen_contiguous_bitmap;
 
 #if defined(CONFIG_XEN_PV) || defined(CONFIG_ARM) || defined(CONFIG_ARM64)
-- 
2.24.1.AMZN


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [RFC PATCH v3 05/12] xen-netfront: add callbacks for PM suspend and hibernation support
  2020-02-14 23:21 [RFC RESEND PATCH v3 00/12] Enable PM hibernation on guest VMs Anchal Agarwal
                   ` (3 preceding siblings ...)
  2020-02-14 23:24 ` [RFC PATCH v3 04/12] x86/xen: add system core suspend and resume callbacks Anchal Agarwal
@ 2020-02-14 23:24 ` Anchal Agarwal
  2020-02-14 23:25 ` [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation Anchal Agarwal
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 37+ messages in thread
From: Anchal Agarwal @ 2020-02-14 23:24 UTC (permalink / raw)
  To: tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, roger.pau, axboe,
	davem, rjw, len.brown, pavel, peterz, eduval, sblbir, anchalag,
	xen-devel, vkuznets, netdev, linux-kernel, dwmw, fllinden, benh

From: Munehisa Kamata <kamatam@amazon.com>

Add freeze, thaw and restore callbacks for PM suspend and hibernation
support. The freeze handler simply disconnects the frontend from the
backend and frees resources associated with the queues after disabling
the net_device from the system. The restore handler just changes the
frontend state and lets the xenbus handler re-allocate the resources
and re-connect to the backend. This can be performed transparently to
the rest of the system. The handlers are used for both PM suspend and
hibernation so that we can keep the existing suspend/resume callbacks
for Xen suspend without modification. Freezing netfront devices is
normally expected to finish within a few hundred milliseconds, but it
can rarely take more than 5 seconds and hit the hard-coded timeout;
this depends on the backend state, which may be congested and/or have a
complex configuration. While it's a rare case, a longer default timeout
seems a bit more reasonable here to avoid hitting the timeout.
Also, make it configurable via a module parameter so that we can cover
broader setups than what we know currently.
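For illustration, with the module parameter added below, the timeout could then
be tuned without rebuilding the kernel; the sysfs path assumes the parameter ends
up under /sys/module/xen_netfront:
echo 30 | sudo tee /sys/module/xen_netfront/parameters/freeze_timeout_secs
(or xen_netfront.freeze_timeout_secs=30 on the kernel command line at boot)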

[Anchal changelog: Variable name fix and checkpatch.pl fixes]
Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
---
 drivers/net/xen-netfront.c | 98 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 97 insertions(+), 1 deletion(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 482c6c8b0fb7..65edcdd6e05f 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -43,6 +43,7 @@
 #include <linux/moduleparam.h>
 #include <linux/mm.h>
 #include <linux/slab.h>
+#include <linux/completion.h>
 #include <net/ip.h>
 
 #include <xen/xen.h>
@@ -56,6 +57,12 @@
 #include <xen/interface/memory.h>
 #include <xen/interface/grant_table.h>
 
+enum netif_freeze_state {
+	NETIF_FREEZE_STATE_UNFROZEN,
+	NETIF_FREEZE_STATE_FREEZING,
+	NETIF_FREEZE_STATE_FROZEN,
+};
+
 /* Module parameters */
 #define MAX_QUEUES_DEFAULT 8
 static unsigned int xennet_max_queues;
@@ -63,6 +70,12 @@ module_param_named(max_queues, xennet_max_queues, uint, 0644);
 MODULE_PARM_DESC(max_queues,
 		 "Maximum number of queues per virtual interface");
 
+static unsigned int netfront_freeze_timeout_secs = 10;
+module_param_named(freeze_timeout_secs,
+		   netfront_freeze_timeout_secs, uint, 0644);
+MODULE_PARM_DESC(freeze_timeout_secs,
+		 "timeout when freezing netfront device in seconds");
+
 static const struct ethtool_ops xennet_ethtool_ops;
 
 struct netfront_cb {
@@ -160,6 +173,10 @@ struct netfront_info {
 	struct netfront_stats __percpu *tx_stats;
 
 	atomic_t rx_gso_checksum_fixup;
+
+	int freeze_state;
+
+	struct completion wait_backend_disconnected;
 };
 
 struct netfront_rx_info {
@@ -721,6 +738,21 @@ static int xennet_close(struct net_device *dev)
 	return 0;
 }
 
+static int xennet_disable_interrupts(struct net_device *dev)
+{
+	struct netfront_info *np = netdev_priv(dev);
+	unsigned int num_queues = dev->real_num_tx_queues;
+	unsigned int queue_index;
+	struct netfront_queue *queue;
+
+	for (queue_index = 0; queue_index < num_queues; ++queue_index) {
+		queue = &np->queues[queue_index];
+		disable_irq(queue->tx_irq);
+		disable_irq(queue->rx_irq);
+	}
+	return 0;
+}
+
 static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb,
 				grant_ref_t ref)
 {
@@ -1301,6 +1333,8 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 
 	np->queues = NULL;
 
+	init_completion(&np->wait_backend_disconnected);
+
 	err = -ENOMEM;
 	np->rx_stats = netdev_alloc_pcpu_stats(struct netfront_stats);
 	if (np->rx_stats == NULL)
@@ -1794,6 +1828,50 @@ static int xennet_create_queues(struct netfront_info *info,
 	return 0;
 }
 
+static int netfront_freeze(struct xenbus_device *dev)
+{
+	struct netfront_info *info = dev_get_drvdata(&dev->dev);
+	unsigned long timeout = netfront_freeze_timeout_secs * HZ;
+	int err = 0;
+
+	xennet_disable_interrupts(info->netdev);
+
+	netif_device_detach(info->netdev);
+
+	info->freeze_state = NETIF_FREEZE_STATE_FREEZING;
+
+	/* Kick the backend to disconnect */
+	xenbus_switch_state(dev, XenbusStateClosing);
+
+	/* We don't want to move forward before the frontend is diconnected
+	 * from the backend cleanly.
+	 */
+	timeout = wait_for_completion_timeout(&info->wait_backend_disconnected,
+					      timeout);
+	if (!timeout) {
+		err = -EBUSY;
+		xenbus_dev_error(dev, err, "Freezing timed out;"
+				 "the device may become inconsistent state");
+		return err;
+	}
+
+	/* Tear down queues */
+	xennet_disconnect_backend(info);
+	xennet_destroy_queues(info);
+
+	info->freeze_state = NETIF_FREEZE_STATE_FROZEN;
+
+	return err;
+}
+
+static int netfront_restore(struct xenbus_device *dev)
+{
+	/* Kick the backend to re-connect */
+	xenbus_switch_state(dev, XenbusStateInitialising);
+
+	return 0;
+}
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_netback(struct xenbus_device *dev,
 			   struct netfront_info *info)
@@ -1999,6 +2077,8 @@ static int xennet_connect(struct net_device *dev)
 		spin_unlock_bh(&queue->rx_lock);
 	}
 
+	np->freeze_state = NETIF_FREEZE_STATE_UNFROZEN;
+
 	return 0;
 }
 
@@ -2036,10 +2116,23 @@ static void netback_changed(struct xenbus_device *dev,
 		break;
 
 	case XenbusStateClosed:
-		if (dev->state == XenbusStateClosed)
+		if (dev->state == XenbusStateClosed) {
+		     /* dpm context is waiting for the backend */
+			if (np->freeze_state == NETIF_FREEZE_STATE_FREEZING)
+				complete(&np->wait_backend_disconnected);
 			break;
+		}
+
 		/* Fall through - Missed the backend's CLOSING state. */
 	case XenbusStateClosing:
+	       /* We may see unexpected Closed or Closing from the backend.
+		* Just ignore it not to prevent the frontend from being
+		* re-connected in the case of PM suspend or hibernation.
+		*/
+		if (np->freeze_state == NETIF_FREEZE_STATE_FROZEN &&
+		    dev->state == XenbusStateInitialising) {
+			break;
+		}
 		xenbus_frontend_closed(dev);
 		break;
 	}
@@ -2186,6 +2279,9 @@ static struct xenbus_driver netfront_driver = {
 	.probe = netfront_probe,
 	.remove = xennet_remove,
 	.resume = netfront_resume,
+	.freeze = netfront_freeze,
+	.thaw	= netfront_restore,
+	.restore = netfront_restore,
 	.otherend_changed = netback_changed,
 };
 
-- 
2.24.1.AMZN


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
  2020-02-14 23:21 [RFC RESEND PATCH v3 00/12] Enable PM hibernation on guest VMs Anchal Agarwal
                   ` (4 preceding siblings ...)
  2020-02-14 23:24 ` [RFC PATCH v3 05/12] xen-netfront: add callbacks for PM suspend and hibernation support Anchal Agarwal
@ 2020-02-14 23:25 ` Anchal Agarwal
  2020-02-17 10:05   ` Roger Pau Monné
  2020-02-21 14:24   ` Roger Pau Monné
  2020-02-14 23:25 ` [RFC PATCH v3 07/12] genirq: Shutdown irq chips in suspend/resume during hibernation Anchal Agarwal
                   ` (5 subsequent siblings)
  11 siblings, 2 replies; 37+ messages in thread
From: Anchal Agarwal @ 2020-02-14 23:25 UTC (permalink / raw)
  To: tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, roger.pau, axboe,
	davem, rjw, len.brown, pavel, peterz, eduval, sblbir, anchalag,
	xen-devel, vkuznets, netdev, linux-kernel, dwmw, fllinden, benh

From: Munehisa Kamata <kamatam@amazon.com>

Add freeze, thaw and restore callbacks for PM suspend and hibernation
support. All frontend drivers that need to use PM_HIBERNATION/PM_SUSPEND
events need to implement these xenbus_driver callbacks.
The freeze handler stops the block-layer queue and disconnects the
frontend from the backend while freeing ring_info and associated resources.
The restore handler re-allocates ring_info and re-connects to the
backend, so the rest of the kernel can continue to use the block device
transparently. Also, the handlers are used for both PM suspend and
hibernation so that we can keep the existing suspend/resume callbacks for
Xen suspend without modification. Before disconnecting from the backend,
we need to prevent any new IO from being queued and wait for existing
IO to complete. Freeze/unfreeze of the queues will guarantee that there
are no requests in use on the shared ring.

Note: For older backends, if a backend doesn't have commit 12ea729645ace
("xen/blkback: unmap all persistent grants when frontend gets
disconnected"), the frontend may see a massive amount of grant table
warnings when freeing resources:
[   36.852659] deferring g.e. 0xf9 (pfn 0xffffffffffffffff)
[   36.855089] xen:grant_table: WARNING:e.g. 0x112 still in use!

In this case, persistent grants would need to be disabled.

[Anchal Changelog: Removed timeout/request during blkfront freeze.
Fixed major part of the code to work with blk-mq]
Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
---
 drivers/block/xen-blkfront.c | 119 ++++++++++++++++++++++++++++++++---
 1 file changed, 112 insertions(+), 7 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 478120233750..d715ed3cb69a 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -47,6 +47,8 @@
 #include <linux/bitmap.h>
 #include <linux/list.h>
 #include <linux/workqueue.h>
+#include <linux/completion.h>
+#include <linux/delay.h>
 
 #include <xen/xen.h>
 #include <xen/xenbus.h>
@@ -79,6 +81,8 @@ enum blkif_state {
 	BLKIF_STATE_DISCONNECTED,
 	BLKIF_STATE_CONNECTED,
 	BLKIF_STATE_SUSPENDED,
+	BLKIF_STATE_FREEZING,
+	BLKIF_STATE_FROZEN
 };
 
 struct grant {
@@ -220,6 +224,7 @@ struct blkfront_info
 	struct list_head requests;
 	struct bio_list bio_list;
 	struct list_head info_list;
+	struct completion wait_backend_disconnected;
 };
 
 static unsigned int nr_minors;
@@ -261,6 +266,7 @@ static DEFINE_SPINLOCK(minor_lock);
 static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo);
 static void blkfront_gather_backend_features(struct blkfront_info *info);
 static int negotiate_mq(struct blkfront_info *info);
+static void __blkif_free(struct blkfront_info *info);
 
 static int get_id_from_freelist(struct blkfront_ring_info *rinfo)
 {
@@ -995,6 +1001,7 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size,
 	info->sector_size = sector_size;
 	info->physical_sector_size = physical_sector_size;
 	blkif_set_queue_limits(info);
+	init_completion(&info->wait_backend_disconnected);
 
 	return 0;
 }
@@ -1218,6 +1225,8 @@ static void xlvbd_release_gendisk(struct blkfront_info *info)
 /* Already hold rinfo->ring_lock. */
 static inline void kick_pending_request_queues_locked(struct blkfront_ring_info *rinfo)
 {
+	if (unlikely(rinfo->dev_info->connected == BLKIF_STATE_FREEZING))
+		return;
 	if (!RING_FULL(&rinfo->ring))
 		blk_mq_start_stopped_hw_queues(rinfo->dev_info->rq, true);
 }
@@ -1341,8 +1350,6 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo)
 
 static void blkif_free(struct blkfront_info *info, int suspend)
 {
-	unsigned int i;
-
 	/* Prevent new requests being issued until we fix things up. */
 	info->connected = suspend ?
 		BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
@@ -1350,6 +1357,13 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 	if (info->rq)
 		blk_mq_stop_hw_queues(info->rq);
 
+	__blkif_free(info);
+}
+
+static void __blkif_free(struct blkfront_info *info)
+{
+	unsigned int i;
+
 	for (i = 0; i < info->nr_rings; i++)
 		blkif_free_ring(&info->rinfo[i]);
 
@@ -1553,8 +1567,10 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
 	struct blkfront_info *info = rinfo->dev_info;
 
-	if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
-		return IRQ_HANDLED;
+	if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) {
+		if (info->connected != BLKIF_STATE_FREEZING)
+			return IRQ_HANDLED;
+	}
 
 	spin_lock_irqsave(&rinfo->ring_lock, flags);
  again:
@@ -2020,6 +2036,7 @@ static int blkif_recover(struct blkfront_info *info)
 	struct bio *bio;
 	unsigned int segs;
 
+	bool frozen = info->connected == BLKIF_STATE_FROZEN;
 	blkfront_gather_backend_features(info);
 	/* Reset limits changed by blk_mq_update_nr_hw_queues(). */
 	blkif_set_queue_limits(info);
@@ -2046,6 +2063,9 @@ static int blkif_recover(struct blkfront_info *info)
 		kick_pending_request_queues(rinfo);
 	}
 
+	if (frozen)
+		return 0;
+
 	list_for_each_entry_safe(req, n, &info->requests, queuelist) {
 		/* Requeue pending requests (flush or discard) */
 		list_del_init(&req->queuelist);
@@ -2359,6 +2379,7 @@ static void blkfront_connect(struct blkfront_info *info)
 
 		return;
 	case BLKIF_STATE_SUSPENDED:
+	case BLKIF_STATE_FROZEN:
 		/*
 		 * If we are recovering from suspension, we need to wait
 		 * for the backend to announce it's features before
@@ -2476,12 +2497,37 @@ static void blkback_changed(struct xenbus_device *dev,
 		break;
 
 	case XenbusStateClosed:
-		if (dev->state == XenbusStateClosed)
+		if (dev->state == XenbusStateClosed) {
+			if (info->connected == BLKIF_STATE_FREEZING) {
+				__blkif_free(info);
+				info->connected = BLKIF_STATE_FROZEN;
+				complete(&info->wait_backend_disconnected);
+				break;
+			}
+
 			break;
+		}
+
+		/*
+		 * We may somehow receive backend's Closed again while thawing
+		 * or restoring and it causes thawing or restoring to fail.
+		 * Ignore such unexpected state anyway.
+		 */
+		if (info->connected == BLKIF_STATE_FROZEN &&
+				dev->state == XenbusStateInitialised) {
+			dev_dbg(&dev->dev,
+					"ignore the backend's Closed state: %s",
+					dev->nodename);
+			break;
+		}
 		/* fall through */
 	case XenbusStateClosing:
-		if (info)
-			blkfront_closing(info);
+		if (info) {
+			if (info->connected == BLKIF_STATE_FREEZING)
+				xenbus_frontend_closed(dev);
+			else
+				blkfront_closing(info);
+		}
 		break;
 	}
 }
@@ -2625,6 +2671,62 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
 	mutex_unlock(&blkfront_mutex);
 }
 
+static int blkfront_freeze(struct xenbus_device *dev)
+{
+	unsigned int i;
+	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
+	struct blkfront_ring_info *rinfo;
+	/* This would be reasonable timeout as used in xenbus_dev_shutdown() */
+	unsigned int timeout = 5 * HZ;
+	int err = 0;
+
+	info->connected = BLKIF_STATE_FREEZING;
+
+	blk_mq_freeze_queue(info->rq);
+	blk_mq_quiesce_queue(info->rq);
+
+	for (i = 0; i < info->nr_rings; i++) {
+		rinfo = &info->rinfo[i];
+
+		gnttab_cancel_free_callback(&rinfo->callback);
+		flush_work(&rinfo->work);
+	}
+
+	/* Kick the backend to disconnect */
+	xenbus_switch_state(dev, XenbusStateClosing);
+
+	/*
+	 * We don't want to move forward before the frontend is diconnected
+	 * from the backend cleanly.
+	 */
+	timeout = wait_for_completion_timeout(&info->wait_backend_disconnected,
+					      timeout);
+	if (!timeout) {
+		err = -EBUSY;
+		xenbus_dev_error(dev, err, "Freezing timed out;"
+				 "the device may become inconsistent state");
+	}
+
+	return err;
+}
+
+static int blkfront_restore(struct xenbus_device *dev)
+{
+	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
+	int err = 0;
+
+	err = talk_to_blkback(dev, info);
+	blk_mq_unquiesce_queue(info->rq);
+	blk_mq_unfreeze_queue(info->rq);
+
+	if (err)
+		goto out;
+	blk_mq_update_nr_hw_queues(&info->tag_set, info->nr_rings);
+
+out:
+	return err;
+}
+
 static const struct block_device_operations xlvbd_block_fops =
 {
 	.owner = THIS_MODULE,
@@ -2647,6 +2749,9 @@ static struct xenbus_driver blkfront_driver = {
 	.resume = blkfront_resume,
 	.otherend_changed = blkback_changed,
 	.is_ready = blkfront_is_ready,
+	.freeze = blkfront_freeze,
+	.thaw = blkfront_restore,
+	.restore = blkfront_restore
 };
 
 static void purge_persistent_grants(struct blkfront_info *info)
-- 
2.24.1.AMZN


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [RFC PATCH v3 07/12] genirq: Shutdown irq chips in suspend/resume during hibernation
  2020-02-14 23:21 [RFC RESEND PATCH v3 00/12] Enable PM hibernation on guest VMs Anchal Agarwal
                   ` (5 preceding siblings ...)
  2020-02-14 23:25 ` [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation Anchal Agarwal
@ 2020-02-14 23:25 ` Anchal Agarwal
  2020-03-06 23:03   ` Thomas Gleixner
  2020-02-14 23:26 ` [RFC PATCH v3 08/12] xen/time: introduce xen_{save,restore}_steal_clock Anchal Agarwal
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 37+ messages in thread
From: Anchal Agarwal @ 2020-02-14 23:25 UTC (permalink / raw)
  To: tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, roger.pau, axboe,
	davem, rjw, len.brown, pavel, peterz, eduval, sblbir, anchalag,
	xen-devel, vkuznets, netdev, linux-kernel, dwmw, fllinden, benh

There are no PM handlers for the legacy devices, so during teardown a
stale event channel <-> IRQ mapping may still remain in the image and
resume may fail. To avoid adding much code by implementing handlers for
legacy devices, add a new irq_chip flag, IRQCHIP_SHUTDOWN_ON_SUSPEND,
which, when enabled on an irq chip (e.g. xen-pirq), lets the core
suspend/resume IRQ code shut down and restart the active IRQs. PM
suspend/hibernation code will rely on this.
Without this, in PM hibernation, information about the event channel
remains in the hibernation image, but there is no guarantee that the same
event channel numbers are assigned to the devices when restoring the
system. This may cause conflicts and prevent some devices from being
restored correctly.

Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
---
 drivers/xen/events/events_base.c |  1 +
 include/linux/irq.h              |  2 ++
 kernel/irq/chip.c                |  2 +-
 kernel/irq/internals.h           |  1 +
 kernel/irq/pm.c                  | 31 ++++++++++++++++++++++---------
 5 files changed, 27 insertions(+), 10 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 6c8843968a52..e44f27b45bef 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -1620,6 +1620,7 @@ static struct irq_chip xen_pirq_chip __read_mostly = {
 	.irq_set_affinity	= set_affinity_irq,
 
 	.irq_retrigger		= retrigger_dynirq,
+	.flags                  = IRQCHIP_SHUTDOWN_ON_SUSPEND,
 };
 
 static struct irq_chip xen_percpu_chip __read_mostly = {
diff --git a/include/linux/irq.h b/include/linux/irq.h
index fb301cf29148..2873a579fd9d 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -511,6 +511,7 @@ struct irq_chip {
  * IRQCHIP_EOI_THREADED:	Chip requires eoi() on unmask in threaded mode
  * IRQCHIP_SUPPORTS_LEVEL_MSI	Chip can provide two doorbells for Level MSIs
  * IRQCHIP_SUPPORTS_NMI:	Chip can deliver NMIs, only for root irqchips
+ * IRQCHIP_SHUTDOWN_ON_SUSPEND: Shutdown non wake irqs in the suspend path
  */
 enum {
 	IRQCHIP_SET_TYPE_MASKED		= (1 <<  0),
@@ -522,6 +523,7 @@ enum {
 	IRQCHIP_EOI_THREADED		= (1 <<  6),
 	IRQCHIP_SUPPORTS_LEVEL_MSI	= (1 <<  7),
 	IRQCHIP_SUPPORTS_NMI		= (1 <<  8),
+	IRQCHIP_SHUTDOWN_ON_SUSPEND     = (1 <<  9),
 };
 
 #include <linux/irqdesc.h>
diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
index b76703b2c0af..a1e8df5193ba 100644
--- a/kernel/irq/chip.c
+++ b/kernel/irq/chip.c
@@ -233,7 +233,7 @@ __irq_startup_managed(struct irq_desc *desc, struct cpumask *aff, bool force)
 }
 #endif
 
-static int __irq_startup(struct irq_desc *desc)
+int __irq_startup(struct irq_desc *desc)
 {
 	struct irq_data *d = irq_desc_get_irq_data(desc);
 	int ret = 0;
diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index 3924fbe829d4..11c7c55bda63 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -80,6 +80,7 @@ extern void __enable_irq(struct irq_desc *desc);
 extern int irq_activate(struct irq_desc *desc);
 extern int irq_activate_and_startup(struct irq_desc *desc, bool resend);
 extern int irq_startup(struct irq_desc *desc, bool resend, bool force);
+extern int __irq_startup(struct irq_desc *desc);
 
 extern void irq_shutdown(struct irq_desc *desc);
 extern void irq_shutdown_and_deactivate(struct irq_desc *desc);
diff --git a/kernel/irq/pm.c b/kernel/irq/pm.c
index 8f557fa1f4fe..dc48a25f1756 100644
--- a/kernel/irq/pm.c
+++ b/kernel/irq/pm.c
@@ -85,16 +85,25 @@ static bool suspend_device_irq(struct irq_desc *desc)
 	}
 
 	desc->istate |= IRQS_SUSPENDED;
-	__disable_irq(desc);
-
 	/*
-	 * Hardware which has no wakeup source configuration facility
-	 * requires that the non wakeup interrupts are masked at the
-	 * chip level. The chip implementation indicates that with
-	 * IRQCHIP_MASK_ON_SUSPEND.
+	 * Some irq chips (e.g. XEN PIRQ) require a full shutdown on suspend
+	 * as some of the legacy drivers(e.g. floppy) do nothing during the
+	 * suspend path
 	 */
-	if (irq_desc_get_chip(desc)->flags & IRQCHIP_MASK_ON_SUSPEND)
-		mask_irq(desc);
+	if (irq_desc_get_chip(desc)->flags & IRQCHIP_SHUTDOWN_ON_SUSPEND) {
+		irq_shutdown(desc);
+	} else {
+		__disable_irq(desc);
+
+	       /*
+		* Hardware which has no wakeup source configuration facility
+		* requires that the non wakeup interrupts are masked at the
+		* chip level. The chip implementation indicates that with
+		* IRQCHIP_MASK_ON_SUSPEND.
+		*/
+		if (irq_desc_get_chip(desc)->flags & IRQCHIP_MASK_ON_SUSPEND)
+			mask_irq(desc);
+	}
 	return true;
 }
 
@@ -152,7 +161,11 @@ static void resume_irq(struct irq_desc *desc)
 	irq_state_set_masked(desc);
 resume:
 	desc->istate &= ~IRQS_SUSPENDED;
-	__enable_irq(desc);
+
+	if (irq_desc_get_chip(desc)->flags & IRQCHIP_SHUTDOWN_ON_SUSPEND)
+		__irq_startup(desc);
+	else
+		__enable_irq(desc);
 }
 
 static void resume_irqs(bool want_early)
-- 
2.24.1.AMZN


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [RFC PATCH v3 08/12] xen/time: introduce xen_{save,restore}_steal_clock
  2020-02-14 23:21 [RFC RESEND PATCH v3 00/12] Enable PM hibernation on guest VMs Anchal Agarwal
                   ` (6 preceding siblings ...)
  2020-02-14 23:25 ` [RFC PATCH v3 07/12] genirq: Shutdown irq chips in suspend/resume during hibernation Anchal Agarwal
@ 2020-02-14 23:26 ` Anchal Agarwal
  2020-02-14 23:27 ` [RFC PATCH v3 09/12] x86/xen: save and restore steal clock Anchal Agarwal
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 37+ messages in thread
From: Anchal Agarwal @ 2020-02-14 23:26 UTC (permalink / raw)
  To: tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, roger.pau, axboe,
	davem, rjw, len.brown, pavel, peterz, eduval, sblbir, anchalag,
	xen-devel, vkuznets, netdev, linux-kernel, dwmw, fllinden, benh

From: Munehisa Kamata <kamatam@amazon.com>

Currently, the steal time accounting code in the scheduler expects the
steal clock callback to provide a monotonically increasing value. If the
accounting code receives a smaller value than the previous one, it uses a
negative value to calculate the steal time, which results in incorrectly
updated idle and steal time accounting. This breaks userspace tools which
read /proc/stat.

top - 08:05:35 up  2:12,  3 users,  load average: 0.00, 0.07, 0.23
Tasks:  80 total,   1 running,  79 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,30100.0%id,  0.0%wa,  0.0%hi, 0.0%si,-1253874204672.0%st

This can actually happen when a Xen PVHVM guest gets restored from
hibernation, because such a restored guest is just a fresh domain from
Xen's perspective and the time information in the runstate info starts
over from scratch.

This patch introduces xen_save_steal_clock(), which saves the current
values in the runstate info into per-cpu variables. Its counterpart,
xen_restore_steal_clock(), sets an offset if it finds that the current
values in the runstate info are smaller than the previous ones.
xen_steal_clock() is also modified to use the offset to ensure that the
scheduler only sees a monotonically increasing number.

Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
---
 drivers/xen/time.c    | 29 ++++++++++++++++++++++++++++-
 include/xen/xen-ops.h |  2 ++
 2 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/time.c b/drivers/xen/time.c
index 0968859c29d0..3560222cc0dd 100644
--- a/drivers/xen/time.c
+++ b/drivers/xen/time.c
@@ -23,6 +23,9 @@ static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate);
 
 static DEFINE_PER_CPU(u64[4], old_runstate_time);
 
+static DEFINE_PER_CPU(u64, xen_prev_steal_clock);
+static DEFINE_PER_CPU(u64, xen_steal_clock_offset);
+
 /* return an consistent snapshot of 64-bit time/counter value */
 static u64 get64(const u64 *p)
 {
@@ -149,7 +152,7 @@ bool xen_vcpu_stolen(int vcpu)
 	return per_cpu(xen_runstate, vcpu).state == RUNSTATE_runnable;
 }
 
-u64 xen_steal_clock(int cpu)
+static u64 __xen_steal_clock(int cpu)
 {
 	struct vcpu_runstate_info state;
 
@@ -157,6 +160,30 @@ u64 xen_steal_clock(int cpu)
 	return state.time[RUNSTATE_runnable] + state.time[RUNSTATE_offline];
 }
 
+u64 xen_steal_clock(int cpu)
+{
+	return __xen_steal_clock(cpu) + per_cpu(xen_steal_clock_offset, cpu);
+}
+
+void xen_save_steal_clock(int cpu)
+{
+	per_cpu(xen_prev_steal_clock, cpu) = xen_steal_clock(cpu);
+}
+
+void xen_restore_steal_clock(int cpu)
+{
+	u64 steal_clock = __xen_steal_clock(cpu);
+
+	if (per_cpu(xen_prev_steal_clock, cpu) > steal_clock) {
+		/* Need to update the offset */
+		per_cpu(xen_steal_clock_offset, cpu) =
+		    per_cpu(xen_prev_steal_clock, cpu) - steal_clock;
+	} else {
+		/* Avoid unnecessary steal clock warp */
+		per_cpu(xen_steal_clock_offset, cpu) = 0;
+	}
+}
+
 void xen_setup_runstate_info(int cpu)
 {
 	struct vcpu_register_runstate_memory_area area;
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 3b3992b5b0c2..12b3f4474a05 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -37,6 +37,8 @@ void xen_time_setup_guest(void);
 void xen_manage_runstate_time(int action);
 void xen_get_runstate_snapshot(struct vcpu_runstate_info *res);
 u64 xen_steal_clock(int cpu);
+void xen_save_steal_clock(int cpu);
+void xen_restore_steal_clock(int cpu);
 
 int xen_setup_shutdown_event(void);
 
-- 
2.24.1.AMZN


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [RFC PATCH v3 09/12] x86/xen: save and restore steal clock
  2020-02-14 23:21 [RFC RESEND PATCH v3 00/12] Enable PM hibernation on guest VMs Anchal Agarwal
                   ` (7 preceding siblings ...)
  2020-02-14 23:26 ` [RFC PATCH v3 08/12] xen/time: introduce xen_{save,restore}_steal_clock Anchal Agarwal
@ 2020-02-14 23:27 ` Anchal Agarwal
  2020-02-14 23:27 ` [RFC PATCH v3 10/12] xen: Introduce wrapper for save/restore sched clock offset Anchal Agarwal
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 37+ messages in thread
From: Anchal Agarwal @ 2020-02-14 23:27 UTC (permalink / raw)
  To: tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, roger.pau, axboe,
	davem, rjw, len.brown, pavel, peterz, eduval, sblbir, anchalag,
	xen-devel, vkuznets, netdev, linux-kernel, dwmw, fllinden, benh

From: Munehisa Kamata <kamatam@amazon.com>

Save the steal clock values of all present CPUs in the system core ops
suspend callback. Also, restore the boot CPU's steal clock in the system
core resume callback. For non-boot CPUs, restore it after they're brought
up, because the runstate info for non-boot CPUs is not active until then.

Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
---
 arch/x86/xen/suspend.c | 13 ++++++++++++-
 arch/x86/xen/time.c    |  3 +++
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/x86/xen/suspend.c b/arch/x86/xen/suspend.c
index 784c4484100b..dae0f74f5390 100644
--- a/arch/x86/xen/suspend.c
+++ b/arch/x86/xen/suspend.c
@@ -91,12 +91,20 @@ void xen_arch_suspend(void)
 static int xen_syscore_suspend(void)
 {
 	struct xen_remove_from_physmap xrfp;
-	int ret;
+	int cpu, ret;
 
 	/* Xen suspend does similar stuffs in its own logic */
 	if (xen_suspend_mode_is_xen_suspend())
 		return 0;
 
+	for_each_present_cpu(cpu) {
+		/*
+		 * Nonboot CPUs are already offline, but the last copy of
+		 * runstate info is still accessible.
+		 */
+		xen_save_steal_clock(cpu);
+	}
+
 	xrfp.domid = DOMID_SELF;
 	xrfp.gpfn = __pa(HYPERVISOR_shared_info) >> PAGE_SHIFT;
 
@@ -118,6 +126,9 @@ static void xen_syscore_resume(void)
 
 	pvclock_resume();
 
+	/* Nonboot CPUs will be resumed when they're brought up */
+	xen_restore_steal_clock(smp_processor_id());
+
 	gnttab_resume();
 }
 
diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
index befbdd8b17f0..8cf632dda605 100644
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -537,6 +537,9 @@ static void xen_hvm_setup_cpu_clockevents(void)
 {
 	int cpu = smp_processor_id();
 	xen_setup_runstate_info(cpu);
+	if (cpu)
+		xen_restore_steal_clock(cpu);
+
 	/*
 	 * xen_setup_timer(cpu) - snprintf is bad in atomic context. Hence
 	 * doing it xen_hvm_cpu_notify (which gets called by smp_init during
-- 
2.24.1.AMZN


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [RFC PATCH v3 10/12] xen: Introduce wrapper for save/restore sched clock offset
  2020-02-14 23:21 [RFC RESEND PATCH v3 00/12] Enable PM hibernation on guest VMs Anchal Agarwal
                   ` (8 preceding siblings ...)
  2020-02-14 23:27 ` [RFC PATCH v3 09/12] x86/xen: save and restore steal clock Anchal Agarwal
@ 2020-02-14 23:27 ` Anchal Agarwal
  2020-02-14 23:27 ` [RFC PATCH v3 11/12] xen: Update sched clock offset to avoid system instability in hibernation Anchal Agarwal
  2020-02-14 23:28 ` [RFC PATCH v3 12/12] PM / hibernate: update the resume offset on SNAPSHOT_SET_SWAP_AREA Anchal Agarwal
  11 siblings, 0 replies; 37+ messages in thread
From: Anchal Agarwal @ 2020-02-14 23:27 UTC (permalink / raw)
  To: tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, roger.pau, axboe,
	davem, rjw, len.brown, pavel, peterz, eduval, sblbir, anchalag,
	xen-devel, vkuznets, netdev, linux-kernel, dwmw, fllinden, benh

Introduce wrappers to save/restore xen_sched_clock_offset, to be used by
the PM hibernation code to avoid system instability during resume.

Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
---
 arch/x86/xen/time.c    | 15 +++++++++++++--
 arch/x86/xen/xen-ops.h |  2 ++
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
index 8cf632dda605..eeb6d3d2eaab 100644
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -379,12 +379,23 @@ static const struct pv_time_ops xen_time_ops __initconst = {
 static struct pvclock_vsyscall_time_info *xen_clock __read_mostly;
 static u64 xen_clock_value_saved;
 
+/*This is needed to maintain a monotonic clock value during PM hibernation */
+void xen_save_sched_clock_offset(void)
+{
+	xen_clock_value_saved = xen_clocksource_read() - xen_sched_clock_offset;
+}
+
+void xen_restore_sched_clock_offset(void)
+{
+	xen_sched_clock_offset = xen_clocksource_read() - xen_clock_value_saved;
+}
+
 void xen_save_time_memory_area(void)
 {
 	struct vcpu_register_time_memory_area t;
 	int ret;
 
-	xen_clock_value_saved = xen_clocksource_read() - xen_sched_clock_offset;
+	xen_save_sched_clock_offset();
 
 	if (!xen_clock)
 		return;
@@ -426,7 +437,7 @@ void xen_restore_time_memory_area(void)
 out:
 	/* Need pvclock_resume() before using xen_clocksource_read(). */
 	pvclock_resume();
-	xen_sched_clock_offset = xen_clocksource_read() - xen_clock_value_saved;
+	xen_restore_sched_clock_offset();
 }
 
 static void xen_setup_vsyscall_time_info(void)
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index d84c357994bd..9f49124df033 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -72,6 +72,8 @@ void xen_save_time_memory_area(void);
 void xen_restore_time_memory_area(void);
 void xen_init_time_ops(void);
 void xen_hvm_init_time_ops(void);
+void xen_save_sched_clock_offset(void);
+void xen_restore_sched_clock_offset(void);
 
 irqreturn_t xen_debug_interrupt(int irq, void *dev_id);
 
-- 
2.24.1.AMZN


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [RFC PATCH v3 11/12] xen: Update sched clock offset to avoid system instability in hibernation
  2020-02-14 23:21 [RFC RESEND PATCH v3 00/12] Enable PM hibernation on guest VMs Anchal Agarwal
                   ` (9 preceding siblings ...)
  2020-02-14 23:27 ` [RFC PATCH v3 10/12] xen: Introduce wrapper for save/restore sched clock offset Anchal Agarwal
@ 2020-02-14 23:27 ` Anchal Agarwal
  2020-02-14 23:28 ` [RFC PATCH v3 12/12] PM / hibernate: update the resume offset on SNAPSHOT_SET_SWAP_AREA Anchal Agarwal
  11 siblings, 0 replies; 37+ messages in thread
From: Anchal Agarwal @ 2020-02-14 23:27 UTC (permalink / raw)
  To: tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, roger.pau, axboe,
	davem, rjw, len.brown, pavel, peterz, eduval, sblbir, anchalag,
	xen-devel, vkuznets, netdev, linux-kernel, dwmw, fllinden, benh

Save/restore xen_sched_clock_offset in syscore suspend/resume during PM
hibernation. Commit 867cefb4cb1012 ("xen: Fix x86 sched_clock() interface
for xen") fixed xen guest time handling during migration. A similar issue
is seen during PM hibernation when the system runs a CPU-intensive
workload: after resume, pvclock resets its value to 0, but
xen_sched_clock_offset is never updated. Because the offset is stale, the
system no longer sees a monotonic clock value; the scheduler then thinks
that CPU-hogging tasks need more CPU time, and the system freezes during
resume from hibernation under heavy CPU load.
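
To make the invariant explicit, here is a minimal sketch (not part of the
functional change; it assumes sched_clock() is computed as
xen_clocksource_read() - xen_sched_clock_offset, as in arch/x86/xen/time.c,
and saved_sched_clock plays the role of xen_clock_value_saved):

static u64 saved_sched_clock;

static void sketch_suspend(void)
{
	/* xen_save_sched_clock_offset(): remember the last sched_clock() value */
	saved_sched_clock = xen_clocksource_read() - xen_sched_clock_offset;
}

static void sketch_resume(void)
{
	/*
	 * After pvclock_resume() the clocksource effectively restarts from a
	 * small value. Recomputing the offset makes sched_clock() continue
	 * from saved_sched_clock instead of jumping backwards, keeping it
	 * monotonic across the hibernation cycle.
	 */
	xen_sched_clock_offset = xen_clocksource_read() - saved_sched_clock;
}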

Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
---
 arch/x86/xen/suspend.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/x86/xen/suspend.c b/arch/x86/xen/suspend.c
index dae0f74f5390..7e5275944810 100644
--- a/arch/x86/xen/suspend.c
+++ b/arch/x86/xen/suspend.c
@@ -105,6 +105,8 @@ static int xen_syscore_suspend(void)
 		xen_save_steal_clock(cpu);
 	}
 
+	xen_save_sched_clock_offset();
+
 	xrfp.domid = DOMID_SELF;
 	xrfp.gpfn = __pa(HYPERVISOR_shared_info) >> PAGE_SHIFT;
 
@@ -126,6 +128,12 @@ static void xen_syscore_resume(void)
 
 	pvclock_resume();
 
+	/*
+	 * Restore xen_sched_clock_offset during resume to maintain
+	 * monotonic clock value
+	 */
+	xen_restore_sched_clock_offset();
+
 	/* Nonboot CPUs will be resumed when they're brought up */
 	xen_restore_steal_clock(smp_processor_id());
 
-- 
2.24.1.AMZN


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [RFC PATCH v3 12/12] PM / hibernate: update the resume offset on SNAPSHOT_SET_SWAP_AREA
  2020-02-14 23:21 [RFC RESEND PATCH v3 00/12] Enable PM hibernation on guest VMs Anchal Agarwal
                   ` (10 preceding siblings ...)
  2020-02-14 23:27 ` [RFC PATCH v3 11/12] xen: Update sched clock offset to avoid system instability in hibernation Anchal Agarwal
@ 2020-02-14 23:28 ` Anchal Agarwal
  11 siblings, 0 replies; 37+ messages in thread
From: Anchal Agarwal @ 2020-02-14 23:28 UTC (permalink / raw)
  To: tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, roger.pau, axboe,
	davem, rjw, len.brown, pavel, peterz, eduval, sblbir, anchalag,
	xen-devel, vkuznets, netdev, linux-kernel, dwmw, fllinden, benh

From: Aleksei Besogonov <cyberax@amazon.com>

The SNAPSHOT_SET_SWAP_AREA ioctl is supposed to be used to set the
hibernation offset on a running kernel to enable hibernating to a swap
file. However, it doesn't actually update the swsusp_resume_block
variable. As a result, hibernation fails at the last step (after all the
data has been written out) when the swap signature is validated in
mark_swapfiles().

Before this patch, the command line processing was the only place where
swsusp_resume_block was set.
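
For context, this is roughly how a userspace hibernation helper (e.g. a
uswsusp-style tool) drives the ioctl. The snippet is an illustrative sketch,
not code from this series; the device path and offset are made-up values:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <linux/suspend_ioctls.h>	/* SNAPSHOT_SET_SWAP_AREA, resume_swap_area */

/* Tell the kernel which swap device/offset the image will be written to and
 * resumed from. With this patch the kernel also updates
 * swsusp_resume_device/swsusp_resume_block here, so the swap signature check
 * in mark_swapfiles() succeeds. */
static int set_swap_area(int snapshot_fd, const char *swap_dev, __u64 offset)
{
	struct resume_swap_area swap = { .offset = offset };
	struct stat st;

	if (stat(swap_dev, &st) < 0)
		return -1;
	swap.dev = st.st_rdev;	/* block device that holds the swap space */

	return ioctl(snapshot_fd, SNAPSHOT_SET_SWAP_AREA, &swap);
}

int main(void)
{
	int fd = open("/dev/snapshot", O_RDONLY);

	/* made-up device and offset, purely for illustration */
	if (fd < 0 || set_swap_area(fd, "/dev/sda2", 38912) < 0) {
		perror("SNAPSHOT_SET_SWAP_AREA");
		return 1;
	}
	/* ... SNAPSHOT_FREEZE, SNAPSHOT_CREATE_IMAGE, write out the image ... */
	return 0;
}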

Signed-off-by: Aleksei Besogonov <cyberax@amazon.com>
Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
---
 kernel/power/user.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/power/user.c b/kernel/power/user.c
index 77438954cc2b..d396e313cb7b 100644
--- a/kernel/power/user.c
+++ b/kernel/power/user.c
@@ -374,8 +374,12 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd,
 			if (swdev) {
 				offset = swap_area.offset;
 				data->swap = swap_type_of(swdev, offset, NULL);
-				if (data->swap < 0)
+				if (data->swap < 0) {
 					error = -ENODEV;
+				} else {
+					swsusp_resume_device = swdev;
+					swsusp_resume_block = offset;
+				}
 			} else {
 				data->swap = -1;
 				error = -EINVAL;
-- 
2.24.1.AMZN


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
  2020-02-14 23:25 ` [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation Anchal Agarwal
@ 2020-02-17 10:05   ` Roger Pau Monné
  2020-02-17 23:05     ` Anchal Agarwal
  2020-02-21 14:24   ` Roger Pau Monné
  1 sibling, 1 reply; 37+ messages in thread
From: Roger Pau Monné @ 2020-02-17 10:05 UTC (permalink / raw)
  To: Anchal Agarwal
  Cc: tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, axboe, davem, rjw,
	len.brown, pavel, peterz, eduval, sblbir, xen-devel, vkuznets,
	netdev, linux-kernel, dwmw, fllinden, benh

On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> From: Munehisa Kamata <kamatam@amazon.com
> 
> Add freeze, thaw and restore callbacks for PM suspend and hibernation
> support. All frontend drivers that needs to use PM_HIBERNATION/PM_SUSPEND
> events, need to implement these xenbus_driver callbacks.
> The freeze handler stops a block-layer queue and disconnect the
> frontend from the backend while freeing ring_info and associated resources.
> The restore handler re-allocates ring_info and re-connect to the
> backend, so the rest of the kernel can continue to use the block device
> transparently. Also, the handlers are used for both PM suspend and
> hibernation so that we can keep the existing suspend/resume callbacks for
> Xen suspend without modification. Before disconnecting from backend,
> we need to prevent any new IO from being queued and wait for existing
> IO to complete.

This is different from Xen (xenstore) initiated suspension, as in that
case Linux doesn't flush the rings or disconnects from the backend.

This is done so that in case suspensions fails the recovery doesn't
need to reconnect the PV devices, and in order to speed up suspension
time (ie: waiting for all queues to be flushed can take time as Linux
supports multiqueue, multipage rings and indirect descriptors), and
the backend could be contended if there's a lot of IO pressure from
guests.

Linux already keeps a shadow of the ring contents, so in-flight
requests can be re-issued after the frontend has reconnected during
resume.
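
For reference, a condensed sketch of that existing recovery path (the names
are the mainline xen-blkfront ones; locking and error handling omitted, so
treat it as an illustration rather than a literal excerpt):

static int sketch_blkif_recover(struct blkfront_info *info)
{
	struct request *req, *n;
	struct bio *bio;

	/*
	 * blkfront_resume() has already copied the not-yet-completed
	 * requests and bios off the old rings into info->requests and
	 * info->bio_list before reconnecting to the backend.
	 */
	list_for_each_entry_safe(req, n, &info->requests, queuelist) {
		list_del_init(&req->queuelist);
		blk_mq_requeue_request(req, false);	/* re-issue on the new ring */
	}
	blk_mq_kick_requeue_list(info->rq);

	while ((bio = bio_list_pop(&info->bio_list)) != NULL)
		submit_bio(bio);	/* re-submit bios that never reached a ring */

	return 0;
}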

> Freeze/unfreeze of the queues will guarantee that there
> are no requests in use on the shared ring.
> 
> Note:For older backends,if a backend doesn't have commit'12ea729645ace'
> xen/blkback: unmap all persistent grants when frontend gets disconnected,
> the frontend may see massive amount of grant table warning when freeing
> resources.
> [   36.852659] deferring g.e. 0xf9 (pfn 0xffffffffffffffff)
> [   36.855089] xen:grant_table: WARNING:e.g. 0x112 still in use!
> 
> In this case, persistent grants would need to be disabled.
> 
> [Anchal Changelog: Removed timeout/request during blkfront freeze.
> Fixed major part of the code to work with blk-mq]
> Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
> Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
> ---
>  drivers/block/xen-blkfront.c | 119 ++++++++++++++++++++++++++++++++---
>  1 file changed, 112 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index 478120233750..d715ed3cb69a 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -47,6 +47,8 @@
>  #include <linux/bitmap.h>
>  #include <linux/list.h>
>  #include <linux/workqueue.h>
> +#include <linux/completion.h>
> +#include <linux/delay.h>
>  
>  #include <xen/xen.h>
>  #include <xen/xenbus.h>
> @@ -79,6 +81,8 @@ enum blkif_state {
>  	BLKIF_STATE_DISCONNECTED,
>  	BLKIF_STATE_CONNECTED,
>  	BLKIF_STATE_SUSPENDED,
> +	BLKIF_STATE_FREEZING,
> +	BLKIF_STATE_FROZEN
>  };
>  
>  struct grant {
> @@ -220,6 +224,7 @@ struct blkfront_info
>  	struct list_head requests;
>  	struct bio_list bio_list;
>  	struct list_head info_list;
> +	struct completion wait_backend_disconnected;
>  };
>  
>  static unsigned int nr_minors;
> @@ -261,6 +266,7 @@ static DEFINE_SPINLOCK(minor_lock);
>  static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo);
>  static void blkfront_gather_backend_features(struct blkfront_info *info);
>  static int negotiate_mq(struct blkfront_info *info);
> +static void __blkif_free(struct blkfront_info *info);
>  
>  static int get_id_from_freelist(struct blkfront_ring_info *rinfo)
>  {
> @@ -995,6 +1001,7 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size,
>  	info->sector_size = sector_size;
>  	info->physical_sector_size = physical_sector_size;
>  	blkif_set_queue_limits(info);
> +	init_completion(&info->wait_backend_disconnected);
>  
>  	return 0;
>  }
> @@ -1218,6 +1225,8 @@ static void xlvbd_release_gendisk(struct blkfront_info *info)
>  /* Already hold rinfo->ring_lock. */
>  static inline void kick_pending_request_queues_locked(struct blkfront_ring_info *rinfo)
>  {
> +	if (unlikely(rinfo->dev_info->connected == BLKIF_STATE_FREEZING))
> +		return;
>  	if (!RING_FULL(&rinfo->ring))
>  		blk_mq_start_stopped_hw_queues(rinfo->dev_info->rq, true);
>  }
> @@ -1341,8 +1350,6 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo)
>  
>  static void blkif_free(struct blkfront_info *info, int suspend)
>  {
> -	unsigned int i;
> -
>  	/* Prevent new requests being issued until we fix things up. */
>  	info->connected = suspend ?
>  		BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
> @@ -1350,6 +1357,13 @@ static void blkif_free(struct blkfront_info *info, int suspend)
>  	if (info->rq)
>  		blk_mq_stop_hw_queues(info->rq);
>  
> +	__blkif_free(info);
> +}
> +
> +static void __blkif_free(struct blkfront_info *info)
> +{
> +	unsigned int i;
> +
>  	for (i = 0; i < info->nr_rings; i++)
>  		blkif_free_ring(&info->rinfo[i]);
>  
> @@ -1553,8 +1567,10 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
>  	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
>  	struct blkfront_info *info = rinfo->dev_info;
>  
> -	if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
> -		return IRQ_HANDLED;
> +	if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) {
> +		if (info->connected != BLKIF_STATE_FREEZING)
> +			return IRQ_HANDLED;
> +	}
>  
>  	spin_lock_irqsave(&rinfo->ring_lock, flags);
>   again:
> @@ -2020,6 +2036,7 @@ static int blkif_recover(struct blkfront_info *info)
>  	struct bio *bio;
>  	unsigned int segs;
>  
> +	bool frozen = info->connected == BLKIF_STATE_FROZEN;
>  	blkfront_gather_backend_features(info);
>  	/* Reset limits changed by blk_mq_update_nr_hw_queues(). */
>  	blkif_set_queue_limits(info);
> @@ -2046,6 +2063,9 @@ static int blkif_recover(struct blkfront_info *info)
>  		kick_pending_request_queues(rinfo);
>  	}
>  
> +	if (frozen)
> +		return 0;
> +
>  	list_for_each_entry_safe(req, n, &info->requests, queuelist) {
>  		/* Requeue pending requests (flush or discard) */
>  		list_del_init(&req->queuelist);
> @@ -2359,6 +2379,7 @@ static void blkfront_connect(struct blkfront_info *info)
>  
>  		return;
>  	case BLKIF_STATE_SUSPENDED:
> +	case BLKIF_STATE_FROZEN:
>  		/*
>  		 * If we are recovering from suspension, we need to wait
>  		 * for the backend to announce it's features before
> @@ -2476,12 +2497,37 @@ static void blkback_changed(struct xenbus_device *dev,
>  		break;
>  
>  	case XenbusStateClosed:
> -		if (dev->state == XenbusStateClosed)
> +		if (dev->state == XenbusStateClosed) {
> +			if (info->connected == BLKIF_STATE_FREEZING) {
> +				__blkif_free(info);
> +				info->connected = BLKIF_STATE_FROZEN;
> +				complete(&info->wait_backend_disconnected);
> +				break;
> +			}
> +
>  			break;
> +		}
> +
> +		/*
> +		 * We may somehow receive backend's Closed again while thawing
> +		 * or restoring and it causes thawing or restoring to fail.
> +		 * Ignore such unexpected state anyway.
> +		 */
> +		if (info->connected == BLKIF_STATE_FROZEN &&
> +				dev->state == XenbusStateInitialised) {
> +			dev_dbg(&dev->dev,
> +					"ignore the backend's Closed state: %s",
> +					dev->nodename);
> +			break;
> +		}
>  		/* fall through */
>  	case XenbusStateClosing:
> -		if (info)
> -			blkfront_closing(info);
> +		if (info) {
> +			if (info->connected == BLKIF_STATE_FREEZING)
> +				xenbus_frontend_closed(dev);
> +			else
> +				blkfront_closing(info);
> +		}
>  		break;
>  	}
>  }
> @@ -2625,6 +2671,62 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
>  	mutex_unlock(&blkfront_mutex);
>  }
>  
> +static int blkfront_freeze(struct xenbus_device *dev)
> +{
> +	unsigned int i;
> +	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> +	struct blkfront_ring_info *rinfo;
> +	/* This would be reasonable timeout as used in xenbus_dev_shutdown() */
> +	unsigned int timeout = 5 * HZ;
> +	int err = 0;
> +
> +	info->connected = BLKIF_STATE_FREEZING;
> +
> +	blk_mq_freeze_queue(info->rq);
> +	blk_mq_quiesce_queue(info->rq);
> +
> +	for (i = 0; i < info->nr_rings; i++) {
> +		rinfo = &info->rinfo[i];
> +
> +		gnttab_cancel_free_callback(&rinfo->callback);
> +		flush_work(&rinfo->work);
> +	}
> +
> +	/* Kick the backend to disconnect */
> +	xenbus_switch_state(dev, XenbusStateClosing);

Are you sure this is safe?

I don't think you wait for all requests pending on the ring to be
finished by the backend, and hence you might lose requests as the
ones on the ring would not be re-issued by blkfront_restore AFAICT.

Thanks, Roger.

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
  2020-02-17 10:05   ` Roger Pau Monné
@ 2020-02-17 23:05     ` Anchal Agarwal
  2020-02-18  9:16       ` Roger Pau Monné
  0 siblings, 1 reply; 37+ messages in thread
From: Anchal Agarwal @ 2020-02-17 23:05 UTC (permalink / raw)
  To: Roger Pau Monné,
	tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, axboe, davem, rjw,
	len.brown, pavel, peterz, eduval, sblbir, anchalag, xen-devel,
	vkuznets, netdev, linux-kernel, dwmw, fllinden, benh
  Cc: tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, axboe, davem, rjw,
	len.brown, pavel, peterz, eduval, sblbir, xen-devel, vkuznets,
	netdev, linux-kernel, dwmw, fllinden, benh

On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné wrote:
> On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> > From: Munehisa Kamata <kamatam@amazon.com
> > 
> > Add freeze, thaw and restore callbacks for PM suspend and hibernation
> > support. All frontend drivers that needs to use PM_HIBERNATION/PM_SUSPEND
> > events, need to implement these xenbus_driver callbacks.
> > The freeze handler stops a block-layer queue and disconnect the
> > frontend from the backend while freeing ring_info and associated resources.
> > The restore handler re-allocates ring_info and re-connect to the
> > backend, so the rest of the kernel can continue to use the block device
> > transparently. Also, the handlers are used for both PM suspend and
> > hibernation so that we can keep the existing suspend/resume callbacks for
> > Xen suspend without modification. Before disconnecting from backend,
> > we need to prevent any new IO from being queued and wait for existing
> > IO to complete.
> 
> This is different from Xen (xenstore) initiated suspension, as in that
> case Linux doesn't flush the rings or disconnects from the backend.
Yes, AFAIK in xen initiated suspension the backend takes care of it.
> 
> This is done so that in case suspensions fails the recovery doesn't
> need to reconnect the PV devices, and in order to speed up suspension
> time (ie: waiting for all queues to be flushed can take time as Linux
> supports multiqueue, multipage rings and indirect descriptors), and
> the backend could be contended if there's a lot of IO pressure from
> guests.
> 
> Linux already keeps a shadow of the ring contents, so in-flight
> requests can be re-issued after the frontend has reconnected during
> resume.
> 
> > Freeze/unfreeze of the queues will guarantee that there
> > are no requests in use on the shared ring.
> > 
> > Note:For older backends,if a backend doesn't have commit'12ea729645ace'
> > xen/blkback: unmap all persistent grants when frontend gets disconnected,
> > the frontend may see massive amount of grant table warning when freeing
> > resources.
> > [   36.852659] deferring g.e. 0xf9 (pfn 0xffffffffffffffff)
> > [   36.855089] xen:grant_table: WARNING:e.g. 0x112 still in use!
> > 
> > In this case, persistent grants would need to be disabled.
> > 
> > [Anchal Changelog: Removed timeout/request during blkfront freeze.
> > Fixed major part of the code to work with blk-mq]
> > Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
> > Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
> > ---
> >  drivers/block/xen-blkfront.c | 119 ++++++++++++++++++++++++++++++++---
> >  1 file changed, 112 insertions(+), 7 deletions(-)
> > 
> > diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> > index 478120233750..d715ed3cb69a 100644
> > --- a/drivers/block/xen-blkfront.c
> > +++ b/drivers/block/xen-blkfront.c
> > @@ -47,6 +47,8 @@
> >  #include <linux/bitmap.h>
> >  #include <linux/list.h>
> >  #include <linux/workqueue.h>
> > +#include <linux/completion.h>
> > +#include <linux/delay.h>
> >  
> >  #include <xen/xen.h>
> >  #include <xen/xenbus.h>
> > @@ -79,6 +81,8 @@ enum blkif_state {
> >  	BLKIF_STATE_DISCONNECTED,
> >  	BLKIF_STATE_CONNECTED,
> >  	BLKIF_STATE_SUSPENDED,
> > +	BLKIF_STATE_FREEZING,
> > +	BLKIF_STATE_FROZEN
> >  };
> >  
> >  struct grant {
> > @@ -220,6 +224,7 @@ struct blkfront_info
> >  	struct list_head requests;
> >  	struct bio_list bio_list;
> >  	struct list_head info_list;
> > +	struct completion wait_backend_disconnected;
> >  };
> >  
> >  static unsigned int nr_minors;
> > @@ -261,6 +266,7 @@ static DEFINE_SPINLOCK(minor_lock);
> >  static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo);
> >  static void blkfront_gather_backend_features(struct blkfront_info *info);
> >  static int negotiate_mq(struct blkfront_info *info);
> > +static void __blkif_free(struct blkfront_info *info);
> >  
> >  static int get_id_from_freelist(struct blkfront_ring_info *rinfo)
> >  {
> > @@ -995,6 +1001,7 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size,
> >  	info->sector_size = sector_size;
> >  	info->physical_sector_size = physical_sector_size;
> >  	blkif_set_queue_limits(info);
> > +	init_completion(&info->wait_backend_disconnected);
> >  
> >  	return 0;
> >  }
> > @@ -1218,6 +1225,8 @@ static void xlvbd_release_gendisk(struct blkfront_info *info)
> >  /* Already hold rinfo->ring_lock. */
> >  static inline void kick_pending_request_queues_locked(struct blkfront_ring_info *rinfo)
> >  {
> > +	if (unlikely(rinfo->dev_info->connected == BLKIF_STATE_FREEZING))
> > +		return;
> >  	if (!RING_FULL(&rinfo->ring))
> >  		blk_mq_start_stopped_hw_queues(rinfo->dev_info->rq, true);
> >  }
> > @@ -1341,8 +1350,6 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo)
> >  
> >  static void blkif_free(struct blkfront_info *info, int suspend)
> >  {
> > -	unsigned int i;
> > -
> >  	/* Prevent new requests being issued until we fix things up. */
> >  	info->connected = suspend ?
> >  		BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
> > @@ -1350,6 +1357,13 @@ static void blkif_free(struct blkfront_info *info, int suspend)
> >  	if (info->rq)
> >  		blk_mq_stop_hw_queues(info->rq);
> >  
> > +	__blkif_free(info);
> > +}
> > +
> > +static void __blkif_free(struct blkfront_info *info)
> > +{
> > +	unsigned int i;
> > +
> >  	for (i = 0; i < info->nr_rings; i++)
> >  		blkif_free_ring(&info->rinfo[i]);
> >  
> > @@ -1553,8 +1567,10 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
> >  	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
> >  	struct blkfront_info *info = rinfo->dev_info;
> >  
> > -	if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
> > -		return IRQ_HANDLED;
> > +	if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) {
> > +		if (info->connected != BLKIF_STATE_FREEZING)
> > +			return IRQ_HANDLED;
> > +	}
> >  
> >  	spin_lock_irqsave(&rinfo->ring_lock, flags);
> >   again:
> > @@ -2020,6 +2036,7 @@ static int blkif_recover(struct blkfront_info *info)
> >  	struct bio *bio;
> >  	unsigned int segs;
> >  
> > +	bool frozen = info->connected == BLKIF_STATE_FROZEN;
> >  	blkfront_gather_backend_features(info);
> >  	/* Reset limits changed by blk_mq_update_nr_hw_queues(). */
> >  	blkif_set_queue_limits(info);
> > @@ -2046,6 +2063,9 @@ static int blkif_recover(struct blkfront_info *info)
> >  		kick_pending_request_queues(rinfo);
> >  	}
> >  
> > +	if (frozen)
> > +		return 0;
> > +
> >  	list_for_each_entry_safe(req, n, &info->requests, queuelist) {
> >  		/* Requeue pending requests (flush or discard) */
> >  		list_del_init(&req->queuelist);
> > @@ -2359,6 +2379,7 @@ static void blkfront_connect(struct blkfront_info *info)
> >  
> >  		return;
> >  	case BLKIF_STATE_SUSPENDED:
> > +	case BLKIF_STATE_FROZEN:
> >  		/*
> >  		 * If we are recovering from suspension, we need to wait
> >  		 * for the backend to announce it's features before
> > @@ -2476,12 +2497,37 @@ static void blkback_changed(struct xenbus_device *dev,
> >  		break;
> >  
> >  	case XenbusStateClosed:
> > -		if (dev->state == XenbusStateClosed)
> > +		if (dev->state == XenbusStateClosed) {
> > +			if (info->connected == BLKIF_STATE_FREEZING) {
> > +				__blkif_free(info);
> > +				info->connected = BLKIF_STATE_FROZEN;
> > +				complete(&info->wait_backend_disconnected);
> > +				break;
> > +			}
> > +
> >  			break;
> > +		}
> > +
> > +		/*
> > +		 * We may somehow receive backend's Closed again while thawing
> > +		 * or restoring and it causes thawing or restoring to fail.
> > +		 * Ignore such unexpected state anyway.
> > +		 */
> > +		if (info->connected == BLKIF_STATE_FROZEN &&
> > +				dev->state == XenbusStateInitialised) {
> > +			dev_dbg(&dev->dev,
> > +					"ignore the backend's Closed state: %s",
> > +					dev->nodename);
> > +			break;
> > +		}
> >  		/* fall through */
> >  	case XenbusStateClosing:
> > -		if (info)
> > -			blkfront_closing(info);
> > +		if (info) {
> > +			if (info->connected == BLKIF_STATE_FREEZING)
> > +				xenbus_frontend_closed(dev);
> > +			else
> > +				blkfront_closing(info);
> > +		}
> >  		break;
> >  	}
> >  }
> > @@ -2625,6 +2671,62 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
> >  	mutex_unlock(&blkfront_mutex);
> >  }
> >  
> > +static int blkfront_freeze(struct xenbus_device *dev)
> > +{
> > +	unsigned int i;
> > +	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> > +	struct blkfront_ring_info *rinfo;
> > +	/* This would be reasonable timeout as used in xenbus_dev_shutdown() */
> > +	unsigned int timeout = 5 * HZ;
> > +	int err = 0;
> > +
> > +	info->connected = BLKIF_STATE_FREEZING;
> > +
> > +	blk_mq_freeze_queue(info->rq);
> > +	blk_mq_quiesce_queue(info->rq);
> > +
> > +	for (i = 0; i < info->nr_rings; i++) {
> > +		rinfo = &info->rinfo[i];
> > +
> > +		gnttab_cancel_free_callback(&rinfo->callback);
> > +		flush_work(&rinfo->work);
> > +	}
> > +
> > +	/* Kick the backend to disconnect */
> > +	xenbus_switch_state(dev, XenbusStateClosing);
> 
> Are you sure this is safe?
> 
In my testing, running multiple fio jobs and other test scenarios with a
memory loader work fine. I did not come across a scenario that would
have failed resume due to blkfront issues, unless you can suggest some?
> I don't think you wait for all requests pending on the ring to be
> finished by the backend, and hence you might loose requests as the
> ones on the ring would not be re-issued by blkfront_restore AFAICT.
> 
AFAIU, blk_mq_freeze_queue/blk_mq_quiesce_queue should ensure that there are
no requests left in use on the shared ring. Also, I want to pause the queue
and flush all the pending requests in the shared ring before disconnecting
from the backend.
Quiescing the queue seemed a better option here as we want to make sure
ongoing request dispatches are totally drained.
I should admit that some of this notion is borrowed from how nvme
freeze/unfreeze is done, although it's not an apples-to-apples comparison.
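
To illustrate, the generic blk-mq pattern I'm relying on is roughly the
following (a driver-agnostic sketch, not an excerpt of the patch):

#include <linux/blk-mq.h>

static void sketch_freeze(struct request_queue *q)
{
	blk_mq_freeze_queue(q);		/* block new requests and wait for all
					 * in-flight requests to complete */
	blk_mq_quiesce_queue(q);	/* wait for ->queue_rq() dispatches that
					 * are still running to finish */
	/* nothing is left in use on the shared ring at this point, so the
	 * frontend can disconnect from the backend */
}

static void sketch_thaw(struct request_queue *q)
{
	blk_mq_unquiesce_queue(q);
	blk_mq_unfreeze_queue(q);	/* resume normal request submission */
}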

Do you have any particular scenario in mind which may cause resume to fail?
> Thanks, Roger.

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
  2020-02-17 23:05     ` Anchal Agarwal
@ 2020-02-18  9:16       ` Roger Pau Monné
  2020-02-19 18:04         ` Anchal Agarwal
  0 siblings, 1 reply; 37+ messages in thread
From: Roger Pau Monné @ 2020-02-18  9:16 UTC (permalink / raw)
  To: Anchal Agarwal
  Cc: tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, axboe, davem, rjw,
	len.brown, pavel, peterz, eduval, sblbir, xen-devel, vkuznets,
	netdev, linux-kernel, dwmw, fllinden, benh

On Mon, Feb 17, 2020 at 11:05:53PM +0000, Anchal Agarwal wrote:
> On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné wrote:
> > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> > > From: Munehisa Kamata <kamatam@amazon.com
> > > 
> > > Add freeze, thaw and restore callbacks for PM suspend and hibernation
> > > support. All frontend drivers that needs to use PM_HIBERNATION/PM_SUSPEND
> > > events, need to implement these xenbus_driver callbacks.
> > > The freeze handler stops a block-layer queue and disconnect the
> > > frontend from the backend while freeing ring_info and associated resources.
> > > The restore handler re-allocates ring_info and re-connect to the
> > > backend, so the rest of the kernel can continue to use the block device
> > > transparently. Also, the handlers are used for both PM suspend and
> > > hibernation so that we can keep the existing suspend/resume callbacks for
> > > Xen suspend without modification. Before disconnecting from backend,
> > > we need to prevent any new IO from being queued and wait for existing
> > > IO to complete.
> > 
> > This is different from Xen (xenstore) initiated suspension, as in that
> > case Linux doesn't flush the rings or disconnects from the backend.
> Yes, AFAIK in xen initiated suspension backend takes care of it. 

No, in Xen initiated suspension backend doesn't take care of flushing
the rings, the frontend has a shadow copy of the ring contents and it
re-issues the requests on resume.

> > > +static int blkfront_freeze(struct xenbus_device *dev)
> > > +{
> > > +	unsigned int i;
> > > +	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> > > +	struct blkfront_ring_info *rinfo;
> > > +	/* This would be reasonable timeout as used in xenbus_dev_shutdown() */
> > > +	unsigned int timeout = 5 * HZ;
> > > +	int err = 0;
> > > +
> > > +	info->connected = BLKIF_STATE_FREEZING;
> > > +
> > > +	blk_mq_freeze_queue(info->rq);
> > > +	blk_mq_quiesce_queue(info->rq);
> > > +
> > > +	for (i = 0; i < info->nr_rings; i++) {
> > > +		rinfo = &info->rinfo[i];
> > > +
> > > +		gnttab_cancel_free_callback(&rinfo->callback);
> > > +		flush_work(&rinfo->work);
> > > +	}
> > > +
> > > +	/* Kick the backend to disconnect */
> > > +	xenbus_switch_state(dev, XenbusStateClosing);
> > 
> > Are you sure this is safe?
> > 
> In my testing running multiple fio jobs, other test scenarios running
> a memory loader works fine. I did not came across a scenario that would
> have failed resume due to blkfront issues unless you can sugest some?

AFAICT you don't wait for the in-flight requests to be finished, and
just rely on blkback to finish processing those. I'm not sure all
blkback implementations out there can guarantee that.

The approach used by Xen initiated suspension is to re-issue the
in-flight requests when resuming. I have to admit I don't think this
is the best approach, but I would like to keep both the Xen and the PM
initiated suspension using the same logic, and hence I would request
that you try to re-use the existing resume logic (blkfront_resume).

> > I don't think you wait for all requests pending on the ring to be
> > finished by the backend, and hence you might loose requests as the
> > ones on the ring would not be re-issued by blkfront_restore AFAICT.
> > 
> AFAIU, blk_mq_freeze_queue/blk_mq_quiesce_queue should take care of no used
> request on the shared ring. Also, we I want to pause the queue and flush all
> the pending requests in the shared ring before disconnecting from backend.

Oh, so blk_mq_freeze_queue does wait for in-flight requests to be
finished. I guess it's fine then.

> Quiescing the queue seemed a better option here as we want to make sure ongoing
> requests dispatches are totally drained.
> I should accept that some of these notion is borrowed from how nvme freeze/unfreeze 
> is done although its not apple to apple comparison.

That's fine, but I would still like to request that you use the same
logic (as much as possible) for both the Xen and the PM initiated
suspension.

So you either apply this freeze/unfreeze to the Xen suspension (and
drop the re-issuing of requests on resume) or adapt the same approach
as the Xen initiated suspension. Keeping two completely different
approaches to suspension / resume on blkfront is not suitable long
term.

Thanks, Roger.

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
  2020-02-18  9:16       ` Roger Pau Monné
@ 2020-02-19 18:04         ` Anchal Agarwal
  2020-02-20  8:39           ` Roger Pau Monné
  0 siblings, 1 reply; 37+ messages in thread
From: Anchal Agarwal @ 2020-02-19 18:04 UTC (permalink / raw)
  To: Roger Pau Monné,
	tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, axboe, davem, rjw,
	len.brown, pavel, peterz, eduval, sblbir, anchalag, xen-devel,
	vkuznets, netdev, linux-kernel, dwmw, fllinden, benh
  Cc: tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, axboe, davem, rjw,
	len.brown, pavel, peterz, eduval, sblbir, xen-devel, vkuznets,
	netdev, linux-kernel, dwmw, fllinden, benh

On Tue, Feb 18, 2020 at 10:16:11AM +0100, Roger Pau Monné wrote:
> On Mon, Feb 17, 2020 at 11:05:53PM +0000, Anchal Agarwal wrote:
> > On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné wrote:
> > > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> > > > From: Munehisa Kamata <kamatam@amazon.com
> > > > 
> > > > Add freeze, thaw and restore callbacks for PM suspend and hibernation
> > > > support. All frontend drivers that needs to use PM_HIBERNATION/PM_SUSPEND
> > > > events, need to implement these xenbus_driver callbacks.
> > > > The freeze handler stops a block-layer queue and disconnect the
> > > > frontend from the backend while freeing ring_info and associated resources.
> > > > The restore handler re-allocates ring_info and re-connect to the
> > > > backend, so the rest of the kernel can continue to use the block device
> > > > transparently. Also, the handlers are used for both PM suspend and
> > > > hibernation so that we can keep the existing suspend/resume callbacks for
> > > > Xen suspend without modification. Before disconnecting from backend,
> > > > we need to prevent any new IO from being queued and wait for existing
> > > > IO to complete.
> > > 
> > > This is different from Xen (xenstore) initiated suspension, as in that
> > > case Linux doesn't flush the rings or disconnects from the backend.
> > Yes, AFAIK in xen initiated suspension backend takes care of it. 
> 
> No, in Xen initiated suspension backend doesn't take care of flushing
> the rings, the frontend has a shadow copy of the ring contents and it
> re-issues the requests on resume.
> 
Yes, I meant suspension in general, where both xenstore and the backend know
the system is going into suspension; I did not mean the flushing of rings.
That happens in the frontend when the backend indicates that its state is
Closing, and so on. I may have written it in the wrong context.
> > > > +static int blkfront_freeze(struct xenbus_device *dev)
> > > > +{
> > > > +	unsigned int i;
> > > > +	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> > > > +	struct blkfront_ring_info *rinfo;
> > > > +	/* This would be reasonable timeout as used in xenbus_dev_shutdown() */
> > > > +	unsigned int timeout = 5 * HZ;
> > > > +	int err = 0;
> > > > +
> > > > +	info->connected = BLKIF_STATE_FREEZING;
> > > > +
> > > > +	blk_mq_freeze_queue(info->rq);
> > > > +	blk_mq_quiesce_queue(info->rq);
> > > > +
> > > > +	for (i = 0; i < info->nr_rings; i++) {
> > > > +		rinfo = &info->rinfo[i];
> > > > +
> > > > +		gnttab_cancel_free_callback(&rinfo->callback);
> > > > +		flush_work(&rinfo->work);
> > > > +	}
> > > > +
> > > > +	/* Kick the backend to disconnect */
> > > > +	xenbus_switch_state(dev, XenbusStateClosing);
> > > 
> > > Are you sure this is safe?
> > > 
> > In my testing running multiple fio jobs, other test scenarios running
> > a memory loader works fine. I did not came across a scenario that would
> > have failed resume due to blkfront issues unless you can sugest some?
> 
> AFAICT you don't wait for the in-flight requests to be finished, and
> just rely on blkback to finish processing those. I'm not sure all
> blkback implementations out there can guarantee that.
> 
> The approach used by Xen initiated suspension is to re-issue the
> in-flight requests when resuming. I have to admit I don't think this
> is the best approach, but I would like to keep both the Xen and the PM
> initiated suspension using the same logic, and hence I would request
> that you try to re-use the existing resume logic (blkfront_resume).
> 
> > > I don't think you wait for all requests pending on the ring to be
> > > finished by the backend, and hence you might loose requests as the
> > > ones on the ring would not be re-issued by blkfront_restore AFAICT.
> > > 
> > AFAIU, blk_mq_freeze_queue/blk_mq_quiesce_queue should take care of no used
> > request on the shared ring. Also, we I want to pause the queue and flush all
> > the pending requests in the shared ring before disconnecting from backend.
> 
> Oh, so blk_mq_freeze_queue does wait for in-flight requests to be
> finished. I guess it's fine then.
> 
Ok.
> > Quiescing the queue seemed a better option here as we want to make sure ongoing
> > requests dispatches are totally drained.
> > I should accept that some of these notion is borrowed from how nvme freeze/unfreeze 
> > is done although its not apple to apple comparison.
> 
> That's fine, but I would still like to requests that you use the same
> logic (as much as possible) for both the Xen and the PM initiated
> suspension.
> 
> So you either apply this freeze/unfreeze to the Xen suspension (and
> drop the re-issuing of requests on resume) or adapt the same approach
> as the Xen initiated suspension. Keeping two completely different
> approaches to suspension / resume on blkfront is not suitable long
> term.
> 
I agree with you that an overhaul of xen suspend/resume wrt blkfront is a
good idea; however, IMO that is work for the future and this patch series
should not be blocked on it. What do you think?
> Thanks, Roger.
> 
Thanks,
Anchal

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
  2020-02-19 18:04         ` Anchal Agarwal
@ 2020-02-20  8:39           ` Roger Pau Monné
  2020-02-20  8:54             ` [Xen-devel] " Durrant, Paul
  0 siblings, 1 reply; 37+ messages in thread
From: Roger Pau Monné @ 2020-02-20  8:39 UTC (permalink / raw)
  To: Anchal Agarwal
  Cc: tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, axboe, davem, rjw,
	len.brown, pavel, peterz, eduval, sblbir, xen-devel, vkuznets,
	netdev, linux-kernel, dwmw, fllinden, benh

Thanks for this work, please see below.

On Wed, Feb 19, 2020 at 06:04:24PM +0000, Anchal Agarwal wrote:
> On Tue, Feb 18, 2020 at 10:16:11AM +0100, Roger Pau Monné wrote:
> > On Mon, Feb 17, 2020 at 11:05:53PM +0000, Anchal Agarwal wrote:
> > > On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné wrote:
> > > > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> > > > > From: Munehisa Kamata <kamatam@amazon.com
> > > > > 
> > > > > Add freeze, thaw and restore callbacks for PM suspend and hibernation
> > > > > support. All frontend drivers that needs to use PM_HIBERNATION/PM_SUSPEND
> > > > > events, need to implement these xenbus_driver callbacks.
> > > > > The freeze handler stops a block-layer queue and disconnect the
> > > > > frontend from the backend while freeing ring_info and associated resources.
> > > > > The restore handler re-allocates ring_info and re-connect to the
> > > > > backend, so the rest of the kernel can continue to use the block device
> > > > > transparently. Also, the handlers are used for both PM suspend and
> > > > > hibernation so that we can keep the existing suspend/resume callbacks for
> > > > > Xen suspend without modification. Before disconnecting from backend,
> > > > > we need to prevent any new IO from being queued and wait for existing
> > > > > IO to complete.
> > > > 
> > > > This is different from Xen (xenstore) initiated suspension, as in that
> > > > case Linux doesn't flush the rings or disconnects from the backend.
> > > Yes, AFAIK in xen initiated suspension backend takes care of it. 
> > 
> > No, in Xen initiated suspension backend doesn't take care of flushing
> > the rings, the frontend has a shadow copy of the ring contents and it
> > re-issues the requests on resume.
> > 
> Yes, I meant suspension in general where both xenstore and backend knows
> system is going under suspension and not flushing of rings.

The backend has no idea the guest is going to be suspended. Backend code
is completely agnostic to suspension/resume.

> That happens
> in frontend when backend indicates that state is closing and so on.
> I may have written it in wrong context.

I'm afraid I'm not sure I fully understand this last sentence.

> > > > > +static int blkfront_freeze(struct xenbus_device *dev)
> > > > > +{
> > > > > +	unsigned int i;
> > > > > +	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> > > > > +	struct blkfront_ring_info *rinfo;
> > > > > +	/* This would be reasonable timeout as used in xenbus_dev_shutdown() */
> > > > > +	unsigned int timeout = 5 * HZ;
> > > > > +	int err = 0;
> > > > > +
> > > > > +	info->connected = BLKIF_STATE_FREEZING;
> > > > > +
> > > > > +	blk_mq_freeze_queue(info->rq);
> > > > > +	blk_mq_quiesce_queue(info->rq);
> > > > > +
> > > > > +	for (i = 0; i < info->nr_rings; i++) {
> > > > > +		rinfo = &info->rinfo[i];
> > > > > +
> > > > > +		gnttab_cancel_free_callback(&rinfo->callback);
> > > > > +		flush_work(&rinfo->work);
> > > > > +	}
> > > > > +
> > > > > +	/* Kick the backend to disconnect */
> > > > > +	xenbus_switch_state(dev, XenbusStateClosing);
> > > > 
> > > > Are you sure this is safe?
> > > > 
> > > In my testing running multiple fio jobs, other test scenarios running
> > > a memory loader works fine. I did not came across a scenario that would
> > > have failed resume due to blkfront issues unless you can sugest some?
> > 
> > AFAICT you don't wait for the in-flight requests to be finished, and
> > just rely on blkback to finish processing those. I'm not sure all
> > blkback implementations out there can guarantee that.
> > 
> > The approach used by Xen initiated suspension is to re-issue the
> > in-flight requests when resuming. I have to admit I don't think this
> > is the best approach, but I would like to keep both the Xen and the PM
> > initiated suspension using the same logic, and hence I would request
> > that you try to re-use the existing resume logic (blkfront_resume).
> > 
> > > > I don't think you wait for all requests pending on the ring to be
> > > > finished by the backend, and hence you might loose requests as the
> > > > ones on the ring would not be re-issued by blkfront_restore AFAICT.
> > > > 
> > > AFAIU, blk_mq_freeze_queue/blk_mq_quiesce_queue should take care of no used
> > > request on the shared ring. Also, we I want to pause the queue and flush all
> > > the pending requests in the shared ring before disconnecting from backend.
> > 
> > Oh, so blk_mq_freeze_queue does wait for in-flight requests to be
> > finished. I guess it's fine then.
> > 
> Ok.
> > > Quiescing the queue seemed a better option here as we want to make sure ongoing
> > > requests dispatches are totally drained.
> > > I should accept that some of these notion is borrowed from how nvme freeze/unfreeze 
> > > is done although its not apple to apple comparison.
> > 
> > That's fine, but I would still like to requests that you use the same
> > logic (as much as possible) for both the Xen and the PM initiated
> > suspension.
> > 
> > So you either apply this freeze/unfreeze to the Xen suspension (and
> > drop the re-issuing of requests on resume) or adapt the same approach
> > as the Xen initiated suspension. Keeping two completely different
> > approaches to suspension / resume on blkfront is not suitable long
> > term.
> > 
> I agree with you on overhaul of xen suspend/resume wrt blkfront is a good
> idea however, IMO that is a work for future and this patch series should 
> not be blocked for it. What do you think?

It's not so much that I think an overhaul of suspend/resume in
blkfront is needed, it's just that I don't want to have two completely
different suspend/resume paths inside blkfront.

So from my PoV I think the right solution is to either use the same
code (as much as possible) as it's currently used by Xen initiated
suspend/resume, or to also switch Xen initiated suspension to use the
newly introduced code.

Having two different approaches to suspend/resume in the same driver
is a recipe for disaster IMO: it adds complexity by forcing developers
to take into account two different suspend/resume approaches when
there's no need for it.

Thanks, Roger.

^ permalink raw reply	[flat|nested] 37+ messages in thread

* RE: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
  2020-02-20  8:39           ` Roger Pau Monné
@ 2020-02-20  8:54             ` Durrant, Paul
  2020-02-20 15:45               ` Roger Pau Monné
  0 siblings, 1 reply; 37+ messages in thread
From: Durrant, Paul @ 2020-02-20  8:54 UTC (permalink / raw)
  To: Roger Pau Monné, Agarwal, Anchal
  Cc: Valentin, Eduardo, len.brown, peterz, benh, x86, linux-mm, pavel,
	hpa, tglx, sstabellini, fllinden, Kamata, Munehisa, mingo,
	xen-devel, Singh, Balbir, axboe, konrad.wilk, bp,
	boris.ostrovsky, jgross, netdev, linux-pm, rjw, linux-kernel,
	vkuznets, davem, Woodhouse, David

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of
> Roger Pau Monné
> Sent: 20 February 2020 08:39
> To: Agarwal, Anchal <anchalag@amazon.com>
> Cc: Valentin, Eduardo <eduval@amazon.com>; len.brown@intel.com;
> peterz@infradead.org; benh@kernel.crashing.org; x86@kernel.org; linux-
> mm@kvack.org; pavel@ucw.cz; hpa@zytor.com; tglx@linutronix.de;
> sstabellini@kernel.org; fllinden@amaozn.com; Kamata, Munehisa
> <kamatam@amazon.com>; mingo@redhat.com; xen-devel@lists.xenproject.org;
> Singh, Balbir <sblbir@amazon.com>; axboe@kernel.dk;
> konrad.wilk@oracle.com; bp@alien8.de; boris.ostrovsky@oracle.com;
> jgross@suse.com; netdev@vger.kernel.org; linux-pm@vger.kernel.org;
> rjw@rjwysocki.net; linux-kernel@vger.kernel.org; vkuznets@redhat.com;
> davem@davemloft.net; Woodhouse, David <dwmw@amazon.co.uk>
> Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks
> for PM suspend and hibernation
> 
> Thanks for this work, please see below.
> 
> On Wed, Feb 19, 2020 at 06:04:24PM +0000, Anchal Agarwal wrote:
> > On Tue, Feb 18, 2020 at 10:16:11AM +0100, Roger Pau Monné wrote:
> > > On Mon, Feb 17, 2020 at 11:05:53PM +0000, Anchal Agarwal wrote:
> > > > On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné wrote:
> > > > > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> > > > > > From: Munehisa Kamata <kamatam@amazon.com
> > > > > >
> > > > > > Add freeze, thaw and restore callbacks for PM suspend and
> hibernation
> > > > > > support. All frontend drivers that needs to use
> PM_HIBERNATION/PM_SUSPEND
> > > > > > events, need to implement these xenbus_driver callbacks.
> > > > > > The freeze handler stops a block-layer queue and disconnect the
> > > > > > frontend from the backend while freeing ring_info and associated
> resources.
> > > > > > The restore handler re-allocates ring_info and re-connect to the
> > > > > > backend, so the rest of the kernel can continue to use the block
> device
> > > > > > transparently. Also, the handlers are used for both PM suspend
> and
> > > > > > hibernation so that we can keep the existing suspend/resume
> callbacks for
> > > > > > Xen suspend without modification. Before disconnecting from
> backend,
> > > > > > we need to prevent any new IO from being queued and wait for
> existing
> > > > > > IO to complete.
> > > > >
> > > > > This is different from Xen (xenstore) initiated suspension, as in
> that
> > > > > case Linux doesn't flush the rings or disconnects from the
> backend.
> > > > Yes, AFAIK in xen initiated suspension backend takes care of it.
> > >
> > > No, in Xen initiated suspension backend doesn't take care of flushing
> > > the rings, the frontend has a shadow copy of the ring contents and it
> > > re-issues the requests on resume.
> > >
> > Yes, I meant suspension in general where both xenstore and backend knows
> > system is going under suspension and not flushing of rings.
> 
> backend has no idea the guest is going to be suspended. Backend code
> is completely agnostic to suspension/resume.
> 
> > That happens
> > in frontend when backend indicates that state is closing and so on.
> > I may have written it in wrong context.
> 
> I'm afraid I'm not sure I fully understand this last sentence.
> 
> > > > > > +static int blkfront_freeze(struct xenbus_device *dev)
> > > > > > +{
> > > > > > +	unsigned int i;
> > > > > > +	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> > > > > > +	struct blkfront_ring_info *rinfo;
> > > > > > +	/* This would be reasonable timeout as used in
> xenbus_dev_shutdown() */
> > > > > > +	unsigned int timeout = 5 * HZ;
> > > > > > +	int err = 0;
> > > > > > +
> > > > > > +	info->connected = BLKIF_STATE_FREEZING;
> > > > > > +
> > > > > > +	blk_mq_freeze_queue(info->rq);
> > > > > > +	blk_mq_quiesce_queue(info->rq);
> > > > > > +
> > > > > > +	for (i = 0; i < info->nr_rings; i++) {
> > > > > > +		rinfo = &info->rinfo[i];
> > > > > > +
> > > > > > +		gnttab_cancel_free_callback(&rinfo->callback);
> > > > > > +		flush_work(&rinfo->work);
> > > > > > +	}
> > > > > > +
> > > > > > +	/* Kick the backend to disconnect */
> > > > > > +	xenbus_switch_state(dev, XenbusStateClosing);
> > > > >
> > > > > Are you sure this is safe?
> > > > >
> > > > In my testing running multiple fio jobs, other test scenarios
> running
> > > > a memory loader works fine. I did not came across a scenario that
> would
> > > > have failed resume due to blkfront issues unless you can sugest
> some?
> > >
> > > AFAICT you don't wait for the in-flight requests to be finished, and
> > > just rely on blkback to finish processing those. I'm not sure all
> > > blkback implementations out there can guarantee that.
> > >
> > > The approach used by Xen initiated suspension is to re-issue the
> > > in-flight requests when resuming. I have to admit I don't think this
> > > is the best approach, but I would like to keep both the Xen and the PM
> > > initiated suspension using the same logic, and hence I would request
> > > that you try to re-use the existing resume logic (blkfront_resume).
> > >
> > > > > I don't think you wait for all requests pending on the ring to be
> > > > > finished by the backend, and hence you might loose requests as the
> > > > > ones on the ring would not be re-issued by blkfront_restore
> AFAICT.
> > > > >
> > > > AFAIU, blk_mq_freeze_queue/blk_mq_quiesce_queue should take care of
> no used
> > > > request on the shared ring. Also, we I want to pause the queue and
> flush all
> > > > the pending requests in the shared ring before disconnecting from
> backend.
> > >
> > > Oh, so blk_mq_freeze_queue does wait for in-flight requests to be
> > > finished. I guess it's fine then.
> > >
> > Ok.
> > > > Quiescing the queue seemed a better option here as we want to make
> sure ongoing
> > > > requests dispatches are totally drained.
> > > > I should accept that some of these notion is borrowed from how nvme
> freeze/unfreeze
> > > > is done although its not apple to apple comparison.
> > >
> > > That's fine, but I would still like to requests that you use the same
> > > logic (as much as possible) for both the Xen and the PM initiated
> > > suspension.
> > >
> > > So you either apply this freeze/unfreeze to the Xen suspension (and
> > > drop the re-issuing of requests on resume) or adapt the same approach
> > > as the Xen initiated suspension. Keeping two completely different
> > > approaches to suspension / resume on blkfront is not suitable long
> > > term.
> > >
> > I agree with you on overhaul of xen suspend/resume wrt blkfront is a
> good
> > idea however, IMO that is a work for future and this patch series should
> > not be blocked for it. What do you think?
> 
> It's not so much that I think an overhaul of suspend/resume in
> blkfront is needed, it's just that I don't want to have two completely
> different suspend/resume paths inside blkfront.
> 
> So from my PoV I think the right solution is to either use the same
> code (as much as possible) as it's currently used by Xen initiated
> suspend/resume, or to also switch Xen initiated suspension to use the
> newly introduced code.
> 
> Having two different approaches to suspend/resume in the same driver
> is a recipe for disaster IMO: it adds complexity by forcing developers
> to take into account two different suspend/resume approaches when
> there's no need for it.

I disagree. S3 or S4 suspend/resume (or perhaps we should call them power state transitions to avoid confusion) are quite different from Xen suspend/resume.
Power state transitions ought to be, and indeed are, visible to the software running inside the guest. Applications, as well as drivers, can receive notification and take whatever action they deem appropriate.
Xen suspend/resume OTOH is used when a guest is migrated and the code should go to all lengths possible to make any software running inside the guest (other than Xen specific enlightened code, such as PV drivers) completely unaware that anything has actually happened.
So, whilst it may be possible to use common routines to, for example, re-establish PV frontend/backend communication, PV frontend code should be acutely aware of the circumstances it is operating in. I can cite example code in the Windows PV drivers, which have supported guest S3/S4 power state transitions since day 1.

  Paul

> 
> Thanks, Roger.
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
  2020-02-20  8:54             ` [Xen-devel] " Durrant, Paul
@ 2020-02-20 15:45               ` Roger Pau Monné
  2020-02-20 16:23                 ` Durrant, Paul
  0 siblings, 1 reply; 37+ messages in thread
From: Roger Pau Monné @ 2020-02-20 15:45 UTC (permalink / raw)
  To: Durrant, Paul
  Cc: Agarwal, Anchal, Valentin, Eduardo, len.brown, peterz, benh, x86,
	linux-mm, pavel, hpa, tglx, sstabellini, fllinden, Kamata,
	Munehisa, mingo, xen-devel, Singh, Balbir, axboe, konrad.wilk,
	bp, boris.ostrovsky, jgross, netdev, linux-pm, rjw, linux-kernel,
	vkuznets, davem, Woodhouse, David

On Thu, Feb 20, 2020 at 08:54:36AM +0000, Durrant, Paul wrote:
> > -----Original Message-----
> > From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of
> > Roger Pau Monné
> > Sent: 20 February 2020 08:39
> > To: Agarwal, Anchal <anchalag@amazon.com>
> > Cc: Valentin, Eduardo <eduval@amazon.com>; len.brown@intel.com;
> > peterz@infradead.org; benh@kernel.crashing.org; x86@kernel.org; linux-
> > mm@kvack.org; pavel@ucw.cz; hpa@zytor.com; tglx@linutronix.de;
> > sstabellini@kernel.org; fllinden@amaozn.com; Kamata, Munehisa
> > <kamatam@amazon.com>; mingo@redhat.com; xen-devel@lists.xenproject.org;
> > Singh, Balbir <sblbir@amazon.com>; axboe@kernel.dk;
> > konrad.wilk@oracle.com; bp@alien8.de; boris.ostrovsky@oracle.com;
> > jgross@suse.com; netdev@vger.kernel.org; linux-pm@vger.kernel.org;
> > rjw@rjwysocki.net; linux-kernel@vger.kernel.org; vkuznets@redhat.com;
> > davem@davemloft.net; Woodhouse, David <dwmw@amazon.co.uk>
> > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks
> > for PM suspend and hibernation
> > 
> > Thanks for this work, please see below.
> > 
> > On Wed, Feb 19, 2020 at 06:04:24PM +0000, Anchal Agarwal wrote:
> > > On Tue, Feb 18, 2020 at 10:16:11AM +0100, Roger Pau Monné wrote:
> > > > On Mon, Feb 17, 2020 at 11:05:53PM +0000, Anchal Agarwal wrote:
> > > > > On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné wrote:
> > > > > > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> > > > > Quiescing the queue seemed a better option here as we want to make
> > sure ongoing
> > > > > requests dispatches are totally drained.
> > > > > I should accept that some of these notion is borrowed from how nvme
> > freeze/unfreeze
> > > > > is done although its not apple to apple comparison.
> > > >
> > > > That's fine, but I would still like to requests that you use the same
> > > > logic (as much as possible) for both the Xen and the PM initiated
> > > > suspension.
> > > >
> > > > So you either apply this freeze/unfreeze to the Xen suspension (and
> > > > drop the re-issuing of requests on resume) or adapt the same approach
> > > > as the Xen initiated suspension. Keeping two completely different
> > > > approaches to suspension / resume on blkfront is not suitable long
> > > > term.
> > > >
> > > I agree with you on overhaul of xen suspend/resume wrt blkfront is a
> > good
> > > idea however, IMO that is a work for future and this patch series should
> > > not be blocked for it. What do you think?
> > 
> > It's not so much that I think an overhaul of suspend/resume in
> > blkfront is needed, it's just that I don't want to have two completely
> > different suspend/resume paths inside blkfront.
> > 
> > So from my PoV I think the right solution is to either use the same
> > code (as much as possible) as it's currently used by Xen initiated
> > suspend/resume, or to also switch Xen initiated suspension to use the
> > newly introduced code.
> > 
> > Having two different approaches to suspend/resume in the same driver
> > is a recipe for disaster IMO: it adds complexity by forcing developers
> > to take into account two different suspend/resume approaches when
> > there's no need for it.
> 
> I disagree. S3 or S4 suspend/resume (or perhaps we should call them power state transitions to avoid confusion) are quite different from Xen suspend/resume.
> Power state transitions ought to be, and indeed are, visible to the software running inside the guest. Applications, as well as drivers, can receive notification and take whatever action they deem appropriate.
> Xen suspend/resume OTOH is used when a guest is migrated and the code should go to all lengths possible to make any software running inside the guest (other than Xen specific enlightened code, such as PV drivers) completely unaware that anything has actually happened.

So from what you say above, PM state transitions are notified to all
drivers, while Xen suspend/resume is only notified to PV drivers; here
we are speaking about blkfront, which is a PV driver and should get
notified in both cases. So I'm unsure why the same (or at least a very
similar) approach can't be used in both cases.

The suspend/resume approach proposed by this patch is completely
different than the one used by a xenbus initiated suspend/resume, and
I don't see a technical reason that warrants this difference.

I'm not saying that the approach used here is wrong, it's just that I
don't see the point in having two different ways to do suspend/resume
in the same driver, unless there's a technical reason for it, which I
don't think has been provided.

I would be fine with switching xenbus initiated suspend/resume to also
use the approach proposed here: freeze the queues and drain the shared
rings before suspending.
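
For illustration, a minimal sketch of what that freeze + drain could
look like on the frontend side (the blkfront_info fields and the
wait_backend_disconnected completion are assumptions here, not
necessarily what the patch uses):

/* relevant headers: <linux/blk-mq.h>, <linux/completion.h>, <xen/xenbus.h> */
static int blkfront_freeze_sketch(struct xenbus_device *dev)
{
	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
	unsigned long timeout = 5 * HZ;

	/* Stop new requests and wait for those already in the block layer. */
	blk_mq_freeze_queue(info->rq);
	blk_mq_quiesce_queue(info->rq);

	/* Ask the backend to close so the shared ring drains completely. */
	xenbus_switch_state(dev, XenbusStateClosing);

	/* Assumed to be completed from the otherend state-change watch. */
	if (!wait_for_completion_timeout(&info->wait_backend_disconnected,
					 timeout))
		return -EBUSY;	/* caller would have to thaw and reconnect */

	return 0;
}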

> So, whilst it may be possible to use common routines to, for example, re-establish PV frontend/backend communication, PV frontend code should be acutely aware of the circumstances they are operating in. I can cite example code in the Windows PV driver, which have supported guest S3/S4 power state transitions since day 1.

Hm, please bear with me, as I'm not sure I fully understand. Why isn't
the current suspend/resume logic suitable for PM transitions?

As said above, I'm happy to switch xenbus initiated suspend/resume to
use the logic in this patch, but unless there's a technical reason for
it I don't see why blkfront should have two completely different
approaches to suspend/resume depending on whether it's a PM or a
xenbus state change.

Thanks, Roger.

^ permalink raw reply	[flat|nested] 37+ messages in thread

* RE: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
  2020-02-20 15:45               ` Roger Pau Monné
@ 2020-02-20 16:23                 ` Durrant, Paul
  2020-02-20 16:48                   ` Roger Pau Monné
  0 siblings, 1 reply; 37+ messages in thread
From: Durrant, Paul @ 2020-02-20 16:23 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Agarwal, Anchal, Valentin, Eduardo, len.brown, peterz, benh, x86,
	linux-mm, pavel, hpa, tglx, sstabellini, fllinden, Kamata,
	Munehisa, mingo, xen-devel, Singh, Balbir, axboe, konrad.wilk,
	bp, boris.ostrovsky, jgross, netdev, linux-pm, rjw, linux-kernel,
	vkuznets, davem, Woodhouse, David

> -----Original Message-----
> From: Roger Pau Monné <roger.pau@citrix.com>
> Sent: 20 February 2020 15:45
> To: Durrant, Paul <pdurrant@amazon.co.uk>
> Cc: Agarwal, Anchal <anchalag@amazon.com>; Valentin, Eduardo
> <eduval@amazon.com>; len.brown@intel.com; peterz@infradead.org;
> benh@kernel.crashing.org; x86@kernel.org; linux-mm@kvack.org;
> pavel@ucw.cz; hpa@zytor.com; tglx@linutronix.de; sstabellini@kernel.org;
> fllinden@amaozn.com; Kamata, Munehisa <kamatam@amazon.com>;
> mingo@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> <sblbir@amazon.com>; axboe@kernel.dk; konrad.wilk@oracle.com;
> bp@alien8.de; boris.ostrovsky@oracle.com; jgross@suse.com;
> netdev@vger.kernel.org; linux-pm@vger.kernel.org; rjw@rjwysocki.net;
> linux-kernel@vger.kernel.org; vkuznets@redhat.com; davem@davemloft.net;
> Woodhouse, David <dwmw@amazon.co.uk>
> Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks
> for PM suspend and hibernation
> 
> On Thu, Feb 20, 2020 at 08:54:36AM +0000, Durrant, Paul wrote:
> > > -----Original Message-----
> > > From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of
> > > Roger Pau Monné
> > > Sent: 20 February 2020 08:39
> > > To: Agarwal, Anchal <anchalag@amazon.com>
> > > Cc: Valentin, Eduardo <eduval@amazon.com>; len.brown@intel.com;
> > > peterz@infradead.org; benh@kernel.crashing.org; x86@kernel.org; linux-
> > > mm@kvack.org; pavel@ucw.cz; hpa@zytor.com; tglx@linutronix.de;
> > > sstabellini@kernel.org; fllinden@amaozn.com; Kamata, Munehisa
> > > <kamatam@amazon.com>; mingo@redhat.com; xen-
> devel@lists.xenproject.org;
> > > Singh, Balbir <sblbir@amazon.com>; axboe@kernel.dk;
> > > konrad.wilk@oracle.com; bp@alien8.de; boris.ostrovsky@oracle.com;
> > > jgross@suse.com; netdev@vger.kernel.org; linux-pm@vger.kernel.org;
> > > rjw@rjwysocki.net; linux-kernel@vger.kernel.org; vkuznets@redhat.com;
> > > davem@davemloft.net; Woodhouse, David <dwmw@amazon.co.uk>
> > > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add
> callbacks
> > > for PM suspend and hibernation
> > >
> > > Thanks for this work, please see below.
> > >
> > > On Wed, Feb 19, 2020 at 06:04:24PM +0000, Anchal Agarwal wrote:
> > > > On Tue, Feb 18, 2020 at 10:16:11AM +0100, Roger Pau Monné wrote:
> > > > > On Mon, Feb 17, 2020 at 11:05:53PM +0000, Anchal Agarwal wrote:
> > > > > > On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné wrote:
> > > > > > > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal
> wrote:
> > > > > > Quiescing the queue seemed a better option here as we want to
> make
> > > sure ongoing
> > > > > > requests dispatches are totally drained.
> > > > > > I should accept that some of these notion is borrowed from how
> nvme
> > > freeze/unfreeze
> > > > > > is done although its not apple to apple comparison.
> > > > >
> > > > > That's fine, but I would still like to requests that you use the
> same
> > > > > logic (as much as possible) for both the Xen and the PM initiated
> > > > > suspension.
> > > > >
> > > > > So you either apply this freeze/unfreeze to the Xen suspension
> (and
> > > > > drop the re-issuing of requests on resume) or adapt the same
> approach
> > > > > as the Xen initiated suspension. Keeping two completely different
> > > > > approaches to suspension / resume on blkfront is not suitable long
> > > > > term.
> > > > >
> > > > I agree with you on overhaul of xen suspend/resume wrt blkfront is a
> > > good
> > > > idea however, IMO that is a work for future and this patch series
> should
> > > > not be blocked for it. What do you think?
> > >
> > > It's not so much that I think an overhaul of suspend/resume in
> > > blkfront is needed, it's just that I don't want to have two completely
> > > different suspend/resume paths inside blkfront.
> > >
> > > So from my PoV I think the right solution is to either use the same
> > > code (as much as possible) as it's currently used by Xen initiated
> > > suspend/resume, or to also switch Xen initiated suspension to use the
> > > newly introduced code.
> > >
> > > Having two different approaches to suspend/resume in the same driver
> > > is a recipe for disaster IMO: it adds complexity by forcing developers
> > > to take into account two different suspend/resume approaches when
> > > there's no need for it.
> >
> > I disagree. S3 or S4 suspend/resume (or perhaps we should call them
> power state transitions to avoid confusion) are quite different from Xen
> suspend/resume.
> > Power state transitions ought to be, and indeed are, visible to the
> software running inside the guest. Applications, as well as drivers, can
> receive notification and take whatever action they deem appropriate.
> > Xen suspend/resume OTOH is used when a guest is migrated and the code
> should go to all lengths possible to make any software running inside the
> guest (other than Xen specific enlightened code, such as PV drivers)
> completely unaware that anything has actually happened.
> 
> So from what you say above PM state transitions are notified to all
> drivers, and Xen suspend/resume is only notified to PV drivers, and
> here we are speaking about blkfront which is a PV driver, and should
> get notified in both cases. So I'm unsure why the same (or at least
> very similar) approach can't be used in both cases.
> 
> The suspend/resume approach proposed by this patch is completely
> different than the one used by a xenbus initiated suspend/resume, and
> I don't see a technical reason that warrants this difference.
>

Within an individual PV driver it may well be ok to use common mechanisms for connecting to the backend but issues will arise if any subsequent action is visible to the guest. E.g. a network frontend needs to issue gratuitous ARPs without anything else in the network stack (or monitoring the network stack) knowing that it has happened. 
 
> I'm not saying that the approach used here is wrong, it's just that I
> don't see the point in having two different ways to do suspend/resume
> in the same driver, unless there's a technical reason for it, which I
> don't think has been provided.

The technical justification is that the driver needs to know what kind of suspend or resume it is doing, so that it doesn't do the wrong thing. There may also be differences in the state of the system; e.g. in Windows, at least some of the resume-from-xen-suspend code runs with interrupts disabled (which is necessary to make sure enough state is restored before things become visible to other kernel code).

> 
> I would be fine with switching xenbus initiated suspend/resume to also
> use the approach proposed here: freeze the queues and drain the shared
> rings before suspending.
> 

I think abstracting away at the xenbus level to some degree is probably feasible, but some sort of flag should be passed to the individual drivers so they know what circumstances they are operating under.
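
Something purely hypothetical along these lines is what I have in mind
(none of these names exist today; they are only meant to illustrate the
shape of the flag):

/* Hypothetical reason flag passed down by xenbus. */
enum xenbus_suspend_reason {
	XENBUS_SUSPEND_XEN,	/* live migration / xl save-restore */
	XENBUS_SUSPEND_PM,	/* guest-initiated S3/S4 transition */
};

/*
 * A frontend could then pick the right behaviour in one place; the two
 * helpers below are equally hypothetical.
 */
static int blkfront_suspend_common(struct blkfront_info *info,
				   enum xenbus_suspend_reason reason)
{
	if (reason == XENBUS_SUSPEND_PM)
		return blkfront_freeze_and_drain(info); /* complete in-flight I/O */

	return blkfront_quiesce_fast(info); /* leave in-flight I/O pending */
}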

> > So, whilst it may be possible to use common routines to, for example,
> re-establish PV frontend/backend communication, PV frontend code should be
> acutely aware of the circumstances they are operating in. I can cite
> example code in the Windows PV driver, which have supported guest S3/S4
> power state transitions since day 1.
> 
> Hm, please bear with me, as I'm not sure I fully understand. Why isn't
> the current suspend/resume logic suitable for PM transitions?
> 

I don’t know the details for Linux, but it may well be to do with assumptions made about the system, e.g. the ability to block waiting for something to happen on another CPU (which may have already been quiesced in a PM context).

> As said above, I'm happy to switch xenbus initiated suspend/resume to
> use the logic in this patch, but unless there's a technical reason for
> it I don't see why blkfront should have two completely different
> approaches to suspend/resume depending on whether it's a PM or a
> xenbus state change.
> 

Hopefully what I said above illustrates why it may not be 100% common.

  Paul


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
  2020-02-20 16:23                 ` Durrant, Paul
@ 2020-02-20 16:48                   ` Roger Pau Monné
  2020-02-20 17:01                     ` Durrant, Paul
  0 siblings, 1 reply; 37+ messages in thread
From: Roger Pau Monné @ 2020-02-20 16:48 UTC (permalink / raw)
  To: Durrant, Paul
  Cc: Agarwal, Anchal, Valentin, Eduardo, len.brown, peterz, benh, x86,
	linux-mm, pavel, hpa, tglx, sstabellini, fllinden, Kamata,
	Munehisa, mingo, xen-devel, Singh, Balbir, axboe, konrad.wilk,
	bp, boris.ostrovsky, jgross, netdev, linux-pm, rjw, linux-kernel,
	vkuznets, davem, Woodhouse, David

On Thu, Feb 20, 2020 at 04:23:13PM +0000, Durrant, Paul wrote:
> > -----Original Message-----
> > From: Roger Pau Monné <roger.pau@citrix.com>
> > Sent: 20 February 2020 15:45
> > To: Durrant, Paul <pdurrant@amazon.co.uk>
> > Cc: Agarwal, Anchal <anchalag@amazon.com>; Valentin, Eduardo
> > <eduval@amazon.com>; len.brown@intel.com; peterz@infradead.org;
> > benh@kernel.crashing.org; x86@kernel.org; linux-mm@kvack.org;
> > pavel@ucw.cz; hpa@zytor.com; tglx@linutronix.de; sstabellini@kernel.org;
> > fllinden@amaozn.com; Kamata, Munehisa <kamatam@amazon.com>;
> > mingo@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> > <sblbir@amazon.com>; axboe@kernel.dk; konrad.wilk@oracle.com;
> > bp@alien8.de; boris.ostrovsky@oracle.com; jgross@suse.com;
> > netdev@vger.kernel.org; linux-pm@vger.kernel.org; rjw@rjwysocki.net;
> > linux-kernel@vger.kernel.org; vkuznets@redhat.com; davem@davemloft.net;
> > Woodhouse, David <dwmw@amazon.co.uk>
> > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks
> > for PM suspend and hibernation
> > 
> > On Thu, Feb 20, 2020 at 08:54:36AM +0000, Durrant, Paul wrote:
> > > > -----Original Message-----
> > > > From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of
> > > > Roger Pau Monné
> > > > Sent: 20 February 2020 08:39
> > > > To: Agarwal, Anchal <anchalag@amazon.com>
> > > > Cc: Valentin, Eduardo <eduval@amazon.com>; len.brown@intel.com;
> > > > peterz@infradead.org; benh@kernel.crashing.org; x86@kernel.org; linux-
> > > > mm@kvack.org; pavel@ucw.cz; hpa@zytor.com; tglx@linutronix.de;
> > > > sstabellini@kernel.org; fllinden@amaozn.com; Kamata, Munehisa
> > > > <kamatam@amazon.com>; mingo@redhat.com; xen-
> > devel@lists.xenproject.org;
> > > > Singh, Balbir <sblbir@amazon.com>; axboe@kernel.dk;
> > > > konrad.wilk@oracle.com; bp@alien8.de; boris.ostrovsky@oracle.com;
> > > > jgross@suse.com; netdev@vger.kernel.org; linux-pm@vger.kernel.org;
> > > > rjw@rjwysocki.net; linux-kernel@vger.kernel.org; vkuznets@redhat.com;
> > > > davem@davemloft.net; Woodhouse, David <dwmw@amazon.co.uk>
> > > > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add
> > callbacks
> > > > for PM suspend and hibernation
> > > >
> > > > Thanks for this work, please see below.
> > > >
> > > > On Wed, Feb 19, 2020 at 06:04:24PM +0000, Anchal Agarwal wrote:
> > > > > On Tue, Feb 18, 2020 at 10:16:11AM +0100, Roger Pau Monné wrote:
> > > > > > On Mon, Feb 17, 2020 at 11:05:53PM +0000, Anchal Agarwal wrote:
> > > > > > > On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné wrote:
> > > > > > > > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal
> > wrote:
> > > > > > > Quiescing the queue seemed a better option here as we want to
> > make
> > > > sure ongoing
> > > > > > > requests dispatches are totally drained.
> > > > > > > I should accept that some of these notion is borrowed from how
> > nvme
> > > > freeze/unfreeze
> > > > > > > is done although its not apple to apple comparison.
> > > > > >
> > > > > > That's fine, but I would still like to requests that you use the
> > same
> > > > > > logic (as much as possible) for both the Xen and the PM initiated
> > > > > > suspension.
> > > > > >
> > > > > > So you either apply this freeze/unfreeze to the Xen suspension
> > (and
> > > > > > drop the re-issuing of requests on resume) or adapt the same
> > approach
> > > > > > as the Xen initiated suspension. Keeping two completely different
> > > > > > approaches to suspension / resume on blkfront is not suitable long
> > > > > > term.
> > > > > >
> > > > > I agree with you on overhaul of xen suspend/resume wrt blkfront is a
> > > > good
> > > > > idea however, IMO that is a work for future and this patch series
> > should
> > > > > not be blocked for it. What do you think?
> > > >
> > > > It's not so much that I think an overhaul of suspend/resume in
> > > > blkfront is needed, it's just that I don't want to have two completely
> > > > different suspend/resume paths inside blkfront.
> > > >
> > > > So from my PoV I think the right solution is to either use the same
> > > > code (as much as possible) as it's currently used by Xen initiated
> > > > suspend/resume, or to also switch Xen initiated suspension to use the
> > > > newly introduced code.
> > > >
> > > > Having two different approaches to suspend/resume in the same driver
> > > > is a recipe for disaster IMO: it adds complexity by forcing developers
> > > > to take into account two different suspend/resume approaches when
> > > > there's no need for it.
> > >
> > > I disagree. S3 or S4 suspend/resume (or perhaps we should call them
> > power state transitions to avoid confusion) are quite different from Xen
> > suspend/resume.
> > > Power state transitions ought to be, and indeed are, visible to the
> > software running inside the guest. Applications, as well as drivers, can
> > receive notification and take whatever action they deem appropriate.
> > > Xen suspend/resume OTOH is used when a guest is migrated and the code
> > should go to all lengths possible to make any software running inside the
> > guest (other than Xen specific enlightened code, such as PV drivers)
> > completely unaware that anything has actually happened.
> > 
> > So from what you say above PM state transitions are notified to all
> > drivers, and Xen suspend/resume is only notified to PV drivers, and
> > here we are speaking about blkfront which is a PV driver, and should
> > get notified in both cases. So I'm unsure why the same (or at least
> > very similar) approach can't be used in both cases.
> > 
> > The suspend/resume approach proposed by this patch is completely
> > different than the one used by a xenbus initiated suspend/resume, and
> > I don't see a technical reason that warrants this difference.
> >
> 
> Within an individual PV driver it may well be ok to use common mechanisms for connecting to the backend but issues will arise if any subsequent action is visible to the guest. E.g. a network frontend needs to issue gratuitous ARPs without anything else in the network stack (or monitoring the network stack) knowing that it has happened. 
>  
> > I'm not saying that the approach used here is wrong, it's just that I
> > don't see the point in having two different ways to do suspend/resume
> > in the same driver, unless there's a technical reason for it, which I
> > don't think has been provided.
> 
> The technical justification is that the driver needs to know what kind of suspend or resume it is doing, so that it doesn't do the wrong thing. There may also be differences in the state of the system e.g. in Windows, at least some of the resume-from-xen-suspend code runs with interrupts disabled (which is necessary to make sure enough state is restored before things become visible to other kernel code).
> 
> > 
> > I would be fine with switching xenbus initiated suspend/resume to also
> > use the approach proposed here: freeze the queues and drain the shared
> > rings before suspending.
> > 
> 
> I think abstracting away at the xenbus level to some degree is probably feasible, but some sort of flag should be passed to the individual drivers so they know what circumstances they are operating under.
> 
> > > So, whilst it may be possible to use common routines to, for example,
> > re-establish PV frontend/backend communication, PV frontend code should be
> > acutely aware of the circumstances they are operating in. I can cite
> > example code in the Windows PV driver, which have supported guest S3/S4
> > power state transitions since day 1.
> > 
> > Hm, please bear with me, as I'm not sure I fully understand. Why isn't
> > the current suspend/resume logic suitable for PM transitions?
> > 
> 
> I don’t know the details for Linux but it may well be to do with assumptions made about the system e.g. the ability to block waiting for something to happen on another CPU (which may have already been quiesced in a PM context).
> 
> > As said above, I'm happy to switch xenbus initiated suspend/resume to
> > use the logic in this patch, but unless there's a technical reason for
> > it I don't see why blkfront should have two completely different
> > approaches to suspend/resume depending on whether it's a PM or a
> > xenbus state change.
> > 
> 
> Hopefully what I said above illustrates why it may not be 100% common.

Yes, that's fine. I don't expect it to be 100% common (as I guess
that the hooks will have different prototypes), but I expect
that routines can be shared, and that the approach taken can be the
same.

For example, one necessary difference will be that xenbus-initiated
suspend won't close the PV connection, in case suspension fails. On PM
suspend you seem to always close the connection beforehand, so you
will always have to re-negotiate on resume even if suspension failed.

What I'm mostly worried about is the different approach to ring
draining. I.e.: either xenbus is changed to freeze the queues and
drain the shared rings, or PM uses the already existing logic of not
flushing the rings and re-issuing in-flight requests on resume.
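
For reference, the existing xenbus-style recovery is roughly of this
shape (a simplified sketch of what blkif_recover() does; the shadow
bookkeeping names follow blkfront but are only illustrative):

static void blkif_requeue_inflight_sketch(struct blkfront_info *info)
{
	struct blkfront_ring_info *rinfo = &info->rinfo[0];
	int i;

	for (i = 0; i < BLK_RING_SIZE(info); i++) {
		struct request *req = rinfo->shadow[i].request;

		if (!req)
			continue;	/* slot was free, nothing in flight */

		/* Hand the request back to blk-mq so it gets re-issued. */
		blk_mq_requeue_request(req, false);
	}

	blk_mq_kick_requeue_list(info->rq);
}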

Thanks, Roger.

^ permalink raw reply	[flat|nested] 37+ messages in thread

* RE: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
  2020-02-20 16:48                   ` Roger Pau Monné
@ 2020-02-20 17:01                     ` Durrant, Paul
  2020-02-21  0:49                       ` Anchal Agarwal
  2020-02-21  9:22                       ` Roger Pau Monné
  0 siblings, 2 replies; 37+ messages in thread
From: Durrant, Paul @ 2020-02-20 17:01 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Agarwal, Anchal, Valentin, Eduardo, len.brown, peterz, benh, x86,
	linux-mm, pavel, hpa, tglx, sstabellini, fllinden, Kamata,
	Munehisa, mingo, xen-devel, Singh, Balbir, axboe, konrad.wilk,
	bp, boris.ostrovsky, jgross, netdev, linux-pm, rjw, linux-kernel,
	vkuznets, davem, Woodhouse, David

> -----Original Message-----
> From: Roger Pau Monné <roger.pau@citrix.com>
> Sent: 20 February 2020 16:49
> To: Durrant, Paul <pdurrant@amazon.co.uk>
> Cc: Agarwal, Anchal <anchalag@amazon.com>; Valentin, Eduardo
> <eduval@amazon.com>; len.brown@intel.com; peterz@infradead.org;
> benh@kernel.crashing.org; x86@kernel.org; linux-mm@kvack.org;
> pavel@ucw.cz; hpa@zytor.com; tglx@linutronix.de; sstabellini@kernel.org;
> fllinden@amaozn.com; Kamata, Munehisa <kamatam@amazon.com>;
> mingo@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> <sblbir@amazon.com>; axboe@kernel.dk; konrad.wilk@oracle.com;
> bp@alien8.de; boris.ostrovsky@oracle.com; jgross@suse.com;
> netdev@vger.kernel.org; linux-pm@vger.kernel.org; rjw@rjwysocki.net;
> linux-kernel@vger.kernel.org; vkuznets@redhat.com; davem@davemloft.net;
> Woodhouse, David <dwmw@amazon.co.uk>
> Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks
> for PM suspend and hibernation
> 
> On Thu, Feb 20, 2020 at 04:23:13PM +0000, Durrant, Paul wrote:
> > > -----Original Message-----
> > > From: Roger Pau Monné <roger.pau@citrix.com>
> > > Sent: 20 February 2020 15:45
> > > To: Durrant, Paul <pdurrant@amazon.co.uk>
> > > Cc: Agarwal, Anchal <anchalag@amazon.com>; Valentin, Eduardo
> > > <eduval@amazon.com>; len.brown@intel.com; peterz@infradead.org;
> > > benh@kernel.crashing.org; x86@kernel.org; linux-mm@kvack.org;
> > > pavel@ucw.cz; hpa@zytor.com; tglx@linutronix.de;
> sstabellini@kernel.org;
> > > fllinden@amaozn.com; Kamata, Munehisa <kamatam@amazon.com>;
> > > mingo@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> > > <sblbir@amazon.com>; axboe@kernel.dk; konrad.wilk@oracle.com;
> > > bp@alien8.de; boris.ostrovsky@oracle.com; jgross@suse.com;
> > > netdev@vger.kernel.org; linux-pm@vger.kernel.org; rjw@rjwysocki.net;
> > > linux-kernel@vger.kernel.org; vkuznets@redhat.com;
> davem@davemloft.net;
> > > Woodhouse, David <dwmw@amazon.co.uk>
> > > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add
> callbacks
> > > for PM suspend and hibernation
> > >
> > > On Thu, Feb 20, 2020 at 08:54:36AM +0000, Durrant, Paul wrote:
> > > > > -----Original Message-----
> > > > > From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf
> Of
> > > > > Roger Pau Monné
> > > > > Sent: 20 February 2020 08:39
> > > > > To: Agarwal, Anchal <anchalag@amazon.com>
> > > > > Cc: Valentin, Eduardo <eduval@amazon.com>; len.brown@intel.com;
> > > > > peterz@infradead.org; benh@kernel.crashing.org; x86@kernel.org;
> linux-
> > > > > mm@kvack.org; pavel@ucw.cz; hpa@zytor.com; tglx@linutronix.de;
> > > > > sstabellini@kernel.org; fllinden@amaozn.com; Kamata, Munehisa
> > > > > <kamatam@amazon.com>; mingo@redhat.com; xen-
> > > devel@lists.xenproject.org;
> > > > > Singh, Balbir <sblbir@amazon.com>; axboe@kernel.dk;
> > > > > konrad.wilk@oracle.com; bp@alien8.de; boris.ostrovsky@oracle.com;
> > > > > jgross@suse.com; netdev@vger.kernel.org; linux-pm@vger.kernel.org;
> > > > > rjw@rjwysocki.net; linux-kernel@vger.kernel.org;
> vkuznets@redhat.com;
> > > > > davem@davemloft.net; Woodhouse, David <dwmw@amazon.co.uk>
> > > > > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add
> > > callbacks
> > > > > for PM suspend and hibernation
> > > > >
> > > > > Thanks for this work, please see below.
> > > > >
> > > > > On Wed, Feb 19, 2020 at 06:04:24PM +0000, Anchal Agarwal wrote:
> > > > > > On Tue, Feb 18, 2020 at 10:16:11AM +0100, Roger Pau Monné wrote:
> > > > > > > On Mon, Feb 17, 2020 at 11:05:53PM +0000, Anchal Agarwal
> wrote:
> > > > > > > > On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné
> wrote:
> > > > > > > > > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal
> > > wrote:
> > > > > > > > Quiescing the queue seemed a better option here as we want
> to
> > > make
> > > > > sure ongoing
> > > > > > > > requests dispatches are totally drained.
> > > > > > > > I should accept that some of these notion is borrowed from
> how
> > > nvme
> > > > > freeze/unfreeze
> > > > > > > > is done although its not apple to apple comparison.
> > > > > > >
> > > > > > > That's fine, but I would still like to requests that you use
> the
> > > same
> > > > > > > logic (as much as possible) for both the Xen and the PM
> initiated
> > > > > > > suspension.
> > > > > > >
> > > > > > > So you either apply this freeze/unfreeze to the Xen suspension
> > > (and
> > > > > > > drop the re-issuing of requests on resume) or adapt the same
> > > approach
> > > > > > > as the Xen initiated suspension. Keeping two completely
> different
> > > > > > > approaches to suspension / resume on blkfront is not suitable
> long
> > > > > > > term.
> > > > > > >
> > > > > > I agree with you on overhaul of xen suspend/resume wrt blkfront
> is a
> > > > > good
> > > > > > idea however, IMO that is a work for future and this patch
> series
> > > should
> > > > > > not be blocked for it. What do you think?
> > > > >
> > > > > It's not so much that I think an overhaul of suspend/resume in
> > > > > blkfront is needed, it's just that I don't want to have two
> completely
> > > > > different suspend/resume paths inside blkfront.
> > > > >
> > > > > So from my PoV I think the right solution is to either use the
> same
> > > > > code (as much as possible) as it's currently used by Xen initiated
> > > > > suspend/resume, or to also switch Xen initiated suspension to use
> the
> > > > > newly introduced code.
> > > > >
> > > > > Having two different approaches to suspend/resume in the same
> driver
> > > > > is a recipe for disaster IMO: it adds complexity by forcing
> developers
> > > > > to take into account two different suspend/resume approaches when
> > > > > there's no need for it.
> > > >
> > > > I disagree. S3 or S4 suspend/resume (or perhaps we should call them
> > > power state transitions to avoid confusion) are quite different from
> Xen
> > > suspend/resume.
> > > > Power state transitions ought to be, and indeed are, visible to the
> > > software running inside the guest. Applications, as well as drivers,
> can
> > > receive notification and take whatever action they deem appropriate.
> > > > Xen suspend/resume OTOH is used when a guest is migrated and the
> code
> > > should go to all lengths possible to make any software running inside
> the
> > > guest (other than Xen specific enlightened code, such as PV drivers)
> > > completely unaware that anything has actually happened.
> > >
> > > So from what you say above PM state transitions are notified to all
> > > drivers, and Xen suspend/resume is only notified to PV drivers, and
> > > here we are speaking about blkfront which is a PV driver, and should
> > > get notified in both cases. So I'm unsure why the same (or at least
> > > very similar) approach can't be used in both cases.
> > >
> > > The suspend/resume approach proposed by this patch is completely
> > > different than the one used by a xenbus initiated suspend/resume, and
> > > I don't see a technical reason that warrants this difference.
> > >
> >
> > Within an individual PV driver it may well be ok to use common
> mechanisms for connecting to the backend but issues will arise if any
> subsequent action is visible to the guest. E.g. a network frontend needs
> to issue gratuitous ARPs without anything else in the network stack (or
> monitoring the network stack) knowing that it has happened.
> >
> > > I'm not saying that the approach used here is wrong, it's just that I
> > > don't see the point in having two different ways to do suspend/resume
> > > in the same driver, unless there's a technical reason for it, which I
> > > don't think has been provided.
> >
> > The technical justification is that the driver needs to know what kind
> of suspend or resume it is doing, so that it doesn't do the wrong thing.
> There may also be differences in the state of the system e.g. in Windows,
> at least some of the resume-from-xen-suspend code runs with interrupts
> disabled (which is necessary to make sure enough state is restored before
> things become visible to other kernel code).
> >
> > >
> > > I would be fine with switching xenbus initiated suspend/resume to also
> > > use the approach proposed here: freeze the queues and drain the shared
> > > rings before suspending.
> > >
> >
> > I think abstracting away at the xenbus level to some degree is probably
> feasible, but some sort of flag should be passed to the individual drivers
> so they know what circumstances they are operating under.
> >
> > > > So, whilst it may be possible to use common routines to, for
> example,
> > > re-establish PV frontend/backend communication, PV frontend code
> should be
> > > acutely aware of the circumstances they are operating in. I can cite
> > > example code in the Windows PV driver, which have supported guest
> S3/S4
> > > power state transitions since day 1.
> > >
> > > Hm, please bear with me, as I'm not sure I fully understand. Why isn't
> > > the current suspend/resume logic suitable for PM transitions?
> > >
> >
> > I don’t know the details for Linux but it may well be to do with
> assumptions made about the system e.g. the ability to block waiting for
> something to happen on another CPU (which may have already been quiesced
> in a PM context).
> >
> > > As said above, I'm happy to switch xenbus initiated suspend/resume to
> > > use the logic in this patch, but unless there's a technical reason for
> > > it I don't see why blkfront should have two completely different
> > > approaches to suspend/resume depending on whether it's a PM or a
> > > xenbus state change.
> > >
> >
> > Hopefully what I said above illustrates why it may not be 100% common.
> 
> Yes, that's fine. I don't expect it to be 100% common (as I guess
> that the hooks will have different prototypes), but I expect
> that routines can be shared, and that the approach taken can be the
> same.
> 
> For example one necessary difference will be that xenbus initiated
> suspend won't close the PV connection, in case suspension fails. On PM
> suspend you seem to always close the connection beforehand, so you
> will always have to re-negotiate on resume even if suspension failed.
> 
> What I'm mostly worried about is the different approach to ring
> draining. Ie: either xenbus is changed to freeze the queues and drain
> the shared rings, or PM uses the already existing logic of not
> flushing the rings an re-issuing in-flight requests on resume.
> 

Yes, that needs consideration. I don’t think the same semantics can be suitable for both. E.g. in a xen-suspend we need to freeze with as little processing as possible to avoid dirtying RAM late in the migration cycle, and we know that in-flight data can wait. But in a transition to S4 we need to make sure that at least all the in-flight blkif requests get completed, since they probably contain bits of the guest's memory image and that's not going to get saved any other way.
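
In other words, before the hibernation image is written the frontend would have to be sure the shared ring is fully drained; a rough sketch using the standard Xen ring bookkeeping (the timeout handling is only illustrative):

static int blkif_wait_ring_drained(struct blkfront_ring_info *rinfo,
				   unsigned long timeout)
{
	unsigned long deadline = jiffies + timeout;

	/* Drained once every pushed request has had its response consumed. */
	while (rinfo->ring.rsp_cons != rinfo->ring.req_prod_pvt) {
		if (time_after(jiffies, deadline))
			return -ETIMEDOUT;
		msleep(10);	/* responses arrive via the event channel irq */
	}

	return 0;
}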

  Paul

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
  2020-02-20 17:01                     ` Durrant, Paul
@ 2020-02-21  0:49                       ` Anchal Agarwal
  2020-02-21  9:47                         ` Roger Pau Monné
  2020-02-21  9:22                       ` Roger Pau Monné
  1 sibling, 1 reply; 37+ messages in thread
From: Anchal Agarwal @ 2020-02-21  0:49 UTC (permalink / raw)
  To: Durrant, Paul, Roger Pau Monné
  Cc: Roger Pau Monné,
	Valentin, Eduardo, len.brown, peterz, benh, x86, linux-mm, pavel,
	hpa, tglx, sstabellini, fllinden, Kamata, Munehisa, mingo,
	xen-devel, Singh, Balbir, axboe, konrad.wilk, bp,
	boris.ostrovsky, jgross, netdev, linux-pm, rjw, linux-kernel,
	vkuznets, davem, Woodhouse, David, anchalag

On Thu, Feb 20, 2020 at 10:01:52AM -0700, Durrant, Paul wrote:
> > -----Original Message-----
> > From: Roger Pau Monné <roger.pau@citrix.com>
> > Sent: 20 February 2020 16:49
> > To: Durrant, Paul <pdurrant@amazon.co.uk>
> > Cc: Agarwal, Anchal <anchalag@amazon.com>; Valentin, Eduardo
> > <eduval@amazon.com>; len.brown@intel.com; peterz@infradead.org;
> > benh@kernel.crashing.org; x86@kernel.org; linux-mm@kvack.org;
> > pavel@ucw.cz; hpa@zytor.com; tglx@linutronix.de; sstabellini@kernel.org;
> > fllinden@amaozn.com; Kamata, Munehisa <kamatam@amazon.com>;
> > mingo@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> > <sblbir@amazon.com>; axboe@kernel.dk; konrad.wilk@oracle.com;
> > bp@alien8.de; boris.ostrovsky@oracle.com; jgross@suse.com;
> > netdev@vger.kernel.org; linux-pm@vger.kernel.org; rjw@rjwysocki.net;
> > linux-kernel@vger.kernel.org; vkuznets@redhat.com; davem@davemloft.net;
> > Woodhouse, David <dwmw@amazon.co.uk>
> > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks
> > for PM suspend and hibernation
> > 
> > On Thu, Feb 20, 2020 at 04:23:13PM +0000, Durrant, Paul wrote:
> > > > -----Original Message-----
> > > > From: Roger Pau Monné <roger.pau@citrix.com>
> > > > Sent: 20 February 2020 15:45
> > > > To: Durrant, Paul <pdurrant@amazon.co.uk>
> > > > Cc: Agarwal, Anchal <anchalag@amazon.com>; Valentin, Eduardo
> > > > <eduval@amazon.com>; len.brown@intel.com; peterz@infradead.org;
> > > > benh@kernel.crashing.org; x86@kernel.org; linux-mm@kvack.org;
> > > > pavel@ucw.cz; hpa@zytor.com; tglx@linutronix.de;
> > sstabellini@kernel.org;
> > > > fllinden@amaozn.com; Kamata, Munehisa <kamatam@amazon.com>;
> > > > mingo@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> > > > <sblbir@amazon.com>; axboe@kernel.dk; konrad.wilk@oracle.com;
> > > > bp@alien8.de; boris.ostrovsky@oracle.com; jgross@suse.com;
> > > > netdev@vger.kernel.org; linux-pm@vger.kernel.org; rjw@rjwysocki.net;
> > > > linux-kernel@vger.kernel.org; vkuznets@redhat.com;
> > davem@davemloft.net;
> > > > Woodhouse, David <dwmw@amazon.co.uk>
> > > > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add
> > callbacks
> > > > for PM suspend and hibernation
> > > >
> > > > On Thu, Feb 20, 2020 at 08:54:36AM +0000, Durrant, Paul wrote:
> > > > > > -----Original Message-----
> > > > > > From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf
> > Of
> > > > > > Roger Pau Monné
> > > > > > Sent: 20 February 2020 08:39
> > > > > > To: Agarwal, Anchal <anchalag@amazon.com>
> > > > > > Cc: Valentin, Eduardo <eduval@amazon.com>; len.brown@intel.com;
> > > > > > peterz@infradead.org; benh@kernel.crashing.org; x86@kernel.org;
> > linux-
> > > > > > mm@kvack.org; pavel@ucw.cz; hpa@zytor.com; tglx@linutronix.de;
> > > > > > sstabellini@kernel.org; fllinden@amaozn.com; Kamata, Munehisa
> > > > > > <kamatam@amazon.com>; mingo@redhat.com; xen-
> > > > devel@lists.xenproject.org;
> > > > > > Singh, Balbir <sblbir@amazon.com>; axboe@kernel.dk;
> > > > > > konrad.wilk@oracle.com; bp@alien8.de; boris.ostrovsky@oracle.com;
> > > > > > jgross@suse.com; netdev@vger.kernel.org; linux-pm@vger.kernel.org;
> > > > > > rjw@rjwysocki.net; linux-kernel@vger.kernel.org;
> > vkuznets@redhat.com;
> > > > > > davem@davemloft.net; Woodhouse, David <dwmw@amazon.co.uk>
> > > > > > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add
> > > > callbacks
> > > > > > for PM suspend and hibernation
> > > > > >
> > > > > > Thanks for this work, please see below.
> > > > > >
> > > > > > On Wed, Feb 19, 2020 at 06:04:24PM +0000, Anchal Agarwal wrote:
> > > > > > > On Tue, Feb 18, 2020 at 10:16:11AM +0100, Roger Pau Monné wrote:
> > > > > > > > On Mon, Feb 17, 2020 at 11:05:53PM +0000, Anchal Agarwal
> > wrote:
> > > > > > > > > On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné
> > wrote:
> > > > > > > > > > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal
> > > > wrote:
> > > > > > > > > Quiescing the queue seemed a better option here as we want
> > to
> > > > make
> > > > > > sure ongoing
> > > > > > > > > requests dispatches are totally drained.
> > > > > > > > > I should accept that some of these notion is borrowed from
> > how
> > > > nvme
> > > > > > freeze/unfreeze
> > > > > > > > > is done although its not apple to apple comparison.
> > > > > > > >
> > > > > > > > That's fine, but I would still like to requests that you use
> > the
> > > > same
> > > > > > > > logic (as much as possible) for both the Xen and the PM
> > initiated
> > > > > > > > suspension.
> > > > > > > >
> > > > > > > > So you either apply this freeze/unfreeze to the Xen suspension
> > > > (and
> > > > > > > > drop the re-issuing of requests on resume) or adapt the same
> > > > approach
> > > > > > > > as the Xen initiated suspension. Keeping two completely
> > different
> > > > > > > > approaches to suspension / resume on blkfront is not suitable
> > long
> > > > > > > > term.
> > > > > > > >
> > > > > > > I agree with you on overhaul of xen suspend/resume wrt blkfront
> > is a
> > > > > > good
> > > > > > > idea however, IMO that is a work for future and this patch
> > series
> > > > should
> > > > > > > not be blocked for it. What do you think?
> > > > > >
> > > > > > It's not so much that I think an overhaul of suspend/resume in
> > > > > > blkfront is needed, it's just that I don't want to have two
> > completely
> > > > > > different suspend/resume paths inside blkfront.
> > > > > >
> > > > > > So from my PoV I think the right solution is to either use the
> > same
> > > > > > code (as much as possible) as it's currently used by Xen initiated
> > > > > > suspend/resume, or to also switch Xen initiated suspension to use
> > the
> > > > > > newly introduced code.
> > > > > >
> > > > > > Having two different approaches to suspend/resume in the same
> > driver
> > > > > > is a recipe for disaster IMO: it adds complexity by forcing
> > developers
> > > > > > to take into account two different suspend/resume approaches when
> > > > > > there's no need for it.
> > > > >
> > > > > I disagree. S3 or S4 suspend/resume (or perhaps we should call them
> > > > power state transitions to avoid confusion) are quite different from
> > Xen
> > > > suspend/resume.
> > > > > Power state transitions ought to be, and indeed are, visible to the
> > > > software running inside the guest. Applications, as well as drivers,
> > can
> > > > receive notification and take whatever action they deem appropriate.
> > > > > Xen suspend/resume OTOH is used when a guest is migrated and the
> > code
> > > > should go to all lengths possible to make any software running inside
> > the
> > > > guest (other than Xen specific enlightened code, such as PV drivers)
> > > > completely unaware that anything has actually happened.
> > > >
> > > > So from what you say above PM state transitions are notified to all
> > > > drivers, and Xen suspend/resume is only notified to PV drivers, and
> > > > here we are speaking about blkfront which is a PV driver, and should
> > > > get notified in both cases. So I'm unsure why the same (or at least
> > > > very similar) approach can't be used in both cases.
> > > >
> > > > The suspend/resume approach proposed by this patch is completely
> > > > different than the one used by a xenbus initiated suspend/resume, and
> > > > I don't see a technical reason that warrants this difference.
> > > >
> > >
> > > Within an individual PV driver it may well be ok to use common
> > mechanisms for connecting to the backend but issues will arise if any
> > subsequent action is visible to the guest. E.g. a network frontend needs
> > to issue gratuitous ARPs without anything else in the network stack (or
> > monitoring the network stack) knowing that it has happened.
> > >
> > > > I'm not saying that the approach used here is wrong, it's just that I
> > > > don't see the point in having two different ways to do suspend/resume
> > > > in the same driver, unless there's a technical reason for it, which I
> > > > don't think has been provided.
> > >
> > > The technical justification is that the driver needs to know what kind
> > of suspend or resume it is doing, so that it doesn't do the wrong thing.
> > There may also be differences in the state of the system e.g. in Windows,
> > at least some of the resume-from-xen-suspend code runs with interrupts
> > disabled (which is necessary to make sure enough state is restored before
> > things become visible to other kernel code).
> > >
> > > >
> > > > I would be fine with switching xenbus initiated suspend/resume to also
> > > > use the approach proposed here: freeze the queues and drain the shared
> > > > rings before suspending.
> > > >
> > >
> > > I think abstracting away at the xenbus level to some degree is probably
> > feasible, but some sort of flag should be passed to the individual drivers
> > so they know what circumstances they are operating under.
> > >
> > > > > So, whilst it may be possible to use common routines to, for
> > example,
> > > > re-establish PV frontend/backend communication, PV frontend code
> > should be
> > > > acutely aware of the circumstances they are operating in. I can cite
> > > > example code in the Windows PV driver, which have supported guest
> > S3/S4
> > > > power state transitions since day 1.
> > > >
> > > > Hm, please bear with me, as I'm not sure I fully understand. Why isn't
> > > > the current suspend/resume logic suitable for PM transitions?
> > > >
> > >
> > > I don’t know the details for Linux but it may well be to do with
> > assumptions made about the system e.g. the ability to block waiting for
> > something to happen on another CPU (which may have already been quiesced
> > in a PM context).
> > >
> > > > As said above, I'm happy to switch xenbus initiated suspend/resume to
> > > > use the logic in this patch, but unless there's a technical reason for
> > > > it I don't see why blkfront should have two completely different
> > > > approaches to suspend/resume depending on whether it's a PM or a
> > > > xenbus state change.
> > > >
> > >
> > > Hopefully what I said above illustrates why it may not be 100% common.
> > 
> > Yes, that's fine. I don't expect it to be 100% common (as I guess
> > that the hooks will have different prototypes), but I expect
> > that routines can be shared, and that the approach taken can be the
> > same.
> > 
> > For example one necessary difference will be that xenbus initiated
> > suspend won't close the PV connection, in case suspension fails. On PM
> > suspend you seem to always close the connection beforehand, so you
> > will always have to re-negotiate on resume even if suspension failed.
> >
I don't get what you mean: do you mean a 'suspension failure' while
disconnecting the frontend from the backend? [As in this case we mark the
frontend closed and then wait for completion.] Or do you mean a suspension
failure in general, after the backend has been disconnected from the
frontend for blkfront?

In case of the latter, if anything fails after dpm_suspend(), things need
to be thawed or set back up, so it should be ok to always re-negotiate
just to avoid errors.

> > What I'm mostly worried about is the different approach to ring
> > draining. Ie: either xenbus is changed to freeze the queues and drain
> > the shared rings, or PM uses the already existing logic of not
> > flushing the rings an re-issuing in-flight requests on resume.
> > 
> 
> Yes, that's needs consideration. I don’t think the same semantic can be suitable for both. E.g. in a xen-suspend we need to freeze with as little processing as possible to avoid dirtying RAM late in the migration cycle, and we know that in-flight data can wait. But in a transition to S4 we need to make sure that at least all the in-flight blkif requests get completed, since they probably contain bits of the guest's memory image and that's not going to get saved any other way.
> 
>   Paul
I agree with Paul here. Just so you know, I did try a hacky way to
re-queue requests in the past and failed miserably.
I doubt [just from my experimentation] that re-queuing the requests will
work for PM hibernation, for the same reason Paul mentioned above, unless
you give me a pressing reason why it should work.
Also, won't it affect the migration time if we start waiting for all the
in-flight requests to complete [last-minute page faults]?


Thanks,
Anchal

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
  2020-02-20 17:01                     ` Durrant, Paul
  2020-02-21  0:49                       ` Anchal Agarwal
@ 2020-02-21  9:22                       ` Roger Pau Monné
  2020-02-21  9:56                         ` Durrant, Paul
  1 sibling, 1 reply; 37+ messages in thread
From: Roger Pau Monné @ 2020-02-21  9:22 UTC (permalink / raw)
  To: Durrant, Paul
  Cc: Agarwal, Anchal, Valentin, Eduardo, len.brown, peterz, benh, x86,
	linux-mm, pavel, hpa, tglx, sstabellini, fllinden, Kamata,
	Munehisa, mingo, xen-devel, Singh, Balbir, axboe, konrad.wilk,
	bp, boris.ostrovsky, jgross, netdev, linux-pm, rjw, linux-kernel,
	vkuznets, davem, Woodhouse, David

On Thu, Feb 20, 2020 at 05:01:52PM +0000, Durrant, Paul wrote:
> > > Hopefully what I said above illustrates why it may not be 100% common.
> > 
> > Yes, that's fine. I don't expect it to be 100% common (as I guess
> > that the hooks will have different prototypes), but I expect
> > that routines can be shared, and that the approach taken can be the
> > same.
> > 
> > For example one necessary difference will be that xenbus initiated
> > suspend won't close the PV connection, in case suspension fails. On PM
> > suspend you seem to always close the connection beforehand, so you
> > will always have to re-negotiate on resume even if suspension failed.
> > 
> > What I'm mostly worried about is the different approach to ring
> > draining. Ie: either xenbus is changed to freeze the queues and drain
> > the shared rings, or PM uses the already existing logic of not
> > flushing the rings an re-issuing in-flight requests on resume.
> > 
> 
> Yes, that's needs consideration. I don’t think the same semantic can be suitable for both. E.g. in a xen-suspend we need to freeze with as little processing as possible to avoid dirtying RAM late in the migration cycle, and we know that in-flight data can wait. But in a transition to S4 we need to make sure that at least all the in-flight blkif requests get completed, since they probably contain bits of the guest's memory image and that's not going to get saved any other way.

Thanks, that makes sense and something along these lines should be
added to the commit message IMO.

Wondering about S4: shouldn't we expect the queues to already be
empty, as any subsystem that wanted to store something to disk should
make sure its requests have been successfully completed before
suspending?

Thanks, Roger.

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
  2020-02-21  0:49                       ` Anchal Agarwal
@ 2020-02-21  9:47                         ` Roger Pau Monné
  0 siblings, 0 replies; 37+ messages in thread
From: Roger Pau Monné @ 2020-02-21  9:47 UTC (permalink / raw)
  To: Anchal Agarwal
  Cc: Durrant, Paul, Valentin, Eduardo, len.brown, peterz, benh, x86,
	linux-mm, pavel, hpa, tglx, sstabellini, fllinden, Kamata,
	Munehisa, mingo, xen-devel, Singh, Balbir, axboe, konrad.wilk,
	bp, boris.ostrovsky, jgross, netdev, linux-pm, rjw, linux-kernel,
	vkuznets, davem, Woodhouse, David

On Fri, Feb 21, 2020 at 12:49:18AM +0000, Anchal Agarwal wrote:
> On Thu, Feb 20, 2020 at 10:01:52AM -0700, Durrant, Paul wrote:
> > > -----Original Message-----
> > > From: Roger Pau Monné <roger.pau@citrix.com>
> > > Sent: 20 February 2020 16:49
> > > To: Durrant, Paul <pdurrant@amazon.co.uk>
> > > Cc: Agarwal, Anchal <anchalag@amazon.com>; Valentin, Eduardo
> > > <eduval@amazon.com>; len.brown@intel.com; peterz@infradead.org;
> > > benh@kernel.crashing.org; x86@kernel.org; linux-mm@kvack.org;
> > > pavel@ucw.cz; hpa@zytor.com; tglx@linutronix.de; sstabellini@kernel.org;
> > > fllinden@amaozn.com; Kamata, Munehisa <kamatam@amazon.com>;
> > > mingo@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> > > <sblbir@amazon.com>; axboe@kernel.dk; konrad.wilk@oracle.com;
> > > bp@alien8.de; boris.ostrovsky@oracle.com; jgross@suse.com;
> > > netdev@vger.kernel.org; linux-pm@vger.kernel.org; rjw@rjwysocki.net;
> > > linux-kernel@vger.kernel.org; vkuznets@redhat.com; davem@davemloft.net;
> > > Woodhouse, David <dwmw@amazon.co.uk>
> > > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks
> > > for PM suspend and hibernation
> > > For example one necessary difference will be that xenbus initiated
> > > suspend won't close the PV connection, in case suspension fails. On PM
> > > suspend you seem to always close the connection beforehand, so you
> > > will always have to re-negotiate on resume even if suspension failed.
> > >
> I don't get what you mean, 'suspension failure' during disconnecting frontend from 
> backend? [as in this case we mark frontend closed and then wait for completion]
> Or do you mean suspension fail in general post bkacend is disconnected from
> frontend for blkfront? 

I don't think you strictly need to disconnect from the backend when
suspending. Just waiting for all requests to finish should be enough.

This has the benefit of not having to renegotiate if the suspension
fails, and thus you can recover faster in case of failure. Since you
haven't closed the connection with the backend, just unfreezing the
queues should get you working again, and avoids all the renegotiation.
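
To put it another way, the failure path could be as simple as this
minimal sketch (assuming info->rq is the frontend's request queue and
that only the queues were frozen):

static void blkfront_thaw_after_failed_suspend(struct blkfront_info *info)
{
	/* The PV connection is still up, so no re-negotiation is needed. */
	blk_mq_unquiesce_queue(info->rq);
	blk_mq_unfreeze_queue(info->rq);
}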

> In case of later, if anything fails after the dpm_suspend(),
> things need to be thawed or set back up so it should ok to always 
> re-negotitate just to avoid errors. 
> 
> > > What I'm mostly worried about is the different approach to ring
> > > draining. Ie: either xenbus is changed to freeze the queues and drain
> > > the shared rings, or PM uses the already existing logic of not
> > > flushing the rings an re-issuing in-flight requests on resume.
> > > 
> > 
> > Yes, that's needs consideration. I don’t think the same semantic can be suitable for both. E.g. in a xen-suspend we need to freeze with as little processing as possible to avoid dirtying RAM late in the migration cycle, and we know that in-flight data can wait. But in a transition to S4 we need to make sure that at least all the in-flight blkif requests get completed, since they probably contain bits of the guest's memory image and that's not going to get saved any other way.
> > 
> >   Paul
> I agree with Paul here. Just so as you know, I did try a hacky way in the past 
> to re-queue requests in the past and failed miserably.

Well, it works AFAIK for xenbus initiated suspension, so I would be
interested to know why it doesn't work with PM suspension.

> I doubt[just from my experimentation]re-queuing the requests will work for PM 
> Hibernation for the same reason Paul mentioned above unless you give me pressing
> reason why it should work.

My main reason is that I don't want to maintain two different
approaches to suspend/resume without a technical argument for it. I'm
not happy to take a bunch of new code just because the current one
doesn't seem to work in your use-case.

That being said, if there's a justification for doing it differently
it needs to be stated clearly in the commit. From the current commit
message I didn't grasp that there was a reason for not using the
current xenbus suspend/resume logic.

> Also, won't it effect the migration time if we start waiting for all the
> inflight requests to complete[last min page faults] ?

Well, it's going to dirty pages that would have to be re-sent to the
destination side.

Roger.

^ permalink raw reply	[flat|nested] 37+ messages in thread

* RE: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
  2020-02-21  9:22                       ` Roger Pau Monné
@ 2020-02-21  9:56                         ` Durrant, Paul
  2020-02-21 10:21                           ` Roger Pau Monné
  0 siblings, 1 reply; 37+ messages in thread
From: Durrant, Paul @ 2020-02-21  9:56 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Agarwal, Anchal, Valentin, Eduardo, len.brown, peterz, benh, x86,
	linux-mm, pavel, hpa, tglx, sstabellini, fllinden, Kamata,
	Munehisa, mingo, xen-devel, Singh, Balbir, axboe, konrad.wilk,
	bp, boris.ostrovsky, jgross, netdev, linux-pm, rjw, linux-kernel,
	vkuznets, davem, Woodhouse, David

> -----Original Message-----
> From: Roger Pau Monné <roger.pau@citrix.com>
> Sent: 21 February 2020 09:22
> To: Durrant, Paul <pdurrant@amazon.co.uk>
> Cc: Agarwal, Anchal <anchalag@amazon.com>; Valentin, Eduardo
> <eduval@amazon.com>; len.brown@intel.com; peterz@infradead.org;
> benh@kernel.crashing.org; x86@kernel.org; linux-mm@kvack.org;
> pavel@ucw.cz; hpa@zytor.com; tglx@linutronix.de; sstabellini@kernel.org;
> fllinden@amaozn.com; Kamata, Munehisa <kamatam@amazon.com>;
> mingo@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> <sblbir@amazon.com>; axboe@kernel.dk; konrad.wilk@oracle.com;
> bp@alien8.de; boris.ostrovsky@oracle.com; jgross@suse.com;
> netdev@vger.kernel.org; linux-pm@vger.kernel.org; rjw@rjwysocki.net;
> linux-kernel@vger.kernel.org; vkuznets@redhat.com; davem@davemloft.net;
> Woodhouse, David <dwmw@amazon.co.uk>
> Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks
> for PM suspend and hibernation
> 
> On Thu, Feb 20, 2020 at 05:01:52PM +0000, Durrant, Paul wrote:
> > > > Hopefully what I said above illustrates why it may not be 100%
> common.
> > >
> > > Yes, that's fine. I don't expect it to be 100% common (as I guess
> > > that the hooks will have different prototypes), but I expect
> > > that routines can be shared, and that the approach taken can be the
> > > same.
> > >
> > > For example one necessary difference will be that xenbus initiated
> > > suspend won't close the PV connection, in case suspension fails. On PM
> > > suspend you seem to always close the connection beforehand, so you
> > > will always have to re-negotiate on resume even if suspension failed.
> > >
> > > What I'm mostly worried about is the different approach to ring
> > > draining. Ie: either xenbus is changed to freeze the queues and drain
> > > the shared rings, or PM uses the already existing logic of not
> > > flushing the rings an re-issuing in-flight requests on resume.
> > >
> >
> > Yes, that's needs consideration. I don’t think the same semantic can be
> suitable for both. E.g. in a xen-suspend we need to freeze with as little
> processing as possible to avoid dirtying RAM late in the migration cycle,
> and we know that in-flight data can wait. But in a transition to S4 we
> need to make sure that at least all the in-flight blkif requests get
> completed, since they probably contain bits of the guest's memory image
> and that's not going to get saved any other way.
> 
> Thanks, that makes sense and something along this lines should be
> added to the commit message IMO.
> 
> Wondering about S4, shouldn't we expect the queues to already be
> empty? As any subsystem that wanted to store something to disk should
> make sure requests have been successfully completed before
> suspending.

What about writing the suspend image itself? Normal filesystem I/O will have been flushed of course, but whatever vestigial kernel actually writes out the hibernation file may well expect a final D0->D3 on the storage device to cause a flush. Again, I don't know the specifics for Linux (and Windows actually uses an incarnation of the crash kernel to do the job, which brings with it a whole other set of complexity as far as PV drivers go).

  Paul

> 
> Thanks, Roger.


* Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
  2020-02-21  9:56                         ` Durrant, Paul
@ 2020-02-21 10:21                           ` Roger Pau Monné
  2020-02-21 10:33                             ` Durrant, Paul
  0 siblings, 1 reply; 37+ messages in thread
From: Roger Pau Monné @ 2020-02-21 10:21 UTC (permalink / raw)
  To: Durrant, Paul
  Cc: Agarwal, Anchal, Valentin, Eduardo, len.brown, peterz, benh, x86,
	linux-mm, pavel, hpa, tglx, sstabellini, fllinden, Kamata,
	Munehisa, mingo, xen-devel, Singh, Balbir, axboe, konrad.wilk,
	bp, boris.ostrovsky, jgross, netdev, linux-pm, rjw, linux-kernel,
	vkuznets, davem, Woodhouse, David

On Fri, Feb 21, 2020 at 09:56:54AM +0000, Durrant, Paul wrote:
> > -----Original Message-----
> > From: Roger Pau Monné <roger.pau@citrix.com>
> > Sent: 21 February 2020 09:22
> > To: Durrant, Paul <pdurrant@amazon.co.uk>
> > Cc: Agarwal, Anchal <anchalag@amazon.com>; Valentin, Eduardo
> > <eduval@amazon.com>; len.brown@intel.com; peterz@infradead.org;
> > benh@kernel.crashing.org; x86@kernel.org; linux-mm@kvack.org;
> > pavel@ucw.cz; hpa@zytor.com; tglx@linutronix.de; sstabellini@kernel.org;
> > fllinden@amaozn.com; Kamata, Munehisa <kamatam@amazon.com>;
> > mingo@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> > <sblbir@amazon.com>; axboe@kernel.dk; konrad.wilk@oracle.com;
> > bp@alien8.de; boris.ostrovsky@oracle.com; jgross@suse.com;
> > netdev@vger.kernel.org; linux-pm@vger.kernel.org; rjw@rjwysocki.net;
> > linux-kernel@vger.kernel.org; vkuznets@redhat.com; davem@davemloft.net;
> > Woodhouse, David <dwmw@amazon.co.uk>
> > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks
> > for PM suspend and hibernation
> > 
> > On Thu, Feb 20, 2020 at 05:01:52PM +0000, Durrant, Paul wrote:
> > > > > Hopefully what I said above illustrates why it may not be 100%
> > common.
> > > >
> > > > Yes, that's fine. I don't expect it to be 100% common (as I guess
> > > > that the hooks will have different prototypes), but I expect
> > > > that routines can be shared, and that the approach taken can be the
> > > > same.
> > > >
> > > > For example one necessary difference will be that xenbus initiated
> > > > suspend won't close the PV connection, in case suspension fails. On PM
> > > > suspend you seem to always close the connection beforehand, so you
> > > > will always have to re-negotiate on resume even if suspension failed.
> > > >
> > > > What I'm mostly worried about is the different approach to ring
> > > > draining. Ie: either xenbus is changed to freeze the queues and drain
> > > > the shared rings, or PM uses the already existing logic of not
> > > > flushing the rings and re-issuing in-flight requests on resume.
> > > >
> > >
> > > Yes, that needs consideration. I don’t think the same semantic can be
> > suitable for both. E.g. in a xen-suspend we need to freeze with as little
> > processing as possible to avoid dirtying RAM late in the migration cycle,
> > and we know that in-flight data can wait. But in a transition to S4 we
> > need to make sure that at least all the in-flight blkif requests get
> > completed, since they probably contain bits of the guest's memory image
> > and that's not going to get saved any other way.
> > 
> > Thanks, that makes sense and something along these lines should be
> > added to the commit message IMO.
> > 
> > Wondering about S4, shouldn't we expect the queues to already be
> > empty? As any subsystem that wanted to store something to disk should
> > make sure requests have been successfully completed before
> > suspending.
> 
> What about writing the suspend image itself? Normal filesystem I/O
> will have been flushed of course, but whatever vestigial kernel
> actually writes out the hibernation file may well expect a final
> D0->D3 on the storage device to cause a flush.

Hm, I have no idea really. I think whatever writes to the disk before
suspend should actually make sure requests have completed, but what
you suggest might also be a possibility.

Can you figure out whether there are requests on the ring or in the
queue before suspending?

> Again, I don't know the specifics for Linux (and Windows actually
> uses an incarnation of the crash kernel to do the job, which brings
> with it a whole other set of complexity as far as PV drivers go).

That seems extremely complex, I'm sure there's a reason for it :).

Thanks, Roger.


* RE: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
  2020-02-21 10:21                           ` Roger Pau Monné
@ 2020-02-21 10:33                             ` Durrant, Paul
  2020-02-21 11:51                               ` Roger Pau Monné
  0 siblings, 1 reply; 37+ messages in thread
From: Durrant, Paul @ 2020-02-21 10:33 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Agarwal, Anchal, Valentin, Eduardo, len.brown, peterz, benh, x86,
	linux-mm, pavel, hpa, tglx, sstabellini, fllinden, Kamata,
	Munehisa, mingo, xen-devel, Singh, Balbir, axboe, konrad.wilk,
	bp, boris.ostrovsky, jgross, netdev, linux-pm, rjw, linux-kernel,
	vkuznets, davem, Woodhouse, David

> -----Original Message-----
> From: Roger Pau Monné <roger.pau@citrix.com>
> Sent: 21 February 2020 10:22
> To: Durrant, Paul <pdurrant@amazon.co.uk>
> Cc: Agarwal, Anchal <anchalag@amazon.com>; Valentin, Eduardo
> <eduval@amazon.com>; len.brown@intel.com; peterz@infradead.org;
> benh@kernel.crashing.org; x86@kernel.org; linux-mm@kvack.org;
> pavel@ucw.cz; hpa@zytor.com; tglx@linutronix.de; sstabellini@kernel.org;
> fllinden@amaozn.com; Kamata, Munehisa <kamatam@amazon.com>;
> mingo@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> <sblbir@amazon.com>; axboe@kernel.dk; konrad.wilk@oracle.com;
> bp@alien8.de; boris.ostrovsky@oracle.com; jgross@suse.com;
> netdev@vger.kernel.org; linux-pm@vger.kernel.org; rjw@rjwysocki.net;
> linux-kernel@vger.kernel.org; vkuznets@redhat.com; davem@davemloft.net;
> Woodhouse, David <dwmw@amazon.co.uk>
> Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks
> for PM suspend and hibernation
> 
> On Fri, Feb 21, 2020 at 09:56:54AM +0000, Durrant, Paul wrote:
> > > -----Original Message-----
> > > From: Roger Pau Monné <roger.pau@citrix.com>
> > > Sent: 21 February 2020 09:22
> > > To: Durrant, Paul <pdurrant@amazon.co.uk>
> > > Cc: Agarwal, Anchal <anchalag@amazon.com>; Valentin, Eduardo
> > > <eduval@amazon.com>; len.brown@intel.com; peterz@infradead.org;
> > > benh@kernel.crashing.org; x86@kernel.org; linux-mm@kvack.org;
> > > pavel@ucw.cz; hpa@zytor.com; tglx@linutronix.de;
> sstabellini@kernel.org;
> > > fllinden@amaozn.com; Kamata, Munehisa <kamatam@amazon.com>;
> > > mingo@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> > > <sblbir@amazon.com>; axboe@kernel.dk; konrad.wilk@oracle.com;
> > > bp@alien8.de; boris.ostrovsky@oracle.com; jgross@suse.com;
> > > netdev@vger.kernel.org; linux-pm@vger.kernel.org; rjw@rjwysocki.net;
> > > linux-kernel@vger.kernel.org; vkuznets@redhat.com;
> davem@davemloft.net;
> > > Woodhouse, David <dwmw@amazon.co.uk>
> > > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add
> callbacks
> > > for PM suspend and hibernation
> > >
> > > On Thu, Feb 20, 2020 at 05:01:52PM +0000, Durrant, Paul wrote:
> > > > > > Hopefully what I said above illustrates why it may not be 100%
> > > common.
> > > > >
> > > > > Yes, that's fine. I don't expect it to be 100% common (as I guess
> > > > > that the hooks will have different prototypes), but I expect
> > > > > that routines can be shared, and that the approach taken can be
> the
> > > > > same.
> > > > >
> > > > > For example one necessary difference will be that xenbus initiated
> > > > > suspend won't close the PV connection, in case suspension fails.
> On PM
> > > > > suspend you seem to always close the connection beforehand, so you
> > > > > will always have to re-negotiate on resume even if suspension
> failed.
> > > > >
> > > > > What I'm mostly worried about is the different approach to ring
> > > > > draining. Ie: either xenbus is changed to freeze the queues and
> drain
> > > > > the shared rings, or PM uses the already existing logic of not
> > > > > flushing the rings and re-issuing in-flight requests on resume.
> > > > >
> > > >
> > > > Yes, that needs consideration. I don’t think the same semantic can
> be
> > > suitable for both. E.g. in a xen-suspend we need to freeze with as
> little
> > > processing as possible to avoid dirtying RAM late in the migration
> cycle,
> > > and we know that in-flight data can wait. But in a transition to S4 we
> > > need to make sure that at least all the in-flight blkif requests get
> > > completed, since they probably contain bits of the guest's memory
> image
> > > and that's not going to get saved any other way.
> > >
> > > Thanks, that makes sense and something along these lines should be
> > > added to the commit message IMO.
> > >
> > > Wondering about S4, shouldn't we expect the queues to already be
> > > empty? As any subsystem that wanted to store something to disk should
> > > make sure requests have been successfully completed before
> > > suspending.
> >
> > What about writing the suspend image itself? Normal filesystem I/O
> > will have been flushed of course, but whatever vestigial kernel
> > actually writes out the hibernation file may well expect a final
> > D0->D3 on the storage device to cause a flush.
> 
> Hm, I have no idea really. I think whatever writes to the disk before
> suspend should actually make sure requests have completed, but what
> you suggest might also be a possibility.
> 
> Can you figure out whether there are requests on the ring or in the
> queue before suspending?

Well there's clearly pending stuff in the ring if rsp_prod != req_prod :-) As for internal queues, I don't know how blkfront manages that (or whether it has any pending work queue at all).
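
For illustration only (this is not code from the series, and the helper name is made up): such a check can be built from the standard macros in xen/interface/io/ring.h, e.g.

static bool blkif_ring_drained(struct blkif_front_ring *ring)
{
	/* Every slot is free and there are no unconsumed responses left. */
	return RING_FREE_REQUESTS(ring) == RING_SIZE(ring) &&
	       !RING_HAS_UNCONSUMED_RESPONSES(ring);
}

i.e. the existing req_prod/rsp_cons bookkeeping already has enough information to answer that question.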

  Paul

> 
> > Again, I don't know the specifics for Linux (and Windows actually
> > uses an incarnation of the crash kernel to do the job, which brings
> > with it a whole other set of complexity as far as PV drivers go).
> 
> That seems extremely complex, I'm sure there's a reason for it :).
> 
> Thanks, Roger.


* Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
  2020-02-21 10:33                             ` Durrant, Paul
@ 2020-02-21 11:51                               ` Roger Pau Monné
  0 siblings, 0 replies; 37+ messages in thread
From: Roger Pau Monné @ 2020-02-21 11:51 UTC (permalink / raw)
  To: Durrant, Paul
  Cc: Agarwal, Anchal, Valentin, Eduardo, len.brown, peterz, benh, x86,
	linux-mm, pavel, hpa, tglx, sstabellini, fllinden, Kamata,
	Munehisa, mingo, xen-devel, Singh, Balbir, axboe, konrad.wilk,
	bp, boris.ostrovsky, jgross, netdev, linux-pm, rjw, linux-kernel,
	vkuznets, davem, Woodhouse, David

On Fri, Feb 21, 2020 at 10:33:42AM +0000, Durrant, Paul wrote:
> > -----Original Message-----
> > From: Roger Pau Monné <roger.pau@citrix.com>
> > Sent: 21 February 2020 10:22
> > To: Durrant, Paul <pdurrant@amazon.co.uk>
> > Cc: Agarwal, Anchal <anchalag@amazon.com>; Valentin, Eduardo
> > <eduval@amazon.com>; len.brown@intel.com; peterz@infradead.org;
> > benh@kernel.crashing.org; x86@kernel.org; linux-mm@kvack.org;
> > pavel@ucw.cz; hpa@zytor.com; tglx@linutronix.de; sstabellini@kernel.org;
> > fllinden@amaozn.com; Kamata, Munehisa <kamatam@amazon.com>;
> > mingo@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> > <sblbir@amazon.com>; axboe@kernel.dk; konrad.wilk@oracle.com;
> > bp@alien8.de; boris.ostrovsky@oracle.com; jgross@suse.com;
> > netdev@vger.kernel.org; linux-pm@vger.kernel.org; rjw@rjwysocki.net;
> > linux-kernel@vger.kernel.org; vkuznets@redhat.com; davem@davemloft.net;
> > Woodhouse, David <dwmw@amazon.co.uk>
> > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks
> > for PM suspend and hibernation
> > 
> > On Fri, Feb 21, 2020 at 09:56:54AM +0000, Durrant, Paul wrote:
> > > > -----Original Message-----
> > > > From: Roger Pau Monné <roger.pau@citrix.com>
> > > > Sent: 21 February 2020 09:22
> > > > To: Durrant, Paul <pdurrant@amazon.co.uk>
> > > > Cc: Agarwal, Anchal <anchalag@amazon.com>; Valentin, Eduardo
> > > > <eduval@amazon.com>; len.brown@intel.com; peterz@infradead.org;
> > > > benh@kernel.crashing.org; x86@kernel.org; linux-mm@kvack.org;
> > > > pavel@ucw.cz; hpa@zytor.com; tglx@linutronix.de;
> > sstabellini@kernel.org;
> > > > fllinden@amaozn.com; Kamata, Munehisa <kamatam@amazon.com>;
> > > > mingo@redhat.com; xen-devel@lists.xenproject.org; Singh, Balbir
> > > > <sblbir@amazon.com>; axboe@kernel.dk; konrad.wilk@oracle.com;
> > > > bp@alien8.de; boris.ostrovsky@oracle.com; jgross@suse.com;
> > > > netdev@vger.kernel.org; linux-pm@vger.kernel.org; rjw@rjwysocki.net;
> > > > linux-kernel@vger.kernel.org; vkuznets@redhat.com;
> > davem@davemloft.net;
> > > > Woodhouse, David <dwmw@amazon.co.uk>
> > > > Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add
> > callbacks
> > > > for PM suspend and hibernation
> > > >
> > > > On Thu, Feb 20, 2020 at 05:01:52PM +0000, Durrant, Paul wrote:
> > > > > > > Hopefully what I said above illustrates why it may not be 100%
> > > > common.
> > > > > >
> > > > > > Yes, that's fine. I don't expect it to be 100% common (as I guess
> > > > > > that the hooks will have different prototypes), but I expect
> > > > > > that routines can be shared, and that the approach taken can be
> > the
> > > > > > same.
> > > > > >
> > > > > > For example one necessary difference will be that xenbus initiated
> > > > > > suspend won't close the PV connection, in case suspension fails.
> > On PM
> > > > > > suspend you seem to always close the connection beforehand, so you
> > > > > > will always have to re-negotiate on resume even if suspension
> > failed.
> > > > > >
> > > > > > What I'm mostly worried about is the different approach to ring
> > > > > > draining. Ie: either xenbus is changed to freeze the queues and
> > drain
> > > > > > the shared rings, or PM uses the already existing logic of not
> > > > > > flushing the rings and re-issuing in-flight requests on resume.
> > > > > >
> > > > >
> > > > > Yes, that needs consideration. I don’t think the same semantic can
> > be
> > > > suitable for both. E.g. in a xen-suspend we need to freeze with as
> > little
> > > > processing as possible to avoid dirtying RAM late in the migration
> > cycle,
> > > > and we know that in-flight data can wait. But in a transition to S4 we
> > > > need to make sure that at least all the in-flight blkif requests get
> > > > completed, since they probably contain bits of the guest's memory
> > image
> > > > and that's not going to get saved any other way.
> > > >
> > > > Thanks, that makes sense and something along these lines should be
> > > > added to the commit message IMO.
> > > >
> > > > Wondering about S4, shouldn't we expect the queues to already be
> > > > empty? As any subsystem that wanted to store something to disk should
> > > > make sure requests have been successfully completed before
> > > > suspending.
> > >
> > > What about writing the suspend image itself? Normal filesystem I/O
> > > will have been flushed of course, but whatever vestigial kernel
> > > actually writes out the hibernation file may well expect a final
> > > D0->D3 on the storage device to cause a flush.
> > 
> > Hm, I have no idea really. I think whatever writes to the disk before
> > suspend should actually make sure requests have completed, but what
> > you suggest might also be a possibility.
> > 
> > Can you figure out whether there are requests on the ring or in the
> > queue before suspending?
> 
> Well there's clearly pending stuff in the ring if rsp_prod != req_prod :-)

Right, I assume there's no document that states what's the expected
state for queues &c when switching PM states, so we have to assume
that there might be in-flight requests on the ring and in the driver
queues.

> As for internal queues, I don't know how blkfront manages that (or whether it has any pending work queue at all).

There are no internal queues, just the generic ones from blk_mq which
every block device has IIRC.

Thanks, Roger.


* Re: [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
  2020-02-14 23:25 ` [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation Anchal Agarwal
  2020-02-17 10:05   ` Roger Pau Monné
@ 2020-02-21 14:24   ` Roger Pau Monné
  2020-03-06 18:40     ` Anchal Agarwal
  1 sibling, 1 reply; 37+ messages in thread
From: Roger Pau Monné @ 2020-02-21 14:24 UTC (permalink / raw)
  To: Anchal Agarwal
  Cc: tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, axboe, davem, rjw,
	len.brown, pavel, peterz, eduval, sblbir, xen-devel, vkuznets,
	netdev, linux-kernel, dwmw, fllinden, benh

On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> From: Munehisa Kamata <kamatam@amazon.com>
> 
> Add freeze, thaw and restore callbacks for PM suspend and hibernation
> support. All frontend drivers that need to use PM_HIBERNATION/PM_SUSPEND
> events need to implement these xenbus_driver callbacks.
> The freeze handler stops the block-layer queue and disconnects the
> frontend from the backend while freeing ring_info and associated resources.
> The restore handler re-allocates ring_info and re-connects to the
> backend, so the rest of the kernel can continue to use the block device
> transparently. Also, the handlers are used for both PM suspend and
> hibernation so that we can keep the existing suspend/resume callbacks for
> Xen suspend without modification. Before disconnecting from the backend,
> we need to prevent any new IO from being queued and wait for existing
> IO to complete. Freeze/unfreeze of the queues will guarantee that there
> are no requests in use on the shared ring.
> 
> Note: For older backends, if a backend doesn't have commit '12ea729645ace'
> ("xen/blkback: unmap all persistent grants when frontend gets disconnected"),
> the frontend may see a massive amount of grant table warnings when freeing
> resources.
> [   36.852659] deferring g.e. 0xf9 (pfn 0xffffffffffffffff)
> [   36.855089] xen:grant_table: WARNING:e.g. 0x112 still in use!
> 
> In this case, persistent grants would need to be disabled.
> 
> [Anchal Changelog: Removed timeout/request during blkfront freeze.
> Fixed major part of the code to work with blk-mq]
> Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
> Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
> ---
>  drivers/block/xen-blkfront.c | 119 ++++++++++++++++++++++++++++++++---
>  1 file changed, 112 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index 478120233750..d715ed3cb69a 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -47,6 +47,8 @@
>  #include <linux/bitmap.h>
>  #include <linux/list.h>
>  #include <linux/workqueue.h>
> +#include <linux/completion.h>
> +#include <linux/delay.h>
>  
>  #include <xen/xen.h>
>  #include <xen/xenbus.h>
> @@ -79,6 +81,8 @@ enum blkif_state {
>  	BLKIF_STATE_DISCONNECTED,
>  	BLKIF_STATE_CONNECTED,
>  	BLKIF_STATE_SUSPENDED,
> +	BLKIF_STATE_FREEZING,
> +	BLKIF_STATE_FROZEN
>  };
>  
>  struct grant {
> @@ -220,6 +224,7 @@ struct blkfront_info
>  	struct list_head requests;
>  	struct bio_list bio_list;
>  	struct list_head info_list;
> +	struct completion wait_backend_disconnected;
>  };
>  
>  static unsigned int nr_minors;
> @@ -261,6 +266,7 @@ static DEFINE_SPINLOCK(minor_lock);
>  static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo);
>  static void blkfront_gather_backend_features(struct blkfront_info *info);
>  static int negotiate_mq(struct blkfront_info *info);
> +static void __blkif_free(struct blkfront_info *info);

I'm not particularly fond of adding underscore prefixes to functions,
I would rather use a more descriptive name if possible.
blkif_free_{queues/rings} maybe?

>  
>  static int get_id_from_freelist(struct blkfront_ring_info *rinfo)
>  {
> @@ -995,6 +1001,7 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size,
>  	info->sector_size = sector_size;
>  	info->physical_sector_size = physical_sector_size;
>  	blkif_set_queue_limits(info);
> +	init_completion(&info->wait_backend_disconnected);
>  
>  	return 0;
>  }
> @@ -1218,6 +1225,8 @@ static void xlvbd_release_gendisk(struct blkfront_info *info)
>  /* Already hold rinfo->ring_lock. */
>  static inline void kick_pending_request_queues_locked(struct blkfront_ring_info *rinfo)
>  {
> +	if (unlikely(rinfo->dev_info->connected == BLKIF_STATE_FREEZING))
> +		return;

Do you really need this check here?

The queue will be frozen and quiesced in blkfront_freeze when the state
is set to BLKIF_STATE_FREEZING, and then the call to
blk_mq_start_stopped_hw_queues is just a noop as long as the queue is
quiesced (see blk_mq_run_hw_queue).

>  	if (!RING_FULL(&rinfo->ring))
>  		blk_mq_start_stopped_hw_queues(rinfo->dev_info->rq, true);
>  }
> @@ -1341,8 +1350,6 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo)
>  
>  static void blkif_free(struct blkfront_info *info, int suspend)
>  {
> -	unsigned int i;
> -
>  	/* Prevent new requests being issued until we fix things up. */
>  	info->connected = suspend ?
>  		BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
> @@ -1350,6 +1357,13 @@ static void blkif_free(struct blkfront_info *info, int suspend)
>  	if (info->rq)
>  		blk_mq_stop_hw_queues(info->rq);
>  
> +	__blkif_free(info);
> +}
> +
> +static void __blkif_free(struct blkfront_info *info)
> +{
> +	unsigned int i;
> +
>  	for (i = 0; i < info->nr_rings; i++)
>  		blkif_free_ring(&info->rinfo[i]);
>  
> @@ -1553,8 +1567,10 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
>  	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
>  	struct blkfront_info *info = rinfo->dev_info;
>  
> -	if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
> -		return IRQ_HANDLED;
> +	if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) {
> +		if (info->connected != BLKIF_STATE_FREEZING)

Please fold this into the previous if condition:

if (unlikely(info->connected != BLKIF_STATE_CONNECTED &&
             info->connected != BLKIF_STATE_FREEZING))
	return IRQ_HANDLED;

> +	}
>  
>  	spin_lock_irqsave(&rinfo->ring_lock, flags);
>   again:
> @@ -2020,6 +2036,7 @@ static int blkif_recover(struct blkfront_info *info)
>  	struct bio *bio;
>  	unsigned int segs;
>  
> +	bool frozen = info->connected == BLKIF_STATE_FROZEN;

Please place this together with the rest of the local variable
declarations.

>  	blkfront_gather_backend_features(info);
>  	/* Reset limits changed by blk_mq_update_nr_hw_queues(). */
>  	blkif_set_queue_limits(info);
> @@ -2046,6 +2063,9 @@ static int blkif_recover(struct blkfront_info *info)
>  		kick_pending_request_queues(rinfo);
>  	}
>  
> +	if (frozen)
> +		return 0;

I have to admit my memory is fuzzy here, but don't you need to
re-queue requests in case the backend has different limits of indirect
descriptors per request for example?

Or do we expect that the frontend is always going to be resumed on the
same backend, and thus features won't change?

> +
>  	list_for_each_entry_safe(req, n, &info->requests, queuelist) {
>  		/* Requeue pending requests (flush or discard) */
>  		list_del_init(&req->queuelist);
> @@ -2359,6 +2379,7 @@ static void blkfront_connect(struct blkfront_info *info)
>  
>  		return;
>  	case BLKIF_STATE_SUSPENDED:
> +	case BLKIF_STATE_FROZEN:
>  		/*
>  		 * If we are recovering from suspension, we need to wait
>  		 * for the backend to announce it's features before
> @@ -2476,12 +2497,37 @@ static void blkback_changed(struct xenbus_device *dev,
>  		break;
>  
>  	case XenbusStateClosed:
> -		if (dev->state == XenbusStateClosed)
> +		if (dev->state == XenbusStateClosed) {
> +			if (info->connected == BLKIF_STATE_FREEZING) {
> +				__blkif_free(info);
> +				info->connected = BLKIF_STATE_FROZEN;
> +				complete(&info->wait_backend_disconnected);
> +				break;
> +			}
> +
>  			break;
> +		}
> +
> +		/*
> +		 * We may somehow receive the backend's Closed state again while
> +		 * thawing or restoring, which causes thawing or restoring to fail.
> +		 * Ignore such an unexpected state anyway.
> +		 */
> +		if (info->connected == BLKIF_STATE_FROZEN &&
> +				dev->state == XenbusStateInitialised) {

I'm not sure you need the extra dev->state == XenbusStateInitialised.
If the frotnend is in state BLKIF_STATE_FROZEN you can likely ignore
the notification of the backend switched to closed state, regardless
of the frontend state?

> +			dev_dbg(&dev->dev,
> +					"ignore the backend's Closed state: %s",
> +					dev->nodename);
> +			break;
> +		}
>  		/* fall through */
>  	case XenbusStateClosing:
> -		if (info)
> -			blkfront_closing(info);
> +		if (info) {
> +			if (info->connected == BLKIF_STATE_FREEZING)
> +				xenbus_frontend_closed(dev);
> +			else
> +				blkfront_closing(info);
> +		}
>  		break;
>  	}
>  }
> @@ -2625,6 +2671,62 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
>  	mutex_unlock(&blkfront_mutex);
>  }
>  
> +static int blkfront_freeze(struct xenbus_device *dev)
> +{
> +	unsigned int i;
> +	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> +	struct blkfront_ring_info *rinfo;
> +	/* This would be reasonable timeout as used in xenbus_dev_shutdown() */
> +	unsigned int timeout = 5 * HZ;
> +	int err = 0;
> +
> +	info->connected = BLKIF_STATE_FREEZING;
> +
> +	blk_mq_freeze_queue(info->rq);
> +	blk_mq_quiesce_queue(info->rq);

Don't you need to also drain the queue and make sure it's empty?

> +
> +	for (i = 0; i < info->nr_rings; i++) {
> +		rinfo = &info->rinfo[i];
> +
> +		gnttab_cancel_free_callback(&rinfo->callback);
> +		flush_work(&rinfo->work);
> +	}
> +
> +	/* Kick the backend to disconnect */
> +	xenbus_switch_state(dev, XenbusStateClosing);
> +
> +	/*
> +	 * We don't want to move forward before the frontend is disconnected
> +	 * from the backend cleanly.
> +	 */
> +	timeout = wait_for_completion_timeout(&info->wait_backend_disconnected,
> +					      timeout);
> +	if (!timeout) {
> +		err = -EBUSY;
> +		xenbus_dev_error(dev, err, "Freezing timed out; "
> +				 "the device may be left in an inconsistent state");
> +	}
> +
> +	return err;
> +}
> +
> +static int blkfront_restore(struct xenbus_device *dev)
> +{
> +	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> +	int err = 0;
> +
> +	err = talk_to_blkback(dev, info);
> +	blk_mq_unquiesce_queue(info->rq);
> +	blk_mq_unfreeze_queue(info->rq);
> +
> +	if (err)
> +		goto out;

There's no need for an out label here, just return err, or even
simpler:

if (!err)
	blk_mq_update_nr_hw_queues(&info->tag_set, info->nr_rings);

return err;

Thanks, Roger.


* Re: [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
  2020-02-21 14:24   ` Roger Pau Monné
@ 2020-03-06 18:40     ` Anchal Agarwal
  2020-03-09  9:54       ` Roger Pau Monné
  0 siblings, 1 reply; 37+ messages in thread
From: Anchal Agarwal @ 2020-03-06 18:40 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, axboe, davem, rjw,
	len.brown, pavel, peterz, eduval, sblbir, xen-devel, vkuznets,
	netdev, linux-kernel, dwmw, fllinden, benh

On Fri, Feb 21, 2020 at 03:24:45PM +0100, Roger Pau Monné wrote:
> On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> > From: Munehisa Kamata <kamatam@amazon.com>
> > 
> > Add freeze, thaw and restore callbacks for PM suspend and hibernation
> > support. All frontend drivers that need to use PM_HIBERNATION/PM_SUSPEND
> > events need to implement these xenbus_driver callbacks.
> > The freeze handler stops the block-layer queue and disconnects the
> > frontend from the backend while freeing ring_info and associated resources.
> > The restore handler re-allocates ring_info and re-connects to the
> > backend, so the rest of the kernel can continue to use the block device
> > transparently. Also, the handlers are used for both PM suspend and
> > hibernation so that we can keep the existing suspend/resume callbacks for
> > Xen suspend without modification. Before disconnecting from the backend,
> > we need to prevent any new IO from being queued and wait for existing
> > IO to complete. Freeze/unfreeze of the queues will guarantee that there
> > are no requests in use on the shared ring.
> > 
> > Note: For older backends, if a backend doesn't have commit '12ea729645ace'
> > ("xen/blkback: unmap all persistent grants when frontend gets disconnected"),
> > the frontend may see a massive amount of grant table warnings when freeing
> > resources.
> > [   36.852659] deferring g.e. 0xf9 (pfn 0xffffffffffffffff)
> > [   36.855089] xen:grant_table: WARNING:e.g. 0x112 still in use!
> > 
> > In this case, persistent grants would need to be disabled.
> > 
> > [Anchal Changelog: Removed timeout/request during blkfront freeze.
> > Fixed major part of the code to work with blk-mq]
> > Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
> > Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
> > ---
> >  drivers/block/xen-blkfront.c | 119 ++++++++++++++++++++++++++++++++---
> >  1 file changed, 112 insertions(+), 7 deletions(-)
> > 
> > diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> > index 478120233750..d715ed3cb69a 100644
> > --- a/drivers/block/xen-blkfront.c
> > +++ b/drivers/block/xen-blkfront.c
> > @@ -47,6 +47,8 @@
> >  #include <linux/bitmap.h>
> >  #include <linux/list.h>
> >  #include <linux/workqueue.h>
> > +#include <linux/completion.h>
> > +#include <linux/delay.h>
> >  
> >  #include <xen/xen.h>
> >  #include <xen/xenbus.h>
> > @@ -79,6 +81,8 @@ enum blkif_state {
> >  	BLKIF_STATE_DISCONNECTED,
> >  	BLKIF_STATE_CONNECTED,
> >  	BLKIF_STATE_SUSPENDED,
> > +	BLKIF_STATE_FREEZING,
> > +	BLKIF_STATE_FROZEN
> >  };
> >  
> >  struct grant {
> > @@ -220,6 +224,7 @@ struct blkfront_info
> >  	struct list_head requests;
> >  	struct bio_list bio_list;
> >  	struct list_head info_list;
> > +	struct completion wait_backend_disconnected;
> >  };
> >  
> >  static unsigned int nr_minors;
> > @@ -261,6 +266,7 @@ static DEFINE_SPINLOCK(minor_lock);
> >  static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo);
> >  static void blkfront_gather_backend_features(struct blkfront_info *info);
> >  static int negotiate_mq(struct blkfront_info *info);
> > +static void __blkif_free(struct blkfront_info *info);
> 
> I'm not particularly fond of adding underscore prefixes to functions,
> I would rather use a more descriptive name if possible.
> blkif_free_{queues/rings} maybe?
>
Apologies for the delayed response as I was out of the office.
I appreciate your feedback. Will fix.
> >  
> >  static int get_id_from_freelist(struct blkfront_ring_info *rinfo)
> >  {
> > @@ -995,6 +1001,7 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size,
> >  	info->sector_size = sector_size;
> >  	info->physical_sector_size = physical_sector_size;
> >  	blkif_set_queue_limits(info);
> > +	init_completion(&info->wait_backend_disconnected);
> >  
> >  	return 0;
> >  }
> > @@ -1218,6 +1225,8 @@ static void xlvbd_release_gendisk(struct blkfront_info *info)
> >  /* Already hold rinfo->ring_lock. */
> >  static inline void kick_pending_request_queues_locked(struct blkfront_ring_info *rinfo)
> >  {
> > +	if (unlikely(rinfo->dev_info->connected == BLKIF_STATE_FREEZING))
> > +		return;
> 
> Do you really need this check here?
> 
> The queue will be frozen and quiesced in blkfront_freeze when the state
> is set to BLKIF_STATE_FREEZING, and then the call to
> blk_mq_start_stopped_hw_queues is just a noop as long as the queue is
> quiesced (see blk_mq_run_hw_queue).
> 
You are right. Will fix it. May have skipped this part of the patch when fixing
blkfront_freeze.
> >  	if (!RING_FULL(&rinfo->ring))
> >  		blk_mq_start_stopped_hw_queues(rinfo->dev_info->rq, true);
> >  }
> > @@ -1341,8 +1350,6 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo)
> >  
> >  static void blkif_free(struct blkfront_info *info, int suspend)
> >  {
> > -	unsigned int i;
> > -
> >  	/* Prevent new requests being issued until we fix things up. */
> >  	info->connected = suspend ?
> >  		BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
> > @@ -1350,6 +1357,13 @@ static void blkif_free(struct blkfront_info *info, int suspend)
> >  	if (info->rq)
> >  		blk_mq_stop_hw_queues(info->rq);
> >  
> > +	__blkif_free(info);
> > +}
> > +
> > +static void __blkif_free(struct blkfront_info *info)
> > +{
> > +	unsigned int i;
> > +
> >  	for (i = 0; i < info->nr_rings; i++)
> >  		blkif_free_ring(&info->rinfo[i]);
> >  
> > @@ -1553,8 +1567,10 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
> >  	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
> >  	struct blkfront_info *info = rinfo->dev_info;
> >  
> > -	if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
> > -		return IRQ_HANDLED;
> > +	if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) {
> > +		if (info->connected != BLKIF_STATE_FREEZING)
> 
> Please fold this into the previous if condition:
> 
> if (unlikely(info->connected != BLKIF_STATE_CONNECTED &&
>              info->connected != BLKIF_STATE_FREEZING))
> 	return IRQ_HANDLED;
>
ACK
> > +	}
> >  
> >  	spin_lock_irqsave(&rinfo->ring_lock, flags);
> >   again:
> > @@ -2020,6 +2036,7 @@ static int blkif_recover(struct blkfront_info *info)
> >  	struct bio *bio;
> >  	unsigned int segs;
> >  
> > +	bool frozen = info->connected == BLKIF_STATE_FROZEN;
> 
> Please place this together with the rest of the local variable
> declarations.
> 
ACK
> >  	blkfront_gather_backend_features(info);
> >  	/* Reset limits changed by blk_mq_update_nr_hw_queues(). */
> >  	blkif_set_queue_limits(info);
> > @@ -2046,6 +2063,9 @@ static int blkif_recover(struct blkfront_info *info)
> >  		kick_pending_request_queues(rinfo);
> >  	}
> >  
> > +	if (frozen)
> > +		return 0;
> 
> I have to admit my memory is fuzzy here, but don't you need to
> re-queue requests in case the backend has different limits of indirect
> descriptors per request for example?
> 
> Or do we expect that the frontend is always going to be resumed on the
> same backend, and thus features won't change?
> 
So to understand your question better here, AFAIU the maximum number of indirect 
grefs is fixed by the backend, but the frontend can issue requests with any 
number of indirect segments as long as it's less than the number provided by 
the backend. So by your question, do you mean this max number of MAX_INDIRECT_SEGMENTS 
(256) on the backend can change?
> > +
> >  	list_for_each_entry_safe(req, n, &info->requests, queuelist) {
> >  		/* Requeue pending requests (flush or discard) */
> >  		list_del_init(&req->queuelist);
> > @@ -2359,6 +2379,7 @@ static void blkfront_connect(struct blkfront_info *info)
> >  
> >  		return;
> >  	case BLKIF_STATE_SUSPENDED:
> > +	case BLKIF_STATE_FROZEN:
> >  		/*
> >  		 * If we are recovering from suspension, we need to wait
> >  		 * for the backend to announce it's features before
> > @@ -2476,12 +2497,37 @@ static void blkback_changed(struct xenbus_device *dev,
> >  		break;
> >  
> >  	case XenbusStateClosed:
> > -		if (dev->state == XenbusStateClosed)
> > +		if (dev->state == XenbusStateClosed) {
> > +			if (info->connected == BLKIF_STATE_FREEZING) {
> > +				__blkif_free(info);
> > +				info->connected = BLKIF_STATE_FROZEN;
> > +				complete(&info->wait_backend_disconnected);
> > +				break;
> > +			}
> > +
> >  			break;
> > +		}
> > +
> > +		/*
> > +		 * We may somehow receive the backend's Closed state again while
> > +		 * thawing or restoring, which causes thawing or restoring to fail.
> > +		 * Ignore such an unexpected state anyway.
> > +		 */
> > +		if (info->connected == BLKIF_STATE_FROZEN &&
> > +				dev->state == XenbusStateInitialised) {
> 
> I'm not sure you need the extra dev->state == XenbusStateInitialised.
> If the frotnend is in state BLKIF_STATE_FROZEN you can likely ignore
> the notification of the backend switched to closed state, regardless
> of the frontend state?
> 
I see. Sounds plausible; I will do my set of testing and make sure it does
not break anything.
> > +			dev_dbg(&dev->dev,
> > +					"ignore the backend's Closed state: %s",
> > +					dev->nodename);
> > +			break;
> > +		}
> >  		/* fall through */
> >  	case XenbusStateClosing:
> > -		if (info)
> > -			blkfront_closing(info);
> > +		if (info) {
> > +			if (info->connected == BLKIF_STATE_FREEZING)
> > +				xenbus_frontend_closed(dev);
> > +			else
> > +				blkfront_closing(info);
> > +		}
> >  		break;
> >  	}
> >  }
> > @@ -2625,6 +2671,62 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
> >  	mutex_unlock(&blkfront_mutex);
> >  }
> >  
> > +static int blkfront_freeze(struct xenbus_device *dev)
> > +{
> > +	unsigned int i;
> > +	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> > +	struct blkfront_ring_info *rinfo;
> > +	/* This would be reasonable timeout as used in xenbus_dev_shutdown() */
> > +	unsigned int timeout = 5 * HZ;
> > +	int err = 0;
> > +
> > +	info->connected = BLKIF_STATE_FREEZING;
> > +
> > +	blk_mq_freeze_queue(info->rq);
> > +	blk_mq_quiesce_queue(info->rq);
> 
> Don't you need to also drain the queue and make sure it's empty?
> 
blk_mq_freeze_queue and blk_mq_quiesce_queue should take care of running HW queues synchronously
and making sure all the ongoing dispatches have finished. Did I understand your question right?
> > +
> > +	for (i = 0; i < info->nr_rings; i++) {
> > +		rinfo = &info->rinfo[i];
> > +
> > +		gnttab_cancel_free_callback(&rinfo->callback);
> > +		flush_work(&rinfo->work);
> > +	}
> > +
> > +	/* Kick the backend to disconnect */
> > +	xenbus_switch_state(dev, XenbusStateClosing);
> > +
> > +	/*
> > +	 * We don't want to move forward before the frontend is disconnected
> > +	 * from the backend cleanly.
> > +	 */
> > +	timeout = wait_for_completion_timeout(&info->wait_backend_disconnected,
> > +					      timeout);
> > +	if (!timeout) {
> > +		err = -EBUSY;
> > +		xenbus_dev_error(dev, err, "Freezing timed out; "
> > +				 "the device may be left in an inconsistent state");
> > +	}
> > +
> > +	return err;
> > +}
> > +
> > +static int blkfront_restore(struct xenbus_device *dev)
> > +{
> > +	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> > +	int err = 0;
> > +
> > +	err = talk_to_blkback(dev, info);
> > +	blk_mq_unquiesce_queue(info->rq);
> > +	blk_mq_unfreeze_queue(info->rq);
> > +
> > +	if (err)
> > +		goto out;
> 
> There's no need for an out label here, just return err, or even
> simpler:
> 
ok.
> if (!err)
> 	blk_mq_update_nr_hw_queues(&info->tag_set, info->nr_rings);
> 
> return err;
> 
> Thanks, Roger.
>
Thanks,
Anchal


* Re: [RFC PATCH v3 07/12] genirq: Shutdown irq chips in suspend/resume during hibernation
  2020-02-14 23:25 ` [RFC PATCH v3 07/12] genirq: Shutdown irq chips in suspend/resume during hibernation Anchal Agarwal
@ 2020-03-06 23:03   ` Thomas Gleixner
  2020-03-09 22:37     ` [EXTERNAL][RFC " Anchal Agarwal
  0 siblings, 1 reply; 37+ messages in thread
From: Thomas Gleixner @ 2020-03-06 23:03 UTC (permalink / raw)
  To: Anchal Agarwal, mingo, bp, hpa, x86, boris.ostrovsky, jgross,
	linux-pm, linux-mm, kamatam, sstabellini, konrad.wilk, roger.pau,
	axboe, davem, rjw, len.brown, pavel, peterz, eduval, sblbir,
	anchalag, xen-devel, vkuznets, netdev, linux-kernel, dwmw,
	fllinden, benh

Anchal Agarwal <anchalag@amazon.com> writes:

> There are no pm handlers for the legacy devices, so during tear down
> stale event channel <> IRQ mapping may still remain in the image and
> resume may fail. To avoid adding much code by implementing handlers for
> legacy devices, add a new irq_chip flag IRQCHIP_SHUTDOWN_ON_SUSPEND which
> when enabled on an irq-chip e.g xen-pirq, it will let core suspend/resume
> irq code to shutdown and restart the active irqs. PM suspend/hibernation
> code will rely on this.
> Without this, in PM hibernation, information about the event channel
> remains in hibernation image, but there is no guarantee that the same
> event channel numbers are assigned to the devices when restoring the
> system. This may cause conflict like the following and prevent some
> devices from being restored correctly.

The above is just an agglomeration of words and acronyms and some of
these sentences do not even make sense. Anyone who is not aware of event
channels and whatever XENisms you talk about will be entirely
confused. Changelogs really need to be understandable for mere mortals
and there is no space restriction so acronyms can be written out.

Something like this:

  Many legacy device drivers do not implement power management (PM)
  functions which means that interrupts requested by these drivers stay
  in active state when the kernel is hibernated.

  This does not matter on bare metal and on most hypervisors because the
  interrupt is restored on resume without any noticable side effects as
  it stays connected to the same physical or virtual interrupt line.

  The XEN interrupt mechanism is different as it maintains a mapping
  between the Linux interrupt number and a XEN event channel. If the
  interrupt stays active on hibernation this mapping is preserved but
  there is unfortunately no guarantee that on resume the same event
  channels are reassigned to these devices. This can result in event
  channel conflicts which prevent the affected devices from being
  restored correctly.

  One way to solve this would be to add the necessary power management
  functions to all affected legacy device drivers, but that's a
  questionable effort which does not provide any benefits on non-XEN
  environments.

  The least intrusive and most efficient solution is to provide a
  mechanism which allows the core interrupt code to tear down these
  interrupts on hibernation and bring them back up again on resume. This
  allows the XEN event channel mechanism to assign an arbitrary event
  channel on resume without affecting the functionality of these
  devices.
  
  Fortunately all these device interrupts are handled by a dedicated XEN
  interrupt chip so the chip can be marked that all interrupts connected
  to it are handled this way. This is pretty much in line with the other
  interrupt chip specific quirks, e.g. IRQCHIP_MASK_ON_SUSPEND.

  Add a new quirk flag IRQCHIP_SHUTDOWN_ON_SUSPEND and add support for
  it in the core interrupt suspend/resume paths.

Hmm?
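
A sketch, purely for illustration and not necessarily the exact hunks of the patch (irq_shutdown(), irq_startup(), __disable_irq() and __enable_irq() are the existing helpers in kernel/irq/): in suspend_device_irq()

	if (irq_desc_get_chip(desc)->flags & IRQCHIP_SHUTDOWN_ON_SUSPEND)
		irq_shutdown(desc);
	else
		__disable_irq(desc);

and in resume_irq()

	if (irq_desc_get_chip(desc)->flags & IRQCHIP_SHUTDOWN_ON_SUSPEND)
		irq_startup(desc, IRQ_RESEND, IRQ_START_FORCE);
	else
		__enable_irq(desc);

so the event channel binding is torn down completely on suspend and a fresh one can be set up on resume.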

> Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
> Suggested-by: Thomas Gleixner <tglx@linutronix.de>

Not that I care much, but now that I've written both the patch and the
changelog you might change that attribution slightly. For completeness
sake:

 Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

Thanks,

        tglx


* Re: [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
  2020-03-06 18:40     ` Anchal Agarwal
@ 2020-03-09  9:54       ` Roger Pau Monné
       [not found]         ` <FA688A68-5372-4757-B075-A69A45671CB9@amazon.com>
  0 siblings, 1 reply; 37+ messages in thread
From: Roger Pau Monné @ 2020-03-09  9:54 UTC (permalink / raw)
  To: Anchal Agarwal
  Cc: tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, axboe, davem, rjw,
	len.brown, pavel, peterz, eduval, sblbir, xen-devel, vkuznets,
	netdev, linux-kernel, dwmw, fllinden, benh

On Fri, Mar 06, 2020 at 06:40:33PM +0000, Anchal Agarwal wrote:
> On Fri, Feb 21, 2020 at 03:24:45PM +0100, Roger Pau Monné wrote:
> > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> > >  	blkfront_gather_backend_features(info);
> > >  	/* Reset limits changed by blk_mq_update_nr_hw_queues(). */
> > >  	blkif_set_queue_limits(info);
> > > @@ -2046,6 +2063,9 @@ static int blkif_recover(struct blkfront_info *info)
> > >  		kick_pending_request_queues(rinfo);
> > >  	}
> > >  
> > > +	if (frozen)
> > > +		return 0;
> > 
> > I have to admit my memory is fuzzy here, but don't you need to
> > re-queue requests in case the backend has different limits of indirect
> > descriptors per request for example?
> > 
> > Or do we expect that the frontend is always going to be resumed on the
> > same backend, and thus features won't change?
> > 
> So to understand your question better here, AFAIU the  maximum number of indirect 
> grefs is fixed by the backend, but the frontend can issue requests with any 
> number of indirect segments as long as it's less than the number provided by 
> the backend. So by your question you mean this max number of MAX_INDIRECT_SEGMENTS 
> 256 on backend can change ?

Yes, number of indirect descriptors supported by the backend can
change, because you moved to a different backend, or because the
maximum supported by the backend has changed. It's also possible to
resume on a backend that has no indirect descriptors support at all.
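
As a rough illustration, this mirrors what blkfront_gather_backend_features()
ends up doing on reconnect (details simplified):

	unsigned int indirect_segments;

	indirect_segments = xenbus_read_unsigned(info->xbdev->otherend,
						 "feature-max-indirect-segments", 0);
	info->max_indirect_segments = min(indirect_segments,
					  xen_blkif_max_segments);

The limit is re-read from xenstore on every reconnection, so it can
legitimately differ from whatever was negotiated before the suspend.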

> > > @@ -2625,6 +2671,62 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
> > >  	mutex_unlock(&blkfront_mutex);
> > >  }
> > >  
> > > +static int blkfront_freeze(struct xenbus_device *dev)
> > > +{
> > > +	unsigned int i;
> > > +	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> > > +	struct blkfront_ring_info *rinfo;
> > > +	/* This would be reasonable timeout as used in xenbus_dev_shutdown() */
> > > +	unsigned int timeout = 5 * HZ;
> > > +	int err = 0;
> > > +
> > > +	info->connected = BLKIF_STATE_FREEZING;
> > > +
> > > +	blk_mq_freeze_queue(info->rq);
> > > +	blk_mq_quiesce_queue(info->rq);
> > 
> > Don't you need to also drain the queue and make sure it's empty?
> > 
> blk_mq_freeze_queue and blk_mq_quiesce_queue should take care of running HW queues synchronously
> and making sure all the ongoing dispatches have finished. Did I understand your question right?

Can you please add some check to that end? (ie: that there are no
pending requests on any queue?)

Thanks, Roger.


* Re: [EXTERNAL][RFC PATCH v3 07/12] genirq: Shutdown irq chips in suspend/resume during hibernation
  2020-03-06 23:03   ` Thomas Gleixner
@ 2020-03-09 22:37     ` Anchal Agarwal
  0 siblings, 0 replies; 37+ messages in thread
From: Anchal Agarwal @ 2020-03-09 22:37 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm, linux-mm,
	kamatam, sstabellini, konrad.wilk, roger.pau, axboe, davem, rjw,
	len.brown, pavel, peterz, eduval, sblbir, xen-devel, vkuznets,
	netdev, linux-kernel, dwmw, fllinden, benh

On Sat, Mar 07, 2020 at 12:03:52AM +0100, Thomas Gleixner wrote:
> 
> Anchal Agarwal <anchalag@amazon.com> writes:
> 
> > There are no pm handlers for the legacy devices, so during tear down
> > stale event channel <> IRQ mapping may still remain in the image and
> > resume may fail. To avoid adding much code by implementing handlers for
> > legacy devices, add a new irq_chip flag IRQCHIP_SHUTDOWN_ON_SUSPEND which
> > when enabled on an irq-chip e.g xen-pirq, it will let core suspend/resume
> > irq code to shutdown and restart the active irqs. PM suspend/hibernation
> > code will rely on this.
> > Without this, in PM hibernation, information about the event channel
> > remains in hibernation image, but there is no guarantee that the same
> > event channel numbers are assigned to the devices when restoring the
> > system. This may cause conflict like the following and prevent some
> > devices from being restored correctly.
> 
> The above is just an agglomeration of words and acronyms and some of
> these sentences do not even make sense. Anyone who is not aware of event
> channels and whatever XENisms you talk about will be entirely
> confused. Changelogs really need to be understandable for mere mortals
> and there is no space restriction so acronyms can be written out.
> 
I don't understand what does not make sense here. Of course the one you
described is more elaborate and explanatory, and I agree; I just wrote a short
one from the perspective of PM hibernation related to Xen domU.
All I explained was why the teardown is needed, what the solution is, and
what will happen if we do not clear those mappings.
> Something like this:
> 
>   Many legacy device drivers do not implement power management (PM)
>   functions which means that interrupts requested by these drivers stay
>   in active state when the kernel is hibernated.
> 
>   This does not matter on bare metal and on most hypervisors because the
>   interrupt is restored on resume without any noticable side effects as
>   it stays connected to the same physical or virtual interrupt line.
> 
>   The XEN interrupt mechanism is different as it maintains a mapping
>   between the Linux interrupt number and a XEN event channel. If the
>   interrupt stays active on hibernation this mapping is preserved but
>   there is unfortunately no guarantee that on resume the same event
>   channels are reassigned to these devices. This can result in event
>   channel conflicts which prevent the affected devices from being
>   restored correctly.
> 
>   One way to solve this would be to add the necessary power management
>   functions to all affected legacy device drivers, but that's a
>   questionable effort which does not provide any benefits on non-XEN
>   environments.
> 
>   The least intrusive and most efficient solution is to provide a
>   mechanism which allows the core interrupt code to tear down these
>   interrupts on hibernation and bring them back up again on resume. This
>   allows the XEN event channel mechanism to assign an arbitrary event
>   channel on resume without affecting the functionality of these
>   devices.
> 
>   Fortunately all these device interrupts are handled by a dedicated XEN
>   interrupt chip so the chip can be marked that all interrupts connected
>   to it are handled this way. This is pretty much in line with the other
>   interrupt chip specific quirks, e.g. IRQCHIP_MASK_ON_SUSPEND.
> 
>   Add a new quirk flag IRQCHIP_SHUTDOWN_ON_SUSPEND and add support for
>   it in the core interrupt suspend/resume paths.
> 
> Hmm?
> 
Sure.
> > Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
> > Suggested-by: Thomas Gleixner <tglx@linutronix.de>
> 
> Not that I care much, but now that I've written both the patch and the
> changelog you might change that attribution slightly. For completeness
> sake:
> 
Why not. That's mandated now :)
>  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> 
> Thanks,
> 
>         tglx
Thanks,
Anchal


* Re: [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
       [not found]           ` <20200312090435.GK24449@Air-de-Roger.citrite.net>
@ 2020-03-13 17:21             ` Anchal Agarwal
  0 siblings, 0 replies; 37+ messages in thread
From: Anchal Agarwal @ 2020-03-13 17:21 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, roger.pau, axboe,
	davem, rjw, len.brown, pavel, peterz, eduval, sblbir, anchalag,
	xen-devel, vkuznets, netdev, linux-kernel, dwmw, fllinden, benh

On Thu, Mar 12, 2020 at 10:04:35AM +0100, Roger Pau Monné wrote:
> 
> On Wed, Mar 11, 2020 at 10:25:15PM +0000, Agarwal, Anchal wrote:
> > Hi Roger,
> > I am trying to understand your comments on indirect descriptors, and I wanted to do that without polluting the mailing list, hence emailing you personally.
> 
> IMO it's better to send to the mailing list. The issues or questions
> you have about indirect descriptors can be helpful to others in the
> future. If there's no confidential information please send to the
> list next time.
> 
> Feel free to forward this reply to the list also.
>
Sure no problem at all.
> > Hope that's ok by you.  Please see my response inline.
> >
> >     On Fri, Mar 06, 2020 at 06:40:33PM +0000, Anchal Agarwal wrote:
> >     > On Fri, Feb 21, 2020 at 03:24:45PM +0100, Roger Pau Monné wrote:
> >     > > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> >     > > >   blkfront_gather_backend_features(info);
> >     > > >   /* Reset limits changed by blk_mq_update_nr_hw_queues(). */
> >     > > >   blkif_set_queue_limits(info);
> >     > > > @@ -2046,6 +2063,9 @@ static int blkif_recover(struct blkfront_info *info)
> >     > > >           kick_pending_request_queues(rinfo);
> >     > > >   }
> >     > > >
> >     > > > + if (frozen)
> >     > > > +         return 0;
> >     > >
> >     > > I have to admit my memory is fuzzy here, but don't you need to
> >     > > re-queue requests in case the backend has different limits of indirect
> >     > > descriptors per request for example?
> >     > >
> >     > > Or do we expect that the frontend is always going to be resumed on the
> >     > > same backend, and thus features won't change?
> >     > >
> >     > So to understand your question better here, AFAIU the  maximum number of indirect
> >     > grefs is fixed by the backend, but the frontend can issue requests with any
> >     > number of indirect segments as long as it's less than the number provided by
> >     > the backend. So by your question you mean this max number of MAX_INDIRECT_SEGMENTS
> >     > 256 on backend can change ?
> >
> >     Yes, number of indirect descriptors supported by the backend can
> >     change, because you moved to a different backend, or because the
> >     maximum supported by the backend has changed. It's also possible to
> >     resume on a backend that has no indirect descriptors support at all.
> >
> > AFAIU, the code for requeuing the requests is only for xen suspend/resume. These requests in the queue are
> > the same ones that get added to the queuelist in blkfront_resume. Also, even if indirect descriptors change on resume,
> > they just need to be advertised to the frontend, which would only mean that a request can process
> > more data.
> 
> Or less data. You could legitimately migrate from a host that has
> indirect descriptors to one without, in which case requests would need
> to be smaller to fit the ring slots.
> 
> > We do set up indirect descriptors on the frontend in blkif_recover before returning, and queue limits are
> > set up accordingly.
> > Am I missing anything here?
> 
> Calling blkif_recover should take care of it AFAICT. As it resets the
> queue limits according to the data announced on xenstore.
> 
> I think I got confused, using blkif_recover should be fine, sorry.
> 
Ok. Thanks for confirming. I will fix up the other suggestions in the patch and send
out a v4.
> >
> >     > > > @@ -2625,6 +2671,62 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
> >     > > >   mutex_unlock(&blkfront_mutex);
> >     > > >  }
> >     > > >
> >     > > > +static int blkfront_freeze(struct xenbus_device *dev)
> >     > > > +{
> >     > > > + unsigned int i;
> >     > > > + struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> >     > > > + struct blkfront_ring_info *rinfo;
> >     > > > + /* This would be reasonable timeout as used in xenbus_dev_shutdown() */
> >     > > > + unsigned int timeout = 5 * HZ;
> >     > > > + int err = 0;
> >     > > > +
> >     > > > + info->connected = BLKIF_STATE_FREEZING;
> >     > > > +
> >     > > > + blk_mq_freeze_queue(info->rq);
> >     > > > + blk_mq_quiesce_queue(info->rq);
> >     > >
> >     > > Don't you need to also drain the queue and make sure it's empty?
> >     > >
> >     > blk_mq_freeze_queue and blk_mq_quiesce_queue should take care of running HW queues synchronously
> >     > and making sure all the ongoing dispatches have finished. Did I understand your question right?
> >
> >     Can you please add some check to that end? (ie: that there are no
> >     pending requests on any queue?)
> >
> > Well, a check to see if there are any unconsumed responses could be done.
> > I haven't come across a use case in my testing where this failed, but maybe there are other
> > setups that may cause an issue here.
> 
> Thanks! It's mostly to be on the safe side if we expect the queues and
> rings to be fully drained.
> 
ACK.
> Roger.
Thanks,
Anchal
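
For reference, the extra "queues and rings fully drained" check discussed
above could be sketched roughly as below, to be consulted before switching
the frontend to Closing in blkfront_freeze. This is only an illustration,
not the patch itself: the function name sketch_rings_drained is made up,
while RING_HAS_UNCONSUMED_RESPONSES(), RING_FREE_REQUESTS() and RING_SIZE()
are the standard macros from xen/interface/io/ring.h.

/*
 * Illustrative only: verify that every ring is quiescent after
 * blk_mq_freeze_queue()/blk_mq_quiesce_queue() have been called.
 */
static bool sketch_rings_drained(struct blkfront_info *info)
{
	unsigned int i;

	for (i = 0; i < info->nr_rings; i++) {
		struct blkfront_ring_info *rinfo = &info->rinfo[i];

		/* Responses the frontend has not consumed yet? */
		if (RING_HAS_UNCONSUMED_RESPONSES(&rinfo->ring))
			return false;

		/* Requests still outstanding on the shared ring? */
		if (RING_FREE_REQUESTS(&rinfo->ring) != RING_SIZE(&rinfo->ring))
			return false;
	}

	return true;
}

A freeze handler could, for example, return -EBUSY and skip the switch to
Closing if such a check fails after quiescing the queues.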

^ permalink raw reply	[flat|nested] 37+ messages in thread

* [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
@ 2020-02-12 22:32 Anchal Agarwal
  0 siblings, 0 replies; 37+ messages in thread
From: Anchal Agarwal @ 2020-02-12 22:32 UTC (permalink / raw)
  To: tglx, mingo, bp, hpa, x86, boris.ostrovsky, jgross, linux-pm,
	linux-mm, kamatam, sstabellini, konrad.wilk, roger.pau, axboe,
	davem, rjw, len.brown, pavel, peterz, eduval, sblbir, anchalag,
	xen-devel, vkuznets, netdev, linux-kernel, dwmw, fllinden, benh

From: Munehisa Kamata <kamatam@amazon.com>

Add freeze, thaw and restore callbacks for PM suspend and hibernation
support. All frontend drivers that need to use the PM_HIBERNATION/PM_SUSPEND
events need to implement these xenbus_driver callbacks.
The freeze handler stops the block-layer queue and disconnects the
frontend from the backend while freeing ring_info and associated resources.
The restore handler re-allocates ring_info and re-connects to the
backend, so the rest of the kernel can continue to use the block device
transparently. Also, the handlers are used for both PM suspend and
hibernation so that we can keep the existing suspend/resume callbacks for
Xen suspend without modification. Before disconnecting from the backend,
we need to prevent any new IO from being queued and wait for existing
IO to complete. Freezing/unfreezing the queues guarantees that there
are no requests in use on the shared ring.

Note: For older backends, if a backend doesn't have commit '12ea729645ace'
("xen/blkback: unmap all persistent grants when frontend gets
disconnected"), the frontend may see a massive amount of grant table
warnings when freeing resources.
[   36.852659] deferring g.e. 0xf9 (pfn 0xffffffffffffffff)
[   36.855089] xen:grant_table: WARNING: g.e. 0x112 still in use!

In this case, persistent grants would need to be disabled.

[Anchal Changelog: Removed timeout/request during blkfront freeze.
Fixed major part of the code to work with blk-mq]
Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
Signed-off-by: Munehisa Kamata <kamatam@amazon.com>

---
Changes since V2: None
---
 drivers/block/xen-blkfront.c | 119 ++++++++++++++++++++++++++++++++---
 1 file changed, 112 insertions(+), 7 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 478120233750..d715ed3cb69a 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -47,6 +47,8 @@
 #include <linux/bitmap.h>
 #include <linux/list.h>
 #include <linux/workqueue.h>
+#include <linux/completion.h>
+#include <linux/delay.h>
 
 #include <xen/xen.h>
 #include <xen/xenbus.h>
@@ -79,6 +81,8 @@ enum blkif_state {
 	BLKIF_STATE_DISCONNECTED,
 	BLKIF_STATE_CONNECTED,
 	BLKIF_STATE_SUSPENDED,
+	BLKIF_STATE_FREEZING,
+	BLKIF_STATE_FROZEN
 };
 
 struct grant {
@@ -220,6 +224,7 @@ struct blkfront_info
 	struct list_head requests;
 	struct bio_list bio_list;
 	struct list_head info_list;
+	struct completion wait_backend_disconnected;
 };
 
 static unsigned int nr_minors;
@@ -261,6 +266,7 @@ static DEFINE_SPINLOCK(minor_lock);
 static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo);
 static void blkfront_gather_backend_features(struct blkfront_info *info);
 static int negotiate_mq(struct blkfront_info *info);
+static void __blkif_free(struct blkfront_info *info);
 
 static int get_id_from_freelist(struct blkfront_ring_info *rinfo)
 {
@@ -995,6 +1001,7 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size,
 	info->sector_size = sector_size;
 	info->physical_sector_size = physical_sector_size;
 	blkif_set_queue_limits(info);
+	init_completion(&info->wait_backend_disconnected);
 
 	return 0;
 }
@@ -1218,6 +1225,8 @@ static void xlvbd_release_gendisk(struct blkfront_info *info)
 /* Already hold rinfo->ring_lock. */
 static inline void kick_pending_request_queues_locked(struct blkfront_ring_info *rinfo)
 {
+	if (unlikely(rinfo->dev_info->connected == BLKIF_STATE_FREEZING))
+		return;
 	if (!RING_FULL(&rinfo->ring))
 		blk_mq_start_stopped_hw_queues(rinfo->dev_info->rq, true);
 }
@@ -1341,8 +1350,6 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo)
 
 static void blkif_free(struct blkfront_info *info, int suspend)
 {
-	unsigned int i;
-
 	/* Prevent new requests being issued until we fix things up. */
 	info->connected = suspend ?
 		BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
@@ -1350,6 +1357,13 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 	if (info->rq)
 		blk_mq_stop_hw_queues(info->rq);
 
+	__blkif_free(info);
+}
+
+static void __blkif_free(struct blkfront_info *info)
+{
+	unsigned int i;
+
 	for (i = 0; i < info->nr_rings; i++)
 		blkif_free_ring(&info->rinfo[i]);
 
@@ -1553,8 +1567,10 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
 	struct blkfront_info *info = rinfo->dev_info;
 
-	if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
-		return IRQ_HANDLED;
+	if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) {
+		if (info->connected != BLKIF_STATE_FREEZING)
+			return IRQ_HANDLED;
+	}
 
 	spin_lock_irqsave(&rinfo->ring_lock, flags);
  again:
@@ -2020,6 +2036,7 @@ static int blkif_recover(struct blkfront_info *info)
 	struct bio *bio;
 	unsigned int segs;
 
+	bool frozen = info->connected == BLKIF_STATE_FROZEN;
 	blkfront_gather_backend_features(info);
 	/* Reset limits changed by blk_mq_update_nr_hw_queues(). */
 	blkif_set_queue_limits(info);
@@ -2046,6 +2063,9 @@ static int blkif_recover(struct blkfront_info *info)
 		kick_pending_request_queues(rinfo);
 	}
 
+	if (frozen)
+		return 0;
+
 	list_for_each_entry_safe(req, n, &info->requests, queuelist) {
 		/* Requeue pending requests (flush or discard) */
 		list_del_init(&req->queuelist);
@@ -2359,6 +2379,7 @@ static void blkfront_connect(struct blkfront_info *info)
 
 		return;
 	case BLKIF_STATE_SUSPENDED:
+	case BLKIF_STATE_FROZEN:
 		/*
 		 * If we are recovering from suspension, we need to wait
 		 * for the backend to announce it's features before
@@ -2476,12 +2497,37 @@ static void blkback_changed(struct xenbus_device *dev,
 		break;
 
 	case XenbusStateClosed:
-		if (dev->state == XenbusStateClosed)
+		if (dev->state == XenbusStateClosed) {
+			if (info->connected == BLKIF_STATE_FREEZING) {
+				__blkif_free(info);
+				info->connected = BLKIF_STATE_FROZEN;
+				complete(&info->wait_backend_disconnected);
+				break;
+			}
+
 			break;
+		}
+
+		/*
+		 * We may somehow receive backend's Closed again while thawing
+		 * or restoring and it causes thawing or restoring to fail.
+		 * Ignore such unexpected state anyway.
+		 */
+		if (info->connected == BLKIF_STATE_FROZEN &&
+				dev->state == XenbusStateInitialised) {
+			dev_dbg(&dev->dev,
+					"ignore the backend's Closed state: %s",
+					dev->nodename);
+			break;
+		}
 		/* fall through */
 	case XenbusStateClosing:
-		if (info)
-			blkfront_closing(info);
+		if (info) {
+			if (info->connected == BLKIF_STATE_FREEZING)
+				xenbus_frontend_closed(dev);
+			else
+				blkfront_closing(info);
+		}
 		break;
 	}
 }
@@ -2625,6 +2671,62 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
 	mutex_unlock(&blkfront_mutex);
 }
 
+static int blkfront_freeze(struct xenbus_device *dev)
+{
+	unsigned int i;
+	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
+	struct blkfront_ring_info *rinfo;
+	/* This would be reasonable timeout as used in xenbus_dev_shutdown() */
+	unsigned int timeout = 5 * HZ;
+	int err = 0;
+
+	info->connected = BLKIF_STATE_FREEZING;
+
+	blk_mq_freeze_queue(info->rq);
+	blk_mq_quiesce_queue(info->rq);
+
+	for (i = 0; i < info->nr_rings; i++) {
+		rinfo = &info->rinfo[i];
+
+		gnttab_cancel_free_callback(&rinfo->callback);
+		flush_work(&rinfo->work);
+	}
+
+	/* Kick the backend to disconnect */
+	xenbus_switch_state(dev, XenbusStateClosing);
+
+	/*
+	 * We don't want to move forward before the frontend is disconnected
+	 * from the backend cleanly.
+	 */
+	timeout = wait_for_completion_timeout(&info->wait_backend_disconnected,
+					      timeout);
+	if (!timeout) {
+		err = -EBUSY;
+		xenbus_dev_error(dev, err, "Freezing timed out; "
+				 "the device may be left in an inconsistent state");
+	}
+
+	return err;
+}
+
+static int blkfront_restore(struct xenbus_device *dev)
+{
+	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
+	int err = 0;
+
+	err = talk_to_blkback(dev, info);
+	blk_mq_unquiesce_queue(info->rq);
+	blk_mq_unfreeze_queue(info->rq);
+
+	if (err)
+		goto out;
+	blk_mq_update_nr_hw_queues(&info->tag_set, info->nr_rings);
+
+out:
+	return err;
+}
+
 static const struct block_device_operations xlvbd_block_fops =
 {
 	.owner = THIS_MODULE,
@@ -2647,6 +2749,9 @@ static struct xenbus_driver blkfront_driver = {
 	.resume = blkfront_resume,
 	.otherend_changed = blkback_changed,
 	.is_ready = blkfront_is_ready,
+	.freeze = blkfront_freeze,
+	.thaw = blkfront_restore,
+	.restore = blkfront_restore
 };
 
 static void purge_persistent_grants(struct blkfront_info *info)
-- 
2.24.1.AMZN


^ permalink raw reply related	[flat|nested] 37+ messages in thread

end of thread, other threads:[~2020-03-13 17:21 UTC | newest]

Thread overview: 37+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-02-14 23:21 [RFC RESEND PATCH v3 00/12] Enable PM hibernation on guest VMs Anchal Agarwal
2020-02-14 23:22 ` [RFC PATCH v3 01/12] xen/manage: keep track of the on-going suspend mode Anchal Agarwal
2020-02-14 23:23 ` [RFC PATCH v3 02/12] xenbus: add freeze/thaw/restore callbacks support Anchal Agarwal
2020-02-14 23:23 ` [RFC PATCH v3 03/12] x86/xen: Introduce new function to map HYPERVISOR_shared_info on Resume Anchal Agarwal
2020-02-14 23:24 ` [RFC PATCH v3 04/12] x86/xen: add system core suspend and resume callbacks Anchal Agarwal
2020-02-14 23:24 ` [RFC PATCH v3 05/12] xen-netfront: add callbacks for PM suspend and hibernation support Anchal Agarwal
2020-02-14 23:25 ` [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation Anchal Agarwal
2020-02-17 10:05   ` Roger Pau Monné
2020-02-17 23:05     ` Anchal Agarwal
2020-02-18  9:16       ` Roger Pau Monné
2020-02-19 18:04         ` Anchal Agarwal
2020-02-20  8:39           ` Roger Pau Monné
2020-02-20  8:54             ` [Xen-devel] " Durrant, Paul
2020-02-20 15:45               ` Roger Pau Monné
2020-02-20 16:23                 ` Durrant, Paul
2020-02-20 16:48                   ` Roger Pau Monné
2020-02-20 17:01                     ` Durrant, Paul
2020-02-21  0:49                       ` Anchal Agarwal
2020-02-21  9:47                         ` Roger Pau Monné
2020-02-21  9:22                       ` Roger Pau Monné
2020-02-21  9:56                         ` Durrant, Paul
2020-02-21 10:21                           ` Roger Pau Monné
2020-02-21 10:33                             ` Durrant, Paul
2020-02-21 11:51                               ` Roger Pau Monné
2020-02-21 14:24   ` Roger Pau Monné
2020-03-06 18:40     ` Anchal Agarwal
2020-03-09  9:54       ` Roger Pau Monné
     [not found]         ` <FA688A68-5372-4757-B075-A69A45671CB9@amazon.com>
     [not found]           ` <20200312090435.GK24449@Air-de-Roger.citrite.net>
2020-03-13 17:21             ` Anchal Agarwal
2020-02-14 23:25 ` [RFC PATCH v3 07/12] genirq: Shutdown irq chips in suspend/resume during hibernation Anchal Agarwal
2020-03-06 23:03   ` Thomas Gleixner
2020-03-09 22:37     ` [EXTERNAL][RFC " Anchal Agarwal
2020-02-14 23:26 ` [RFC PATCH v3 08/12] xen/time: introduce xen_{save,restore}_steal_clock Anchal Agarwal
2020-02-14 23:27 ` [RFC PATCH v3 09/12] x86/xen: save and restore steal clock Anchal Agarwal
2020-02-14 23:27 ` [RFC PATCH v3 10/12] xen: Introduce wrapper for save/restore sched clock offset Anchal Agarwal
2020-02-14 23:27 ` [RFC PATCH v3 11/12] xen: Update sched clock offset to avoid system instability in hibernation Anchal Agarwal
2020-02-14 23:28 ` [RFC PATCH v3 12/12] PM / hibernate: update the resume offset on SNAPSHOT_SET_SWAP_AREA Anchal Agarwal
  -- strict thread matches above, loose matches on Subject: below --
2020-02-12 22:32 [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation Anchal Agarwal

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).