From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>,
	<hpa@zytor.com>, <x86@kernel.org>, <boris.ostrovsky@oracle.com>,
	<jgross@suse.com>, <linux-pm@vger.kernel.org>,
	<linux-mm@kvack.org>, <kamatam@amazon.com>,
	<sstabellini@kernel.org>, <konrad.wilk@oracle.com>,
	<roger.pau@citrix.com>, <axboe@kernel.dk>, <davem@davemloft.net>,
	<rjw@rjwysocki.net>, <len.brown@intel.com>, <pavel@ucw.cz>,
	<peterz@infradead.org>, <eduval@amazon.com>, <sblbir@amazon.com>,
	<anchalag@amazon.com>, <xen-devel@lists.xenproject.org>,
	<vkuznets@redhat.com>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <dwmw@amazon.co.uk>,
	<benh@kernel.crashing.org>
Subject: [PATCH v3 06/11] xen-blkfront: add callbacks for PM suspend and hibernation
Date: Fri, 21 Aug 2020 22:28:19 +0000
Message-ID: <22b8e0d0c2a5a7b7755d5f0206aa8de61537c5c3.1598042152.git.anchalag@amazon.com>
In-Reply-To: <cover.1598042152.git.anchalag@amazon.com>

From: Munehisa Kamata <kamatam@amazon.com>

S4 power transition states are quite different from xen suspend/resume.
The former is visible to the guest, so frontend drivers should be aware
of the state transitions and be able to take appropriate actions when
needed. In the transition to S4 we need to make sure that at least all
the in-flight blkif requests get completed, since they probably contain
bits of the guest's memory image and that's not going to get saved any
other way. Hence, re-issuing in-flight requests, as is done on xen
resume, will not work here. This is in contrast to xen suspend, where we
need to freeze with as little processing as possible to avoid dirtying
RAM late in the migration cycle, and we know that in-flight data can
wait.

Add freeze, thaw and restore callbacks for PM suspend and hibernation
support. All frontend drivers that need to handle PM_HIBERNATION/PM_SUSPEND
events need to implement these xenbus_driver callbacks. The freeze handler
stops the block-layer queue and disconnects the frontend from the backend
while freeing ring_info and associated resources. Before disconnecting
from the backend, we need to prevent any new IO from being queued and wait
for existing IO to complete. Freezing and quiescing the queues guarantees
that no requests remain in use on the shared ring; however, for sanity we
still check the state of the ring before disconnecting to make sure there
are no outstanding requests to be processed on it. The restore handler
re-allocates ring_info, unquiesces and unfreezes the queue, and reconnects
to the backend, so that the rest of the kernel can continue to use the
block device transparently.
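
For reference, a minimal sketch of how a frontend driver is expected to
wire up these callbacks (the .freeze/.thaw/.restore fields added to
struct xenbus_driver by patch 02/11 of this series). The mydev_* names
are hypothetical; blkfront's actual handlers are in the diff below:

	#include <xen/xenbus.h>

	/* Hypothetical handlers; blkfront implements these as
	 * blkfront_freeze()/blkfront_restore() further down. */
	static int mydev_freeze(struct xenbus_device *dev)
	{
		/* Quiesce I/O, make sure nothing is left in flight on
		 * the shared ring, then disconnect cleanly from the
		 * backend. */
		return 0;
	}

	static int mydev_restore(struct xenbus_device *dev)
	{
		/* Re-negotiate features and reconnect to the backend. */
		return 0;
	}

	static struct xenbus_driver mydev_driver = {
		/* .ids, .probe, .remove, .otherend_changed etc. as usual */
		.freeze  = mydev_freeze,
		.thaw    = mydev_restore,
		.restore = mydev_restore,
	};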

Note: If a backend is old enough to lack commit 12ea729645ace
("xen/blkback: unmap all persistent grants when frontend gets
disconnected"), the frontend may see a massive amount of grant table
warnings when freeing resources:
[   36.852659] deferring g.e. 0xf9 (pfn 0xffffffffffffffff)
[   36.855089] xen:grant_table: WARNING: g.e. 0x112 still in use!

In this case, persistent grants would need to be disabled.
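
There is no runtime knob for this in the frontend at this point, so
disabling it would mean adjusting blkfront's side of the negotiation.
A rough sketch only (not part of this patch; the locals err/xbt/dev are
those already available in talk_to_blkback(), and feature_persistent is
the existing blkfront_info field):

	/* Sketch: do not advertise persistent grants to the backend
	 * in talk_to_blkback() ...
	 */
	err = xenbus_printf(xbt, dev->nodename, "feature-persistent",
			    "%u", 0);

	/* ... and ignore whatever the backend advertises when
	 * blkfront_gather_backend_features() runs.
	 */
	info->feature_persistent = 0;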

[Anchal Agarwal: Changelog]:
RFC v1->v2: Removed timeout per request before disconnect during
	    blkfront freeze.
	    Added queue freeze/quiesce to the blkfront_freeze
	    Code cleanup
RFC v2->v3: None
RFC v3->v1: Code cleanup, refactoring
    v1->v2: * remove err variable in blkfront_freeze
            * BugFix: error handling if rings are still busy
              after queue freeze/quiesce and returning driver to
              connected state
            * add TODO if blkback fails to disconnect on freeze
            * Code formatting

Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
---
 drivers/block/xen-blkfront.c | 122 +++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 118 insertions(+), 4 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 3bb3dd8da9b0..500f1753e339 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -48,6 +48,8 @@
 #include <linux/list.h>
 #include <linux/workqueue.h>
 #include <linux/sched/mm.h>
+#include <linux/completion.h>
+#include <linux/delay.h>
 
 #include <xen/xen.h>
 #include <xen/xenbus.h>
@@ -80,6 +82,8 @@ enum blkif_state {
 	BLKIF_STATE_DISCONNECTED,
 	BLKIF_STATE_CONNECTED,
 	BLKIF_STATE_SUSPENDED,
+	BLKIF_STATE_FREEZING,
+	BLKIF_STATE_FROZEN,
 };
 
 struct grant {
@@ -219,6 +223,7 @@ struct blkfront_info
 	struct list_head requests;
 	struct bio_list bio_list;
 	struct list_head info_list;
+	struct completion wait_backend_disconnected;
 };
 
 static unsigned int nr_minors;
@@ -1005,6 +1010,7 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size,
 	info->sector_size = sector_size;
 	info->physical_sector_size = physical_sector_size;
 	blkif_set_queue_limits(info);
+	init_completion(&info->wait_backend_disconnected);
 
 	return 0;
 }
@@ -1353,6 +1359,8 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 	unsigned int i;
 	struct blkfront_ring_info *rinfo;
 
+	if (info->connected == BLKIF_STATE_FREEZING)
+		goto free_rings;
 	/* Prevent new requests being issued until we fix things up. */
 	info->connected = suspend ?
 		BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
@@ -1360,6 +1368,7 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 	if (info->rq)
 		blk_mq_stop_hw_queues(info->rq);
 
+free_rings:
 	for_each_rinfo(info, rinfo, i)
 		blkif_free_ring(rinfo);
 
@@ -1563,8 +1572,10 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
 	struct blkfront_info *info = rinfo->dev_info;
 
-	if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
+	if (unlikely(info->connected != BLKIF_STATE_CONNECTED &&
+			info->connected != BLKIF_STATE_FREEZING)) {
 		return IRQ_HANDLED;
+	}
 
 	spin_lock_irqsave(&rinfo->ring_lock, flags);
  again:
@@ -2027,6 +2038,7 @@ static int blkif_recover(struct blkfront_info *info)
 	struct bio *bio;
 	unsigned int segs;
 	struct blkfront_ring_info *rinfo;
+	bool frozen = info->connected == BLKIF_STATE_FROZEN;
 
 	blkfront_gather_backend_features(info);
 	/* Reset limits changed by blk_mq_update_nr_hw_queues(). */
@@ -2049,6 +2061,9 @@ static int blkif_recover(struct blkfront_info *info)
 		kick_pending_request_queues(rinfo);
 	}
 
+	if (frozen)
+		return 0;
+
 	list_for_each_entry_safe(req, n, &info->requests, queuelist) {
 		/* Requeue pending requests (flush or discard) */
 		list_del_init(&req->queuelist);
@@ -2365,6 +2380,7 @@ static void blkfront_connect(struct blkfront_info *info)
 
 		return;
 	case BLKIF_STATE_SUSPENDED:
+	case BLKIF_STATE_FROZEN:
 		/*
 		 * If we are recovering from suspension, we need to wait
 		 * for the backend to announce it's features before
@@ -2482,12 +2498,37 @@ static void blkback_changed(struct xenbus_device *dev,
 		break;
 
 	case XenbusStateClosed:
-		if (dev->state == XenbusStateClosed)
+		if (dev->state == XenbusStateClosed) {
+			if (info->connected == BLKIF_STATE_FREEZING) {
+				blkif_free(info, 0);
+				info->connected = BLKIF_STATE_FROZEN;
+				complete(&info->wait_backend_disconnected);
+			}
 			break;
+		}
+		/*
+		 * We may receive the backend's Closed state again while
+		 * thawing or restoring, and that causes the thaw or restore
+		 * to fail. During blkfront_restore the backend is still in
+		 * the Closed state and we see it as closed here while the
+		 * frontend's dev->state is set to XenbusStateInitialised.
+		 * Ignore such an unexpected transition regardless of the
+		 * backend's state.
+		 */
+		if (info->connected == BLKIF_STATE_FROZEN) {
+			dev_dbg(&dev->dev, "Thawing/Restoring, ignoring the backend's Closed state: %s\n",
+				dev->nodename);
+			break;
+		}
+
 		/* fall through */
 	case XenbusStateClosing:
-		if (info)
-			blkfront_closing(info);
+		if (info) {
+			if (info->connected == BLKIF_STATE_FREEZING)
+				xenbus_frontend_closed(dev);
+			else
+				blkfront_closing(info);
+		}
 		break;
 	}
 }
@@ -2631,6 +2672,76 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
 	mutex_unlock(&blkfront_mutex);
 }
 
+static int blkfront_freeze(struct xenbus_device *dev)
+{
+	unsigned int i;
+	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
+	struct blkfront_ring_info *rinfo;
+	/* This would be a reasonable timeout, as used in xenbus_dev_shutdown() */
+	unsigned int timeout = 5 * HZ;
+	unsigned long flags;
+
+	info->connected = BLKIF_STATE_FREEZING;
+
+	blk_mq_freeze_queue(info->rq);
+	blk_mq_quiesce_queue(info->rq);
+
+	for_each_rinfo(info, rinfo, i) {
+		/* No more gnttab callback work. */
+		gnttab_cancel_free_callback(&rinfo->callback);
+		/* Flush gnttab callback work. Must be done with no locks held. */
+		flush_work(&rinfo->work);
+	}
+
+	for_each_rinfo(info, rinfo, i) {
+		spin_lock_irqsave(&rinfo->ring_lock, flags);
+		if (RING_FULL(&rinfo->ring) ||
+			RING_HAS_UNCONSUMED_RESPONSES(&rinfo->ring)) {
+			spin_unlock_irqrestore(&rinfo->ring_lock, flags);
+			xenbus_dev_error(dev, -EBUSY, "Hibernation failed: the ring is still busy");
+			info->connected = BLKIF_STATE_CONNECTED;
+			blk_mq_unquiesce_queue(info->rq);
+			blk_mq_unfreeze_queue(info->rq);
+			return -EBUSY;
+		}
+		spin_unlock_irqrestore(&rinfo->ring_lock, flags);
+	}
+	/* Kick the backend to disconnect */
+	xenbus_switch_state(dev, XenbusStateClosing);
+
+	/*
+	 * We don't want to move forward before the frontend is cleanly
+	 * disconnected from the backend.
+	 * TODO: Handle a timeout by falling back to the normal
+	 * disconnect path and just wait for the backend to close before
+	 * reconnecting. Bring the system back to its original state by
+	 * failing hibernation gracefully.
+	 */
+	timeout = wait_for_completion_timeout(&info->wait_backend_disconnected,
+						timeout);
+	if (!timeout) {
+		xenbus_dev_error(dev, -EBUSY, "Freezing timed out;"
+			"the device may become inconsistent state");
+		return -EBUSY;
+	}
+
+	return 0;
+}
+
+static int blkfront_restore(struct xenbus_device *dev)
+{
+	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
+	int err;
+
+	err = talk_to_blkback(dev, info);
+	if (!err) {
+		blk_mq_update_nr_hw_queues(&info->tag_set, info->nr_rings);
+		blk_mq_unquiesce_queue(info->rq);
+		blk_mq_unfreeze_queue(info->rq);
+	}
+	return err;
+}
+
 static const struct block_device_operations xlvbd_block_fops =
 {
 	.owner = THIS_MODULE,
@@ -2654,6 +2765,9 @@ static struct xenbus_driver blkfront_driver = {
 	.resume = blkfront_resume,
 	.otherend_changed = blkback_changed,
 	.is_ready = blkfront_is_ready,
+	.freeze = blkfront_freeze,
+	.thaw = blkfront_restore,
+	.restore = blkfront_restore,
 };
 
 static void purge_persistent_grants(struct blkfront_info *info)
-- 
2.16.6

