From: Anchal Agarwal <anchalag@amazon.com>
To: "Roger Pau Monné" <roger.pau@citrix.com>,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	hpa@zytor.com, x86@kernel.org, boris.ostrovsky@oracle.com,
	jgross@suse.com, linux-pm@vger.kernel.org, linux-mm@kvack.org,
	kamatam@amazon.com, sstabellini@kernel.org,
	konrad.wilk@oracle.com, axboe@kernel.dk, davem@davemloft.net,
	rjw@rjwysocki.net, len.brown@intel.com, pavel@ucw.cz,
	peterz@infradead.org, eduval@amazon.com, sblbir@amazon.com,
	anchalag@amazon.com, xen-devel@lists.xenproject.org,
	vkuznets@redhat.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, dwmw@amazon.co.uk,
	fllinden@amaozn.com, benh@kernel.crashing.org
Subject: Re: [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
Date: Mon, 17 Feb 2020 23:05:53 +0000
Message-ID: <20200217230553.GA8100@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
In-Reply-To: <20200217100509.GE4679@Air-de-Roger>

On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné wrote:
> On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> > From: Munehisa Kamata <kamatam@amazon.com>
> > 
> > Add freeze, thaw and restore callbacks for PM suspend and hibernation
> > support. All frontend drivers that need to use PM_HIBERNATION/PM_SUSPEND
> > events need to implement these xenbus_driver callbacks.
> > The freeze handler stops the block-layer queue and disconnects the
> > frontend from the backend while freeing ring_info and associated resources.
> > The restore handler re-allocates ring_info and re-connects to the
> > backend, so the rest of the kernel can continue to use the block device
> > transparently. Also, the handlers are used for both PM suspend and
> > hibernation so that we can keep the existing suspend/resume callbacks for
> > Xen suspend without modification. Before disconnecting from the backend,
> > we need to prevent any new IO from being queued and wait for existing
> > IO to complete.
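
(For illustration: the series routes these through the new xenbus_driver
members added in patch 02/12, so blkfront would register its handlers
roughly as below. The hookup isn't in the hunk quoted here, so the exact
assignments are an assumption, not the patch's code.)

static struct xenbus_driver blkfront_driver = {
	.ids = blkfront_ids,
	.probe = blkfront_probe,
	.remove = blkfront_remove,
	.resume = blkfront_resume,	/* Xen (xenstore) suspend path, unchanged */
	.otherend_changed = blkback_changed,
	.is_ready = blkfront_is_ready,
	.freeze = blkfront_freeze,	/* new PM callbacks; assumed: */
	.thaw = blkfront_restore,	/* thaw shares the restore path */
	.restore = blkfront_restore,
};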
> 
> This is different from Xen (xenstore) initiated suspension, as in that
> case Linux doesn't flush the rings or disconnect from the backend.
Yes, AFAIK in Xen-initiated suspension the backend takes care of it.
> 
> This is done so that in case suspension fails the recovery doesn't
> need to reconnect the PV devices, and in order to speed up suspension
> time (ie: waiting for all queues to be flushed can take time as Linux
> supports multiqueue, multipage rings and indirect descriptors), and
> the backend could be contended if there's a lot of IO pressure from
> guests.
> 
> Linux already keeps a shadow of the ring contents, so in-flight
> requests can be re-issued after the frontend has reconnected during
> resume.
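
For reference, the shadow-based re-issue described here looks roughly like
this in the existing resume path (a trimmed sketch modeled on upstream
blkfront_resume(), not verbatim): each ring's shadow[] keeps a copy of
every submitted request, so the bios of anything still in flight can be
collected and resubmitted once the frontend reconnects.

	/* Walk one ring's shadow copy and steal the bios of in-flight
	 * requests so they can be re-issued after reconnecting. */
	for (j = 0; j < BLK_RING_SIZE(info); j++) {
		struct request *req = rinfo->shadow[j].request;
		struct bio_list merge_bio;

		if (!req)
			continue;	/* shadow slot not in use */

		merge_bio.head = req->bio;
		merge_bio.tail = req->biotail;
		bio_list_merge(&info->bio_list, &merge_bio);
		req->bio = NULL;
		blk_mq_end_request(req, BLK_STS_OK);
	}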
> 
> > Freeze/unfreeze of the queues will guarantee that there
> > are no requests in use on the shared ring.
> > 
> > Note: For older backends, if a backend doesn't have commit '12ea729645ace'
> > ("xen/blkback: unmap all persistent grants when frontend gets disconnected"),
> > the frontend may see a massive amount of grant table warnings when freeing
> > resources:
> > [   36.852659] deferring g.e. 0xf9 (pfn 0xffffffffffffffff)
> > [   36.855089] xen:grant_table: WARNING: g.e. 0x112 still in use!
> > 
> > In this case, persistent grants would need to be disabled.
> > 
> > [Anchal Changelog: Removed timeout/request during blkfront freeze.
> > Fixed major part of the code to work with blk-mq]
> > Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
> > Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
> > ---
> >  drivers/block/xen-blkfront.c | 119 ++++++++++++++++++++++++++++++++---
> >  1 file changed, 112 insertions(+), 7 deletions(-)
> > 
> > diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> > index 478120233750..d715ed3cb69a 100644
> > --- a/drivers/block/xen-blkfront.c
> > +++ b/drivers/block/xen-blkfront.c
> > @@ -47,6 +47,8 @@
> >  #include <linux/bitmap.h>
> >  #include <linux/list.h>
> >  #include <linux/workqueue.h>
> > +#include <linux/completion.h>
> > +#include <linux/delay.h>
> >  
> >  #include <xen/xen.h>
> >  #include <xen/xenbus.h>
> > @@ -79,6 +81,8 @@ enum blkif_state {
> >  	BLKIF_STATE_DISCONNECTED,
> >  	BLKIF_STATE_CONNECTED,
> >  	BLKIF_STATE_SUSPENDED,
> > +	BLKIF_STATE_FREEZING,
> > +	BLKIF_STATE_FROZEN
> >  };
> >  
> >  struct grant {
> > @@ -220,6 +224,7 @@ struct blkfront_info
> >  	struct list_head requests;
> >  	struct bio_list bio_list;
> >  	struct list_head info_list;
> > +	struct completion wait_backend_disconnected;
> >  };
> >  
> >  static unsigned int nr_minors;
> > @@ -261,6 +266,7 @@ static DEFINE_SPINLOCK(minor_lock);
> >  static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo);
> >  static void blkfront_gather_backend_features(struct blkfront_info *info);
> >  static int negotiate_mq(struct blkfront_info *info);
> > +static void __blkif_free(struct blkfront_info *info);
> >  
> >  static int get_id_from_freelist(struct blkfront_ring_info *rinfo)
> >  {
> > @@ -995,6 +1001,7 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size,
> >  	info->sector_size = sector_size;
> >  	info->physical_sector_size = physical_sector_size;
> >  	blkif_set_queue_limits(info);
> > +	init_completion(&info->wait_backend_disconnected);
> >  
> >  	return 0;
> >  }
> > @@ -1218,6 +1225,8 @@ static void xlvbd_release_gendisk(struct blkfront_info *info)
> >  /* Already hold rinfo->ring_lock. */
> >  static inline void kick_pending_request_queues_locked(struct blkfront_ring_info *rinfo)
> >  {
> > +	if (unlikely(rinfo->dev_info->connected == BLKIF_STATE_FREEZING))
> > +		return;
> >  	if (!RING_FULL(&rinfo->ring))
> >  		blk_mq_start_stopped_hw_queues(rinfo->dev_info->rq, true);
> >  }
> > @@ -1341,8 +1350,6 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo)
> >  
> >  static void blkif_free(struct blkfront_info *info, int suspend)
> >  {
> > -	unsigned int i;
> > -
> >  	/* Prevent new requests being issued until we fix things up. */
> >  	info->connected = suspend ?
> >  		BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
> > @@ -1350,6 +1357,13 @@ static void blkif_free(struct blkfront_info *info, int suspend)
> >  	if (info->rq)
> >  		blk_mq_stop_hw_queues(info->rq);
> >  
> > +	__blkif_free(info);
> > +}
> > +
> > +static void __blkif_free(struct blkfront_info *info)
> > +{
> > +	unsigned int i;
> > +
> >  	for (i = 0; i < info->nr_rings; i++)
> >  		blkif_free_ring(&info->rinfo[i]);
> >  
> > @@ -1553,8 +1567,10 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
> >  	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
> >  	struct blkfront_info *info = rinfo->dev_info;
> >  
> > -	if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
> > -		return IRQ_HANDLED;
> > +	if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) {
> > +		if (info->connected != BLKIF_STATE_FREEZING)
> > +			return IRQ_HANDLED;
> > +	}
> >  
> >  	spin_lock_irqsave(&rinfo->ring_lock, flags);
> >   again:
> > @@ -2020,6 +2036,7 @@ static int blkif_recover(struct blkfront_info *info)
> >  	struct bio *bio;
> >  	unsigned int segs;
> >  
> > +	bool frozen = info->connected == BLKIF_STATE_FROZEN;
> >  	blkfront_gather_backend_features(info);
> >  	/* Reset limits changed by blk_mq_update_nr_hw_queues(). */
> >  	blkif_set_queue_limits(info);
> > @@ -2046,6 +2063,9 @@ static int blkif_recover(struct blkfront_info *info)
> >  		kick_pending_request_queues(rinfo);
> >  	}
> >  
> > +	if (frozen)
> > +		return 0;
> > +
> >  	list_for_each_entry_safe(req, n, &info->requests, queuelist) {
> >  		/* Requeue pending requests (flush or discard) */
> >  		list_del_init(&req->queuelist);
> > @@ -2359,6 +2379,7 @@ static void blkfront_connect(struct blkfront_info *info)
> >  
> >  		return;
> >  	case BLKIF_STATE_SUSPENDED:
> > +	case BLKIF_STATE_FROZEN:
> >  		/*
> >  		 * If we are recovering from suspension, we need to wait
> >  		 * for the backend to announce it's features before
> > @@ -2476,12 +2497,37 @@ static void blkback_changed(struct xenbus_device *dev,
> >  		break;
> >  
> >  	case XenbusStateClosed:
> > -		if (dev->state == XenbusStateClosed)
> > +		if (dev->state == XenbusStateClosed) {
> > +			if (info->connected == BLKIF_STATE_FREEZING) {
> > +				__blkif_free(info);
> > +				info->connected = BLKIF_STATE_FROZEN;
> > +				complete(&info->wait_backend_disconnected);
> > +				break;
> > +			}
> > +
> >  			break;
> > +		}
> > +
> > +		/*
> > +		 * We may somehow receive backend's Closed again while thawing
> > +		 * or restoring and it causes thawing or restoring to fail.
> > +		 * Ignore such unexpected state anyway.
> > +		 */
> > +		if (info->connected == BLKIF_STATE_FROZEN &&
> > +				dev->state == XenbusStateInitialised) {
> > +			dev_dbg(&dev->dev,
> > +					"ignore the backend's Closed state: %s",
> > +					dev->nodename);
> > +			break;
> > +		}
> >  		/* fall through */
> >  	case XenbusStateClosing:
> > -		if (info)
> > -			blkfront_closing(info);
> > +		if (info) {
> > +			if (info->connected == BLKIF_STATE_FREEZING)
> > +				xenbus_frontend_closed(dev);
> > +			else
> > +				blkfront_closing(info);
> > +		}
> >  		break;
> >  	}
> >  }
> > @@ -2625,6 +2671,62 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
> >  	mutex_unlock(&blkfront_mutex);
> >  }
> >  
> > +static int blkfront_freeze(struct xenbus_device *dev)
> > +{
> > +	unsigned int i;
> > +	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> > +	struct blkfront_ring_info *rinfo;
> > +	/* This would be reasonable timeout as used in xenbus_dev_shutdown() */
> > +	unsigned int timeout = 5 * HZ;
> > +	int err = 0;
> > +
> > +	info->connected = BLKIF_STATE_FREEZING;
> > +
> > +	blk_mq_freeze_queue(info->rq);
> > +	blk_mq_quiesce_queue(info->rq);
> > +
> > +	for (i = 0; i < info->nr_rings; i++) {
> > +		rinfo = &info->rinfo[i];
> > +
> > +		gnttab_cancel_free_callback(&rinfo->callback);
> > +		flush_work(&rinfo->work);
> > +	}
> > +
> > +	/* Kick the backend to disconnect */
> > +	xenbus_switch_state(dev, XenbusStateClosing);
> 
> Are you sure this is safe?
> 
In my testing, running multiple fio jobs and other test scenarios running
a memory loader work fine. I did not come across a scenario that would
have failed resume due to blkfront issues, unless you can suggest one?
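
For what it's worth, the hunk above is cut off right after the state
switch. Given the 5*HZ timeout declared at the top of blkfront_freeze()
and the complete() call in the XenbusStateClosed handler, I'd expect the
function to end roughly as below (a sketch inferred from those pieces,
not the literal patch text):

	/* Wait (bounded by the 5s timeout) for blkback_changed() to see
	 * the backend reach Closed and signal us. */
	if (!wait_for_completion_timeout(&info->wait_backend_disconnected,
					 timeout)) {
		xenbus_dev_error(dev, -EBUSY,
				 "timed out waiting for backend to disconnect");
		return -EBUSY;
	}

	return err;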
> I don't think you wait for all requests pending on the ring to be
> finished by the backend, and hence you might lose requests as the
> ones on the ring would not be re-issued by blkfront_restore AFAICT.
> 
AFAIU, blk_mq_freeze_queue/blk_mq_quiesce_queue should ensure that no requests
remain in use on the shared ring. Also, I want to pause the queue and flush all
the pending requests in the shared ring before disconnecting from the backend.
Quiescing the queue seemed the better option here, as we want to make sure that
ongoing request dispatches are fully drained.
I should admit that some of this notion is borrowed from how nvme freeze/unfreeze
is done, although it's not an apples-to-apples comparison.
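
Concretely, the drain guarantee relied on here is the usual blk-mq pairing
(a sketch with the semantics as I understand them; the unfreeze side is
what I'd expect the thaw/restore path to do, it isn't shown in this hunk):

	/* blk_mq_freeze_queue() blocks new requests from entering the
	 * queue and waits until every in-flight request has completed
	 * (q_usage_counter drains to zero); blk_mq_quiesce_queue()
	 * additionally waits for any ->queue_rq() dispatch still running
	 * on another CPU, so nothing touches the shared ring afterwards. */
	blk_mq_freeze_queue(info->rq);
	blk_mq_quiesce_queue(info->rq);

	/* ... disconnect from the backend, free the rings ... */

	/* Thaw side, after reconnecting: */
	blk_mq_unquiesce_queue(info->rq);
	blk_mq_unfreeze_queue(info->rq);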

Do you have any particular scenario in mind which may cause resume to fail?
> Thanks, Roger.
