From: Anchal Agarwal <anchalag@amazon.com>
To: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>,
	<hpa@zytor.com>, <x86@kernel.org>, <boris.ostrovsky@oracle.com>,
	<jgross@suse.com>, <linux-pm@vger.kernel.org>,
	<linux-mm@kvack.org>, <kamatam@amazon.com>,
	<sstabellini@kernel.org>, <konrad.wilk@oracle.com>,
	<roger.pau@citrix.com>, <axboe@kernel.dk>, <davem@davemloft.net>,
	<rjw@rjwysocki.net>, <len.brown@intel.com>, <pavel@ucw.cz>,
	<peterz@infradead.org>, <eduval@amazon.com>, <sblbir@amazon.com>,
	<anchalag@amazon.com>, <xen-devel@lists.xenproject.org>,
	<vkuznets@redhat.com>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <dwmw@amazon.co.uk>,
	<fllinden@amazon.com>, <benh@kernel.crashing.org>
Subject: Re: [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
Date: Fri, 13 Mar 2020 17:21:24 +0000
Message-ID: <20200313172124.GB8513@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
In-Reply-To: <20200312090435.GK24449@Air-de-Roger.citrite.net>

On Thu, Mar 12, 2020 at 10:04:35AM +0100, Roger Pau Monné wrote:
> 
> On Wed, Mar 11, 2020 at 10:25:15PM +0000, Agarwal, Anchal wrote:
> > Hi Roger,
> > I am trying to understand your comments on indirect descriptors specifically, and I am emailing you personally to avoid polluting the mailing list.
> 
> IMO it's better to send to the mailing list. The issues or questions
> you have about indirect descriptors can be helpful to others in the
> future. If there's no confidential information please send to the
> list next time.
> 
> Feel free to forward this reply to the list also.
>
Sure, no problem at all.
> > Hope that's ok by you.  Please see my response inline.
> >
> >     On Fri, Mar 06, 2020 at 06:40:33PM +0000, Anchal Agarwal wrote:
> >     > On Fri, Feb 21, 2020 at 03:24:45PM +0100, Roger Pau Monné wrote:
> >     > > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> >     > > >   blkfront_gather_backend_features(info);
> >     > > >   /* Reset limits changed by blk_mq_update_nr_hw_queues(). */
> >     > > >   blkif_set_queue_limits(info);
> >     > > > @@ -2046,6 +2063,9 @@ static int blkif_recover(struct blkfront_info *info)
> >     > > >           kick_pending_request_queues(rinfo);
> >     > > >   }
> >     > > >
> >     > > > + if (frozen)
> >     > > > +         return 0;
> >     > >
> >     > > I have to admit my memory is fuzzy here, but don't you need to
> >     > > re-queue requests in case the backend has different limits of indirect
> >     > > descriptors per request for example?
> >     > >
> >     > > Or do we expect that the frontend is always going to be resumed on the
> >     > > same backend, and thus features won't change?
> >     > >
> >     > So to make sure I understand your question: AFAIU the maximum number of indirect
> >     > grefs is fixed by the backend, but the frontend can issue requests with any
> >     > number of indirect segments as long as it's less than the number provided by
> >     > the backend. So are you asking whether this maximum, MAX_INDIRECT_SEGMENTS
> >     > (256) on the backend, can change?
> >
> >     Yes, the number of indirect descriptors supported by the backend can
> >     change, because you moved to a different backend, or because the
> >     maximum supported by the backend has changed. It's also possible to
> >     resume on a backend that has no indirect descriptor support at all.
> >
> > AFAIU, the code for requeuing the requests is only for xen suspend/resume. The requests in the queue are
> > the same ones that get added to the queuelist in blkfront_resume. Also, even if the indirect descriptor limit
> > changes on resume, it just needs to be broadcast to the frontend, which would only mean that a request can
> > process more data.
> 
> Or less data. You could legitimately migrate from a host that has
> indirect descriptors to one without, in which case requests would need
> to be smaller to fit the ring slots.
> 
> > We do set up indirect descriptors on the frontend in blkif_recover before returning, and the queue limits
> > are set up accordingly.
> > Am I missing anything here?
> 
> Calling blkif_recover should take care of it AFAICT, as it resets the
> queue limits according to the data announced on xenstore.
> 
> I think I got confused, using blkif_recover should be fine, sorry.
> 
Ok. Thanks for confirming. I will fix up the other suggestions in the patch and
send out a v4.
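
For the benefit of anyone following along on the list, this is roughly the
mechanism blkif_recover() relies on. It is a simplified, untested sketch based
on my reading of xen-blkfront; the *_sketch helpers, the omitted clamping
details, and the exact field names here are illustrative, not the actual patch:

#include <linux/blkdev.h>
#include <xen/xenbus.h>
#include <xen/interface/io/blkif.h>

/*
 * Sketch only: blkif_recover() calls blkfront_gather_backend_features()
 * and blkif_set_queue_limits(), which re-read the backend's indirect
 * descriptor limit from xenstore and re-apply it to the block layer, so
 * a smaller (or absent) limit on the new backend is honoured on resume.
 */
static void blkfront_gather_backend_features_sketch(struct blkfront_info *info)
{
	unsigned int segs;

	/* Returns 0 if the backend has no indirect descriptor support. */
	segs = xenbus_read_unsigned(info->xbdev->otherend,
				    "feature-max-indirect-segments", 0);
	/* Never exceed the frontend's own module-parameter limit. */
	info->max_indirect_segments = min(segs, xen_blkif_max_segments);
}

static void blkif_set_queue_limits_sketch(struct blkfront_info *info)
{
	unsigned int segs = info->max_indirect_segments ? :
			    BLKIF_MAX_SEGMENTS_PER_REQUEST;

	/* Cap request segments to what the (possibly new) backend accepts. */
	blk_queue_max_segments(info->rq, segs);
}

So even if the limit shrinks across a migration, requests built after recovery
should fit the new backend's ring slots.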
> >
> >     > > > @@ -2625,6 +2671,62 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
> >     > > >   mutex_unlock(&blkfront_mutex);
> >     > > >  }
> >     > > >
> >     > > > +static int blkfront_freeze(struct xenbus_device *dev)
> >     > > > +{
> >     > > > + unsigned int i;
> >     > > > + struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> >     > > > + struct blkfront_ring_info *rinfo;
> >     > > > + /* This would be reasonable timeout as used in xenbus_dev_shutdown() */
> >     > > > + unsigned int timeout = 5 * HZ;
> >     > > > + int err = 0;
> >     > > > +
> >     > > > + info->connected = BLKIF_STATE_FREEZING;
> >     > > > +
> >     > > > + blk_mq_freeze_queue(info->rq);
> >     > > > + blk_mq_quiesce_queue(info->rq);
> >     > >
> >     > > Don't you need to also drain the queue and make sure it's empty?
> >     > >
> >     > blk_mq_freeze_queue and blk_mq_quiesce_queue should take care of running HW queues synchronously
> >     > and making sure all the ongoing dispatches have finished. Did I understand your question right?
> >
> >     Can you please add some check to that end? (ie: that there are no
> >     pending requests on any queue?)
> >
> > Well, a check to see if there are any unconsumed responses could be done.
> > I haven't come across a use case in my testing where this failed, but maybe there are
> > other setups that may cause an issue here.
> 
> Thanks! It's mostly to be on the safe side if we expect the queues and
> rings to be fully drained.
> 
ACK.
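Something along these lines is what I have in mind (a rough, untested sketch;
the ring and field names follow my reading of blkfront and the final form may
differ):

#include <xen/interface/io/ring.h>

/*
 * Sketch only: after blk_mq_freeze_queue()/blk_mq_quiesce_queue() in
 * blkfront_freeze(), verify that no ring still has unconsumed responses
 * before we declare the device frozen.
 */
static bool blkfront_rings_idle(struct blkfront_info *info)
{
	unsigned int i;

	for (i = 0; i < info->nr_rings; i++) {
		struct blkfront_ring_info *rinfo = &info->rinfo[i];

		if (RING_HAS_UNCONSUMED_RESPONSES(&rinfo->ring))
			return false;
	}

	return true;
}

blkfront_freeze() could then warn and bail out with -EBUSY if this ever
returned false, instead of silently freezing with work still pending.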
> Roger.
Thanks,
Anchal

