Xen-Devel Archive on lore.kernel.org
From: Anchal Agarwal <anchalag@amazon.com>
To: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: eduval@amazon.com, len.brown@intel.com, peterz@infradead.org,
	benh@kernel.crashing.org, x86@kernel.org, linux-mm@kvack.org,
	pavel@ucw.cz, hpa@zytor.com, tglx@linutronix.de,
	sstabellini@kernel.org, fllinden@amazon.com, kamatam@amazon.com,
	mingo@redhat.com, xen-devel@lists.xenproject.org,
	sblbir@amazon.com, axboe@kernel.dk, konrad.wilk@oracle.com,
	anchalag@amazon.com, bp@alien8.de, boris.ostrovsky@oracle.com,
	jgross@suse.com, netdev@vger.kernel.org,
	linux-pm@vger.kernel.org, rjw@rjwysocki.net,
	linux-kernel@vger.kernel.org, vkuznets@redhat.com,
	davem@davemloft.net, dwmw@amazon.co.uk, roger.pau@citrix.com
Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
Date: Fri, 13 Mar 2020 17:21:24 +0000
Message-ID: <20200313172124.GB8513@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com> (raw)
In-Reply-To: <20200312090435.GK24449@Air-de-Roger.citrite.net>

On Thu, Mar 12, 2020 at 10:04:35AM +0100, Roger Pau Monné wrote:
> On Wed, Mar 11, 2020 at 10:25:15PM +0000, Agarwal, Anchal wrote:
> > Hi Roger,
> > I am trying to understand your comments on indirect descriptors, specifically without polluting the mailing list, hence emailing you personally.
> 
> IMO it's better to send to the mailing list. The issues or questions
> you have about indirect descriptors can be helpful to others in the
> future. If there's no confidential information please send to the
> list next time.
> 
> Feel free to forward this reply to the list also.
>
Sure, no problem at all.
> > Hope that's ok by you.  Please see my response inline.
> >
> >     On Fri, Mar 06, 2020 at 06:40:33PM +0000, Anchal Agarwal wrote:
> >     > On Fri, Feb 21, 2020 at 03:24:45PM +0100, Roger Pau Monné wrote:
> >     > > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> >     > > >   blkfront_gather_backend_features(info);
> >     > > >   /* Reset limits changed by blk_mq_update_nr_hw_queues(). */
> >     > > >   blkif_set_queue_limits(info);
> >     > > > @@ -2046,6 +2063,9 @@ static int blkif_recover(struct blkfront_info *info)
> >     > > >           kick_pending_request_queues(rinfo);
> >     > > >   }
> >     > > >
> >     > > > + if (frozen)
> >     > > > +         return 0;
> >     > >
> >     > > I have to admit my memory is fuzzy here, but don't you need to
> >     > > re-queue requests in case the backend has different limits of indirect
> >     > > descriptors per request for example?
> >     > >
> >     > > Or do we expect that the frontend is always going to be resumed on the
> >     > > same backend, and thus features won't change?
> >     > >
> >     > So to understand your question better: AFAIU the maximum number of indirect
> >     > grefs is fixed by the backend, but the frontend can issue requests with any
> >     > number of indirect segments as long as it's less than the number provided by
> >     > the backend. So are you asking whether this maximum (MAX_INDIRECT_SEGMENTS,
> >     > 256 on the backend) can change?
> >
> >     Yes, number of indirect descriptors supported by the backend can
> >     change, because you moved to a different backend, or because the
> >     maximum supported by the backend has changed. It's also possible to
> >     resume on a backend that has no indirect descriptors support at all.
> >
> > AFAIU, the code for requeuing the requests is only for xen suspend/resume. The requests
> > in the queue are the same ones that get added to the queuelist in blkfront_resume.
> > Also, even if the indirect descriptor limit changes on resume, it just needs to be
> > broadcast to the frontend, which could simply mean that a request can process more data.
> 
> Or less data. You could legitimately migrate from a host that has
> indirect descriptors to one without, in which case requests would need
> to be smaller to fit the ring slots.
> 
> > We do set up indirect descriptors on the frontend in blkif_recover before returning,
> > and queue limits are set up accordingly.
> > Am I missing anything here?
> 
> Calling blkif_recover should take care of it AFAICT. As it resets the
> queue limits according to the data announced on xenstore.
> 
> I think I got confused, using blkif_recover should be fine, sorry.
> 
Ok. Thanks for confirming. I will fix up the other suggestions in the patch and send
out a v4.
> >
> >     > > > @@ -2625,6 +2671,62 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
> >     > > >   mutex_unlock(&blkfront_mutex);
> >     > > >  }
> >     > > >
> >     > > > +static int blkfront_freeze(struct xenbus_device *dev)
> >     > > > +{
> >     > > > + unsigned int i;
> >     > > > + struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> >     > > > + struct blkfront_ring_info *rinfo;
> >     > > > + /* This would be reasonable timeout as used in xenbus_dev_shutdown() */
> >     > > > + unsigned int timeout = 5 * HZ;
> >     > > > + int err = 0;
> >     > > > +
> >     > > > + info->connected = BLKIF_STATE_FREEZING;
> >     > > > +
> >     > > > + blk_mq_freeze_queue(info->rq);
> >     > > > + blk_mq_quiesce_queue(info->rq);
> >     > >
> >     > > Don't you need to also drain the queue and make sure it's empty?
> >     > >
> >     > blk_mq_freeze_queue and blk_mq_quiesce_queue should take care of running HW queues synchronously
> >     > and making sure all the ongoing dispatches have finished. Did I understand your question right?
> >
> >     Can you please add some check to that end? (ie: that there are no
> >     pending requests on any queue?)
> >
> > Well, a check to see whether there are any unconsumed responses could be done.
> > I haven't come across a use case in my testing where this failed, but maybe there
> > are other setups that could cause an issue here.
> 
> Thanks! It's mostly to be on the safe side if we expect the queues and
> rings to be fully drained.
> 
ACK.
> Roger.
Thanks,
Anchal

