From: Jan Beulich <jbeulich@suse.com>
To: paul@xen.org
Cc: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'Wei Liu'" <wl@xen.org>,
	"'Roger Pau Monné'" <roger.pau@citrix.com>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Ping: [PATCH v2 2/2] IOREQ: refine when to send mapcache invalidation request
Date: Wed, 17 Feb 2021 09:30:24 +0100
Message-ID: <006bd542-e213-a6ad-7812-e91fed7093a3@suse.com>
In-Reply-To: <ad73c330-4cbd-0ee4-fee7-2453dab00eef@suse.com>

Paul (or others), thoughts?

On 04.02.2021 12:24, Jan Beulich wrote:
> On 04.02.2021 10:26, Paul Durrant wrote:
>>> From: Jan Beulich <jbeulich@suse.com>
>>> Sent: 02 February 2021 15:15
>>>
>>> XENMEM_decrease_reservation isn't the only means by which pages can get
>>> removed from a guest, yet all removals ought to be signaled to qemu. Move
>>> the setting of the flag into the central p2m_remove_page(), which underlies
>>> all respective hypercalls, as well as into a few similar places, mainly in
>>> PoD code.
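
As a rough illustration of the flag setting being described here (covering
also the remote-domain case of the next paragraph): this is a sketch only,
the helper name and the mapcache_invalidate field are invented for the
example, while for_each_vcpu(), current and the structure types are existing
Xen constructs.

/*
 * Hypothetical hook invoked from the page-removal paths, flagging that
 * ioreq server mapcaches may now hold stale mappings of the domain.
 */
static void flag_mapcache_invalidate(struct domain *d)
{
    if ( d == current->domain )
        /* Local operation: the invoking vCPU sends the request itself. */
        current->mapcache_invalidate = true;         /* invented field */
    else
    {
        struct vcpu *v;

        /*
         * Remote operation: the target domain is assumed paused, so mark
         * all of its vCPUs and let the request go out when they resume.
         */
        for_each_vcpu ( d, v )
            v->mapcache_invalidate = true;           /* invented field */
    }
}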
>>>
>>> Additionally there's no point in sending the request to the local domain's
>>> ioreq servers when the domain acted upon is a different one; it is the
>>> latter domain's ioreq server mapcaches that need invalidating. We assume
>>> that domain to be paused at the point the operation takes place, so in
>>> this case the request is sent from the hvm_do_resume() path, which calls
>>> handle_hvm_io_completion() as one of its first steps.
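
A minimal sketch of the consuming side on this path, using the same invented
mapcache_invalidate field as above; handle_hvm_io_completion() and
send_invalidate_req() are existing helpers, and the real placement of the
check in the patch may differ.

void hvm_do_resume(struct vcpu *v)
{
    /* Complete (or keep waiting for) any in-flight ioreq first. */
    if ( !handle_hvm_io_completion(v) )
        return;

    if ( v->mapcache_invalidate )                    /* invented field */
    {
        v->mapcache_invalidate = false;
        /*
         * send_invalidate_req() broadcasts an invalidation ioreq to all
         * registered ioreq servers; the vCPU must not re-enter guest
         * context until they have all acknowledged it, which the normal
         * ioreq completion machinery takes care of.
         */
        send_invalidate_req();
    }

    /* ... remainder of hvm_do_resume() ... */
}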
>>>
>>> Even without the remote-operation aspect, a single domain-wide flag
>>> doesn't suffice: guests may e.g. decrease-reservation on multiple vCPU-s
>>> in parallel. Each of them needs to issue an invalidation request in due
>>> course, in particular because exiting to guest context should not happen
>>> before the request was actually seen by (all) the emulator(s).
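
For concreteness, the kind of per-vCPU state this argues for might look as
follows (shown directly in struct vcpu purely for brevity; the field name and
its home are assumptions of these sketches, not taken from the patch).

struct vcpu {
    /* ... existing fields ... */

    /*
     * One flag per vCPU: several vCPUs can decrease-reservation in
     * parallel, and each must independently refrain from re-entering
     * guest context until its own invalidation request has been seen
     * by all emulators.
     */
    bool mapcache_invalidate;
};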
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> ---
>>> v2: Preemption related adjustment split off. Make flag per-vCPU. More
>>>     places to set the flag. Also handle acting on a remote domain.
>>>     Re-base.
>>
>> I'm wondering if a per-vcpu flag is actually overkill. We just need
>> to make sure that we don't miss sending an invalidation where
>> multiple vcpus are in play. The mapcache in the emulator is global,
>> so issuing an invalidate for every vcpu is going to cause an
>> unnecessary storm of ioreqs, won't it?
> 
> The only time a truly unnecessary storm would occur is when mapcache
> invalidation gets triggered by a remote domain for an already running
> guest. This should be a pretty rare event, so I think the storm in
> this case ought to be tolerable.
> 
>> Could we get away with a per-domain atomic counter?
> 
> Possible, but quite involved afaict: The potential storm above
> is the price to pay for the simplicity of the model. It is
> important to note that while we don't need all of the vCPU-s
> to send these ioreqs, we need all of them to wait for the
> request(s) to be acked. And this waiting is what we get "for
> free" using the approach here, whereas we'd need to introduce
> new logic for this with an atomic counter (afaict at least).
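
To make the trade-off concrete, here is a very rough sketch of where the
extra logic for a counter-based scheme would be needed; all field names are
invented for illustration, and this is not what the patch implements.

static void resume_path_with_counter(struct domain *d, struct vcpu *v)
{
    unsigned int gen = atomic_read(&d->mapcache_inval_gen);  /* invented field */

    if ( v->mapcache_inval_seen != gen )                      /* invented field */
    {
        v->mapcache_inval_seen = gen;
        send_invalidate_req();        /* broadcast to all ioreq servers */
    }

    /*
     * Missing piece: vCPUs which did NOT send the request above must still
     * be prevented from re-entering guest context until every ioreq server
     * has acknowledged it.  That waiting does not fall out of the existing
     * ioreq completion path and would have to be written anew, which is
     * exactly what the per-vCPU flag avoids.
     */
}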
> 
> Jan
> 




Thread overview: 10+ messages
2021-02-02 15:13 [PATCH v2 0/2] IOREQ: mapcache invalidation request sending corrections Jan Beulich
2021-02-02 15:14 ` [PATCH v2 1/2] IOREQ: fix waiting for broadcast completion Jan Beulich
2021-02-04  8:45   ` Paul Durrant
2021-02-02 15:14 ` [PATCH v2 2/2] IOREQ: refine when to send mapcache invalidation request Jan Beulich
2021-02-04  9:26   ` Paul Durrant
2021-02-04 11:24     ` Jan Beulich
2021-02-17  8:30       ` Jan Beulich [this message]
2021-02-17  9:41         ` Ping: " Paul Durrant
2021-02-17 10:02   ` Jan Beulich
2021-02-17 10:58     ` Julien Grall
