From: George Dunlap <george.dunlap@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Kevin Tian <kevin.tian@intel.com>,
George Dunlap <george.dunlap@eu.citrix.com>,
Andrew Cooper <andrew.cooper3@citrix.com>,
Tim Deegan <tim@xen.org>,
xen-devel@lists.xen.org, Paul Durrant <paul.durrant@citrix.com>,
Yu Zhang <yu.c.zhang@linux.intel.com>,
zhiyuan.lv@intel.com, JunNakajima <jun.nakajima@intel.com>
Subject: Re: [PATCH v4 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server.
Date: Wed, 22 Jun 2016 11:15:09 +0100 [thread overview]
Message-ID: <76e213ae-551b-7f88-0ec5-bfecdc802d79@citrix.com> (raw)
In-Reply-To: <576A803F02000078000F78D9@prv-mh.provo.novell.com>
On 22/06/16 11:10, Jan Beulich wrote:
>>>> On 22.06.16 at 11:47, <george.dunlap@citrix.com> wrote:
>> On 22/06/16 10:29, Jan Beulich wrote:
>>>>>> On 22.06.16 at 11:16, <george.dunlap@citrix.com> wrote:
>>>> On 22/06/16 07:39, Jan Beulich wrote:
>>>>>>>> On 21.06.16 at 16:38, <george.dunlap@citrix.com> wrote:
>>>>>> On 21/06/16 10:47, Jan Beulich wrote:
>>>>>>>>>>> And then - didn't we mean to disable that part of XenGT during
>>>>>>>>>>> migration, i.e. temporarily accept the higher performance
>>>>>>>>>>> overhead without the p2m_ioreq_server entries? In which case
>>>>>>>>>>> flipping everything back to p2m_ram_rw after (completed or
>>>>>>>>>>> canceled) migration would be exactly what we want. The (new
>>>>>>>>>>> or previous) ioreq server should attach only afterwards, and
>>>>>>>>>>> can then freely re-establish any p2m_ioreq_server entries it
>>>>>>>>>>> deems necessary.
>>>>>>>>>>>
>>>>>>>>>> Well, I agree this part of XenGT should be disabled during migration.
>>>>>>>>>> But in such a case I think it's the device model's job to trigger the
>>>>>>>>>> p2m type flipping (i.e. by calling HVMOP_set_mem_type).
>>>>>>>>> I agree - this would seem to be the simpler model here, even though (as
>>>>>>>>> George validly says) the more consistent model would be for the
>>>>>>>>> hypervisor to do the cleanup. Such cleanup would imo be reasonable
>>>>>>>>> only if there was an easy way for the hypervisor to enumerate all
>>>>>>>>> p2m_ioreq_server pages.
>>>>>>>>
>>>>>>>> Well, for me, the "easy way" means we should avoid traversing the whole
>>>>>>>> EPT paging structure all at once, right?
>>>>>>>
>>>>>>> Yes.
>>>>>>
>>>>>> Does calling p2m_change_entry_type_global() not satisfy this requirement?
>>>>>
>>>>> Not really - that addresses the "low overhead" aspect, but not the
>>>>> "enumerate all such entries" one.
>>>>
>>>> I'm sorry, I think I'm missing something here. What do we need the
>>>> enumeration for?
>>>
>>> We'd need that if we were to do the cleanup in the hypervisor (as
>>> we can't rely on all p2m entry re-calculation to have happened by
>>> the time a new ioreq server registers for the type).
>>
>> So you're afraid of this sequence of events?
>> 1) Server A de-registered, triggering an ioreq_server -> ram_rw type change
>> 2) gfn N is marked as misconfigured
>> 3) Server B registers and marks gfn N as ioreq_server
>> 4) When N is accessed, the misconfiguration is resolved incorrectly to
>> ram_rw
>>
>> But that can't happen, because misconfigured entries are resolved before
>> setting a p2m entry; so at step 3, gfn N will be first set to
>> (non-misconfigured) ram_rw, then changed to (non-misconfigured)
>> ioreq_server.
>>
>> Or is there another sequence of events that I'm missing?
>
> 1) Server A marks GFN Y as ioreq_server
> 2) Server A de-registered, triggering an ioreq_server -> ram_rw type
> change
> 3) Server B registers and gfn Y still didn't become ram_rw again (as
> the misconfiguration didn't trickle down the tree far enough)
There are some missing steps here. GFN Y is still misconfigured at that
point, right? So what will happen when the misconfiguration is resolved?
Will it not become ram_rw? If not, what would it change to, and why?
-George