From: Yu Zhang <yu.c.zhang@linux.intel.com>
To: Jan Beulich <JBeulich@suse.com>,
George Dunlap <george.dunlap@citrix.com>
Cc: Kevin Tian <kevin.tian@intel.com>,
George Dunlap <george.dunlap@eu.citrix.com>,
Andrew Cooper <andrew.cooper3@citrix.com>,
Tim Deegan <tim@xen.org>,
xen-devel@lists.xen.org, Paul Durrant <paul.durrant@citrix.com>,
zhiyuan.lv@intel.com, Jun Nakajima <jun.nakajima@intel.com>
Subject: Re: [PATCH v4 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server.
Date: Mon, 20 Jun 2016 19:28:45 +0800 [thread overview]
Message-ID: <5767D36D.6010503@linux.intel.com> (raw)
In-Reply-To: <5767E7A402000078000F69C0@prv-mh.provo.novell.com>
On 6/20/2016 6:55 PM, Jan Beulich wrote:
>>>> On 20.06.16 at 12:32, <george.dunlap@citrix.com> wrote:
>> On 20/06/16 11:25, Jan Beulich wrote:
>>>>>> On 20.06.16 at 12:10, <george.dunlap@citrix.com> wrote:
>>>> On 20/06/16 10:03, Yu Zhang wrote:
>>>>> However, there are conflicts if we take live migration into
>>>>> account, i.e. if live migration is triggered by the user (perhaps
>>>>> unintentionally) during the GPU emulation process,
>>>>> resolve_misconfig() will set all the outstanding p2m_ioreq_server
>>>>> entries to p2m_log_dirty, which is not what we expect, because our
>>>>> intention is to reset only the outdated p2m_ioreq_server entries
>>>>> back to p2m_ram_rw.
>>>> Well the real problem in the situation you describe is that a second
>>>> "lazy" p2m_change_entry_type_global() operation is starting before the
>>>> first one is finished. All that's needed to resolve the situation is
>>>> that if you get a second p2m_change_entry_type_global() operation while
>>>> there are outstanding entries from the first type change, you have to
>>>> finish the first operation (i.e., go "eagerly" find all the
>>>> misconfigured entries and change them to the new type) before starting
>>>> the second one.
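(Just to make the scheme above concrete, here is a toy sketch of "finish
the first global change eagerly before starting the second" -- this is an
invented model with made-up names, not actual Xen code, and it glosses
over locking and the real page-table walk:)

```c
/* Toy model (not Xen code) of the scheme described above: a global type
 * change only marks entries as pending recalculation; if a second global
 * change starts while entries from the first are still outstanding, the
 * first change is finished eagerly before the second begins. */
#include <assert.h>
#include <stddef.h>

enum toy_p2m_type { RAM_RW, IOREQ_SERVER, LOG_DIRTY };

#define NR_ENTRIES 8

struct toy_p2m {
    enum toy_p2m_type type[NR_ENTRIES];
    int pending[NR_ENTRIES];       /* entry still carries the old type */
    enum toy_p2m_type from, to;    /* in-flight global change, if any  */
    int change_in_flight;
};

/* Lazily resolve one entry, as resolve_misconfig() would on a fault. */
static void toy_resolve(struct toy_p2m *p2m, size_t i)
{
    if (p2m->pending[i] && p2m->type[i] == p2m->from)
        p2m->type[i] = p2m->to;
    p2m->pending[i] = 0;
}

/* Eagerly finish an in-flight change: walk all still-pending entries. */
static void toy_finish_change(struct toy_p2m *p2m)
{
    for (size_t i = 0; i < NR_ENTRIES; i++)
        toy_resolve(p2m, i);
    p2m->change_in_flight = 0;
}

/* Start a lazy global type change, flushing any previous one first. */
static void toy_change_type_global(struct toy_p2m *p2m,
                                   enum toy_p2m_type from,
                                   enum toy_p2m_type to)
{
    if (p2m->change_in_flight)
        toy_finish_change(p2m);    /* the "eager" step under discussion */
    p2m->from = from;
    p2m->to = to;
    p2m->change_in_flight = 1;
    for (size_t i = 0; i < NR_ENTRIES; i++)
        p2m->pending[i] = 1;       /* mark, but do not rewrite, entries */
}
```

In this toy, an entry that was p2m_ioreq_server when the first change
(ioreq_server -> ram_rw) started still ends up logdirty after the second
change (ram_rw -> logdirty), which is the same result as doing both
operations synchronously.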
>>> Eager resolution of outstanding entries can't be the solution here, I
>>> think, as that would - afaict - be as time consuming as doing the type
>>> change synchronously right away.
>> But isn't it the case that p2m_change_entry_type_global() is only
>> implemented for EPT?
> Also for NPT, we're using a similar model in p2m-pt.c (see e.g. the
> uses of RECALC_FLAGS - we're utilizing the fact that _PAGE_USER being
> set unconditionally leads to NPF). And since shadow sits on top of
> p2m-pt, that should be covered too.
>
>> So we've been doing the slow method for both
>> shadow and AMD HAP (whatever it's called these days) since the
>> beginning. And in any case we'd only have to go for the "slow" case in
>> circumstances where the 2nd type change happened before the first one
>> had completed.
> We can't even tell when one has fully finished.
I agree, we have no idea whether the previous type change is completely
done. Besides, IIUC, p2m_change_entry_type_global() is not actually a
slow method, because it does not invalidate all the paging structure
entries at once; it only rewrites the upper-level ones, and the rest are
updated lazily in resolve_misconfig().
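(To illustrate why the lazy scheme is cheap -- again a made-up two-level
toy, not Xen code: the global change writes only the few upper-level
entries, and the many leaves are fixed up on demand, the moral
equivalent of resolve_misconfig():)

```c
/* Hypothetical two-level toy (not Xen code): a lazy global type change
 * flags only the upper-level entries as needing recalculation; leaf
 * entries are rewritten later, on first access under a flagged entry. */
#include <assert.h>
#include <stddef.h>

enum toy_type { T_RAM_RW, T_LOG_DIRTY };

#define TOY_NR_L2      4   /* upper-level entries           */
#define TOY_L1_PER_L2 64   /* leaves under each upper entry */

struct toy_tbl {
    int recalc[TOY_NR_L2];                        /* "misconfigured" flag */
    enum toy_type leaf[TOY_NR_L2][TOY_L1_PER_L2];
    enum toy_type from, to;
};

/* The lazy global change: touches only TOY_NR_L2 entries, not all
 * TOY_NR_L2 * TOY_L1_PER_L2 leaves.  Returns the number of writes. */
static size_t toy_change_global(struct toy_tbl *t,
                                enum toy_type from, enum toy_type to)
{
    t->from = from;
    t->to = to;
    for (size_t i = 0; i < TOY_NR_L2; i++)
        t->recalc[i] = 1;
    return TOY_NR_L2;
}

/* On first access under a flagged upper entry, fix up its leaves on
 * demand, then return the (now up-to-date) leaf type. */
static enum toy_type toy_access(struct toy_tbl *t, size_t l2, size_t l1)
{
    if (t->recalc[l2]) {
        for (size_t i = 0; i < TOY_L1_PER_L2; i++)
            if (t->leaf[l2][i] == t->from)
                t->leaf[l2][i] = t->to;
        t->recalc[l2] = 0;
    }
    return t->leaf[l2][l1];
}
```

So the up-front cost of the global change is proportional to the number
of upper-level entries, and the per-leaf work is deferred to the fault
path.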
>
>>> p2m_change_entry_type_global(),
>>> at least right now, can be invoked freely without prior type changes
>>> having fully propagated. The logic resolving mis-configured entries
>>> simply needs to be able to know the correct new type. I can't see
>>> why this logic shouldn't therefore be extensible to this new type
>>> which can be in flight - after we ought to have a way to know what
>>> type a particular GFN is supposed to be?
>> Actually, come to think of it -- since the first type change is meant to
>> convert all ioreq_server -> ram_rw, and the second is meant to change
>> all ram_rw -> logdirty, is there any case in which we *wouldn't* want
>> the resulting type to be logdirty? Isn't that exactly what we'd get if
>> we'd done both operations synchronously?
> I think Yu's concern is for pages which did not get converted back?
> Or on the restore side? Otherwise - "yes" to both of your questions.
>
Yes. My concern is that resolve_misconfig() cannot easily be extended to
differentiate between the p2m_ioreq_server entries which need to be
reset and the normal p2m_ioreq_server entries. So my implementation in
the 2nd version would run into the dilemma I described once live
migration is taken into account.
Thanks
Yu
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel