From: Yu Zhang <yu.c.zhang@linux.intel.com>
To: George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Cc: Kevin Tian <kevin.tian@intel.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>,
	xen-devel@lists.xen.org, Paul Durrant <paul.durrant@citrix.com>,
	zhiyuan.lv@intel.com, JunNakajima <jun.nakajima@intel.com>
Subject: Re: [PATCH v4 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server.
Date: Wed, 22 Jun 2016 18:07:35 +0800
Message-ID: <576A6367.9080402@linux.intel.com>
In-Reply-To: <97d44932-1623-7009-a2af-039f1e0ad584@citrix.com>



On 6/22/2016 5:47 PM, George Dunlap wrote:
> On 22/06/16 10:29, Jan Beulich wrote:
>>>>> On 22.06.16 at 11:16, <george.dunlap@citrix.com> wrote:
>>> On 22/06/16 07:39, Jan Beulich wrote:
>>>>>>> On 21.06.16 at 16:38, <george.dunlap@citrix.com> wrote:
>>>>> On 21/06/16 10:47, Jan Beulich wrote:
>>>>>>>>>> And then - didn't we mean to disable that part of XenGT during
>>>>>>>>>> migration, i.e. temporarily accept the higher performance
>>>>>>>>>> overhead without the p2m_ioreq_server entries? In which case
>>>>>>>>>> flipping everything back to p2m_ram_rw after (completed or
>>>>>>>>>> canceled) migration would be exactly what we want. The (new
>>>>>>>>>> or previous) ioreq server should attach only afterwards, and
>>>>>>>>>> can then freely re-establish any p2m_ioreq_server entries it
>>>>>>>>>> deems necessary.
>>>>>>>>>>
>>>>>>>>> Well, I agree this part of XenGT should be disabled during migration.
>>>>>>>>> But in that case I think it's the device model's job to trigger the
>>>>>>>>> p2m type flipping (i.e. by calling HVMOP_set_mem_type).
>>>>>>>> I agree - this would seem to be the simpler model here, even though (as
>>>>>>>> George validly says) the more consistent model would be for the
>>>>>>>> hypervisor to do the cleanup. Such cleanup would imo be reasonable
>>>>>>>> only if there were an easy way for the hypervisor to enumerate all
>>>>>>>> p2m_ioreq_server pages.
>>>>>>> Well, for me, the "easy way" means we should avoid traversing the whole
>>>>>>> EPT paging structure all at once, right?
>>>>>> Yes.
>>>>> Does calling p2m_change_entry_type_global() not satisfy this requirement?
>>>> Not really - that addresses the "low overhead" aspect, but not the
>>>> "enumerate all such entries" one.
>>> I'm sorry, I think I'm missing something here.  What do we need the
>>> enumeration for?
>> We'd need that if we were to do the cleanup in the hypervisor (as
>> we can't rely on all p2m entry re-calculations having happened by
>> the time a new ioreq server registers for the type).
> So you're afraid of this sequence of events?
> 1) Server A is de-registered, triggering an ioreq_server -> ram_rw type change
> 2) gfn N is marked as misconfigured
> 3) Server B registers and marks gfn N as ioreq_server
> 4) When N is accessed, the misconfiguration is resolved incorrectly to
> ram_rw
>
> But that can't happen, because misconfigured entries are resolved before
> setting a p2m entry; so at step 3, gfn N will first be set to
> (non-misconfigured) ram_rw, then changed to (non-misconfigured)
> ioreq_server.
>
> Or is there another sequence of events that I'm missing?

Thanks for your reply, George. :)
If no log dirty is triggered during this process, your sequence is correct.
However, if log dirty is triggered, we'll run into problems. I have described
this in my previous mails:

http://lists.xenproject.org/archives/html/xen-devel/2016-06/msg02426.html (Jun 20)

and

http://lists.xenproject.org/archives/html/xen-devel/2016-06/msg02575.html (Jun 21)
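
To be clear about why your ordering works in the no-log-dirty case: any
pending type re-calculation for a gfn is carried out before a new entry is
written. A simplified sketch of that idea (written against Xen-internal
types; the real logic lives in xen/arch/x86/mm/p2m-ept.c, and
write_ept_entry() here is a hypothetical stand-in, with locking and error
paths omitted):

    /* Sketch of the ordering George describes above, not the actual source. */
    static int set_p2m_entry_sketch(struct p2m_domain *p2m, unsigned long gfn,
                                    mfn_t mfn, p2m_type_t new_type)
    {
        int rc;

        /* 1) Resolve any pending (misconfigured) re-calculation first, so
         *    gfn N settles to its old effective type (e.g. ram_rw)... */
        rc = resolve_misconfig(p2m, gfn);
        if ( rc < 0 )
            return rc;

        /* 2) ...and only then write the new, non-misconfigured entry
         *    (e.g. ioreq_server for the newly attached server B). */
        return write_ept_entry(p2m, gfn, mfn, new_type);
    }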

>>>>> Well, I had in principle already agreed to letting this be the interface
>>>>> on the previous round of patches; we're having this discussion because
>>>>> you (Jan) asked about what happens if an ioreq server is de-registered
>>>>> while there are still outstanding p2m types. :-)
>>>> Indeed. Yet so far I understood you didn't like de-registration both
>>>> not doing the cleanup itself and failing if there are outstanding
>>>> entries.
>>> No, I think regarding deregistering while there were outstanding
>>> entries, I said the opposite -- that there's no point in failing the
>>> de-registration, because a poorly-behaved ioreq server may just ignore
>>> the error code and exit anyway.  Although, thinking on it again, I
>>> suppose that an error code would allow a buggy ioreq server to know that
>>> it had screwed up somewhere.
>> Not exactly, I think: The failed de-registration ought to lead to failure
>> of an attempt to register another ioreq server (or the same one again),
>> which should make the issue quickly noticeable.
> Hmm... yes, the more I think about it, the more it seems like allowing
> p2m entries from a previous ioreq server to be already set when there's
> a new ioreq server registration is digging a hole for future people to
> fall into.  Paul and Yu Zhang are the most likely people to fall into
> that hole, so I haven't been arguing strenuously against it so far, but
> given that I'm not yet convinced that fixing it is that difficult, at

Taking live migration into consideration, I admit I still have not found
any lightweight solution to fix this. :)
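
For completeness, the device-model-side flipping mentioned earlier in this
thread (reverting the write-protected pages to ram_rw via HVMOP_set_mem_type
before migration starts) could look roughly like the sketch below. It assumes
libxenctrl's xc_hvm_set_mem_type(); the gfn bookkeeping (gfns[], nr_gfns) is
hypothetical, since a real device model already tracks which pages it marked:

    #include <stddef.h>
    #include <stdint.h>
    #include <xenctrl.h>

    /* Sketch: before migration begins, the ioreq server flips its
     * p2m_ioreq_server pages back to plain RAM, one page at a time. */
    static int revert_ioreq_server_pages(domid_t domid,
                                         const uint64_t *gfns, size_t nr_gfns)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        size_t i;
        int rc = 0;

        if ( !xch )
            return -1;

        for ( i = 0; i < nr_gfns && rc == 0; i++ )
            /* HVMOP_set_mem_type under the hood: ioreq_server -> ram_rw. */
            rc = xc_hvm_set_mem_type(xch, domid, HVMMEM_ram_rw, gfns[i], 1);

        xc_interface_close(xch);
        return rc;
    }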

Thanks
Yu
