From: Yu Zhang <yu.c.zhang@linux.intel.com>
To: Jan Beulich <JBeulich@suse.com>,
	George Dunlap <george.dunlap@citrix.com>
Cc: Kevin Tian <kevin.tian@intel.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>,
	xen-devel@lists.xen.org, Paul Durrant <paul.durrant@citrix.com>,
	zhiyuan.lv@intel.com, Jun Nakajima <jun.nakajima@intel.com>
Subject: Re: [PATCH v4 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server.
Date: Wed, 22 Jun 2016 16:38:56 +0800	[thread overview]
Message-ID: <576A4EA0.9020403@linux.intel.com> (raw)
In-Reply-To: <576A4ED702000078000F7733@prv-mh.provo.novell.com>



On 6/22/2016 2:39 PM, Jan Beulich wrote:
>>>> On 21.06.16 at 16:38, <george.dunlap@citrix.com> wrote:
>> On 21/06/16 10:47, Jan Beulich wrote:
>>>>>>> And then - didn't we mean to disable that part of XenGT during
>>>>>>> migration, i.e. temporarily accept the higher performance
>>>>>>> overhead without the p2m_ioreq_server entries? In which case
>>>>>>> flipping everything back to p2m_ram_rw after (completed or
>>>>>>> canceled) migration would be exactly what we want. The (new
>>>>>>> or previous) ioreq server should attach only afterwards, and
>>>>>>> can then freely re-establish any p2m_ioreq_server entries it
>>>>>>> deems necessary.
>>>>>>>
>>>>>> Well, I agree this part of XenGT should be disabled during migration.
>>>>>> But in that case I think it's the device model's job to trigger the
>>>>>> p2m type flipping (i.e. by calling HVMOP_set_mem_type).
>>>>> I agree - this would seem to be the simpler model here, even though
>>>>> (as George validly says) the more consistent model would be for the
>>>>> hypervisor to do the cleanup. Such cleanup would imo be reasonable
>>>>> only if there were an easy way for the hypervisor to enumerate all
>>>>> p2m_ioreq_server pages.
>>>> Well, for me, the "easy way" means we should avoid traversing the
>>>> whole EPT paging structure all at once, right?
>>> Yes.
>> Does calling p2m_change_entry_type_global() not satisfy this requirement?
> Not really - that addresses the "low overhead" aspect, but not the
> "enumerate all such entries" one.
>
>>>> I have not figured out any clean solution on the hypervisor side;
>>>> that's one reason I'd like to leave this job to the device model side
>>>> (another reason is that I do think the device model should take this
>>>> responsibility).
>>> Let's see if we can get George to agree.
>> Well, I had in principle already agreed to letting this be the interface
>> on the previous round of patches; we're having this discussion because
>> you (Jan) asked about what happens if an ioreq server is de-registered
>> while there are still outstanding p2m types. :-)
> Indeed. Yet so far I understood you didn't want de-registration to
> both skip the cleanup itself and fail if there are outstanding
> entries.
>
>> I do think having Xen change the type makes the most sense, but if
>> you're happy to leave that up to the ioreq server, I'm OK with things
>> being done that way as well.  I think we can probably change it later if
>> we want.
> Yes, since the ioreq server interfaces will all be unstable ones, that
> shouldn't be a problem - albeit that's only the theory. With the call
> coming from the device model, we'd need to make sure to put all
> the logic (if any) for dealing with the hypervisor implementation
> details into libxc, so the caller of the libxc interface won't need
> to change. I learned while putting together the hvmctl series that this
> wasn't done cleanly enough for one of the existing interfaces (see
> patch 10 of that series).
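
Regarding the p2m_change_entry_type_global() exchange above - here is a
minimal sketch of what a hypervisor-side cleanup on ioreq server detach
could look like, assuming p2m_ioreq_server were accepted as a changeable
type by that function (it is not in this version of the series). The call
is cheap because affected entries are only recalculated lazily on the next
EPT misconfiguration fault - hence the "low overhead", but also no
per-entry enumeration:

#include <asm/p2m.h>

/*
 * Sketch only, not part of this patch version: flip all outstanding
 * p2m_ioreq_server entries back to plain RAM when the ioreq server
 * detaches.  The lazy global type change never walks the EPT tables
 * eagerly; affected entries are fixed up on the next
 * misconfiguration fault.
 */
static void ioreq_server_cleanup_p2m(struct domain *d)
{
    p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
}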

Thanks, Jan & George. So I guess you have both accepted that we can leave
the cleanup to the device model side, right?
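
For illustration, a minimal sketch of what that device-model-side cleanup
could look like, assuming the emulator keeps its own list of the gfns it
has mapped (tracked_gfns is hypothetical bookkeeping);
xc_hvm_set_mem_type() is the existing libxc wrapper around
HVMOP_set_mem_type:

#include <stdint.h>
#include <stddef.h>
#include <xenctrl.h>

/*
 * Sketch only: before detaching (or when migration begins), the device
 * model flips every gfn it had set to p2m_ioreq_server back to
 * ordinary RAM.  tracked_gfns is hypothetical bookkeeping the emulator
 * would need to keep for exactly this purpose.
 */
static int relinquish_ioreq_server_pages(xc_interface *xch, domid_t domid,
                                         const uint64_t *tracked_gfns,
                                         size_t nr_gfns)
{
    for ( size_t i = 0; i < nr_gfns; i++ )
    {
        /* Contiguous runs could be batched via the 'nr' argument. */
        int rc = xc_hvm_set_mem_type(xch, domid, HVMMEM_ram_rw,
                                     tracked_gfns[i], 1);
        if ( rc )
            return rc;
    }

    return 0;
}

Flipping the types back before detaching also matches the migration case
discussed above, where the new or previous ioreq server re-establishes
its p2m_ioreq_server entries only after it attaches again.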

B.R.
Yu



Thread overview: 68+ messages
2016-05-19  9:05 [PATCH v4 0/3] x86/ioreq server: Introduce HVMMEM_ioreq_server mem type Yu Zhang
2016-05-19  9:05 ` [PATCH v4 1/3] x86/ioreq server: Rename p2m_mmio_write_dm to p2m_ioreq_server Yu Zhang
2016-06-14 10:04   ` Jan Beulich
2016-06-14 13:14     ` George Dunlap
2016-06-15 10:51     ` Yu Zhang
2016-05-19  9:05 ` [PATCH v4 2/3] x86/ioreq server: Add new functions to get/set memory types Yu Zhang
2016-05-19  9:05 ` [PATCH v4 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server Yu Zhang
2016-06-14 10:45   ` Jan Beulich
2016-06-14 13:13     ` George Dunlap
2016-06-14 13:31       ` Jan Beulich
2016-06-15  9:50         ` George Dunlap
2016-06-15 10:21           ` Jan Beulich
2016-06-15 11:28             ` George Dunlap
2016-06-16  9:30             ` Yu Zhang
2016-06-16  9:55               ` Jan Beulich
2016-06-17 10:17                 ` George Dunlap
2016-06-20  9:03                   ` Yu Zhang
2016-06-20 10:10                     ` George Dunlap
2016-06-20 10:25                       ` Jan Beulich
2016-06-20 10:32                         ` George Dunlap
2016-06-20 10:55                           ` Jan Beulich
2016-06-20 11:28                             ` Yu Zhang
2016-06-20 13:13                               ` George Dunlap
2016-06-21  7:42                                 ` Yu Zhang
2016-06-20 10:30                       ` Yu Zhang
2016-06-20 10:43                         ` George Dunlap
2016-06-20 10:45                         ` Jan Beulich
2016-06-20 11:06                           ` Yu Zhang
2016-06-20 11:20                             ` Jan Beulich
2016-06-20 12:06                               ` Yu Zhang
2016-06-20 13:38                                 ` Jan Beulich
2016-06-21  7:45                                   ` Yu Zhang
2016-06-21  8:22                                     ` Jan Beulich
2016-06-21  9:16                                       ` Yu Zhang
2016-06-21  9:47                                         ` Jan Beulich
2016-06-21 10:00                                           ` Yu Zhang
2016-06-21 14:38                                           ` George Dunlap
2016-06-22  6:39                                             ` Jan Beulich
2016-06-22  8:38                                               ` Yu Zhang [this message]
2016-06-22  9:11                                                 ` Jan Beulich
2016-06-22  9:16                                               ` George Dunlap
2016-06-22  9:29                                                 ` Jan Beulich
2016-06-22  9:47                                                   ` George Dunlap
2016-06-22 10:07                                                     ` Yu Zhang
2016-06-22 11:33                                                       ` George Dunlap
2016-06-23  7:37                                                         ` Yu Zhang
2016-06-23 10:33                                                           ` George Dunlap
2016-06-24  4:16                                                             ` Yu Zhang
2016-06-24  6:12                                                               ` Jan Beulich
2016-06-24  7:12                                                                 ` Yu Zhang
2016-06-24  8:01                                                                   ` Jan Beulich
2016-06-24  9:57                                                                     ` Yu Zhang
2016-06-24 10:27                                                                       ` Jan Beulich
2016-06-22 10:10                                                     ` Jan Beulich
2016-06-22 10:15                                                       ` George Dunlap
2016-06-22 11:50                                                         ` Jan Beulich
2016-06-15 10:52     ` Yu Zhang
2016-06-15 12:26       ` Jan Beulich
2016-06-16  9:32         ` Yu Zhang
2016-06-16 10:02           ` Jan Beulich
2016-06-16 11:18             ` Yu Zhang
2016-06-16 12:43               ` Jan Beulich
2016-06-20  9:05             ` Yu Zhang
2016-06-14 13:14   ` George Dunlap
2016-05-27  7:52 ` [PATCH v4 0/3] x86/ioreq server: Introduce HVMMEM_ioreq_server mem type Zhang, Yu C
2016-05-27 10:00   ` Jan Beulich
2016-05-27  9:51     ` Zhang, Yu C
2016-05-27 10:02     ` George Dunlap
