From: "Yu, Zhang" <yu.c.zhang@linux.intel.com>
To: Paul Durrant <Paul.Durrant@citrix.com>,
	George Dunlap <George.Dunlap@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Cc: Kevin Tian <kevin.tian@intel.com>,
	Jan Beulich <JBeulich@suse.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"Tim (Xen.org)" <tim@xen.org>,
	"Lv, Zhiyuan" <zhiyuan.lv@intel.com>,
	"jun.nakajima@intel.com" <jun.nakajima@intel.com>
Subject: Re: [PATCH v2 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server
Date: Tue, 19 Apr 2016 19:59:26 +0800	[thread overview]
Message-ID: <57161D9E.6030509@linux.intel.com> (raw)
In-Reply-To: <716680ad3f5441ecba9f39b6b0d1bebc@AMSPEX02CL03.citrite.net>



On 4/19/2016 7:47 PM, Paul Durrant wrote:
>> -----Original Message-----
>> From: Yu, Zhang [mailto:yu.c.zhang@linux.intel.com]
>> Sent: 19 April 2016 12:18
>> To: Paul Durrant; George Dunlap; xen-devel@lists.xen.org
>> Cc: Kevin Tian; Jan Beulich; Andrew Cooper; Tim (Xen.org); Lv, Zhiyuan;
>> jun.nakajima@intel.com
>> Subject: Re: [Xen-devel] [PATCH v2 3/3] x86/ioreq server: Add HVMOP to
>> map guest ram with p2m_ioreq_server to an ioreq server
>>
>>
>>
>> On 4/19/2016 6:05 PM, Paul Durrant wrote:
>>>> -----Original Message-----
>>>> From: Yu, Zhang [mailto:yu.c.zhang@linux.intel.com]
>>>> Sent: 19 April 2016 10:44
>>>> To: Paul Durrant; George Dunlap; xen-devel@lists.xen.org
>>>> Cc: Kevin Tian; Jan Beulich; Andrew Cooper; Tim (Xen.org); Lv, Zhiyuan;
>>>> jun.nakajima@intel.com
>>>> Subject: Re: [Xen-devel] [PATCH v2 3/3] x86/ioreq server: Add HVMOP to
>>>> map guest ram with p2m_ioreq_server to an ioreq server
>>>>
>>>>
>>>>
>>>> On 4/19/2016 5:21 PM, Paul Durrant wrote:
>>>>>> -----Original Message-----
>>>>> [snip]
>>>>>>>> Do any other maintainers have any suggestions?
>>>>>>>
>>>>>>> Note that it is a requirement that an ioreq server be disabled before
>>>>>>> VM suspend. That means ioreq server pages essentially have to go back
>>>>>>> to ram_rw semantics.
>>>>>>>
>>>>>>>       Paul
>>>>>>>
>>>>>>
>>>>>> OK. So it should be the hypervisor's responsibility to do the resetting.
>>>>>> Now we probably have 2 choices:
>>>>>> 1> we reset the p2m type synchronously when the ioreq server unmapping
>>>>>> happens, instead of deferring it to the misconfig handling part. This
>>>>>> means a performance impact, because we have to traverse the p2m table.
>>>>>>
>>>>>
>>>>> Do we need to reset at all? The p2m type does need to be transferred; it
>>>>> will just be unclaimed on the far end (i.e. the pages are treated as r/w
>>>>> ram) until the emulator starts up there. If that cannot be done without
>>>>> creating yet another p2m type to handle logdirty (which seems a
>>>>> suboptimal way of dealing with it) then I think migration needs to be
>>>>> disallowed on any domain that contains any ioreq_server type pages at
>>>>> this stage.
>>>>>
>>>>>      Paul
>>>>>
>>>>
>>>> Yes, we need to: either the device model or the hypervisor should
>>>> guarantee that there are no p2m_ioreq_server pages left after an ioreq
>>>> server is unmapped from this type (which is write-protected in such a
>>>> scenario); otherwise their emulation might be forwarded to some other,
>>>> unexpected device model which claims the p2m_ioreq_server type later.
>>>
>>> That should be for the device model to guarantee IMO. If the 'wrong'
>>> emulator claims the ioreq server type then I don't think that's Xen's
>>> problem.
>>>
>>
>> Thanks, Paul.
>>
>> So what about the VM suspend case you mentioned above? Will that trigger
>> the unmapping of the ioreq server? Could the device model also take the
>> role of changing the p2m type back in that case?
>
> Yes. The device model has to be told by the toolstack that the VM is suspending; otherwise it can't disable the ioreq server, which is what puts the shared ioreq pages back into the guest p2m. If that's not done then the pages will be leaked.
>
>>
>> It would be much simpler if the hypervisor side did not need to provide
>> the p2m resetting logic, and we could then support live migration at the
>> same time. :)
>>
>
> That really should not be the hypervisor's job.
>
>    Paul
>

Oh. So let's just remove the p2m type recalculation code from this
patch: then there is no need to call p2m_change_entry_type_global, and no
need to worry about the log-dirty part.
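
Just so we are talking about the same thing on the device model side: below
is a rough sketch of the cleanup an emulator could do before suspend, using
the existing libxenctrl calls (xc_hvm_set_mem_type and
xc_hvm_set_ioreq_server_state). The function name and the pfns/nr_pfns
bookkeeping are made up for illustration; they stand for whatever record the
emulator keeps of the pages it has claimed.

#include <xenctrl.h>

/*
 * Sketch only: before the toolstack suspends the VM, hand every page we
 * claimed as p2m_ioreq_server back to ordinary r/w RAM, then disable the
 * ioreq server.
 */
static int emulator_prepare_suspend(xc_interface *xch, domid_t domid,
                                    ioservid_t ioservid,
                                    const uint64_t *pfns, unsigned int nr_pfns)
{
    unsigned int i;
    int rc;

    for ( i = 0; i < nr_pfns; i++ )
    {
        /* Revert one claimed page back to plain r/w RAM. */
        rc = xc_hvm_set_mem_type(xch, domid, HVMMEM_ram_rw, pfns[i], 1);
        if ( rc < 0 )
            return rc;
    }

    /* Only after all pages are reverted is it safe to disable the server. */
    return xc_hvm_set_ioreq_server_state(xch, domid, ioservid, 0);
}

If that matches what you have in mind, dropping the hypervisor-side reset
from this patch seems fine to me.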

George, do you think this is acceptable?

BTW, if there is no need to call p2m_change_entry_type_global, which is not
used in shadow mode, we can keep this p2m type in the shadow code, right?
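
For reference, the hypervisor-side piece this would drop is essentially just
the global type change on the unmap path, roughly along the lines of the
sketch below. The helper name and its placement are assumptions for
illustration; only the p2m_change_entry_type_global call itself is what the
discussion is about.

/*
 * Sketch only: the lazy reset we would no longer do in the hypervisor.
 * With HAP, p2m_change_entry_type_global() defers the real work to the
 * subsequent EPT misconfiguration handling.
 */
static void reset_ioreq_server_entries(struct domain *d)
{
    p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
}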

Thanks
Yu

>>
>> B.R.
>> Yu
>>
>>>>
>>>> So I guess approach 2> is your suggestion now.
>>>>
>>>> Besides, Jan previously also questioned the necessity of resetting the
>>>> p2m type when an ioreq server is being mapped to p2m_ioreq_server. His
>>>> argument is that we should only allow such a p2m transition after an
>>>> ioreq server has already been mapped to this p2m_ioreq_server type. I
>>>> think his point sounds reasonable as well.
>>>>
>>>
>>> I was kind of hoping to avoid that ordering dependency but if it makes
>>> things simpler then so be it.
>>>
>>>     Paul
>>>
>>>> Thanks
>>>> Yu
>>>>
>>>>>> Or
>>>>>> 2> we just disallow live migration when p2m->ioreq.server is not NULL.
>>>>>> This is not quite accurate, because having p2m->ioreq.server mapped
>>>>>> to p2m_ioreq_server does not necessarily mean there would be such
>>>>>> outstanding entries. To be more accurate, we can add some other rough
>>>>>> checks, e.g. check both whether p2m->ioreq.server is non-NULL and
>>>>>> whether hvmop_set_mem_type has ever been triggered for the
>>>>>> p2m_ioreq_server type.
>>>>>>
>>>>>> Both choices seem suboptimal to me, and I wonder if we have any
>>>>>> better solutions?
>>>>>>
>>>>>> Thanks
>>>>>> Yu
>>>>>>
>>>>>>>> Thanks in advance! :)
>>>>>>>>>> If the answer is, "everything just works", that's perfect.
>>>>>>>>>>
>>>>>>>>>> If the answer is, "Before logdirty mode is set, the ioreq server has
>>>>>>>>>> the opportunity to detach, removing the p2m_ioreq_server entries,
>>>>>>>>>> and operating without that functionality", that's good too.
>>>>>>>>>>
>>>>>>>>>> If the answer is, "the live migration request fails and the guest
>>>>>>>>>> continues to run", that's also acceptable.  If you want this series
>>>>>>>>>> to be checked in today (the last day for 4.7), this is probably your
>>>>>>>>>> best bet.
>>>>>>>>>>
>>>>>>>>>>       -George
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>> Regards
>>>>>>>> Yu
>>>>>>> _______________________________________________
>>>>>>> Xen-devel mailing list
>>>>>>> Xen-devel@lists.xen.org
>>>>>>> http://lists.xen.org/xen-devel
>>>>>>>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Thread overview: 82+ messages
2016-03-31 10:53 [PATCH v2 0/3] x86/ioreq server: introduce HVMMEM_ioreq_server mem type Yu Zhang
2016-03-31 10:53 ` [PATCH v2 1/3] x86/ioreq server: Add new functions to get/set memory types Yu Zhang
2016-04-05 13:57   ` George Dunlap
2016-04-05 14:08     ` George Dunlap
2016-04-08 13:25   ` Andrew Cooper
2016-03-31 10:53 ` [PATCH v2 2/3] x86/ioreq server: Rename p2m_mmio_write_dm to p2m_ioreq_server Yu Zhang
2016-04-05 14:38   ` George Dunlap
2016-04-08 13:26   ` Andrew Cooper
2016-04-08 21:48   ` Jan Beulich
2016-04-18  8:41     ` Paul Durrant
2016-04-18  9:10       ` George Dunlap
2016-04-18  9:14         ` Wei Liu
2016-04-18  9:45           ` Paul Durrant
2016-04-18 16:40       ` Jan Beulich
2016-04-18 16:45         ` Paul Durrant
2016-04-18 16:47           ` Jan Beulich
2016-04-18 16:58             ` Paul Durrant
2016-04-19 11:02               ` Yu, Zhang
2016-04-19 11:15                 ` Paul Durrant
2016-04-19 11:38                   ` Yu, Zhang
2016-04-19 11:50                     ` Paul Durrant
2016-04-19 16:51                     ` Jan Beulich
2016-04-20 14:59                       ` Wei Liu
2016-04-20 15:02                 ` George Dunlap
2016-04-20 16:30                   ` George Dunlap
2016-04-20 16:52                     ` Jan Beulich
2016-04-20 16:58                       ` Paul Durrant
2016-04-20 17:06                         ` George Dunlap
2016-04-20 17:09                           ` Paul Durrant
2016-04-21 12:24                           ` Yu, Zhang
2016-04-21 13:31                             ` Paul Durrant
2016-04-21 13:48                               ` Yu, Zhang
2016-04-21 13:56                                 ` Paul Durrant
2016-04-21 14:09                                   ` George Dunlap
2016-04-20 17:08                       ` George Dunlap
2016-04-21 12:04                       ` Yu, Zhang
2016-03-31 10:53 ` [PATCH v2 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server Yu Zhang
     [not found]   ` <20160404082556.GC28633@deinos.phlegethon.org>
2016-04-05  6:01     ` Yu, Zhang
2016-04-06 17:13   ` George Dunlap
2016-04-07  7:01     ` Yu, Zhang
     [not found]       ` <CAFLBxZbLp2zWzCzQTaJNWbanQSmTJ57ZyTh0qaD-+YUn8o8pyQ@mail.gmail.com>
2016-04-08 10:39         ` George Dunlap
     [not found]         ` <5707839F.9060803@linux.intel.com>
2016-04-08 11:01           ` George Dunlap
2016-04-11 11:15             ` Yu, Zhang
2016-04-14 10:45               ` Yu, Zhang
2016-04-18 15:57                 ` Paul Durrant
2016-04-19  9:11                   ` Yu, Zhang
2016-04-19  9:21                     ` Paul Durrant
2016-04-19  9:44                       ` Yu, Zhang
2016-04-19 10:05                         ` Paul Durrant
2016-04-19 11:17                           ` Yu, Zhang
2016-04-19 11:47                             ` Paul Durrant
2016-04-19 11:59                               ` Yu, Zhang [this message]
2016-04-20 14:50                                 ` George Dunlap
2016-04-20 14:57                                   ` Paul Durrant
2016-04-20 15:37                                     ` George Dunlap
2016-04-20 16:30                                       ` Paul Durrant
2016-04-20 16:58                                         ` George Dunlap
2016-04-21 13:28                                         ` Yu, Zhang
2016-04-21 13:21                                   ` Yu, Zhang
2016-04-22 11:27                                     ` Wei Liu
2016-04-22 11:30                                       ` George Dunlap
2016-04-19  4:37                 ` Tian, Kevin
2016-04-19  9:21                   ` Yu, Zhang
2016-04-08 13:33   ` Andrew Cooper
2016-04-11 11:14     ` Yu, Zhang
2016-04-11 12:20       ` Andrew Cooper
2016-04-11 16:25         ` Jan Beulich
2016-04-08 22:28   ` Jan Beulich
2016-04-11 11:14     ` Yu, Zhang
2016-04-11 16:31       ` Jan Beulich
2016-04-12  9:37         ` Yu, Zhang
2016-04-12 15:08           ` Jan Beulich
2016-04-14  9:56             ` Yu, Zhang
2016-04-19  4:50               ` Tian, Kevin
2016-04-19  8:46                 ` Paul Durrant
2016-04-19  9:27                   ` Yu, Zhang
2016-04-19  9:40                     ` Paul Durrant
2016-04-19  9:49                       ` Yu, Zhang
2016-04-19 10:01                         ` Paul Durrant
2016-04-19  9:54                           ` Yu, Zhang
2016-04-19  9:15                 ` Yu, Zhang
2016-04-19  9:23                   ` Paul Durrant
