From: "Yu, Zhang" <yu.c.zhang@linux.intel.com>
To: Jan Beulich <JBeulich@suse.com>,
George Dunlap <George.Dunlap@citrix.com>,
Paul Durrant <Paul.Durrant@citrix.com>
Cc: Kevin Tian <kevin.tian@intel.com>, Wei Liu <wei.liu2@citrix.com>,
Andrew Cooper <Andrew.Cooper3@citrix.com>,
"Tim(Xen.org)" <tim@xen.org>,
"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
Zhiyuan Lv <zhiyuan.lv@intel.com>,
Jun Nakajima <jun.nakajima@intel.com>,
"Keir (Xen.org)" <keir@xen.org>
Subject: Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
Date: Mon, 25 Apr 2016 23:53:15 +0800 [thread overview]
Message-ID: <ede675a2-1cbb-318f-ca13-814cc3e23f9e@linux.intel.com> (raw)
In-Reply-To: <571E563202000078000E5747@prv-mh.provo.novell.com>
On 4/25/2016 11:38 PM, Jan Beulich wrote:
>>>> On 25.04.16 at 17:29, <Paul.Durrant@citrix.com> wrote:
>>> -----Original Message-----
>>> From: Yu, Zhang [mailto:yu.c.zhang@linux.intel.com]
>>> Sent: 25 April 2016 16:22
>>> To: Paul Durrant; George Dunlap
>>> Cc: xen-devel@lists.xen.org; Kevin Tian; Keir (Xen.org); Jun Nakajima;
>>> Andrew Cooper; Tim (Xen.org); Lv, Zhiyuan; Jan Beulich; Wei Liu
>>> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
>>> Rename p2m_mmio_write_dm to p2m_ioreq_server.
>>>
>>>
>>>
>>> On 4/25/2016 10:01 PM, Paul Durrant wrote:
>>>>> -----Original Message-----
>>>>> From: dunlapg@gmail.com [mailto:dunlapg@gmail.com] On Behalf Of
>>>>> George Dunlap
>>>>> Sent: 25 April 2016 14:39
>>>>> To: Yu Zhang
>>>>> Cc: xen-devel@lists.xen.org; Kevin Tian; Keir (Xen.org); Jun Nakajima;
>>>>> Andrew Cooper; Tim (Xen.org); Paul Durrant; Lv, Zhiyuan; Jan Beulich; Wei Liu
>>>>> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
>>>>> Rename p2m_mmio_write_dm to p2m_ioreq_server.
>>>>>
>>>>> On Mon, Apr 25, 2016 at 11:35 AM, Yu Zhang <yu.c.zhang@linux.intel.com> wrote:
>>>>>> Previously p2m type p2m_mmio_write_dm was introduced for write-
>>>>>> protected memory pages whose write operations are supposed to be
>>>>>> forwarded to and emulated by an ioreq server. Yet the limitations of
>>>>>> rangeset restrict the number of guest pages that can be write-protected.
>>>>>>
>>>>>> This patch replaces the p2m type p2m_mmio_write_dm with a new name:
>>>>>> p2m_ioreq_server, which means this p2m type can be claimed by one
>>>>>> ioreq server, instead of being tracked inside the rangeset of the
>>>>>> ioreq server. Follow-up patches will add the related hvmop handling
>>>>>> code which maps/unmaps type p2m_ioreq_server to/from an ioreq server.
>>>>>>
>>>>>> changes in v3:
>>>>>>   - According to Jan & George's comments, keep HVMMEM_mmio_write_dm
>>>>>>     for old xen interface versions, and replace it with HVMMEM_unused
>>>>>>     for xen interfaces newer than 4.7.0; for p2m_ioreq_server, a new
>>>>>>     enum - HVMMEM_ioreq_server - is introduced for the get/set mem type
>>>>>>     interfaces;
>>>>>>   - Add George's Reviewed-by and Acked-by from Tim & Andrew.
>>>>>
>>>>> Unfortunately these rather contradict each other -- I consider
>>>>> Reviewed-by to only stick if the code I've specified hasn't changed
>>>>> (or has only changed trivially).
>>>>>
>>>>> Also...
>>>>>
>>>>>>
>>>>>> changes in v2:
>>>>>> - According to George Dunlap's comments, only rename the p2m type,
>>>>>> with no behavior changes.
>>>>>>
>>>>>> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
>>>>>> Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
>>>>>> Acked-by: Tim Deegan <tim@xen.org>
>>>>>> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>>> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
>>>>>> Cc: Keir Fraser <keir@xen.org>
>>>>>> Cc: Jan Beulich <jbeulich@suse.com>
>>>>>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>>> Cc: Jun Nakajima <jun.nakajima@intel.com>
>>>>>> Cc: Kevin Tian <kevin.tian@intel.com>
>>>>>> Cc: George Dunlap <george.dunlap@eu.citrix.com>
>>>>>> Cc: Tim Deegan <tim@xen.org>
>>>>>> ---
>>>>>> xen/arch/x86/hvm/hvm.c | 14 ++++++++------
>>>>>> xen/arch/x86/mm/p2m-ept.c | 2 +-
>>>>>> xen/arch/x86/mm/p2m-pt.c | 2 +-
>>>>>> xen/arch/x86/mm/shadow/multi.c | 2 +-
>>>>>> xen/include/asm-x86/p2m.h | 4 ++--
>>>>>> xen/include/public/hvm/hvm_op.h | 8 +++++++-
>>>>>> 6 files changed, 20 insertions(+), 12 deletions(-)
>>>>>>
>>>>>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>>>>>> index f24126d..874cb0f 100644
>>>>>> --- a/xen/arch/x86/hvm/hvm.c
>>>>>> +++ b/xen/arch/x86/hvm/hvm.c
>>>>>> @@ -1857,7 +1857,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
>>>>>>       */
>>>>>>      if ( (p2mt == p2m_mmio_dm) ||
>>>>>>           (npfec.write_access &&
>>>>>> -         (p2m_is_discard_write(p2mt) || (p2mt == p2m_mmio_write_dm))) )
>>>>>> +         (p2m_is_discard_write(p2mt) || (p2mt == p2m_ioreq_server))) )
>>>>>> {
>>>>>> __put_gfn(p2m, gfn);
>>>>>> if ( ap2m_active )
>>>>>> @@ -5499,8 +5499,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>>          get_gfn_query_unlocked(d, a.pfn, &t);
>>>>>> if ( p2m_is_mmio(t) )
>>>>>> a.mem_type = HVMMEM_mmio_dm;
>>>>>> - else if ( t == p2m_mmio_write_dm )
>>>>>> - a.mem_type = HVMMEM_mmio_write_dm;
>>>>>> + else if ( t == p2m_ioreq_server )
>>>>>> + a.mem_type = HVMMEM_ioreq_server;
>>>>>> else if ( p2m_is_readonly(t) )
>>>>>> a.mem_type = HVMMEM_ram_ro;
>>>>>> else if ( p2m_is_ram(t) )
>>>>>> @@ -5531,7 +5531,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>>              [HVMMEM_ram_rw]  = p2m_ram_rw,
>>>>>> [HVMMEM_ram_ro] = p2m_ram_ro,
>>>>>> [HVMMEM_mmio_dm] = p2m_mmio_dm,
>>>>>> - [HVMMEM_mmio_write_dm] = p2m_mmio_write_dm
>>>>>> + [HVMMEM_unused] = p2m_invalid,
>>>>>> + [HVMMEM_ioreq_server] = p2m_ioreq_server
>>>>>> };
>>>>>>
>>>>>> if ( copy_from_guest(&a, arg, 1) )
>>>>>> @@ -5555,7 +5556,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>>              ((a.first_pfn + a.nr - 1) > domain_get_maximum_gpfn(d)) )
>>>>>> goto setmemtype_fail;
>>>>>>
>>>>>> - if ( a.hvmmem_type >= ARRAY_SIZE(memtype) )
>>>>>> + if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
>>>>>> + unlikely(a.hvmmem_type == HVMMEM_unused) )
>>>>>> goto setmemtype_fail;
>>>>>>
>>>>>> while ( a.nr > start_iter )
>>>>>> @@ -5579,7 +5581,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>>              }
>>>>>>              if ( !p2m_is_ram(t) &&
>>>>>>                   (!p2m_is_hole(t) || a.hvmmem_type != HVMMEM_mmio_dm) &&
>>>>>> -                 (t != p2m_mmio_write_dm || a.hvmmem_type != HVMMEM_ram_rw) )
>>>>>> +                 (t != p2m_ioreq_server || a.hvmmem_type != HVMMEM_ram_rw) )
>>>>>> {
>>>>>> put_gfn(d, pfn);
>>>>>> goto setmemtype_fail;
>>>>>> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
>>>>>> index 3cb6868..380ec25 100644
>>>>>> --- a/xen/arch/x86/mm/p2m-ept.c
>>>>>> +++ b/xen/arch/x86/mm/p2m-ept.c
>>>>>> @@ -171,7 +171,7 @@ static void ept_p2m_type_to_flags(struct p2m_domain *p2m, ept_entry_t *entry,
>>>>>>              entry->a = entry->d = !!cpu_has_vmx_ept_ad;
>>>>>> break;
>>>>>> case p2m_grant_map_ro:
>>>>>> - case p2m_mmio_write_dm:
>>>>>> + case p2m_ioreq_server:
>>>>>> entry->r = 1;
>>>>>> entry->w = entry->x = 0;
>>>>>> entry->a = !!cpu_has_vmx_ept_ad;
>>>>>> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
>>>>>> index 3d80612..eabd2e3 100644
>>>>>> --- a/xen/arch/x86/mm/p2m-pt.c
>>>>>> +++ b/xen/arch/x86/mm/p2m-pt.c
>>>>>> @@ -94,7 +94,7 @@ static unsigned long p2m_type_to_flags(p2m_type_t t, mfn_t mfn,
>>>>>>      default:
>>>>>> return flags | _PAGE_NX_BIT;
>>>>>> case p2m_grant_map_ro:
>>>>>> - case p2m_mmio_write_dm:
>>>>>> + case p2m_ioreq_server:
>>>>>> return flags | P2M_BASE_FLAGS | _PAGE_NX_BIT;
>>>>>> case p2m_ram_ro:
>>>>>> case p2m_ram_logdirty:
>>>>>> diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
>>>>>> index e5c8499..c81302a 100644
>>>>>> --- a/xen/arch/x86/mm/shadow/multi.c
>>>>>> +++ b/xen/arch/x86/mm/shadow/multi.c
>>>>>> @@ -3225,7 +3225,7 @@ static int sh_page_fault(struct vcpu *v,
>>>>>>
>>>>>> /* Need to hand off device-model MMIO to the device model */
>>>>>> if ( p2mt == p2m_mmio_dm
>>>>>> - || (p2mt == p2m_mmio_write_dm && ft == ft_demand_write) )
>>>>>> + || (p2mt == p2m_ioreq_server && ft == ft_demand_write) )
>>>>>> {
>>>>>> gpa = guest_walk_to_gpa(&gw);
>>>>>> goto mmio;
>>>>>> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
>>>>>> index 5392eb0..ee2ea9c 100644
>>>>>> --- a/xen/include/asm-x86/p2m.h
>>>>>> +++ b/xen/include/asm-x86/p2m.h
>>>>>> @@ -71,7 +71,7 @@ typedef enum {
>>>>>> p2m_ram_shared = 12, /* Shared or sharable memory */
>>>>>>      p2m_ram_broken = 13,      /* Broken page, access cause domain crash */
>>>>>> p2m_map_foreign = 14, /* ram pages from foreign domain */
>>>>>> -    p2m_mmio_write_dm = 15, /* Read-only; writes go to the device model */
>>>>>> + p2m_ioreq_server = 15,
>>>>>> } p2m_type_t;
>>>>>>
>>>>>> /* Modifiers to the query */
>>>>>> @@ -112,7 +112,7 @@ typedef unsigned int p2m_query_t;
>>>>>> | p2m_to_mask(p2m_ram_ro) \
>>>>>> | p2m_to_mask(p2m_grant_map_ro) \
>>>>>> | p2m_to_mask(p2m_ram_shared) \
>>>>>> - | p2m_to_mask(p2m_mmio_write_dm))
>>>>>> + | p2m_to_mask(p2m_ioreq_server))
>>>>>>
>>>>>> /* Write-discard types, which should discard the write operations */
>>>>>> #define P2M_DISCARD_WRITE_TYPES (p2m_to_mask(p2m_ram_ro) \
>>>>>> diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
>>>>>> index 1606185..b3e45cf 100644
>>>>>> --- a/xen/include/public/hvm/hvm_op.h
>>>>>> +++ b/xen/include/public/hvm/hvm_op.h
>>>>>> @@ -83,7 +83,13 @@ typedef enum {
>>>>>> HVMMEM_ram_rw, /* Normal read/write guest RAM */
>>>>>> HVMMEM_ram_ro, /* Read-only; writes are discarded */
>>>>>>     HVMMEM_mmio_dm,            /* Reads and write go to the device model */
>>>>>> -    HVMMEM_mmio_write_dm    /* Read-only; writes go to the device model */
>>>>>> +#if __XEN_INTERFACE_VERSION__ < 0x00040700
>>>>>> +    HVMMEM_mmio_write_dm,   /* Read-only; writes go to the device model */
>>>>>> +#else
>>>>>> +    HVMMEM_unused,          /* Placeholder; setting memory to this type
>>>>>> +                               will fail for code after 4.7.0 */
>>>>>> +#endif
>>>>>> +    HVMMEM_ioreq_server
>>>>>
>>>>> Also, I don't think we've had a convincing argument for why this patch
>>>>> needs to be in 4.7. The p2m name changes are internal only, and so
>>>>> don't need to be made at all; and the old functionality will work as
>>>>> well as it ever did. Furthermore, the whole reason we're in this
>>>>> situation is that we checked in a publicly-visible change to the
>>>>> interface before it was properly ready; I think we should avoid making
>>>>> the same mistake this time.
>>>>>
>>>>> So personally I'd just leave this patch entirely for 4.8; but if Paul
>>>>> and/or Jan have strong opinions, then I would say check in only a
>>>>> patch to do the #if/#else/#endif, and leave both the p2m type changes
>>>>> and the new HVMMEM_ioreq_server enum for when the 4.8 development
>>>>> window opens.
>>>>>
>>>>
>>>> If the whole series is going in then I think this patch is ok. If this is
>>>> the only patch that is going in for 4.7 then I think we need the patch to
>>>> hvm_op.h to deprecate the old type and that's it. We definitely should not
>>>> introduce an implementation of the type HVMMEM_ioreq_server that has the
>>>> same hardcoded semantics as the old type and then change it.
>>>> The p2m type changes are also wrong. That type needs to be left alone,
>>>> presumably, so that anything using HVMMEM_mmio_write_dm and compiled
>>>> to the old interface version continues to function. I think
>>>> HVMMEM_ioreq_server needs to map to a new p2m type which should be
>>>> introduced in patch #3.
>>>>
>>>
>>> Sorry, I'm also confused now. :(
>>>
>>> Do we really want to introduce a new p2m type? Why?
>>> My understanding of the previous agreement is that:
>>> 1> We need the interface to work on old hypervisor for
>>> HVMMEM_mmio_write_dm;
>>> 2> We need the interface to return -EINVAL for new hypervisor
>>> for HVMMEM_mmio_write_dm;
>>> 3> We need the new type HVMMEM_ioreq_server to work on new
>>> hypervisor;
>>>
>>> Did I miss something? Or did I totally misunderstand?
>>>
>>
>> I don't know. I'm confused too. What we definitely don't want to do is add a
>> new HVMMEM type and have it map to the old behaviour, otherwise we're no
>> better off.
>>
>> The question I'm not clear on the answer to is what happens to old code:
>>
>> Should it continue to compile? If so, should it continue to run?
>
> We only need to be concerned about the "get type" functionality,
> as that's the only thing an arbitrary guest can use. If the
> hypercall simply never returns the old type, then old code will
> still work (it'll just have some dead code on new Xen), and hence
> it compiling against the older interface is fine (and, from general
> considerations, a requirement).
>
Thanks, Jan. And I think the answer is yes. The new hypervisor will
only return HVMMEM_ioreq_server, which now has a different numeric value.
> Jan
>
Yu
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
Thread overview: 35+ messages
2016-04-25 10:35 [PATCH v3 0/3] x86/ioreq server: Introduce HVMMEM_ioreq_server mem type Yu Zhang
2016-04-25 10:35 ` [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server Yu Zhang
2016-04-25 12:12 ` Jan Beulich
2016-04-25 13:30 ` Wei Liu
2016-04-25 13:39 ` George Dunlap
2016-04-25 14:01 ` Paul Durrant
2016-04-25 14:15 ` George Dunlap
2016-04-25 14:16 ` Jan Beulich
2016-04-25 14:19 ` Paul Durrant
2016-04-25 14:28 ` George Dunlap
2016-04-25 14:34 ` Paul Durrant
2016-04-25 15:21 ` Yu, Zhang
2016-04-25 15:29 ` Paul Durrant
2016-04-25 15:38 ` Jan Beulich
2016-04-25 15:53 ` Yu, Zhang [this message]
2016-04-25 16:15 ` George Dunlap
2016-04-25 16:20 ` Yu, Zhang
2016-04-25 17:01 ` Paul Durrant
2016-04-26 8:23 ` Yu, Zhang
2016-04-26 8:33 ` Paul Durrant
2016-04-27 14:12 ` George Dunlap
2016-04-27 14:42 ` Paul Durrant
2016-04-28 2:47 ` Yu, Zhang
2016-04-28 7:14 ` Paul Durrant
2016-04-28 7:07 ` Yu, Zhang
2016-04-28 10:02 ` Jan Beulich
2016-04-28 10:43 ` Paul Durrant
2016-04-27 14:47 ` Wei Liu
2016-04-25 15:49 ` Yu, Zhang
2016-04-25 14:14 ` Jan Beulich
2016-04-25 10:35 ` [PATCH v3 2/3] x86/ioreq server: Add new functions to get/set memory types Yu Zhang
2016-04-26 10:53 ` Wei Liu
2016-04-27 9:11 ` Yu, Zhang
2016-04-25 10:35 ` [PATCH v3 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server Yu Zhang
2016-04-25 12:36 ` Paul Durrant