From: "Yu, Zhang" <yu.c.zhang@linux.intel.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org
Cc: Kevin Tian <kevin.tian@intel.com>, Keir Fraser <keir@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Paul Durrant <paul.durrant@citrix.com>,
	zhiyuan.lv@intel.com, Jun Nakajima <jun.nakajima@intel.com>
Subject: Re: [PATCH v2 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server
Date: Mon, 11 Apr 2016 19:14:13 +0800
Message-ID: <570B8705.3080109@linux.intel.com>
In-Reply-To: <5707B32B.2060903@citrix.com>



On 4/8/2016 9:33 PM, Andrew Cooper wrote:
> On 31/03/16 11:53, Yu Zhang wrote:
>> A new HVMOP - HVMOP_map_mem_type_to_ioreq_server - is added to
>> let one ioreq server claim/disclaim its responsibility for the
>> handling of guest pages with the p2m type p2m_ioreq_server. Users
>> of this HVMOP can specify whether the ioreq server is supposed
>> to handle write accesses, read accesses, or both, via a parameter
>> named flags. For now, we only support one ioreq server for this
>> p2m type, so once an ioreq server has claimed ownership, subsequent
>> calls to HVMOP_map_mem_type_to_ioreq_server will fail. Users can
>> also disclaim ownership of guest ram pages with this p2m type, by
>> invoking this new HVMOP with the ioreq server id set to the current
>> owner's and the flags parameter set to 0.
>>
>> For now, both HVMOP_map_mem_type_to_ioreq_server and p2m_ioreq_server
>> are only supported for HVMs with HAP enabled.
>>
>> Note that the flags parameter (if not 0) of this HVMOP only indicates
>> which kinds of memory accesses are to be forwarded to an ioreq server;
>> it affects the access rights of guest ram pages, but is not the same
>> thing. Due to hardware limitations, if only write operations are to
>> be forwarded, reads will be performed at full speed, with no
>> hypervisor intervention. But if reads are to be forwarded to an
>> ioreq server, writes will inevitably be trapped into the hypervisor
>> as well, which means a significant performance impact.
>>
>> Also note that HVMOP_map_mem_type_to_ioreq_server will not change
>> the p2m type of any guest ram page; that only happens when
>> HVMOP_set_mem_type is invoked. So the normal sequence is for the
>> backend driver to first claim ownership of guest ram pages with the
>> p2m_ioreq_server type, and then set the memory type to
>> p2m_ioreq_server for the specified guest ram pages.
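>>
>> As an illustration of that sequence (just a sketch from the device
>> model side; xc_hvm_set_mem_type() already exists in libxc, while the
>> wrapper for the new HVMOP is hypothetical here):
>>
>>     /* Claim ownership of p2m_ioreq_server pages, for writes only.
>>      * (Wrapper name is illustrative; the HVMOP is what matters.) */
>>     xc_hvm_map_mem_type_to_ioreq_server(xch, domid, server_id,
>>                                         HVMMEM_ioreq_server,
>>                                         HVMOP_IOREQ_MEM_ACCESS_WRITE);
>>
>>     /* Then switch the pages to be tracked to the new type. */
>>     xc_hvm_set_mem_type(xch, domid, HVMMEM_ioreq_server,
>>                         first_pfn, nr);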
>>
>> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
>> Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
>> Cc: Keir Fraser <keir@xen.org>
>> Cc: Jan Beulich <jbeulich@suse.com>
>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
>> Cc: George Dunlap <george.dunlap@eu.citrix.com>
>> Cc: Jun Nakajima <jun.nakajima@intel.com>
>> Cc: Kevin Tian <kevin.tian@intel.com>
>> Cc: Tim Deegan <tim@xen.org>
>> ---
>>   xen/arch/x86/hvm/emulate.c       | 125 +++++++++++++++++++++++++++++++++++++--
>>   xen/arch/x86/hvm/hvm.c           |  95 +++++++++++++++++++++++++++--
>>   xen/arch/x86/mm/hap/nested_hap.c |   2 +-
>>   xen/arch/x86/mm/p2m-ept.c        |  14 ++++-
>>   xen/arch/x86/mm/p2m-pt.c         |  25 +++++---
>>   xen/arch/x86/mm/p2m.c            |  82 +++++++++++++++++++++++++
>>   xen/arch/x86/mm/shadow/multi.c   |   3 +-
>>   xen/include/asm-x86/p2m.h        |  36 +++++++++--
>>   xen/include/public/hvm/hvm_op.h  |  37 ++++++++++++
>>   9 files changed, 395 insertions(+), 24 deletions(-)
>>
>> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
>> index ddc8007..77a4793 100644
>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -94,11 +94,69 @@ static const struct hvm_io_handler null_handler = {
>>       .ops = &null_ops
>>   };
>>
>> +static int mem_read(const struct hvm_io_handler *io_handler,
>> +                    uint64_t addr,
>> +                    uint32_t size,
>> +                    uint64_t *data)
>> +{
>> +    struct domain *currd = current->domain;
>> +    unsigned long gmfn = paddr_to_pfn(addr);
>> +    unsigned long offset = addr & ~PAGE_MASK;
>> +    struct page_info *page = get_page_from_gfn(currd, gmfn, NULL, P2M_UNSHARE);
>> +    uint8_t *p;
>> +
>> +    if ( !page )
>> +        return X86EMUL_UNHANDLEABLE;
>> +
>> +    p = __map_domain_page(page);
>> +    p += offset;
>> +    memcpy(data, p, size);
>
> What happens when offset + size crosses the page boundary?
>

The 'size' is clamped in hvmemul_linear_mmio_access(), to ensure that
offset + size will not cross the page boundary.
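
Roughly, the clamping there looks like this (a simplified sketch, not
the exact code in xen/arch/x86/hvm/emulate.c):

    /* Split the access into chunks, none crossing a page boundary. */
    unsigned long offset = gla & ~PAGE_MASK;
    unsigned int chunk = min_t(unsigned int, size, PAGE_SIZE - offset);

    while ( size != 0 )
    {
        /* ... emulate one chunk, confined to a single page ... */
        gla += chunk;
        size -= chunk;
        /* Subsequent chunks start page-aligned. */
        chunk = min_t(unsigned int, size, PAGE_SIZE);
    }

So mem_read()/mem_write() only ever see a 'size' that fits within a
single page.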

>
>> diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
>> index a1eae52..d46f186 100644
>> --- a/xen/include/public/hvm/hvm_op.h
>> +++ b/xen/include/public/hvm/hvm_op.h
>> @@ -489,6 +489,43 @@ struct xen_hvm_altp2m_op {
>>   typedef struct xen_hvm_altp2m_op xen_hvm_altp2m_op_t;
>>   DEFINE_XEN_GUEST_HANDLE(xen_hvm_altp2m_op_t);
>>
>> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
>> +
>> +/*
>> + * HVMOP_map_mem_type_to_ioreq_server : map or unmap the IOREQ Server <id>
>> + *                                      to specific memory type <type>
>> + *                                      for specific accesses <flags>
>> + *
>> + * Note that if only write operations are to be forwarded to an ioreq server,
>> + * read operations will be performed with no hypervisor intervention. But if
>> + * flags indicates that read operations are to be forwarded to an ioreq server,
>> + * write operations will inevitably be trapped into the hypervisor as well;
>> + * whether they are emulated by the hypervisor or forwarded to the ioreq
>> + * server depends on the flags setting. This means a significant performance
>> + * impact.
>> + */
>> +#define HVMOP_map_mem_type_to_ioreq_server 26
>> +struct xen_hvm_map_mem_type_to_ioreq_server {
>> +    domid_t domid;      /* IN - domain to be serviced */
>> +    ioservid_t id;      /* IN - ioreq server id */
>> +    hvmmem_type_t type; /* IN - memory type */
>
> hvmmem_type_t is an enum and doesn't have a fixed width.  It can't be
> used in the public API.
>
> You also have some implicit padding holes as a result of the layout.
>
Oh, I guess I should use a uint16_t for the memory type (and maybe
some padding if necessary).
Thanks for pointing this out, Andrew. :)
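
Something like this, perhaps (just a sketch of the layout I have in
mind; to be confirmed in the next version):

    struct xen_hvm_map_mem_type_to_ioreq_server {
        domid_t domid;      /* IN - domain to be serviced */
        ioservid_t id;      /* IN - ioreq server id */
        uint16_t type;      /* IN - memory type, fixed width rather
                               than the enum hvmmem_type_t */
        uint16_t pad;       /* IN - must be zero; makes the padding
                               explicit so there are no hidden holes */
        uint32_t flags;     /* IN - types of accesses to be forwarded */
    };

Since domid_t and ioservid_t are both uint16_t, all 12 bytes of the
structure would then be accounted for explicitly.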

> ~Andrew
>
>> +    uint32_t flags;     /* IN - types of accesses to be forwarded to the
>> +                           ioreq server. flags with 0 means to unmap the
>> +                           ioreq server */
>> +#define _HVMOP_IOREQ_MEM_ACCESS_READ 0
>> +#define HVMOP_IOREQ_MEM_ACCESS_READ \
>> +    (1u << _HVMOP_IOREQ_MEM_ACCESS_READ)
>> +
>> +#define _HVMOP_IOREQ_MEM_ACCESS_WRITE 1
>> +#define HVMOP_IOREQ_MEM_ACCESS_WRITE \
>> +    (1u << _HVMOP_IOREQ_MEM_ACCESS_WRITE)
>> +};
>> +typedef struct xen_hvm_map_mem_type_to_ioreq_server
>> +    xen_hvm_map_mem_type_to_ioreq_server_t;
>> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_map_mem_type_to_ioreq_server_t);
>> +
>> +#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
>> +
>> +
>>   #endif /* __XEN_PUBLIC_HVM_HVM_OP_H__ */
>>
>>   /*
>
>

B.R.
Yu

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
