* [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file
@ 2017-06-20 17:18 Zhongze Liu
  2017-06-20 17:29 ` Julien Grall
                   ` (3 more replies)
  0 siblings, 4 replies; 19+ messages in thread
From: Zhongze Liu @ 2017-06-20 17:18 UTC (permalink / raw)
  To: xen-devel
  Cc: Edgar E. Iglesias, Stefano Stabellini, Wei Liu, Ian Jackson,
	edgari, Julien Grall, Jarvis Roach

====================================================
1. Motivation and Description
====================================================
Virtual machines use grant table hypercalls to set up shared pages for
inter-VM communication. These hypercalls are used by all PV
protocols today. However, very simple guests, such as bare-metal
applications, might not have the infrastructure to handle the grant table.
This project is about setting up several shared memory areas for inter-VM
communication directly from the VM config file, so that the guest kernel
doesn't need grant table support (not unusual in the embedded space) to be
able to communicate with other guests.

====================================================
2. Implementation Plan:
====================================================

======================================
2.1 Introduce a new VM config option in xl:
======================================
The shared areas should be shareable among several (>=2) VMs, so
every shared physical memory area is assigned to a set of VMs.
Therefore, a “token” or “identifier” is needed to uniquely identify a
backing memory area.

The backing area would be taken from one domain, which we will regard
as the "master domain", and this domain should be created prior to any
of the "slave domains". Again, some kind of tag is needed to indicate
which domain is the master.

The user should also be able to specify the attributes (say, WO/RO/X) of
the pages to be shared. For the master domain, these attributes describe
the maximum permissions allowed for the shared pages; for the slave
domains, they describe the permissions with which this area will be
mapped. This information should also be specified in the xl config entry.

To handle all of this, I would suggest using an unsigned integer to serve
as the identifier, and a "master" tag in the master domain's xl config
entry to announce that it will provide the backing memory pages. A separate
entry of the form "prot=RW" would be used to describe the attributes of the
shared memory area.
For example:

In xl config file of vm1:

    static_shared_mem = ["id = ID1, begin = gmfn1, end = gmfn2,
                          granularity = 4k, prot = RO, master”,
                         "id = ID2, begin = gmfn3, end = gmfn4,
 granularity = 4k, prot = RW, master”]

In xl config file of vm2:

    static_shared_mem = ["id = ID1, begin = gmfn5, end = gmfn6,
                          granularity = 4k, prot = RO”]

In xl config file of vm3:

    static_shared_mem = ["id = ID2, begin = gmfn7, end = gmfn8,
                          granularity = 4k, prot = RW”]

The gmfn's above are all hex values of the form "0x20000".

In the example above, a memory area ID1 will be shared between vm1 and vm2.
This area will be taken from vm1 and mapped into vm2's stage-2 page table.
The parameter "prot=RO" means that this memory area is offered with read-only
permission. vm1 can access this area using gmfn1~gmfn2, and vm2 using
gmfn5~gmfn6.
Likewise, a memory area ID2 will be shared between vm1 and vm3 with read and
write permissions. vm1 is the master and vm3 the slave. vm1 can access the
area using gmfn3~gmfn4 and vm3 using gmfn7~gmfn8.

The "granularity" is optional in the slaves' config entries. But if it's
presented in the slaves' config entry, it has to be the same with its master's.
Besides, the size of the gmfn range must also match. And overlapping backing
memory areas are well defined.

Note that the "master" tag in vm1 for both ID1 and ID2 indicates that vm1
should be created prior to both vm2 and vm3, for they both rely on the pages
backed by vm1. If one tries to create vm2 or vm3 prior to vm1, she will get
an error. And in vm1's config file, the "prot=RO" parameter of ID1 indicates
that if one tries to share this page with vm1 with, say, "WR" permission,
she will get an error, too.
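
To make the shape of the data concrete, a possible in-memory representation
of one parsed "static_shared_mem" item is sketched below. The type and field
names are illustrative assumptions only, not a proposed libxl interface:

    #include <stdint.h>
    #include <stdbool.h>

    /* One parsed "static_shared_mem" item (all names illustrative only). */
    typedef struct {
        uint32_t id;           /* unsigned-integer identifier of the area  */
        uint64_t begin;        /* first gmfn of the range, e.g. 0x20000    */
        uint64_t end;          /* last gmfn of the range                   */
        uint64_t granularity;  /* mapping granularity in bytes, e.g. 4096  */
        char prot[4];          /* "RO", "RW", ...                          */
        bool master;           /* true iff this domain backs the memory    */
    } static_shm_entry;

The actual libxl type would be generated from libxl_types.idl, but the fields
above are roughly the information each entry has to carry.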

======================================
2.2 Store the mem-sharing information in xenstore
======================================
Since we don't have persistent storage for xl to keep the information about
the shared memory areas, we have to find some way to preserve it between xl
invocations, and xenstore is a good place to do this. The information for
one shared area should include its ID, the master domid, and the gmfn ranges
and memory attributes in the master and slave domains.
The current plan is to place the information under /local/shared_mem/ID.
Still taking the above config files as an example:

If we instantiate vm1, vm2 and vm3 one after another, the output of
“xenstore ls -f” will evolve as follows.

After VM1 was instantiated, it will contain:

    /local/shared_mem/ID1/master = domid_of_vm1
    /local/shared_mem/ID1/gmfn_begin = gmfn1
    /local/shared_mem/ID1/gmfn_end = gmfn2
    /local/shared_mem/ID1/granularity = "4k"
    /local/shared_mem/ID1/permissions = "RO"
    /local/shared_mem/ID1/slaves = ""

    /local/shared_mem/ID2/master = domid_of_vm1
    /local/shared_mem/ID2/gmfn_begin = gmfn3
    /local/shared_mem/ID2/gmfn_end = gmfn4
    /local/shared_mem/ID2/granularity = "4k"
    /local/shared_mem/ID2/permissions = "RW"
    /local/shared_mem/ID2/slaves = ""

After VM2 was instantiated, the following new lines will appear:

    /local/shared_mem/ID1/slaves/domid_of_vm2/gmfn_begin = gmfn5
    /local/shared_mem/ID1/slaves/domid_of_vm2/gmfn_end = gmfn6
    /local/shared_mem/ID1/slaves/domid_of_vm2/permissions = "RO"

After VM3 was instantiated, the following new lines will appear:

    /local/shared_mem/ID2/slaves/domid_of_vm3/gmfn_begin = gmfn7
    /local/shared_mem/ID2/slaves/domid_of_vm3/gmfn_end = gmfn8
    /local/shared_mem/ID2/slaves/domid_of_vm3/permissions = "RW"


When we encounter an id IDx during "xl create" (see the sketch below):

  + If it’s not found under /local/shared_mem:
    + If the corresponding config entry has a "master" tag, create the
      corresponding entries for IDx in xenstore.
    + If there isn't a "master" tag, report an error.

  + If it’s found under /local/shared_mem:
    + If the corresponding config entry has a "master" tag, report an error.
    + If there isn't a "master" tag, map the pages into the newly
      created domain, and add the current domain and the necessary
      information under /local/shared_mem/IDx/slaves.
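
For illustration, a minimal C sketch of the check above using the plain
libxenstore API follows. The helper name and arguments, and the omission of
transactions, xenstore permissions and error reporting, are simplifying
assumptions rather than the proposed implementation (which would live in
libxl and use its internal xenstore helpers):

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdbool.h>
    #include <xenstore.h>

    /* Returns 0 if the entry for 'id' is consistent with 'is_master'
     * (i.e. the presence of the "master" tag in the config entry),
     * -1 on the two error cases described above. */
    static int check_static_shm_entry(struct xs_handle *xs, const char *id,
                                      bool is_master)
    {
        char path[128];
        unsigned int len;
        void *val;

        snprintf(path, sizeof(path), "/local/shared_mem/%s", id);
        val = xs_read(xs, XBT_NULL, path, &len);

        if (val == NULL) {
            /* IDx is not registered yet: only a master may create it. */
            if (!is_master)
                return -1;   /* slave created before its master */
            /* ... write /local/shared_mem/IDx/{master,gmfn_begin,...} ... */
            return 0;
        }

        free(val);
        if (is_master)
            return -1;       /* a master already exists for this IDx */
        /* ... map the master's pages into the new domain and record it
         *     under /local/shared_mem/IDx/slaves/<domid>/ ... */
        return 0;
    }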

======================================
2.3 mapping the memory areas
======================================
Handle the newly added config option in tools/{xl, libxl} and utilize
tools/libxc to do the actual memory mapping. Specifically, we will use
a wrapper to XENMEM_add_to_physmap_batch with XENMAPSPACE_gmfn_foreign to
do the actual mapping. But since there isn't such a wrapper in libxc, we'll
have to add a new one, xc_domain_add_to_physmap_batch, in libxc/xc_domain.c.
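
To make this concrete, below is a sketch of what such a wrapper could look
like, following the style of the existing wrappers in tools/libxc. The
function does not exist yet, so its exact signature and the simplified error
handling here are assumptions:

    /* Intended for tools/libxc/xc_domain.c (needs xc_private.h). */
    int xc_domain_add_to_physmap_batch(xc_interface *xch,
                                       uint32_t domid,
                                       uint32_t foreign_domid,
                                       unsigned int space,
                                       unsigned int size,
                                       xen_ulong_t *idxs,
                                       xen_pfn_t *gpfns,
                                       int *errs)
    {
        int rc;
        DECLARE_HYPERCALL_BOUNCE(idxs, size * sizeof(*idxs),
                                 XC_HYPERCALL_BUFFER_BOUNCE_IN);
        DECLARE_HYPERCALL_BOUNCE(gpfns, size * sizeof(*gpfns),
                                 XC_HYPERCALL_BUFFER_BOUNCE_IN);
        DECLARE_HYPERCALL_BOUNCE(errs, size * sizeof(*errs),
                                 XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
        struct xen_add_to_physmap_batch xatp_batch = {
            .domid = domid,
            .space = space,
            .size = size,
            .u.foreign_domid = foreign_domid,
        };

        if ( xc_hypercall_bounce_pre(xch, idxs)  ||
             xc_hypercall_bounce_pre(xch, gpfns) ||
             xc_hypercall_bounce_pre(xch, errs) )
        {
            rc = -1;
            goto out;
        }

        set_xen_guest_handle(xatp_batch.idxs, idxs);
        set_xen_guest_handle(xatp_batch.gpfns, gpfns);
        set_xen_guest_handle(xatp_batch.errs, errs);

        rc = do_memory_op(xch, XENMEM_add_to_physmap_batch,
                          &xatp_batch, sizeof(xatp_batch));

     out:
        xc_hypercall_bounce_post(xch, idxs);
        xc_hypercall_bounce_post(xch, gpfns);
        xc_hypercall_bounce_post(xch, errs);
        return rc;
    }

libxl would then call this for each slave with space = XENMAPSPACE_gmfn_foreign,
idxs holding the master's gmfns and gpfns holding the corresponding gmfns in
the slave.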

======================================
2.4 error handling
======================================
Add code to handle various errors: invalid addresses, invalid permissions,
wrong order of vm creation, mismatched granularity or length of memory
areas, etc.

====================================================
3. Expected Outcomes/Goals:
====================================================
A new VM config option in xl will be introduced, allowing users to set up
several shared memory areas for inter-VM communication.
This should work on both x86 and ARM.

====================================================
4. Future Directions:
====================================================
There could also be support for other, non-permission memory attributes such
as cacheability and shareability.

A way to indicate where in host physical memory the backing memory should be
taken from.

Set up a notification channel between domains that communicate through
shared memory regions; this would allow one vm to signal its peers when data
is available in the shared memory or when the data in the shared memory has
been consumed. The channel could be built upon PPI or SGI.


[See also:
https://wiki.xenproject.org/wiki/Outreach_Program_Projects#Share_a_page_in_memory_from_the_VM_config_file]


Cheers,

Zhongze Liu


* Re: [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file
  2017-06-20 17:18 [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file Zhongze Liu
@ 2017-06-20 17:29 ` Julien Grall
  2017-06-22 16:58   ` Zhongze Liu
  2017-06-21 15:09 ` Wei Liu
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 19+ messages in thread
From: Julien Grall @ 2017-06-20 17:29 UTC (permalink / raw)
  To: Zhongze Liu, xen-devel
  Cc: Edgar E. Iglesias, Stefano Stabellini, Wei Liu, Ian Jackson,
	edgari, Jarvis Roach

Hi,

Thank you for the new proposal.

On 06/20/2017 06:18 PM, Zhongze Liu wrote:
> In the example above. A memory area ID1 will be shared between vm1 and vm2.
> This area will be taken from vm1 and mapped into vm2's stage-2 page table.
> The parameter "prot=RO" means that this memory area are offered with read-only
> permission. vm1 can access this area using gmfn1~gmfn2, and vm2 using
> gmfn5~gmfn6.

[...]

> 
> ======================================
> 2.3 mapping the memory areas
> ======================================
> Handle the newly added config option in tools/{xl, libxl} and utilize
> toos/libxc to do the actual memory mapping. Specifically, we will use
> a wrapper to XENMME_add_to_physmap_batch with XENMAPSPACE_gmfn_foreign to
> do the actual mapping. But since there isn't such a wrapper in libxc, we'll
> have to add a new wrapper, xc_domain_add_to_physmap_batch in libxc/xc_domain.c

In the paragraph above, you suggest the user can select the permission on 
the shared page. However, the hypercall XENMEM_add_to_physmap does not 
currently take permission. So how do you plan to handle that?

Cheers,

-- 
Julien Grall


* Re: [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file
  2017-06-20 17:18 [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file Zhongze Liu
  2017-06-20 17:29 ` Julien Grall
@ 2017-06-21 15:09 ` Wei Liu
  2017-06-21 15:12   ` Julien Grall
  2017-06-22 17:27   ` Zhongze Liu
  2017-06-22 21:05 ` Stefano Stabellini
  2017-07-18 12:10 ` Julien Grall
  3 siblings, 2 replies; 19+ messages in thread
From: Wei Liu @ 2017-06-21 15:09 UTC (permalink / raw)
  To: Zhongze Liu
  Cc: Edgar E. Iglesias, Stefano Stabellini, Wei Liu, Ian Jackson,
	edgari, Julien Grall, xen-devel, Jarvis Roach

On Wed, Jun 21, 2017 at 01:18:38AM +0800, Zhongze Liu wrote:
> ====================================================
> 1. Motivation and Description
> ====================================================
> Virtual machines use grant table hypercalls to setup a share page for
> inter-VMs communications. These hypercalls are used by all PV
> protocols today. However, very simple guests, such as baremetal
> applications, might not have the infrastructure to handle the grant table.
> This project is about setting up several shared memory areas for inter-VMs
> communications directly from the VM config file.
> So that the guest kernel doesn't have to have grant table support (in the
> embedded space, this is not unusual) to be able to communicate with
> other guests.
> 
> ====================================================
> 2. Implementation Plan:
> ====================================================
> 
> ======================================
> 2.1 Introduce a new VM config option in xl:
> ======================================
> The shared areas should be shareable among several (>=2) VMs, so
> every shared physical memory area is assigned to a set of VMs.
> Therefore, a “token” or “identifier” should be used here to uniquely
> identify a backing memory area.
> 
> The backing area would be taken from one domain, which we will regard
> as the "master domain", and this domain should be created prior to any
> other "slave domain"s. Again, we have to use some kind of tag to tell who
> is the "master domain".
> 
> And the ability to specify the attributes of the pages (say, WO/RO/X)
> to be shared should be also given to the user. For the master domain,
> these attributes often describes the maximum permission allowed for the
> shared pages, and for the slave domains, these attributes are often used
> to describe with what permissions this area will be mapped.
> This information should also be specified in the xl config entry.
> 

I don't quite get the attribute settings. If you only insert a backing
page into the guest physical address space with a XENMEM hypercall, how do
you audit the attributes when the guest tries to map the page?

> To handle all these, I would suggest using an unsigned integer to serve as the
> identifier, and using a "master" tag in the master domain's xl config entry
> to announce that she will provide the backing memory pages. A separate
> entry would be used to describe the attributes of the shared memory area, of
> the form "prot=RW".

I think using an integer is too limiting. You would need the user to
know if a particular number is already used. Maybe using a number is
good enough for the use case you have in mind, but it is not future
proof. I don't know how sophisticated we want this to be, though.

> For example:
> 
> In xl config file of vm1:
> 
>     static_shared_mem = ["id = ID1, begin = gmfn1, end = gmfn2,
>                           granularity = 4k, prot = RO, master”,
>                          "id = ID2, begin = gmfn3, end = gmfn4,

I think you mean "gpfn" here and below.

>  granularity = 4k, prot = RW, master”]
> 
> In xl config file of vm2:
> 
>     static_shared_mem = ["id = ID1, begin = gmfn5, end = gmfn6,
>                           granularity = 4k, prot = RO”]
> 
> In xl config file of vm3:
> 
>     static_shared_mem = ["id = ID2, begin = gmfn7, end = gmfn8,
>                           granularity = 4k, prot = RW”]
> 
> gmfn's above are all hex of the form "0x20000".
> 
> In the example above. A memory area ID1 will be shared between vm1 and vm2.
> This area will be taken from vm1 and mapped into vm2's stage-2 page table.
> The parameter "prot=RO" means that this memory area are offered with read-only
> permission. vm1 can access this area using gmfn1~gmfn2, and vm2 using
> gmfn5~gmfn6.
> Likewise, a memory area ID will be shared between vm1 and vm3 with read and
> write permissions. vm1 is the master and vm2 the slave. vm1 can access the
> area using gmfn3~gmfn4 and vm3 using gmfn7~gmfn8.
> 
> The "granularity" is optional in the slaves' config entries. But if it's
> presented in the slaves' config entry, it has to be the same with its master's.
> Besides, the size of the gmfn range must also match. And overlapping backing
> memory areas are well defined.
> 

What do you mean by "well defined"?

Why is inserting a sub-range not allowed?

> Note that the "master" tag in vm1 for both ID1 and ID2 indicates that vm1
> should be created prior to both vm2 and vm3, for they both rely on the pages
> backed by vm1. If one tries to create vm2 or vm3 prior to vm1, she will get
> an error. And in vm1's config file, the "prot=RO" parameter of ID1 indicates
> that if one tries to share this page with vm1 with, say, "WR" permission,
> she will get an error, too.
> 


* Re: [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file
  2017-06-21 15:09 ` Wei Liu
@ 2017-06-21 15:12   ` Julien Grall
  2017-06-22 17:27   ` Zhongze Liu
  1 sibling, 0 replies; 19+ messages in thread
From: Julien Grall @ 2017-06-21 15:12 UTC (permalink / raw)
  To: Wei Liu, Zhongze Liu
  Cc: Edgar E. Iglesias, Stefano Stabellini, Ian Jackson, edgari,
	xen-devel, Jarvis Roach



On 21/06/17 16:09, Wei Liu wrote:
> On Wed, Jun 21, 2017 at 01:18:38AM +0800, Zhongze Liu wrote:
>> For example:
>>
>> In xl config file of vm1:
>>
>>     static_shared_mem = ["id = ID1, begin = gmfn1, end = gmfn2,
>>                           granularity = 4k, prot = RO, master”,
>>                          "id = ID2, begin = gmfn3, end = gmfn4,
>
> I think you mean "gpfn" here and below.

It would be better to use gfn in that case to follow the convention of 
the hypervisor (see xen/include/memory.h).

Cheers,

-- 
Julien Grall


* Re: [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file
  2017-06-20 17:29 ` Julien Grall
@ 2017-06-22 16:58   ` Zhongze Liu
  2017-06-22 20:55     ` Stefano Stabellini
  0 siblings, 1 reply; 19+ messages in thread
From: Zhongze Liu @ 2017-06-22 16:58 UTC (permalink / raw)
  To: Julien Grall
  Cc: Edgar E. Iglesias, Stefano Stabellini, Wei Liu, Ian Jackson,
	edgari, xen-devel, Jarvis Roach

Hi Julien,

2017-06-21 1:29 GMT+08:00 Julien Grall <julien.grall@arm.com>:
> Hi,
>
> Thank you for the new proposal.
>
> On 06/20/2017 06:18 PM, Zhongze Liu wrote:
>>
>> In the example above. A memory area ID1 will be shared between vm1 and
>> vm2.
>> This area will be taken from vm1 and mapped into vm2's stage-2 page table.
>> The parameter "prot=RO" means that this memory area are offered with
>> read-only
>> permission. vm1 can access this area using gmfn1~gmfn2, and vm2 using
>> gmfn5~gmfn6.
>
>
> [...]
>
>>
>> ======================================
>> 2.3 mapping the memory areas
>> ======================================
>> Handle the newly added config option in tools/{xl, libxl} and utilize
>> toos/libxc to do the actual memory mapping. Specifically, we will use
>> a wrapper to XENMME_add_to_physmap_batch with XENMAPSPACE_gmfn_foreign to
>> do the actual mapping. But since there isn't such a wrapper in libxc,
>> we'll
>> have to add a new wrapper, xc_domain_add_to_physmap_batch in
>> libxc/xc_domain.c
>
>
> In the paragrah above, you suggest the user can select the permission on the
> shared page. However, the hypercall XENMEM_add_to_physmap does not currently
> take permission. So how do you plan to handle that?
>

I think this could be done via XENMEM_access_op?

Cheers,

Zhongze Liu


* Re: [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file
  2017-06-21 15:09 ` Wei Liu
  2017-06-21 15:12   ` Julien Grall
@ 2017-06-22 17:27   ` Zhongze Liu
  2017-06-22 18:55     ` Zhongze Liu
  2017-06-28 16:03     ` Wei Liu
  1 sibling, 2 replies; 19+ messages in thread
From: Zhongze Liu @ 2017-06-22 17:27 UTC (permalink / raw)
  To: Wei Liu
  Cc: Edgar E. Iglesias, Stefano Stabellini, Ian Jackson, edgari,
	Julien Grall, xen-devel, Jarvis Roach

Hi Wei,

Thank you for your valuable comments.

2017-06-21 23:09 GMT+08:00 Wei Liu <wei.liu2@citrix.com>:
> On Wed, Jun 21, 2017 at 01:18:38AM +0800, Zhongze Liu wrote:
>> ====================================================
>> 1. Motivation and Description
>> ====================================================
>> Virtual machines use grant table hypercalls to setup a share page for
>> inter-VMs communications. These hypercalls are used by all PV
>> protocols today. However, very simple guests, such as baremetal
>> applications, might not have the infrastructure to handle the grant table.
>> This project is about setting up several shared memory areas for inter-VMs
>> communications directly from the VM config file.
>> So that the guest kernel doesn't have to have grant table support (in the
>> embedded space, this is not unusual) to be able to communicate with
>> other guests.
>>
>> ====================================================
>> 2. Implementation Plan:
>> ====================================================
>>
>> ======================================
>> 2.1 Introduce a new VM config option in xl:
>> ======================================
>> The shared areas should be shareable among several (>=2) VMs, so
>> every shared physical memory area is assigned to a set of VMs.
>> Therefore, a “token” or “identifier” should be used here to uniquely
>> identify a backing memory area.
>>
>> The backing area would be taken from one domain, which we will regard
>> as the "master domain", and this domain should be created prior to any
>> other "slave domain"s. Again, we have to use some kind of tag to tell who
>> is the "master domain".
>>
>> And the ability to specify the attributes of the pages (say, WO/RO/X)
>> to be shared should be also given to the user. For the master domain,
>> these attributes often describes the maximum permission allowed for the
>> shared pages, and for the slave domains, these attributes are often used
>> to describe with what permissions this area will be mapped.
>> This information should also be specified in the xl config entry.
>>
>
> I don't quite get the attribute settings. If you only insert a backing
> page into guest physical address space with XENMEM hypercall, how do you
> audit the attributes when the guest tries to map the page?
>

I'm still thinking about this, and any suggestions are welcome. The current
plan I have in mind is XENMEM_access_op.

>> To handle all these, I would suggest using an unsigned integer to serve as the
>> identifier, and using a "master" tag in the master domain's xl config entry
>> to announce that she will provide the backing memory pages. A separate
>> entry would be used to describe the attributes of the shared memory area, of
>> the form "prot=RW".
>
> I think using an integer is too limiting. You would need the user to
> know if a particular number is already used. Maybe using a number is
> good enough for the use case you have in mind, but it is not future
> proof. I don't know how sophisticated we want this to be, though.
>

Sounds reasonable. I chose integers because I think integers are fast and
easy to manipulate. But integers are somewhat hard to memorize and this
isn't a good thing from a user's point of view. So maybe I'll make it a
string with a maximum size of 32 or longer.

>> For example:
>>
>> In xl config file of vm1:
>>
>>     static_shared_mem = ["id = ID1, begin = gmfn1, end = gmfn2,
>>                           granularity = 4k, prot = RO, master”,
>>                          "id = ID2, begin = gmfn3, end = gmfn4,
>
> I think you mean "gpfn" here and below.
>

Yes, according to https://wiki.xenproject.org/wiki/XenTerminology, the section
"Address Spaces", gmfn == gpfn for auto-translated guests. But this usage
seems to be outdated and should be phased out according to include/xen/mm.h.
And, as Julien has pointed out, the term "gfn" should be used here.

>>  granularity = 4k, prot = RW, master”]
>>
>> In xl config file of vm2:
>>
>>     static_shared_mem = ["id = ID1, begin = gmfn5, end = gmfn6,
>>                           granularity = 4k, prot = RO”]
>>
>> In xl config file of vm3:
>>
>>     static_shared_mem = ["id = ID2, begin = gmfn7, end = gmfn8,
>>                           granularity = 4k, prot = RW”]
>>
>> gmfn's above are all hex of the form "0x20000".
>>
>> In the example above. A memory area ID1 will be shared between vm1 and vm2.
>> This area will be taken from vm1 and mapped into vm2's stage-2 page table.
>> The parameter "prot=RO" means that this memory area are offered with read-only
>> permission. vm1 can access this area using gmfn1~gmfn2, and vm2 using
>> gmfn5~gmfn6.
>> Likewise, a memory area ID will be shared between vm1 and vm3 with read and
>> write permissions. vm1 is the master and vm2 the slave. vm1 can access the
>> area using gmfn3~gmfn4 and vm3 using gmfn7~gmfn8.
>>
>> The "granularity" is optional in the slaves' config entries. But if it's
>> presented in the slaves' config entry, it has to be the same with its master's.
>> Besides, the size of the gmfn range must also match. And overlapping backing
>> memory areas are well defined.
>>
>
> What do you mean by "well defined"?

Em... I think I should have put it more clearly. In fact, I mean that
overlapping areas are allowed, and when two areas overlap with each other,
any operations done on the overlapping region will be seen on both sides.
Apart from this, they just act like two independent areas. And the job of
serializing access to the overlapping region is left to the user.

>
> Why is inserting a sub-range not allowed?
>

This is also a feature under consideration. Maybe the use cases that I have
in mind are not that complicated, so I chose to keep it simple. But after
giving it a second thought, I found that this will not add too much
complexity to the code and will be useful in some cases. So I think I'll
allow this in my next version of the proposal.

>> Note that the "master" tag in vm1 for both ID1 and ID2 indicates that vm1
>> should be created prior to both vm2 and vm3, for they both rely on the pages
>> backed by vm1. If one tries to create vm2 or vm3 prior to vm1, she will get
>> an error. And in vm1's config file, the "prot=RO" parameter of ID1 indicates
>> that if one tries to share this page with vm1 with, say, "WR" permission,
>> she will get an error, too.
>>

Cheers,

Zhongze Liu


* Re: [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file
  2017-06-22 17:27   ` Zhongze Liu
@ 2017-06-22 18:55     ` Zhongze Liu
  2017-06-28 16:03     ` Wei Liu
  1 sibling, 0 replies; 19+ messages in thread
From: Zhongze Liu @ 2017-06-22 18:55 UTC (permalink / raw)
  To: Wei Liu
  Cc: Edgar E. Iglesias, Stefano Stabellini, Ian Jackson, edgari,
	Julien Grall, xen-devel, Jarvis Roach

Hi,

After talking to Stefano, I understand that there seems to be no hypercall
to restrict the W/R/X permissions on the shared backing pages
(XENMEM_access_op is for another purpose, sorry for getting its usage
wrong). And it seems that the ability to specify these permissions is not
strictly necessary. Since the goal of this project is to set up VM-to-VM
communication, in most cases users would just expect the shared memory to
be mapped read-write with the cacheability attributes of normal memory. So
the temporary conclusion is to restrict the design to sharing read-write
pages with normal caching attributes, with the rest left to the to-be-done
list.


Cheers,

Zhongze Liu


* Re: [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file
  2017-06-22 16:58   ` Zhongze Liu
@ 2017-06-22 20:55     ` Stefano Stabellini
  0 siblings, 0 replies; 19+ messages in thread
From: Stefano Stabellini @ 2017-06-22 20:55 UTC (permalink / raw)
  To: Zhongze Liu
  Cc: Edgar E. Iglesias, Stefano Stabellini, Wei Liu, Ian Jackson,
	edgari, Julien Grall, xen-devel, Jarvis Roach

On Fri, 23 Jun 2017, Zhongze Liu wrote:
> Hi Julien,
> 
> 2017-06-21 1:29 GMT+08:00 Julien Grall <julien.grall@arm.com>:
> > Hi,
> >
> > Thank you for the new proposal.
> >
> > On 06/20/2017 06:18 PM, Zhongze Liu wrote:
> >>
> >> In the example above. A memory area ID1 will be shared between vm1 and
> >> vm2.
> >> This area will be taken from vm1 and mapped into vm2's stage-2 page table.
> >> The parameter "prot=RO" means that this memory area are offered with
> >> read-only
> >> permission. vm1 can access this area using gmfn1~gmfn2, and vm2 using
> >> gmfn5~gmfn6.
> >
> >
> > [...]
> >
> >>
> >> ======================================
> >> 2.3 mapping the memory areas
> >> ======================================
> >> Handle the newly added config option in tools/{xl, libxl} and utilize
> >> toos/libxc to do the actual memory mapping. Specifically, we will use
> >> a wrapper to XENMME_add_to_physmap_batch with XENMAPSPACE_gmfn_foreign to
> >> do the actual mapping. But since there isn't such a wrapper in libxc,
> >> we'll
> >> have to add a new wrapper, xc_domain_add_to_physmap_batch in
> >> libxc/xc_domain.c
> >
> >
> > In the paragrah above, you suggest the user can select the permission on the
> > shared page. However, the hypercall XENMEM_add_to_physmap does not currently
> > take permission. So how do you plan to handle that?
> >
> 
> I think this could be done via XENMEM_access_op?

I discussed this topic with Zhongze. I suggested to leave permissions as
"TODO" for the moment, given that for the use-case we have in mind they
aren't needed.


* Re: [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file
  2017-06-20 17:18 [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file Zhongze Liu
  2017-06-20 17:29 ` Julien Grall
  2017-06-21 15:09 ` Wei Liu
@ 2017-06-22 21:05 ` Stefano Stabellini
  2017-06-23  9:16   ` Julien Grall
  2017-07-18 12:10 ` Julien Grall
  3 siblings, 1 reply; 19+ messages in thread
From: Stefano Stabellini @ 2017-06-22 21:05 UTC (permalink / raw)
  To: Zhongze Liu
  Cc: Edgar E. Iglesias, Stefano Stabellini, Wei Liu, Ian Jackson,
	edgari, Julien Grall, xen-devel, Jarvis Roach

On Wed, 21 Jun 2017, Zhongze Liu wrote:
> ====================================================
> 1. Motivation and Description
> ====================================================
> Virtual machines use grant table hypercalls to setup a share page for
> inter-VMs communications. These hypercalls are used by all PV
> protocols today. However, very simple guests, such as baremetal
> applications, might not have the infrastructure to handle the grant table.
> This project is about setting up several shared memory areas for inter-VMs
> communications directly from the VM config file.
> So that the guest kernel doesn't have to have grant table support (in the
> embedded space, this is not unusual) to be able to communicate with
> other guests.
> 
> ====================================================
> 2. Implementation Plan:
> ====================================================
> 
> ======================================
> 2.1 Introduce a new VM config option in xl:
> ======================================
> The shared areas should be shareable among several (>=2) VMs, so
> every shared physical memory area is assigned to a set of VMs.
> Therefore, a “token” or “identifier” should be used here to uniquely
> identify a backing memory area.
> 
> The backing area would be taken from one domain, which we will regard
> as the "master domain", and this domain should be created prior to any
> other "slave domain"s. Again, we have to use some kind of tag to tell who
> is the "master domain".
> 
> And the ability to specify the attributes of the pages (say, WO/RO/X)
> to be shared should be also given to the user. For the master domain,
> these attributes often describes the maximum permission allowed for the
> shared pages, and for the slave domains, these attributes are often used
> to describe with what permissions this area will be mapped.
> This information should also be specified in the xl config entry.
> 
> To handle all these, I would suggest using an unsigned integer to serve as the
> identifier, and using a "master" tag in the master domain's xl config entry
> to announce that she will provide the backing memory pages. A separate
> entry would be used to describe the attributes of the shared memory area, of
> the form "prot=RW".
> For example:
> 
> In xl config file of vm1:
> 
>     static_shared_mem = ["id = ID1, begin = gmfn1, end = gmfn2,
>                           granularity = 4k, prot = RO, master”,
>                          "id = ID2, begin = gmfn3, end = gmfn4,
>  granularity = 4k, prot = RW, master”]
> 
> In xl config file of vm2:
> 
>     static_shared_mem = ["id = ID1, begin = gmfn5, end = gmfn6,
>                           granularity = 4k, prot = RO”]
> 
> In xl config file of vm3:
> 
>     static_shared_mem = ["id = ID2, begin = gmfn7, end = gmfn8,
>                           granularity = 4k, prot = RW”]
> 
> gmfn's above are all hex of the form "0x20000".
> 
> In the example above. A memory area ID1 will be shared between vm1 and vm2.
> This area will be taken from vm1 and mapped into vm2's stage-2 page table.
> The parameter "prot=RO" means that this memory area are offered with read-only
> permission. vm1 can access this area using gmfn1~gmfn2, and vm2 using
> gmfn5~gmfn6.
> Likewise, a memory area ID will be shared between vm1 and vm3 with read and
> write permissions. vm1 is the master and vm2 the slave. vm1 can access the
> area using gmfn3~gmfn4 and vm3 using gmfn7~gmfn8.
> 
> The "granularity" is optional in the slaves' config entries. But if it's
> presented in the slaves' config entry, it has to be the same with its master's.
> Besides, the size of the gmfn range must also match. And overlapping backing
> memory areas are well defined.
> 
> Note that the "master" tag in vm1 for both ID1 and ID2 indicates that vm1
> should be created prior to both vm2 and vm3, for they both rely on the pages
> backed by vm1. If one tries to create vm2 or vm3 prior to vm1, she will get
> an error. And in vm1's config file, the "prot=RO" parameter of ID1 indicates
> that if one tries to share this page with vm1 with, say, "WR" permission,
> she will get an error, too.
> 
> ======================================
> 2.2 Store the mem-sharing information in xenstore
> ======================================
> For we don't have some persistent storage for xl to store the information
> of the shared memory areas, we have to find some way to keep it between xl
> launches. And xenstore is a good place to do this. The information for one
> shared area should include the ID, master domid and gmfn ranges and
> memory attributes in master and slave domains of this area.
> A current plan is to place the information under /local/shared_mem/ID.
> Still take the above config files as an example:
> 
> If we instantiate vm1, vm2 and vm3, one after another,
> “xenstore ls -f” should output something like this:
> 
> After VM1 was instantiated, the output of “xenstore ls -f”
> will be something like this:
> 
>     /local/shared_mem/ID1/master = domid_of_vm1
>     /local/shared_mem/ID1/gmfn_begin = gmfn1
>     /local/shared_mem/ID1/gmfn_end = gmfn2
>     /local/shared_mem/ID1/granularity = "4k"
>     /local/shared_mem/ID1/permissions = "RO"
>     /local/shared_mem/ID1/slaves = ""
> 
>     /local/shared_mem/ID2/master = domid_of_vm1
>     /local/shared_mem/ID2/gmfn_begin = gmfn3
>     /local/shared_mem/ID2/gmfn_end = gmf4
>     /local/shared_mem/ID1/granularity = "4k"
>     /local/shared_mem/ID2/permissions = "RW"
>     /local/shared_mem/ID2/slaves = ""
> 
> After VM2 was instantiated, the following new lines will appear:
> 
>     /local/shared_mem/ID1/slaves/domid_of_vm2/gmfn_begin = gmfn5
>     /local/shared_mem/ID1/slaves/domid_of_vm2/gmfn_end = gmfn6
>     /local/shared_mem/ID1/slaves/domid_of_vm2/permissions = "RO"
> 
> After VM2 was instantiated, the following new lines will appear:
> 
>     /local/shared_mem/ID2/slaves/domid_of_vm3/gmfn_begin = gmfn7
>     /local/shared_mem/ID2/slaves/domid_of_vm3/gmfn_end = gmfn8
>     /local/shared_mem/ID2/slaves/domid_of_vm3/permissions = "RW"
> 
> 
> When we encounter an id IDx during "xl create":
> 
>   + If it’s not under /local/shared_mem:
>     + If the corresponding entry has a "master" tag, create the
>       corresponding entries for IDx in xenstore
>     + If there isn't a "master" tag, say error.
> 
>   + If it’s found under /local/shared_mem:
>     + If the corresponding entry has a "master" tag, say error
>     + If there isn't a "master" tag, map the pages to the newly
>       created domain, and add the current domain and necessary information
>       under /local/shared_mem/IDx/slaves.

Aside from using "gfn" instead of gmfn everywhere, I think it looks
pretty good.

I would leave out permissions and cacheability attributes from this
version of the work. I would just add a note saying that memory will be
mapped as RW regular cacheable RAM. Other permissions and cacheability
will be possible, but they are not implemented yet.

I think you should also add a few lines on how the teardown is supposed
to work at domain destruction, mentioning that the memory will be freed
only after all slaves and the master are destroyed. I would also clarify
who and when removes the /local/shared_mem xenstore entries.


> ======================================
> 2.3 mapping the memory areas
> ======================================
> Handle the newly added config option in tools/{xl, libxl} and utilize
> toos/libxc to do the actual memory mapping. Specifically, we will use
> a wrapper to XENMME_add_to_physmap_batch with XENMAPSPACE_gmfn_foreign to
> do the actual mapping. But since there isn't such a wrapper in libxc, we'll
> have to add a new wrapper, xc_domain_add_to_physmap_batch in libxc/xc_domain.c
> 
> ======================================
> 2.4 error handling
> ======================================
> Add code to handle various errors: Invalid address, invalid permissions, wrong
> order of vm creation, mismatched granulairty of length of memory area etc.
> 
> ====================================================
> 3. Expected Outcomes/Goals:
> ====================================================
> A new VM config option in xl will be introduced, allowing users to setup
> several shared memory areas for inter-VMs communications.
> This should work on both x86 and ARM.
> 
> ====================================================
> 3. Future Directions:
> ====================================================
> There could also be other non-permission memory attributes like cacheability
> and shareability.
> 
> Indications of where in the host physical memory should we get the backing
> memory from.
> 
> Set up a notification channel between domains who are communicating through
> shared memory regions, this allows one vm to signal her friends when data is
> available in the shared memory or when the data in the shared memory is
> consumed. The channel could be built upon PPI or SGI.
> 
> 
> [See also:
> https://wiki.xenproject.org/wiki/Outreach_Program_Projects#Share_a_page_in_memory_from_the_VM_config_file]
> 
> 
> Cheers,
> 
> Zhongze Liu
> 


* Re: [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file
  2017-06-22 21:05 ` Stefano Stabellini
@ 2017-06-23  9:16   ` Julien Grall
  2017-06-23 18:21     ` Stefano Stabellini
  0 siblings, 1 reply; 19+ messages in thread
From: Julien Grall @ 2017-06-23  9:16 UTC (permalink / raw)
  To: Stefano Stabellini, Zhongze Liu
  Cc: Edgar E. Iglesias, Wei Liu, Ian Jackson, edgari, xen-devel, Jarvis Roach

Hi,

On 22/06/17 22:05, Stefano Stabellini wrote:
>> When we encounter an id IDx during "xl create":
>>
>>   + If it’s not under /local/shared_mem:
>>     + If the corresponding entry has a "master" tag, create the
>>       corresponding entries for IDx in xenstore
>>     + If there isn't a "master" tag, say error.
>>
>>   + If it’s found under /local/shared_mem:
>>     + If the corresponding entry has a "master" tag, say error
>>     + If there isn't a "master" tag, map the pages to the newly
>>       created domain, and add the current domain and necessary information
>>       under /local/shared_mem/IDx/slaves.
>
> Aside from using "gfn" instead of gmfn everywhere, I think it looks
> pretty good.
>
> I would leave out permissions and cacheability attributes from this
> version of the work. I would just add a note saying that memory will be
> mapped as RW regular cacheable RAM. Other permissions and cacheability
> will be possible, but they are not implemented yet.

Well, I think we should design the interface correctly from the 
beginning to facilitate future extension.

Also, you need to clarify what you mean by "regular cacheable RAM". Are 
they write-through, write-back...? But, on ARM, this would only be the 
caching attribute in stage-2 page table. The final caching, memory type, 
shareability would be a combination of stage-2 and stage-1 attributes.

Cheers,

-- 
Julien Grall


* Re: [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file
  2017-06-23  9:16   ` Julien Grall
@ 2017-06-23 18:21     ` Stefano Stabellini
  2017-06-23 19:19       ` Jarvis Roach
  2017-06-23 20:18       ` Julien Grall
  0 siblings, 2 replies; 19+ messages in thread
From: Stefano Stabellini @ 2017-06-23 18:21 UTC (permalink / raw)
  To: Julien Grall
  Cc: Edgar E. Iglesias, Stefano Stabellini, Wei Liu, Zhongze Liu,
	Ian Jackson, edgari, xen-devel, Jarvis Roach

On Fri, 23 Jun 2017, Julien Grall wrote:
> Hi,
> 
> On 22/06/17 22:05, Stefano Stabellini wrote:
> > > When we encounter an id IDx during "xl create":
> > > 
> > >   + If it’s not under /local/shared_mem:
> > >     + If the corresponding entry has a "master" tag, create the
> > >       corresponding entries for IDx in xenstore
> > >     + If there isn't a "master" tag, say error.
> > > 
> > >   + If it’s found under /local/shared_mem:
> > >     + If the corresponding entry has a "master" tag, say error
> > >     + If there isn't a "master" tag, map the pages to the newly
> > >       created domain, and add the current domain and necessary information
> > >       under /local/shared_mem/IDx/slaves.
> > 
> > Aside from using "gfn" instead of gmfn everywhere, I think it looks
> > pretty good.
> > 
> > I would leave out permissions and cacheability attributes from this
> > version of the work. I would just add a note saying that memory will be
> > mapped as RW regular cacheable RAM. Other permissions and cacheability
> > will be possible, but they are not implemented yet.
> 
> Well, I think we should design the interface correctly from the beginning to
> facilitate future extension.

Which interface are you speaking about?

I don't think we should attempt to write down what the hypercall interface
might look like in the future to support setting permissions and
cacheability attributes.


> Also, you need to clarify what you mean by "regular cacheable RAM". Are they
> write-through, write-back...? But, on ARM, this would only be the caching
> attribute in stage-2 page table. The final caching, memory type, shareability
> would be a combination of stage-2 and stage-1 attributes.

The very same that is used today for the ram of virtual machines, do we
need to say any more than that? (For ARM, p2m_ram_rw and MATTR_MEM,
LPAE_SH_INNER. For stage1, we should refer to
xen/include/public/arch-arm.h.)


* Re: [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file
  2017-06-23 18:21     ` Stefano Stabellini
@ 2017-06-23 19:19       ` Jarvis Roach
  2017-06-23 20:09         ` Stefano Stabellini
  2017-06-23 20:18       ` Julien Grall
  1 sibling, 1 reply; 19+ messages in thread
From: Jarvis Roach @ 2017-06-23 19:19 UTC (permalink / raw)
  To: Stefano Stabellini, Julien Grall
  Cc: Edgar E. Iglesias, Wei Liu, Zhongze Liu, Ian Jackson, edgari, xen-devel



> -----Original Message-----
> From: Stefano Stabellini [mailto:sstabellini@kernel.org]
> Sent: Friday, June 23, 2017 2:21 PM
> To: Julien Grall <julien.grall@arm.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Zhongze Liu
> <blackskygg@gmail.com>; xen-devel@lists.xenproject.org; Wei Liu
> <wei.liu2@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Jarvis Roach
> <Jarvis.Roach@dornerworks.com>; edgari@xilinx.com; Edgar E. Iglesias
> <edgar.iglesias@xilinx.com>
> Subject: Re: [RFC v2]Proposal to allow setting up shared memory areas
> between VMs from xl config file
> 
> On Fri, 23 Jun 2017, Julien Grall wrote:
> > Hi,
> >
> > On 22/06/17 22:05, Stefano Stabellini wrote:
> > > > When we encounter an id IDx during "xl create":
> > > >
> > > >   + If it’s not under /local/shared_mem:
> > > >     + If the corresponding entry has a "master" tag, create the
> > > >       corresponding entries for IDx in xenstore
> > > >     + If there isn't a "master" tag, say error.
> > > >
> > > >   + If it’s found under /local/shared_mem:
> > > >     + If the corresponding entry has a "master" tag, say error
> > > >     + If there isn't a "master" tag, map the pages to the newly
> > > >       created domain, and add the current domain and necessary
> information
> > > >       under /local/shared_mem/IDx/slaves.
> > >
> > > Aside from using "gfn" instead of gmfn everywhere, I think it looks
> > > pretty good.
> > >
> > > I would leave out permissions and cacheability attributes from this
> > > version of the work. I would just add a note saying that memory will
> > > be mapped as RW regular cacheable RAM. Other permissions and
> > > cacheability will be possible, but they are not implemented yet.
> >
> > Well, I think we should design the interface correctly from the
> > beginning to facilitate future extension.
> 
> Which interface are you speaking about?
> 
> I don't think we should attemp to write how the hypercall interface might
> look like in the future to support setting permissions and cacheability
> attributes.
> 
> 
> > Also, you need to clarify what you mean by "regular cacheable RAM".
> > Are they write-through, write-back...? But, on ARM, this would only be
> > the caching attribute in stage-2 page table. The final caching, memory
> > type, shareability would be a combination of stage-2 and stage-1 attributes.
> 
> The very same that is used today for the ram of virtual machines, do we need
> to say any more than that? (For ARM, p2m_ram_rw and MATTR_MEM,
> LPAE_SH_INNER. For stage1, we should refer to
> xen/include/public/arch-arm.h.)

I have customers who need some buffers LPAE_SH_OUTER and others who need NORMAL non-cacheable or inner-cacheable buffers, so my suggestion is to provide a way to support the full combination of configurations. 

While the stage 1/stage 2 combination results allow guests (via the stage 1 translation regime) to force the two combinations I specifically mentioned,  in the first case the customers want LPAE_SH_OUTER for cache coherency with a DMA-capable I/O device. In that case, Xen needs to set the shareability attribute to OUTER in the stage 2 table since that's what is used for the SMMU. In the second case,  NORMAL non-cacheable or inner-cacheable, the customers are in a position where they can't trust the guests to disable their cache or set it for inner-cacheable, so it would be good for a way to Xen or privileged/trusted domain to do so.





* Re: [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file
  2017-06-23 19:19       ` Jarvis Roach
@ 2017-06-23 20:09         ` Stefano Stabellini
  2017-06-23 20:47           ` Jarvis Roach
  0 siblings, 1 reply; 19+ messages in thread
From: Stefano Stabellini @ 2017-06-23 20:09 UTC (permalink / raw)
  To: Jarvis Roach
  Cc: Edgar E. Iglesias, Stefano Stabellini, Wei Liu, Zhongze Liu,
	Ian Jackson, edgari, Julien Grall, xen-devel

On Fri, 23 Jun 2017, Jarvis Roach wrote:
> > -----Original Message-----
> > From: Stefano Stabellini [mailto:sstabellini@kernel.org]
> > Sent: Friday, June 23, 2017 2:21 PM
> > To: Julien Grall <julien.grall@arm.com>
> > Cc: Stefano Stabellini <sstabellini@kernel.org>; Zhongze Liu
> > <blackskygg@gmail.com>; xen-devel@lists.xenproject.org; Wei Liu
> > <wei.liu2@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Jarvis Roach
> > <Jarvis.Roach@dornerworks.com>; edgari@xilinx.com; Edgar E. Iglesias
> > <edgar.iglesias@xilinx.com>
> > Subject: Re: [RFC v2]Proposal to allow setting up shared memory areas
> > between VMs from xl config file
> > 
> > On Fri, 23 Jun 2017, Julien Grall wrote:
> > > Hi,
> > >
> > > On 22/06/17 22:05, Stefano Stabellini wrote:
> > > > > When we encounter an id IDx during "xl create":
> > > > >
> > > > >   + If it’s not under /local/shared_mem:
> > > > >     + If the corresponding entry has a "master" tag, create the
> > > > >       corresponding entries for IDx in xenstore
> > > > >     + If there isn't a "master" tag, say error.
> > > > >
> > > > >   + If it’s found under /local/shared_mem:
> > > > >     + If the corresponding entry has a "master" tag, say error
> > > > >     + If there isn't a "master" tag, map the pages to the newly
> > > > >       created domain, and add the current domain and necessary
> > information
> > > > >       under /local/shared_mem/IDx/slaves.
> > > >
> > > > Aside from using "gfn" instead of gmfn everywhere, I think it looks
> > > > pretty good.
> > > >
> > > > I would leave out permissions and cacheability attributes from this
> > > > version of the work. I would just add a note saying that memory will
> > > > be mapped as RW regular cacheable RAM. Other permissions and
> > > > cacheability will be possible, but they are not implemented yet.
> > >
> > > Well, I think we should design the interface correctly from the
> > > beginning to facilitate future extension.
> > 
> > Which interface are you speaking about?
> > 
> > I don't think we should attemp to write how the hypercall interface might
> > look like in the future to support setting permissions and cacheability
> > attributes.
> > 
> > 
> > > Also, you need to clarify what you mean by "regular cacheable RAM".
> > > Are they write-through, write-back...? But, on ARM, this would only be
> > > the caching attribute in stage-2 page table. The final caching, memory
> > > type, shareability would be a combination of stage-2 and stage-1 attributes.
> > 
> > The very same that is used today for the ram of virtual machines, do we need
> > to say any more than that? (For ARM, p2m_ram_rw and MATTR_MEM,
> > LPAE_SH_INNER. For stage1, we should refer to
> > xen/include/public/arch-arm.h.)
> 
> I have customers who need some buffers LPAE_SH_OUTER and others who need NORMAL non-cacheable or inner-cacheable buffers, so my suggestion is to provide a way to support the full combination of configurations. 
> 
> While the stage 1/stage 2 combination results allow guests (via the stage 1 translation regime) to force the two combinations I specifically mentioned,  in the first case the customers want LPAE_SH_OUTER for cache coherency with a DMA-capable I/O device. In that case, Xen needs to set the shareability attribute to OUTER in the stage 2 table since that's what is used for the SMMU. In the second case,  NORMAL non-cacheable or inner-cacheable, the customers are in a position where they can't trust the guests to disable their cache or set it for inner-cacheable, so it would be good for a way to Xen or privileged/trusted domain to do so.

Let me premise that I would be happy to see the whole set of
configurations implemented in the long run; we might just not get there
on day 1. We could spec out what the VM config option should look like,
but leave the cacheability and shareability parameters unimplemented
for now (also to address Julien's comment on defining future-proof
interfaces).

I understand the need for cache-coherent buffers for dma to/from
devices, but I think that problem should be solved with the iomem config
option. This project was meant to setup shared memory regions for
VM-to-VM communications. It doesn't look like that is the kind of
requirement that this framework is meant to meet, unless I am missing
something?

Normal non-cacheable buffers are more interesting: do you actually see
guests running on non-cacheable memory? If not, could you make an
example of a use-case for two VMs sharing a non-cacheable page?


* Re: [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file
  2017-06-23 18:21     ` Stefano Stabellini
  2017-06-23 19:19       ` Jarvis Roach
@ 2017-06-23 20:18       ` Julien Grall
  1 sibling, 0 replies; 19+ messages in thread
From: Julien Grall @ 2017-06-23 20:18 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Edgar E. Iglesias, Wei Liu, Zhongze Liu, Ian Jackson, edgari,
	xen-devel, Jarvis Roach

Hi Stefano,

On 06/23/2017 07:21 PM, Stefano Stabellini wrote:
> On Fri, 23 Jun 2017, Julien Grall wrote:
>> Hi,
>>
>> On 22/06/17 22:05, Stefano Stabellini wrote:
>>>> When we encounter an id IDx during "xl create":
>>>>
>>>>    + If it’s not under /local/shared_mem:
>>>>      + If the corresponding entry has a "master" tag, create the
>>>>        corresponding entries for IDx in xenstore
>>>>      + If there isn't a "master" tag, say error.
>>>>
>>>>    + If it’s found under /local/shared_mem:
>>>>      + If the corresponding entry has a "master" tag, say error
>>>>      + If there isn't a "master" tag, map the pages to the newly
>>>>        created domain, and add the current domain and necessary information
>>>>        under /local/shared_mem/IDx/slaves.
>>>
>>> Aside from using "gfn" instead of gmfn everywhere, I think it looks
>>> pretty good.
>>>
>>> I would leave out permissions and cacheability attributes from this
>>> version of the work. I would just add a note saying that memory will be
>>> mapped as RW regular cacheable RAM. Other permissions and cacheability
>>> will be possible, but they are not implemented yet.
>>
>> Well, I think we should design the interface correctly from the beginning to
>> facilitate future extension.
> 
> Which interface are you speaking about?

The interface with the user, i.e libxl and xl. The hypercall can be 
added later if necessary as this could be a DOMCTL so not part of a 
stable ABI.

> 
> I don't think we should attemp to write how the hypercall interface
> might look like in the future to support setting permissions and
> cacheability attributes.
> 
> 
>> Also, you need to clarify what you mean by "regular cacheable RAM". Are they
>> write-through, write-back...? But, on ARM, this would only be the caching
>> attribute in stage-2 page table. The final caching, memory type, shareability
>> would be a combination of stage-2 and stage-1 attributes.
> 
> The very same that is used today for the ram of virtual machines, do we
> need to say any more than that? (For ARM, p2m_ram_rw and MATTR_MEM,
> LPAE_SH_INNER. For stage1, we should refer to
> xen/include/public/arch-arm.h.)

  * All memory which is shared with other entities in the system
  * (including the hypervisor and other guests) must reside in memory
  * which is mapped as Normal Inner-cacheable. This applies to:
  *  - hypercall arguments passed via a pointer to guest memory.
  *  - memory shared via the grant table mechanism (including PV I/O
  *    rings etc).
  *  - memory shared with the hypervisor (struct shared_info, struct
  *    vcpu_info, the grant table, etc).
  *
  * Any Inner cache allocation strategy (Write-Back, Write-Through etc)
  * is acceptable. There is no restriction on the Outer-cacheability.

This does not cover memory shared between guests via methods other than
the grant table. So the documentation should at least be updated.
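
For instance, that list could hypothetically gain one more bullet along
these lines:

  *  - memory shared between guests that is set up statically by the
  *    toolstack (e.g. the static_shared_mem regions proposed here).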

But AFAICT, this does not say anything about the shareability of the
region. It only speaks about Inner- and Outer-cacheability.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file
  2017-06-23 20:09         ` Stefano Stabellini
@ 2017-06-23 20:47           ` Jarvis Roach
  0 siblings, 0 replies; 19+ messages in thread
From: Jarvis Roach @ 2017-06-23 20:47 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Edgar E. Iglesias, Wei Liu, Zhongze Liu, Ian Jackson, edgari,
	Julien Grall, xen-devel



> -----Original Message-----
> From: Stefano Stabellini [mailto:sstabellini@kernel.org]
> Sent: Friday, June 23, 2017 4:09 PM
> To: Jarvis Roach <Jarvis.Roach@dornerworks.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Julien Grall
> <julien.grall@arm.com>; Zhongze Liu <blackskygg@gmail.com>; xen-
> devel@lists.xenproject.org; Wei Liu <wei.liu2@citrix.com>; Ian Jackson
> <ian.jackson@eu.citrix.com>; edgari@xilinx.com; Edgar E. Iglesias
> <edgar.iglesias@xilinx.com>
> Subject: RE: [RFC v2]Proposal to allow setting up shared memory areas
> between VMs from xl config file
> 
> On Fri, 23 Jun 2017, Jarvis Roach wrote:
> > > -----Original Message-----
> > > From: Stefano Stabellini [mailto:sstabellini@kernel.org]
> > > Sent: Friday, June 23, 2017 2:21 PM
> > > To: Julien Grall <julien.grall@arm.com>
> > > Cc: Stefano Stabellini <sstabellini@kernel.org>; Zhongze Liu
> > > <blackskygg@gmail.com>; xen-devel@lists.xenproject.org; Wei Liu
> > > <wei.liu2@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>;
> > > Jarvis Roach <Jarvis.Roach@dornerworks.com>; edgari@xilinx.com;
> > > Edgar E. Iglesias <edgar.iglesias@xilinx.com>
> > > Subject: Re: [RFC v2]Proposal to allow setting up shared memory
> > > areas between VMs from xl config file
> > >
> > > On Fri, 23 Jun 2017, Julien Grall wrote:
> > > > Hi,
> > > >
> > > > On 22/06/17 22:05, Stefano Stabellini wrote:
> > > > > > When we encounter an id IDx during "xl create":
> > > > > >
> > > > > >   + If it’s not under /local/shared_mem:
> > > > > >     + If the corresponding entry has a "master" tag, create the
> > > > > >       corresponding entries for IDx in xenstore
> > > > > >     + If there isn't a "master" tag, say error.
> > > > > >
> > > > > >   + If it’s found under /local/shared_mem:
> > > > > >     + If the corresponding entry has a "master" tag, say error
> > > > > >     + If there isn't a "master" tag, map the pages to the newly
> > > > > >       created domain, and add the current domain and necessary
> > > information
> > > > > >       under /local/shared_mem/IDx/slaves.
> > > > >
> > > > > Aside from using "gfn" instead of gmfn everywhere, I think it
> > > > > looks pretty good.
> > > > >
> > > > > I would leave out permissions and cacheability attributes from
> > > > > this version of the work. I would just add a note saying that
> > > > > memory will be mapped as RW regular cacheable RAM. Other
> > > > > permissions and cacheability will be possible, but they are not
> implemented yet.
> > > >
> > > > Well, I think we should design the interface correctly from the
> > > > beginning to facilitate future extension.
> > >
> > > Which interface are you speaking about?
> > >
> > > I don't think we should attempt to write what the hypercall interface
> > > might look like in the future to support setting permissions and
> > > cacheability attributes.
> > >
> > >
> > > > Also, you need to clarify what you mean by "regular cacheable RAM".
> > > > Are they write-through, write-back...? But, on ARM, this would
> > > > only be the caching attribute in stage-2 page table. The final
> > > > caching, memory type, shareability would be a combination of stage-2
> and stage-1 attributes.
> > >
> > > The very same that is used today for the ram of virtual machines, do
> > > we need to say any more than that? (For ARM, p2m_ram_rw and
> > > MATTR_MEM, LPAE_SH_INNER. For stage1, we should refer to
> > > xen/include/public/arch-arm.h.)
> >
> > I have customers who need some buffers LPAE_SH_OUTER and others
> who need NORMAL non-cacheable or inner-cacheable buffers, so my
> suggestion is to provide a way to support the full combination of
> configurations.
> >
> > While the stage 1/stage 2 combination results allow guests (via the stage 1
> translation regime) to force the two combinations I specifically mentioned,  in
> the first case the customers want LPAE_SH_OUTER for cache coherency with
> a DMA-capable I/O device. In that case, Xen needs to set the shareability
> attribute to OUTER in the stage 2 table since that's what is used for the
> SMMU. In the second case, NORMAL non-cacheable or inner-cacheable, the
> customers are in a position where they can't trust the guests to disable
> their caches or set them to inner-cacheable, so it would be good to have a
> way for Xen or a privileged/trusted domain to do so.
> 
> Let me premise that I would be happy to see the whole set of configurations
> implemented in the long run, we might just not get there on day1. We could
> spec out what the VM config option should look like, but leave the
> cacheability and shareability parameters unimplemented for now (also to
> address Julien's comment on defining future-proof interfaces).
> 
> I understand the need for cache-coherent buffers for dma to/from devices,
> but I think that problem should be solved with the iomem config option. This
> project was meant to setup shared memory regions for VM-to-VM
> communications. It doesn't look like that is the kind of requirement that this
> framework is meant to meet, unless I am missing something?

As the intent is for direct VM-to-VM communication, I concede the point. However, there is interest in an I/O-device-to-common-buffer arrangement where both VMs access the buffer using a distributed access algorithm, in which case indirect VM-to-VM communication does occur, though no doubt I'm stretching the meaning and intent of the project.

> Normal non-cacheable buffers are more interesting: do you actually see
> guests running on non-cacheable memory? If not, could you make an
> example of a use-case for two VMs sharing a non-cacheable page?

There are a couple of different use cases for guests running without outer cache specifically, or without any cache generally. For safety applications, partitioning VMs onto their own CPU cores without sharing L2 cache (for all but one VM) would let you eliminate cross-VM jitter caused by cache contention, while still gaining some advantage from the L1 cache (and all of the advantage of the L2 cache for one of them). For security applications, there's a similar desire not to share a common resource like cache between VMs, for fear that a rogue actor could extract information from it. In both situations having a shared page would be useful for inter-VM communication. Both use cases presume that the base memory allocated to the guest as part of its VM environment is also set up as non-cacheable (or inner-cacheable), which is why it would be useful to have an interface to control those attributes better.

The best use case I can think of for normal, non-cacheable buffers for VMs with otherwise cacheable "main" memory would again be a security application where the cacheable main memory handles encrypted information, but the decrypted data is put into a non-cached shared buffer for another VM to consume. Again the concern is that if the buffer were cacheable, a rogue agent in the system could use some side-channel exploit to gain information about the data.
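
To make that concrete, what I would like to be able to write in the xl config is something along these lines (the "cache" and "share" keys are hypothetical, purely to illustrate the kind of control I'm after, and the numbers are made up):

    static_shared_mem = ["id = ID1, begin = 0x20000, end = 0x20080,
                          granularity = 4k, prot = RW, cache = none,
                          share = outer, master"]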



 
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file
  2017-06-22 17:27   ` Zhongze Liu
  2017-06-22 18:55     ` Zhongze Liu
@ 2017-06-28 16:03     ` Wei Liu
  1 sibling, 0 replies; 19+ messages in thread
From: Wei Liu @ 2017-06-28 16:03 UTC (permalink / raw)
  To: Zhongze Liu
  Cc: Edgar E. Iglesias, Stefano Stabellini, Wei Liu, Ian Jackson,
	edgari, Julien Grall, xen-devel, Jarvis Roach

Sorry for the late reply.

I can see the thread already contains answers to some of my questions so
I will just reply to the bits that are still relevant.

On Fri, Jun 23, 2017 at 01:27:24AM +0800, Zhongze Liu wrote:
> Hi Wei,
> 
> Thank you for your valuable comments.
> 
> 2017-06-21 23:09 GMT+08:00 Wei Liu <wei.liu2@citrix.com>:
[...]
> >> To handle all these, I would suggest using an unsigned integer to serve as the
> >> identifier, and using a "master" tag in the master domain's xl config entry
> >> to announce that she will provide the backing memory pages. A separate
> >> entry would be used to describe the attributes of the shared memory area, of
> >> the form "prot=RW".
> >
> > I think using an integer is too limiting. You would need the user to
> > know if a particular number is already used. Maybe using a number is
> > good enough for the use case you have in mind, but it is not future
> > proof. I don't know how sophisticated we want this to be, though.
> >
> 
> Sounds reasonable. I chose integers because I think integers are fast
> and easy to manipulate. But integers are somewhat hard to memorize and
> this isn't a good thing from a user's point of view. So maybe I'll make
> it a string with a maximum size of 32 or longer.
> 

Sounds reasonable.
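
For example (the name and numbers below are made up), an entry could then
look like:

    static_shared_mem = ["id = vm1_vm2_ring, begin = 0x20000, end = 0x20080,
                          granularity = 4k, prot = RW, master"]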

[...]
> >>  granularity = 4k, prot = RW, master”]
> >>
> >> In xl config file of vm2:
> >>
> >>     static_shared_mem = ["id = ID1, begin = gmfn5, end = gmfn6,
> >>                           granularity = 4k, prot = RO”]
> >>
> >> In xl config file of vm3:
> >>
> >>     static_shared_mem = ["id = ID2, begin = gmfn7, end = gmfn8,
> >>                           granularity = 4k, prot = RW”]
> >>
> >> gmfn's above are all hex of the form "0x20000".
> >>
> >> In the example above. A memory area ID1 will be shared between vm1 and vm2.
> >> This area will be taken from vm1 and mapped into vm2's stage-2 page table.
> >> The parameter "prot=RO" means that this memory area are offered with read-only
> >> permission. vm1 can access this area using gmfn1~gmfn2, and vm2 using
> >> gmfn5~gmfn6.
> >> Likewise, a memory area ID2 will be shared between vm1 and vm3 with read and
> >> write permissions. vm1 is the master and vm3 the slave. vm1 can access the
> >> area using gmfn3~gmfn4 and vm3 using gmfn7~gmfn8.
> >>
> >> The "granularity" is optional in the slaves' config entries. But if it's
> >> present in a slave's config entry, it has to be the same as its master's.
> >> Besides, the size of the gmfn range must also match. And overlapping backing
> >> memory areas are well defined.
> >>
> >
> > What do you mean by "well defined"?
> 
> Em... I think I should have put it more clearly. In fact, I mean that
> overlapping areas are allowed, and when two areas overlap with each other,
> any operations done on the overlapping area will be seen on both sides.
> Besides this, they just act like two independent areas. And the job of
> serializing access to the overlapping area is left to the user.
> 

OK. "Well defined" means "clearly defined or described" but I didn't see
any definition or description of it. Just use "allowed" should be OK.
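
For example (made-up frame numbers), something like this in the master would
then be allowed, with the two areas sharing the frames in their overlap:

    static_shared_mem = ["id = ID1, begin = 0x20000, end = 0x20080,
                          granularity = 4k, prot = RW, master",
                         "id = ID2, begin = 0x20040, end = 0x200c0,
                          granularity = 4k, prot = RW, master"]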

> >
> > Why is inserting a sub-range not allowed?
> >
> 
> This is also a feature under consideration. Maybe the use cases that I have
> in mind are not that complicated, so I chose to keep it simple. But after
> giving it a second thought, I found this will not add too much complexity to
> the code and will be useful in some cases. So I think I'll allow this in my
> next version of the proposal.
> 

That's what I thought as well. Essentially it is not any harder than
implementing the overlapping case.
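
As an illustration (made-up numbers again, and how the offset within the
master's area would be expressed is still to be defined), the master could
export a 64-frame area and a slave could then map just the first half of it:

In the master's config file:

    static_shared_mem = ["id = ID1, begin = 0x20000, end = 0x20040,
                          granularity = 4k, prot = RW, master"]

In a slave's config file:

    static_shared_mem = ["id = ID1, begin = 0x30000, end = 0x30020,
                          granularity = 4k, prot = RW"]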

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file
  2017-06-20 17:18 [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file Zhongze Liu
                   ` (2 preceding siblings ...)
  2017-06-22 21:05 ` Stefano Stabellini
@ 2017-07-18 12:10 ` Julien Grall
  2017-07-18 14:22   ` Zhongze Liu
  3 siblings, 1 reply; 19+ messages in thread
From: Julien Grall @ 2017-07-18 12:10 UTC (permalink / raw)
  To: Zhongze Liu, xen-devel
  Cc: Edgar E. Iglesias, Stefano Stabellini, Wei Liu, Ian Jackson,
	edgari, Jarvis Roach

Hi,

On 20/06/17 18:18, Zhongze Liu wrote:
> ====================================================
> 1. Motivation and Description
> ====================================================
> Virtual machines use grant table hypercalls to setup a share page for
> inter-VMs communications. These hypercalls are used by all PV
> protocols today. However, very simple guests, such as baremetal
> applications, might not have the infrastructure to handle the grant table.
> This project is about setting up several shared memory areas for inter-VMs
> communications directly from the VM config file.
> So that the guest kernel doesn't have to have grant table support (in the
> embedded space, this is not unusual) to be able to communicate with
> other guests.
>
> ====================================================
> 2. Implementation Plan:
> ====================================================
>
> ======================================
> 2.1 Introduce a new VM config option in xl:
> ======================================
> The shared areas should be shareable among several (>=2) VMs, so
> every shared physical memory area is assigned to a set of VMs.
> Therefore, a “token” or “identifier” should be used here to uniquely
> identify a backing memory area.
>
> The backing area would be taken from one domain, which we will regard
> as the "master domain", and this domain should be created prior to any
> other "slave domain"s. Again, we have to use some kind of tag to tell who
> is the "master domain".
>
> And the ability to specify the attributes of the pages (say, WO/RO/X)
> to be shared should be also given to the user. For the master domain,
> these attributes often describes the maximum permission allowed for the
> shared pages, and for the slave domains, these attributes are often used
> to describe with what permissions this area will be mapped.
> This information should also be specified in the xl config entry.
>
> To handle all these, I would suggest using an unsigned integer to serve as the
> identifier, and using a "master" tag in the master domain's xl config entry
> to announce that she will provide the backing memory pages. A separate
> entry would be used to describe the attributes of the shared memory area, of
> the form "prot=RW".
> For example:
>
> In xl config file of vm1:
>
>     static_shared_mem = ["id = ID1, begin = gmfn1, end = gmfn2,
>                           granularity = 4k, prot = RO, master”,
>                          "id = ID2, begin = gmfn3, end = gmfn4,
>  granularity = 4k, prot = RW, master”]

Replying here regarding the discussion we had during the summit. AArch64
supports multiple page granularities (4KB, 16KB, 64KB).

Each guest and the hypervisor are free to use a different page
granularity. To go further, if I am not mistaken, an OS is free to use a
different page granularity on each processor.

In reality, I have only seen OSes using the same granularity across all
the processors.

At the moment, Xen only supports 4KB page granularity, although there are
plans to also support 64KB because that is the only way to support
physical addresses above 48 bits.

With that in mind, this interface is a bit confusing. What does the
"granularity" refer to? The hypervisor? Guest A? Guest B?

Similarly, gmfn* are frames. But what is their granularity?

I think it would make sense to start using full addresses on the
toolstack side, avoiding any confusion for the user about which page
granularity is meant here.
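
To show what I mean (values invented), an entry could then simply use byte
addresses, independent of anyone's page size:

    static_shared_mem = ["id = ID1, begin = 0x20000000, end = 0x20200000,
                          prot = RO, master"]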

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file
  2017-07-18 12:10 ` Julien Grall
@ 2017-07-18 14:22   ` Zhongze Liu
  2017-07-18 16:13     ` Stefano Stabellini
  0 siblings, 1 reply; 19+ messages in thread
From: Zhongze Liu @ 2017-07-18 14:22 UTC (permalink / raw)
  To: Julien Grall
  Cc: Edgar E. Iglesias, Stefano Stabellini, Wei Liu, Ian Jackson,
	edgari, xen-devel, Jarvis Roach

Hi Julien,

After our discussion during the summit, I have revised my plan, but I'm
still working on it and haven't sent it to the ML yet. I'm planning to
send a new version of my proposal together with the parsing code later,
so that I can reference the proposal in the commit message.
But here is what's related to our discussion about the granularity in
my current draft:

  @granularity    can be a number with an optional unit (k, kb, m or mb);
                  the final result should be a multiple of 4k.

The actual addresses of begin/end will then be calculated by multiplying
them with @granularity. For example, if begin=0x100 and granularity=4k,
then the shared space will begin at the address 0x100000.
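
So, under this draft, an entry such as (made-up values):

    static_shared_mem = ["id = ID1, begin = 0x100, end = 0x180,
                          granularity = 4k, prot = RW, master"]

would describe a shared area spanning guest physical addresses 0x100000 to
0x180000.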


Cheers,

Zhongze Liu

2017-07-18 20:10 GMT+08:00 Julien Grall <julien.grall@arm.com>:
> Hi,
>
>
> On 20/06/17 18:18, Zhongze Liu wrote:
>>
>> ====================================================
>> 1. Motivation and Description
>> ====================================================
>> Virtual machines use grant table hypercalls to setup a share page for
>> inter-VMs communications. These hypercalls are used by all PV
>> protocols today. However, very simple guests, such as baremetal
>> applications, might not have the infrastructure to handle the grant table.
>> This project is about setting up several shared memory areas for inter-VMs
>> communications directly from the VM config file.
>> So that the guest kernel doesn't have to have grant table support (in the
>> embedded space, this is not unusual) to be able to communicate with
>> other guests.
>>
>> ====================================================
>> 2. Implementation Plan:
>> ====================================================
>>
>> ======================================
>> 2.1 Introduce a new VM config option in xl:
>> ======================================
>> The shared areas should be shareable among several (>=2) VMs, so
>> every shared physical memory area is assigned to a set of VMs.
>> Therefore, a “token” or “identifier” should be used here to uniquely
>> identify a backing memory area.
>>
>> The backing area would be taken from one domain, which we will regard
>> as the "master domain", and this domain should be created prior to any
>> other "slave domain"s. Again, we have to use some kind of tag to tell who
>> is the "master domain".
>>
>> And the ability to specify the attributes of the pages (say, WO/RO/X)
>> to be shared should be also given to the user. For the master domain,
>> these attributes often describes the maximum permission allowed for the
>> shared pages, and for the slave domains, these attributes are often used
>> to describe with what permissions this area will be mapped.
>> This information should also be specified in the xl config entry.
>>
>> To handle all these, I would suggest using an unsigned integer to serve as
>> the
>> identifier, and using a "master" tag in the master domain's xl config
>> entry
>> to announce that she will provide the backing memory pages. A separate
>> entry would be used to describe the attributes of the shared memory area,
>> of
>> the form "prot=RW".
>> For example:
>>
>> In xl config file of vm1:
>>
>>     static_shared_mem = ["id = ID1, begin = gmfn1, end = gmfn2,
>>                           granularity = 4k, prot = RO, master”,
>>                          "id = ID2, begin = gmfn3, end = gmfn4,
>>  granularity = 4k, prot = RW, master”]
>
>
> Replying here regarding the discussion we had during the summit. AArch64
> supports multiple page granularities (4KB, 16KB, 64KB).
>
> Each guest and the hypervisor are free to use a different page granularity.
> To go further, if I am not mistaken, an OS is free to use a different page
> granularity on each processor.
>
> In reality, I have only seen OSes using the same granularity across all the
> processors.
>
> At the moment, Xen only supports 4KB page granularity, although there are
> plans to also support 64KB because that is the only way to support physical
> addresses above 48 bits.
>
> With that in mind, this interface is a bit confusing. What does the
> "granularity" refer to? The hypervisor? Guest A? Guest B?
>
> Similarly, gmfn* are frames. But what is their granularity?
>
> I think it would make sense to start using full addresses on the toolstack
> side, avoiding any confusion for the user about which page granularity is
> meant here.
>
> Cheers,
>
> --
> Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file
  2017-07-18 14:22   ` Zhongze Liu
@ 2017-07-18 16:13     ` Stefano Stabellini
  0 siblings, 0 replies; 19+ messages in thread
From: Stefano Stabellini @ 2017-07-18 16:13 UTC (permalink / raw)
  To: Zhongze Liu
  Cc: Edgar E. Iglesias, Stefano Stabellini, Wei Liu, Ian Jackson,
	edgari, Julien Grall, xen-devel, Jarvis Roach


On Tue, 18 Jul 2017, Zhongze Liu wrote:
> Hi Julien,
> 
> After our discussion during the summit, I have revised my plan, but
> I'm still working on it and haven't sent it to the ML yet.
> I'm planning to send a new version of my proposal together with the
> parsing code later so that I could reference the
> proposal in the commit message.
> But here is what's related to our discussion about the granularity in
> my current draft:
> 
>   @granularity    can be a number with an optional unit (k, kb, m or mb);
>                   the final result should be a multiple of 4k.
> 
> The actual address of begin/end will then be calculated by multiplying them
> with @granularity. For example, if begin=0x100 and granularity=4k then the
> shared space will begin at the address 0x100000.

I would remove "granularity" from the interface and just use full
addresses for begin and end (or begin and size).

 
> Cheers,
> 
> Zhongze Liu
> 
> 2017-07-18 20:10 GMT+08:00 Julien Grall <julien.grall@arm.com>:
> > Hi,
> >
> >
> > On 20/06/17 18:18, Zhongze Liu wrote:
> >>
> >> ====================================================
> >> 1. Motivation and Description
> >> ====================================================
> >> Virtual machines use grant table hypercalls to setup a share page for
> >> inter-VMs communications. These hypercalls are used by all PV
> >> protocols today. However, very simple guests, such as baremetal
> >> applications, might not have the infrastructure to handle the grant table.
> >> This project is about setting up several shared memory areas for inter-VMs
> >> communications directly from the VM config file.
> >> So that the guest kernel doesn't have to have grant table support (in the
> >> embedded space, this is not unusual) to be able to communicate with
> >> other guests.
> >>
> >> ====================================================
> >> 2. Implementation Plan:
> >> ====================================================
> >>
> >> ======================================
> >> 2.1 Introduce a new VM config option in xl:
> >> ======================================
> >> The shared areas should be shareable among several (>=2) VMs, so
> >> every shared physical memory area is assigned to a set of VMs.
> >> Therefore, a “token” or “identifier” should be used here to uniquely
> >> identify a backing memory area.
> >>
> >> The backing area would be taken from one domain, which we will regard
> >> as the "master domain", and this domain should be created prior to any
> >> other "slave domain"s. Again, we have to use some kind of tag to tell who
> >> is the "master domain".
> >>
> >> And the ability to specify the attributes of the pages (say, WO/RO/X)
> >> to be shared should be also given to the user. For the master domain,
> >> these attributes often describes the maximum permission allowed for the
> >> shared pages, and for the slave domains, these attributes are often used
> >> to describe with what permissions this area will be mapped.
> >> This information should also be specified in the xl config entry.
> >>
> >> To handle all these, I would suggest using an unsigned integer to serve as
> >> the
> >> identifier, and using a "master" tag in the master domain's xl config
> >> entry
> >> to announce that she will provide the backing memory pages. A separate
> >> entry would be used to describe the attributes of the shared memory area,
> >> of
> >> the form "prot=RW".
> >> For example:
> >>
> >> In xl config file of vm1:
> >>
> >>     static_shared_mem = ["id = ID1, begin = gmfn1, end = gmfn2,
> >>                           granularity = 4k, prot = RO, master”,
> >>                          "id = ID2, begin = gmfn3, end = gmfn4,
> >>  granularity = 4k, prot = RW, master”]
> >
> >
> > Replying here regarding the discussion we had during the summit. AArch64
> > supports multiple page granularities (4KB, 16KB, 64KB).
> >
> > Each guest and the hypervisor are free to use a different page granularity.
> > To go further, if I am not mistaken, an OS is free to use a different page
> > granularity on each processor.
> >
> > In reality, I have only seen OSes using the same granularity across all the
> > processors.
> >
> > At the moment, Xen only supports 4KB page granularity, although there are
> > plans to also support 64KB because that is the only way to support physical
> > addresses above 48 bits.
> >
> > With that in mind, this interface is a bit confusing. What does the
> > "granularity" refer to? The hypervisor? Guest A? Guest B?
> >
> > Similarly, gmfn* are frames. But what is their granularity?
> >
> > I think it would make sense to start using full addresses on the toolstack
> > side, avoiding any confusion for the user about which page granularity is
> > meant here.
> >
> > Cheers,
> >
> > --
> > Julien Grall
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2017-07-18 16:13 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-06-20 17:18 [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file Zhongze Liu
2017-06-20 17:29 ` Julien Grall
2017-06-22 16:58   ` Zhongze Liu
2017-06-22 20:55     ` Stefano Stabellini
2017-06-21 15:09 ` Wei Liu
2017-06-21 15:12   ` Julien Grall
2017-06-22 17:27   ` Zhongze Liu
2017-06-22 18:55     ` Zhongze Liu
2017-06-28 16:03     ` Wei Liu
2017-06-22 21:05 ` Stefano Stabellini
2017-06-23  9:16   ` Julien Grall
2017-06-23 18:21     ` Stefano Stabellini
2017-06-23 19:19       ` Jarvis Roach
2017-06-23 20:09         ` Stefano Stabellini
2017-06-23 20:47           ` Jarvis Roach
2017-06-23 20:18       ` Julien Grall
2017-07-18 12:10 ` Julien Grall
2017-07-18 14:22   ` Zhongze Liu
2017-07-18 16:13     ` Stefano Stabellini

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.