From: Jarvis Roach <Jarvis.Roach@dornerworks.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>,
	Wei Liu <wei.liu2@citrix.com>, Zhongze Liu <blackskygg@gmail.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"edgari@xilinx.com" <edgari@xilinx.com>,
	Julien Grall <julien.grall@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file
Date: Fri, 23 Jun 2017 20:47:46 +0000
Message-ID: <e767074c0bc24d44b1407b30322b1f03@dornerworks.com>
In-Reply-To: <alpine.DEB.2.10.1706231240200.12819@sstabellini-ThinkPad-X260>



> -----Original Message-----
> From: Stefano Stabellini [mailto:sstabellini@kernel.org]
> Sent: Friday, June 23, 2017 4:09 PM
> To: Jarvis Roach <Jarvis.Roach@dornerworks.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Julien Grall
> <julien.grall@arm.com>; Zhongze Liu <blackskygg@gmail.com>; xen-
> devel@lists.xenproject.org; Wei Liu <wei.liu2@citrix.com>; Ian Jackson
> <ian.jackson@eu.citrix.com>; edgari@xilinx.com; Edgar E. Iglesias
> <edgar.iglesias@xilinx.com>
> Subject: RE: [RFC v2]Proposal to allow setting up shared memory areas
> between VMs from xl config file
> 
> On Fri, 23 Jun 2017, Jarvis Roach wrote:
> > > -----Original Message-----
> > > From: Stefano Stabellini [mailto:sstabellini@kernel.org]
> > > Sent: Friday, June 23, 2017 2:21 PM
> > > To: Julien Grall <julien.grall@arm.com>
> > > Cc: Stefano Stabellini <sstabellini@kernel.org>; Zhongze Liu
> > > <blackskygg@gmail.com>; xen-devel@lists.xenproject.org; Wei Liu
> > > <wei.liu2@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>;
> > > Jarvis Roach <Jarvis.Roach@dornerworks.com>; edgari@xilinx.com;
> > > Edgar E. Iglesias <edgar.iglesias@xilinx.com>
> > > Subject: Re: [RFC v2]Proposal to allow setting up shared memory
> > > areas between VMs from xl config file
> > >
> > > On Fri, 23 Jun 2017, Julien Grall wrote:
> > > > Hi,
> > > >
> > > > On 22/06/17 22:05, Stefano Stabellini wrote:
> > > > > > When we encounter an id IDx during "xl create":
> > > > > >
> > > > > >   + If it’s not under /local/shared_mem:
> > > > > >     + If the corresponding entry has a "master" tag, create the
> > > > > >       corresponding entries for IDx in xenstore
> > > > > >     + If there isn't a "master" tag, say error.
> > > > > >
> > > > > >   + If it’s found under /local/shared_mem:
> > > > > >     + If the corresponding entry has a "master" tag, say error
> > > > > >     + If there isn't a "master" tag, map the pages to the newly
> > > > > >       created domain, and add the current domain and necessary
> > > > > >       information under /local/shared_mem/IDx/slaves.
> > > > >
> > > > > Aside from using "gfn" instead of gmfn everywhere, I think it
> > > > > looks pretty good.
> > > > >
> > > > > I would leave out permissions and cacheability attributes from
> > > > > this version of the work. I would just add a note saying that
> > > > > memory will be mapped as RW regular cacheable RAM. Other
> > > > > permissions and cacheability will be possible, but they are not
> > > > > implemented yet.
> > > >
> > > > Well, I think we should design the interface correctly from the
> > > > beginning to facilitate future extension.
> > >
> > > Which interface are you speaking about?
> > >
> > > I don't think we should attempt to write down what the hypercall
> > > interface might look like in the future to support setting permissions
> > > and cacheability attributes.
> > >
> > >
> > > > Also, you need to clarify what you mean by "regular cacheable RAM".
> > > > Are they write-through, write-back...? But, on ARM, this would
> > > > only be the caching attribute in the stage-2 page table. The final
> > > > caching, memory type, and shareability would be a combination of
> > > > stage-2 and stage-1 attributes.
> > >
> > > The very same that is used today for the RAM of virtual machines; do
> > > we need to say any more than that? (For ARM, p2m_ram_rw and
> > > MATTR_MEM, LPAE_SH_INNER. For stage 1, we should refer to
> > > xen/include/public/arch-arm.h.)
> >
> > I have customers who need some buffers to be LPAE_SH_OUTER and others
> > who need NORMAL non-cacheable or inner-cacheable buffers, so my
> > suggestion is to provide a way to support the full combination of
> > configurations.
> >
> > While the stage 1/stage 2 combination rules allow guests (via the stage 1
> > translation regime) to force the two combinations I specifically mentioned,
> > in the first case the customers want LPAE_SH_OUTER for cache coherency
> > with a DMA-capable I/O device. In that case, Xen needs to set the
> > shareability attribute to OUTER in the stage 2 table, since that is what is
> > used for the SMMU. In the second case, NORMAL non-cacheable or
> > inner-cacheable, the customers are in a position where they cannot trust
> > the guests to disable their caches or set them to inner-cacheable, so it
> > would be good to have a way for Xen or a privileged/trusted domain to
> > do so.
> 
> Let me premise that I would be happy to see the whole set of configurations
> implemented in the long run; we might just not get there on day 1. We could
> spec out what the VM config option should look like, but leave the
> cacheability and shareability parameters unimplemented for now (also to
> address Julien's comment on defining future-proof interfaces).
> 
> I understand the need for cache-coherent buffers for DMA to/from devices,
> but I think that problem should be solved with the iomem config option. This
> project was meant to set up shared memory regions for VM-to-VM
> communication. It doesn't look like that is the kind of requirement this
> framework is meant to meet, unless I am missing something?

As the intent is for direct VM-to-VM communication, I concede the point. However, there is interest in having an I/O device fill a common buffer that both VMs then access using a distributed access algorithm, in which case indirect VM-to-VM communication is occurring, though no doubt I'm stretching the meaning and intent of the project. A rough sketch of the access pattern I have in mind follows below.
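
To make that concrete, here is a minimal, purely illustrative sketch of a single-producer/single-consumer ring that two VMs could run over a shared page. It assumes both guests already have the page mapped and a C11 toolchain; none of the names below come from the proposal:

    /* Hypothetical SPSC ring living at the start of the shared page.
     * The producer VM calls ring_put(), the consumer VM ring_get(). */
    #include <stdatomic.h>
    #include <stdint.h>

    #define SLOTS 64   /* power of two, so index wraparound stays correct */

    struct shared_ring {
        _Atomic uint32_t head;   /* advanced only by the producer */
        _Atomic uint32_t tail;   /* advanced only by the consumer */
        uint32_t slot[SLOTS];
    };

    static int ring_put(struct shared_ring *r, uint32_t v)
    {
        uint32_t h = atomic_load_explicit(&r->head, memory_order_relaxed);
        uint32_t t = atomic_load_explicit(&r->tail, memory_order_acquire);

        if (h - t == SLOTS)
            return -1;                           /* ring is full */
        r->slot[h % SLOTS] = v;
        /* Release: the slot write becomes visible before the new head. */
        atomic_store_explicit(&r->head, h + 1, memory_order_release);
        return 0;
    }

    static int ring_get(struct shared_ring *r, uint32_t *v)
    {
        uint32_t t = atomic_load_explicit(&r->tail, memory_order_relaxed);
        uint32_t h = atomic_load_explicit(&r->head, memory_order_acquire);

        if (h == t)
            return -1;                           /* ring is empty */
        *v = r->slot[t % SLOTS];
        atomic_store_explicit(&r->tail, t + 1, memory_order_release);
        return 0;
    }

With a non-cacheable mapping the acquire/release atomics should still provide the required ordering on ARM; the point of the sketch is only the access pattern, not the mapping details.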

> Normal non-cacheable buffers are more interesting: do you actually see
> guests running on non-cacheable memory? If not, could you give an
> example of a use case for two VMs sharing a non-cacheable page?

There are a couple of different use cases for guests running without outer cache specifically, or without any cache at all. For safety applications, partitioning VMs onto their own CPU cores without sharing L2 cache (for all but one VM) would let you eliminate cross-VM jitter caused by cache contention, while still gaining some advantage from the L1 cache (and all of the advantage of the L2 cache for one of them). For security applications, there is a similar desire not to share a common resource like cache between VMs, for fear that a rogue actor could extract information from it. In both situations a shared page would be useful for inter-VM communication. Both use cases presume that the base memory allocated to the guest as part of its VM environment is also set up as non-cacheable (or inner-cacheable), which is why it would be useful to have an interface to control those attributes; something like the sketch below is what I have in mind.
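
For what it's worth, the kind of knob I'm after could look something like this in the VM config file. The cache= and share= keys are purely hypothetical, just to show the shape of the request (I'm also assuming the static_shm key spelling from the RFC):

    # Hypothetical extension of the proposed shared-memory config entry;
    # cache= and share= are not part of the current RFC.
    static_shm = [ "id=ID1, begin=0x40000000, end=0x40010000, role=master, cache=none, share=inner" ]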

The best use case I can think of for normal, non-cacheable buffers between VMs with otherwise cacheable "main" memory would again be a security application, where the cacheable main memory handles encrypted information but the decrypted data is put into a non-cached shared buffer for another VM to consume. Again, the concern is that if the buffer were cacheable, a rogue agent in the system could use a side-channel exploit to learn something about the data.
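
For reference, my understanding of how the two translation stages combine on ARM (worth double-checking against the ARM ARM) is that the more restrictive attribute wins at each step, for example:

    stage 1 Normal Write-Back  + stage 2 Normal Non-cacheable -> Normal Non-cacheable
    stage 1 Inner Shareable    + stage 2 Outer Shareable      -> Outer Shareable
    stage 1 Normal Write-Back  + stage 2 Device               -> Device

That is why a well-behaved guest can force non-cacheable mappings from stage 1, but only Xen, via stage 2, can force them for a guest that cannot be trusted to do so.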



 