All of lore.kernel.org
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
To: xen-devel@lists.xen.org
Cc: virtio-dev@lists.oasis-open.org, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Rusty Russell <rusty@au1.ibm.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	ian@bromium.com, Anthony Liguori <anthony@codemonkey.ws>,
	sasha.levin@oracle.com
Subject: Re: VIRTIO - compatibility with different virtualization solutions
Date: Fri, 21 Feb 2014 11:41:42 -0500	[thread overview]
Message-ID: <C625F7EE-A8B6-48E4-9ED1-DA935C8A41BD@gridcentric.ca> (raw)
In-Reply-To: <mailman.9276.1392977438.24322.xen-devel@lists.xen.org>

> On Thu, Feb 20, 2014 at 06:50:59PM -0800, Anthony Liguori wrote:
>> On Wed, Feb 19, 2014 at 5:31 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
>>> Anthony Liguori <anthony@codemonkey.ws> writes:
>>>> On Tue, Feb 18, 2014 at 4:26 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
>>>>> Daniel Kiper <daniel.kiper@oracle.com> writes:
>>>>>> Hi,
>>>>>> 
>>>>>> Below you will find a summary of the work on VIRTIO compatibility with
>>>>>> different virtualization solutions. It was done mainly from the Xen point
>>>>>> of view, but the results are quite generic and can be applied to a wide
>>>>>> spectrum of virtualization platforms.
>>>>> 
>>>>> Hi Daniel,
>>>>> 
>>>>>        Sorry for the delayed response, I was pondering...  CC changed
>>>>> to virtio-dev.
>>>>> 
>>>>> From a standard POV: It's possible to abstract out the places where we use
>>>>> 'physical address' for 'address handle'.  It's also possible to define
>>>>> this per-platform (ie. Xen-PV vs everyone else).  This is sane, since
>>>>> Xen-PV is a distinct platform from x86.
>>>> 
>>>> I'll go even further and say that "address handle" doesn't make sense either.
>>> 
>>> I was trying to come up with a unique term, I wasn't trying to define
>>> semantics :)
>> 
>> Understood, that wasn't really directed at you.
>> 
>>> There are three debates here now: (1) what should the standard say, and
>> 
>> The standard should say, "physical address"
>> 
>>> (2) how would Linux implement it,
>> 
>> Linux should use the PCI DMA API.
>> 
>>> (3) should we use each platform's PCI
>>> IOMMU.
>> 
>> Just like any other PCI device :-)
>> 
>>>> Just using grant table references is not enough to make virtio work
>>>> well under Xen.  You really need to use bounce buffers ala persistent
>>>> grants.
>>> 
>>> Wait, if you're using bounce buffers, you didn't make it "work well"!
>> 
>> Preaching to the choir man...  but bounce buffering has proven to be
>> faster than doing grant mappings on every request.  xen-blk does
>> bounce buffering by default and I suspect netfront is heading in that
>> direction soon.
>> 
> 
> FWIW Annie Li @ Oracle once implemented a persistent map prototype for
> netfront and the result was not satisfying.
> 
>> It would be a lot easier to simply have a global pool of grant tables
>> that effectively becomes the DMA pool.  Then the DMA API can bounce
>> into that pool and those addresses can be placed on the ring.
>> 
>> It's a little different for Xen because now the backends have to deal
>> with physical addresses but the concept is still the same.
>> 
> 
> How would you apply this to Xen's security model? How can the hypervisor
> effectively enforce access control? "Handle" and "physical address" are
> essentially not the same concept, otherwise you wouldn't have proposed
> this change. Not saying I'm against this change, just this description
> is too vague for me to understand the bigger picture.

I might be missing something trivial, but the burden of enforcing memory visibility for handles falls on the hypervisor. Taking KVM as an example, the whole RAM of a guest is a vma in the mm of the faulting qemu process; that's KVM's way of doing things. "Handles" could be pfns for all that model cares, and translation+mapping from handles to actual guest RAM addresses is trivially O(1). There is no guest control over RAM visibility, and KVM is happy with that.

Xen, on the other hand, can encode a 64 bit grant handle in the "__u64 addr" field of a virtio descriptor. The negotiation happens up front, the flags field is set to signal the guest is encoding handles in there. Once the Xen virtio backend gets that descriptor out of the ring, what is left is not all that different from what netback/blkback/gntdev do today with a ring request.

I'm obviously glossing over serious details (e.g. negotiation of what the u64 addr field means), but what I'm getting at is that I fail to understand why whole-RAM visibility is a requirement for virtio. It seems to me to be a requirement for KVM and other hypervisors, while virtio is a transport and sync mechanism for high(er)-level IO descriptors.

Can someone please clarify why, "under Xen, you really need to use bounce buffers a la persistent grants"? Is that a performance need, to avoid repeated mapping and TLB flushing on the backend side? Granted. But why would it be a correctness need? Guest-side grant table work requires no hypercalls in the data path.

If I am rewinding the conversation, feel free to ignore, but I'm not feeling a lot of clarity in the dialogue right now.

Thanks
Andres

> 
> But a downside for sure is that if we go with this change we then have
> to maintain two different paths in backend. However small the difference
> is it is still a burden.


> 
> Wei.
> 

Thread overview: 16+ messages
     [not found] <mailman.9276.1392977438.24322.xen-devel@lists.xen.org>
2014-02-21 16:41 ` Andres Lagar-Cavilla [this message]
2014-02-17 13:23 VIRTIO - compatibility with different virtualization solutions Daniel Kiper
2014-02-19  0:26 ` Rusty Russell
     [not found] ` <87vbwcaqxe.fsf@rustcorp.com.au>
2014-02-19  4:42   ` Anthony Liguori
2014-02-20  1:31     ` Rusty Russell
     [not found]     ` <87ha7ubme0.fsf@rustcorp.com.au>
2014-02-20 12:28       ` Stefano Stabellini
2014-02-20 20:28       ` Daniel Kiper
2014-02-21  2:50       ` Anthony Liguori
2014-02-21 10:05         ` Wei Liu
2014-02-21 15:01           ` Konrad Rzeszutek Wilk
2014-02-25  0:33             ` Rusty Russell
     [not found]             ` <87y51058vf.fsf@rustcorp.com.au>
2014-02-25 21:09               ` Konrad Rzeszutek Wilk
2014-02-19 10:09   ` Ian Campbell
2014-02-20  7:48     ` Rusty Russell
     [not found]     ` <8761oab4y7.fsf@rustcorp.com.au>
2014-02-20 20:37       ` Daniel Kiper
2014-02-19 10:11   ` Ian Campbell
