From: "Jürgen Groß" <jgross@suse.com>
To: "Alex Bennée" <alex.bennee@linaro.org>,
	"Jan Kiszka" <jan.kiszka@siemens.com>
Cc: virtio-dev@lists.oasis-open.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	virtualization@lists.linux-foundation.org,
	Wei Liu <liuw@liuw.name>
Subject: Re: VIRTIO adoption in other hypervisors
Date: Fri, 28 Feb 2020 18:08:51 +0100
Message-ID: <2a8b2d4f-28f0-7f03-744d-c8f03d50e383@suse.com>
In-Reply-To: <878skmwtei.fsf@linaro.org>

On 28.02.20 17:47, Alex Bennée wrote:
> 
> Jan Kiszka <jan.kiszka@siemens.com> writes:
> 
>> On 28.02.20 11:30, Jan Kiszka wrote:
>>> On 28.02.20 11:16, Alex Bennée wrote:
>>>> Hi,
>>>>
> <snip>
>>>> I believe there has been some development work for supporting VIRTIO on
>>>> Xen although it seems to have stalled according to:
>>>>
>>>>     https://wiki.xenproject.org/wiki/Virtio_On_Xen
>>>>
>>>> Recently at KVM Forum there was Jan's talk about Inter-VM shared memory
>>>> which proposed ivshmemv2 as a VIRTIO transport:
>>>>
>>>>     https://events19.linuxfoundation.org/events/kvm-forum-2019/program/schedule/
>>>>
>>>>
>>>> As I understood it, this would allow Xen (and other hypervisors) a
>>>> simple way to carry virtio traffic between guest and endpoint.
>>
>> And to clarify the scope of this effort: virtio-over-ivshmem is not
>> the fastest option to offer virtio to a guest (static "DMA" window),
>> but it is the simplest one from the hypervisor PoV and, thus, also
>> likely the easiest one to argue over when it comes to security and
>> safety.
> 
> So, to drill down on this: is this a particular problem with type-1
> hypervisors?
> 
> It seems to me any KVM-like run loop trivially supports a range of
> virtio devices by virtue of trapping accesses to the signalling area of
> a virtqueue and allowing the VMM to handle the transaction whichever
> way it sees fit.
> 
> I've not quite understood the way Xen interfaces to QEMU, aside from
> the fact that it's different from everything else. Moreover, it seems
> the type-1 hypervisors are more interested in providing better
> isolation between segments of a system, whereas VIRTIO currently
> assumes either the VMM or the hypervisor has full access to the entire
> guest address space. I've seen quite a lot of slides that want to
> isolate sections of device emulation in separate processes or even
> separate guest VMs.
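
To make the KVM-style flow Alex describes concrete, here is a minimal,
hedged sketch of a userspace VMM run loop that catches guest writes to a
virtqueue doorbell. The setup ioctls (KVM_CREATE_VM, memory regions,
vCPU creation) are assumed to have happened elsewhere, and
VIRTIO_NOTIFY_ADDR plus process_virtqueue() are illustrative names, not
real constants or library functions:

  #include <stdint.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Hypothetical doorbell address; a real VMM would derive this from
   * the device's configured MMIO layout. */
  #define VIRTIO_NOTIFY_ADDR 0xd0000000ULL

  /* Assumed to be implemented elsewhere: pop and process the
   * descriptors of the notified virtqueue. */
  void process_virtqueue(uint16_t queue_index);

  /* Every guest write to the doorbell exits to userspace with
   * KVM_EXIT_MMIO; the VMM handles the transaction however it likes. */
  int run_vcpu(int vcpu_fd, struct kvm_run *run)
  {
      for (;;) {
          if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
              return -1;

          if (run->exit_reason == KVM_EXIT_MMIO &&
              run->mmio.is_write &&
              run->mmio.phys_addr == VIRTIO_NOTIFY_ADDR) {
              uint16_t queue_index;

              memcpy(&queue_index, run->mmio.data,
                     sizeof(queue_index));
              process_virtqueue(queue_index);
              continue;
          }
          /* Other exit reasons (I/O ports, shutdown, ...) elided. */
      }
  }

The point is that a trap on the signalling area is all the hypervisor
itself has to provide; everything else lives in the VMM.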

In Xen, device emulation is done by other VMs. Normally the devices are
emulated in dom0, but it is possible to have other driver domains, too
(those need to have the related PCI devices passed through to them, of
course).

PV device backends get access only to the guest pages the PV frontends
allow. This is done via so-called "grants", which are per guest. A
frontend can thus grant another Xen VM access to dedicated pages. The
backend uses the grants to map those pages via the hypervisor in order
to perform the I/O. After the I/O has finished, the backend unmaps the
pages again.
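
As a rough illustration of the frontend side, here is a hedged sketch
against the Linux grant-table API (whose exact signatures have shifted
between kernel versions); share_page_with_backend() is a made-up helper
name:

  #include <xen/grant_table.h>
  #include <xen/page.h>

  /*
   * Hypothetical frontend helper: grant the backend domain read/write
   * access to one page. The returned grant reference is then advertised
   * to the backend (typically via xenstore), which maps the page
   * through the hypervisor (GNTTABOP_map_grant_ref), performs the I/O,
   * and unmaps it again.
   */
  static int share_page_with_backend(domid_t backend_domid,
                                     struct page *page,
                                     grant_ref_t *gref)
  {
      int ref = gnttab_grant_foreign_access(backend_domid,
                                            xen_page_to_gfn(page),
                                            0 /* not read-only */);
      if (ref < 0)
          return ref;
      *gref = ref;
      return 0;
  }

  /*
   * Once the backend has unmapped the page, the frontend revokes the
   * grant with gnttab_end_foreign_access() (exact signature varies
   * across kernel versions).
   */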

For legacy device emulation via qemu, the VM running qemu needs access
to all of the guest's memory, as the guest won't grant any pages to the
emulating VM. It is possible to run qemu in a small stub domain using
PV devices in order to isolate the legacy guest from e.g. dom0.
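
For what it's worth, the stub-domain setup is just a flag in the guest
config; a sketch, assuming an xl toolstack (name and memory size are
illustrative):

  # xl guest config fragment: run the qemu device model in a stub
  # domain instead of in dom0
  name = "legacy-hvm-guest"
  type = "hvm"
  memory = 2048
  device_model_stubdomain_override = 1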


Hope that makes it clearer,


Juergen
