From: Frank Yang <lfy@google.com>
Date: Wed, 6 Feb 2019 07:11:43 -0800
Subject: Re: [virtio-dev] Memory sharing device
To: Gerd Hoffmann <kraxel@redhat.com>
Cc: Roman Kiryanov, "Dr. David Alan Gilbert", Stefan Hajnoczi, virtio-dev@lists.oasis-open.org

(Virtio-serial also doesn't seem like a good option, due to its specialization for console forwarding and the connection limit encoded in its device config.)
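For context, that limit is advertised in the virtio-console device config space; abridged from the Linux UAPI header include/uapi/linux/virtio_console.h:

    struct virtio_console_config {
            __u16 cols;          /* console width (VIRTIO_CONSOLE_F_SIZE) */
            __u16 rows;          /* console height (VIRTIO_CONSOLE_F_SIZE) */
            __u32 max_nr_ports;  /* cap on ports (VIRTIO_CONSOLE_F_MULTIPORT) */
            __u32 emerg_wr;      /* emergency write (VIRTIO_CONSOLE_F_EMERG_WRITE) */
    };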

On Wed, Feb 6, 2019 at 7:09 AM Frank Yang <lfy@google.com> wrote:


> On Tue, Feb 5, 2019 at 11:03 PM Gerd Hoffmann <kraxel@redhat.com> wrote:
>> On Tue, Feb 05, 2019 at 01:06:42PM -0800, Roman Kiryanov wrote:
>> > Hi Dave,
>> >
>> > > In virtio-fs we have two separate stages:
>> > >   a) A shared arena is set up (and that's what the spec Stefan pointed to is about) -
>> > >      it's statically allocated at device creation and corresponds to a chunk
>> > >      of guest physical address space
>> >
>> > We do exactly the same:
>> > https://android.googlesource.com/platform/external/qemu/+/emu-master-dev/hw/pci/goldfish_address_space.c#659
>> >
>> > >   b) During operation the guest kernel asks for files to be mapped into
>> > >      part of that arena dynamically, using commands sent over the queue
>> > >      - our queue carries FUSE commands, and we've added two new FUSE
>> > >      commands to perform the map/unmap.  They talk in terms of offsets
>> > >      within the shared arena, rather than GPAs.
>> >
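For illustration, the two stages above could be pictured as message shapes like the following; every name here is hypothetical, not the actual virtio-fs FUSE extensions:

    #include <stdint.h>

    /* Stage (a): fixed arena, established once at device creation. */
    struct shm_arena_info {
            uint64_t base_gpa;      /* guest-physical base of the arena */
            uint64_t len;           /* arena size, fixed for the device's life */
    };

    /* Stage (b): dynamic map/unmap commands sent over the request queue.
     * Offsets are relative to the arena, never raw GPAs. */
    struct shm_map_request {
            uint64_t fh;            /* handle of the file to map */
            uint64_t file_offset;   /* offset within that file */
            uint64_t arena_offset;  /* where in the arena to place it */
            uint64_t len;
            uint32_t flags;         /* e.g. read/write protection */
    };

    struct shm_unmap_request {
            uint64_t arena_offset;
            uint64_t len;
    };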
>> > In our case we have no files to map, only pointers returned from
>> > OpenGL or Vulkan.
>> > Do you have an approach to share for this use case?

>> Fundamentally the same:  The memory region (PCI BAR in the case of
>> virtio-pci) reserves address space.  The guest manages the address
>> space; it can ask the host to map host GPU resources there.
>>
>> Well, that is at least the plan.  Some incomplete WIP patches exist; I'm
>> still busy hammering the virtio-gpu TTM code into shape so it can support
>> different kinds of GPU objects.
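To make that division of labor concrete: under the scheme Gerd describes, the guest could run a simple allocator over the BAR and only ever hand the host offsets. A minimal sketch, all names hypothetical:

    #include <stdint.h>

    /* Hypothetical guest-side bump allocator over the shared-memory BAR.
     * The guest owns the layout; the host only maps resources at offsets
     * the guest chooses. No freeing, purely illustrative. */
    struct arena {
            uint64_t size;          /* BAR size read from the device */
            uint64_t next;          /* next free offset */
    };

    /* align must be a power of two. Returns 0 on success. */
    static int arena_alloc(struct arena *a, uint64_t len, uint64_t align,
                           uint64_t *offset)
    {
            uint64_t off = (a->next + align - 1) & ~(align - 1);

            if (off > a->size || len > a->size - off)
                    return -1;      /* arena exhausted */
            *offset = off;
            a->next = off + len;
            return 0;               /* now ask the host to map here */
    }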

>> > Do you think it is possible to have a virtio-pipe where we could send
>> > arbitrary blobs between
>> > guest and host?

>> Well, there are virtio-serial and virtio-vsock, which both give you a
>> pipe between host and guest, similar to a serial line / TCP socket.
>> Dunno how good they are at handling larger blobs, though.
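For what it's worth, from inside the guest a vsock pipe is just an ordinary socket. A minimal Linux guest-side connect, assuming the host application listens on port 1234:

    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/vm_sockets.h>

    int vsock_connect_host(unsigned int port)
    {
            struct sockaddr_vm addr;
            int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

            if (fd < 0)
                    return -1;
            memset(&addr, 0, sizeof(addr));
            addr.svm_family = AF_VSOCK;
            addr.svm_cid = VMADDR_CID_HOST;  /* CID 2: the hypervisor host */
            addr.svm_port = port;            /* e.g. 1234, chosen by the host app */
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                    close(fd);
                    return -1;
            }
            return fd;                       /* read()/write() as a byte pipe */
    }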


> I've looked at virtio-vsock and it seems general, but it requires Unix
> sockets, which is not going to work for us on Windows and (most likely)
> will not work as expected on macOS. Is there anything as portable as the
> goldfish pipe that is more like a raw virtqueue? It would then work on
> memory in the same process, with callbacks registered to trigger upon
> transmission.
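A sketch of the host-side shape that paragraph is asking for; every name below is hypothetical:

    #include <stddef.h>

    /* Hypothetical "raw virtqueue pipe" host API: guest buffers are shared
     * with the host process in-memory, and a callback fires on each
     * transmission, with no socket layer in between. */
    struct pipe_buffer {
            void  *data;            /* host mapping of the guest buffer */
            size_t len;
    };

    typedef void (*pipe_rx_cb)(void *opaque, const struct pipe_buffer *buf);

    struct raw_pipe_ops {
            pipe_rx_cb on_transmit; /* invoked when the guest kicks the queue */
            void      *opaque;      /* passed back to the callback */
    };

    /* Register a channel; returns 0 on success. Purely illustrative. */
    int raw_pipe_register(int channel_id, const struct raw_pipe_ops *ops);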

>> cheers,
>>   Gerd
