From: Frank Yang
Date: Mon, 11 Feb 2019 07:14:53 -0800
Subject: Re: [virtio-dev] Memory sharing device
To: "Michael S. Tsirkin"
Cc: Roman Kiryanov, Gerd Hoffmann, Stefan Hajnoczi, virtio-dev@lists.oasis-open.org, "Dr. David Alan Gilbert"
In-Reply-To: <20190211092943-mutt-send-email-mst@kernel.org>


On Mon, Feb 11, 2019 at 6:49 AM Michael S. Tsirkin <mst@redhat.com> wrote:
> On Mon, Feb 04, 2019 at 11:42:25PM -0800, Roman Kiryanov wrote:
> > Hi Gerd,
> >
> > > virtio-gpu specifically needs that to support vulkan and opengl
> > > extensions for coherent buffers, which must be allocated by the host gpu
> > > driver. It's WIP still.
> >
> > the proposed spec says:
> >
> > +Shared memory regions MUST NOT be used to control the operation
> > +of the device, nor to stream data; those should still be performed
> > +using virtqueues.
> >
> > Is there a strong reason to prohibit using memory regions for control purposes?

> That's in order to preserve virtio's portability guarantees: if
> people see a virtio device in lspci, they know there's no lock-in;
> their guest can be moved between hypervisors and will still work.
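
To make sure I understand the intended split: control and streaming
stay on virtqueues, and the shared memory region only carries bulk
payload bytes that control messages point into. A rough sketch of
that division (every name and field below is made up for
illustration, not taken from the draft spec):

#include <stdint.h>

/* Hypothetical view of a device's shared memory region once the
 * guest has mapped it. */
struct shm_region {
    void     *base; /* guest mapping of the host-allocated memory */
    uint64_t  len;
};

/* Hypothetical control request placed on a virtqueue. It carries no
 * payload itself; it only references a range inside the region. */
struct ctrl_req {
    uint32_t opcode;     /* operation for the device to perform */
    uint32_t flags;
    uint64_t shm_offset; /* where the payload starts in the region */
    uint64_t shm_len;    /* payload length in bytes */
};

That way the device stays drivable by any guest that speaks the
virtqueue protocol, regardless of how the host allocated the region.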

> > Our long-term goal is to have as few kernel drivers as possible and
> > to move "drivers" into userspace. If we go with virtqueues, is
> > there a general-purpose device/driver to talk between our host and
> > guest to support custom hardware (with their own blobs)?

> The challenge is to answer the following question:
> how to do this without losing the benefits of standardization?

Draft spec is incoming, but the basic idea is to standardize how to
enumerate, discover, and operate (with high performance) such
userspace drivers/devices; the basic operations would be
standardized, and userspace drivers would be constructed out of the
resulting primitives.
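
As a strawman of what those primitives might look like (all names
below are hypothetical; the actual draft will define its own):

#include <stdint.h>

/* Hypothetical config space for a generic "userspace driver" device.
 * Generic guest code enumerates instances and uses the UUID to route
 * each one to the matching userspace implementation. */
struct userspace_dev_config {
    uint8_t  driver_uuid[16]; /* identifies the vendor driver */
    uint32_t driver_version;  /* driver-defined versioning */
    uint32_t num_shm_regions; /* shared memory regions exposed */
};

/* Hypothetical minimal operation set; richer protocols would be
 * built in userspace out of these primitives. */
enum userspace_dev_op {
    USERSPACE_DEV_OP_PING,  /* guest->host kick, payload in shm */
    USERSPACE_DEV_OP_EVENT, /* host->guest notification */
};

struct userspace_dev_msg {
    uint32_t op;         /* enum userspace_dev_op */
    uint64_t shm_offset; /* optional payload location */
    uint64_t shm_len;
};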

> > Could you please advise if we can use something else to
> > achieve this goal?

> I am not sure what the goal is, though. Blobs are a means, I guess,
> or they should be :) E.g. is it about being able to iterate quickly?

> Maybe you should look at the vhost-user-gpu patches on qemu?
> Would this address your need?
> Acks for these patches would be a good thing.


Is this it: https://patchwork.kernel.org/patch/10444089/ ?

I'll check it out and try to discuss. Is there a draft spec for it as
well?

> --
> MST