From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 6 Nov 2019 10:10:57 +0000
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Gerd Hoffmann <kraxel@redhat.com>
Cc: Stefan Hajnoczi, geoff@hostfission.com, virtio-dev@lists.oasis-open.org,
    Alex Lau, Daniel Vetter, Alexandre Courbot, qemu-devel@nongnu.org,
    Tomasz Figa, Keiichi Watanabe, David Stevens, Hans Verkuil,
    Stéphane Marchesin, Dylan Reid, Gurchetan Singh, Dmitry Morozov,
    Pawel Osciak, Linux Media Mailing List
Subject: Re: [virtio-dev] Re: guest / host buffer sharing ...
Message-ID: <20191106101057.GC2802@work-vm>
In-Reply-To: <20191106095122.jju7eo57scfoat6a@sirius.home.kraxel.org>
References: <20191105105456.7xbhtistnbp272lj@sirius.home.kraxel.org>
 <20191106084344.GB189998@stefanha-x1.localdomain>
 <20191106095122.jju7eo57scfoat6a@sirius.home.kraxel.org>

* Gerd Hoffmann (kraxel@redhat.com) wrote:
> Hi,
>
> > > Reason is: Meanwhile I'm wondering whether "just use virtio-gpu
> > > resources" is really a good answer for all the different use cases
> > > we have collected over time. Maybe it is better to have a dedicated
> > > buffer sharing virtio device? Here is the rough idea:
> >
> > My concern is that buffer sharing isn't a "device". It's a primitive
> > used in building other devices. When someone asks for just buffer
> > sharing it's often because they do not intend to upstream a
> > specification for their device.
>
> Well, "vsock" isn't a classic device (aka nic/storage/gpu/...) either.
> It is more a service that allows communication between host and guest.
>
> That buffer sharing device falls into the same category. Maybe it even
> makes sense to build it as a virtio-vsock extension. Not sure how well
> that would work with the multi-transport architecture of vsock, though.
>
> > If this buffer sharing device's main purpose is building proprietary
> > devices without contributing to VIRTIO, then I don't think it makes
> > sense for the VIRTIO community to assist in its development.
>
> One possible use case would be building a wayland proxy, using vsock
> for the wayland protocol messages and virtio buffers for the shared
> buffers (wayland client window content).
>
> It could also simplify buffer sharing between devices (feeding decoded
> video frames from the decoder to the gpu), although in that case it is
> less clear that it would actually simplify things, because virtio-gpu
> is involved anyway.
>
> We can't prevent people from using this for proprietary stuff (the
> same goes for vsock).
>
> There is the option to use virtio-gpu instead, i.e. add support to
> qemu to export dma-buf handles for virtio-gpu resources to other
> processes (such as a wayland proxy). That would provide very similar
> functionality (and thereby create the same loophole).
>
> > VIRTIO recently gained a shared memory resource concept for access
> > to host memory. It is being used in virtio-pmem and virtio-fs (and
> > virtio-gpu?).
>
> virtio-gpu is still in progress, unfortunately (all kinds of fixes for
> the qemu drm drivers and virtio-gpu guest driver refactoring kept me
> busy for quite a while ...).
>
> > If another flavor of shared memory is required, it can be added to
> > the spec and new VIRTIO device types can use it. But it's not clear
> > why this should be its own device.
>
> This is not about host memory; the buffers are in guest RAM. Anything
> else would make sharing those buffers between drivers inside the guest
> (as dma-buf) quite difficult.

Given it's just guest memory, can the guest just have a virtqueue on
which it places pointers to the memory it wants to share as elements
in the queue?

Dave
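As a rough sketch of what such a queue element could carry, assuming a
hypothetical buffer-sharing device (the struct and field names below are
illustrative, not from any spec): each element would describe a buffer as
a list of guest-physical segments, much like the scatter-gather entries
virtio already uses elsewhere.

    /* Hypothetical guest->host message for a buffer-sharing virtqueue.
     * The buffer itself stays in guest RAM; the queue element only
     * carries guest-physical address/length pairs describing it. */

    #include <stdint.h>

    struct virtio_buf_segment {
            uint64_t gpa;      /* guest-physical address of one segment */
            uint32_t len;      /* segment length in bytes */
            uint32_t pad;
    };

    struct virtio_buf_share {
            uint64_t buf_id;       /* guest-chosen handle, e.g. for revoke */
            uint32_t nr_segments;  /* number of entries that follow */
            uint32_t flags;
            struct virtio_buf_segment segments[];
    };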
> > My question would be "what is the actual problem you are trying to
> > solve?".
>
> Typical use cases center around sharing graphics data between guest
> and host.
>
> cheers,
>   Gerd

-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
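For the host-side export Gerd mentions above (handing a dma-buf handle
for a virtio-gpu resource from qemu to a process such as a wayland
proxy), the standard mechanism is SCM_RIGHTS fd passing over a unix
socket. A minimal sketch of that idiom, not qemu's actual code (the
function name send_fd is an illustrative choice):

    /* Send a dma-buf (or any) file descriptor to a peer process over a
     * connected AF_UNIX socket, using an SCM_RIGHTS control message. */

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    static int send_fd(int sock, int fd)
    {
            char dummy = 'x';  /* at least one byte of data is required */
            struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
            char ctrl[CMSG_SPACE(sizeof(int))] = { 0 };
            struct msghdr msg = {
                    .msg_iov = &iov,
                    .msg_iovlen = 1,
                    .msg_control = ctrl,
                    .msg_controllen = sizeof(ctrl),
            };
            struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

            cmsg->cmsg_level = SOL_SOCKET;
            cmsg->cmsg_type = SCM_RIGHTS;
            cmsg->cmsg_len = CMSG_LEN(sizeof(int));
            memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

            return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
    }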
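The guest side of the vsock leg in the wayland-proxy idea is an ordinary
AF_VSOCK stream socket. A minimal sketch, with port 5000 chosen
arbitrarily rather than taken from any existing proxy:

    /* Connect from the guest to a host-side service over vsock.
     * Port 5000 is an arbitrary example, not a reserved number. */

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <linux/vm_sockets.h>

    int main(void)
    {
            struct sockaddr_vm addr;
            int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

            if (fd < 0) {
                    perror("socket");
                    return 1;
            }

            memset(&addr, 0, sizeof(addr));
            addr.svm_family = AF_VSOCK;
            addr.svm_cid = VMADDR_CID_HOST;  /* CID 2: talk to the host */
            addr.svm_port = 5000;

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                    perror("connect");
                    close(fd);
                    return 1;
            }

            /* wayland protocol messages would now flow over fd ... */
            close(fd);
            return 0;
    }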