From: Jason Wang <jasowang@redhat.com>
To: "Liang, Cunming" <cunming.liang@intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Bie, Tiwei" <tiwei.bie@intel.com>
Cc: "mst@redhat.com" <mst@redhat.com>,
	"alex.williamson@redhat.com" <alex.williamson@redhat.com>,
	"ddutile@redhat.com" <ddutile@redhat.com>,
	"Duyck, Alexander H" <alexander.h.duyck@intel.com>,
	"virtio-dev@lists.oasis-open.org"
	<virtio-dev@lists.oasis-open.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"virtualization@lists.linux-foundation.org"
	<virtualization@lists.linux-foundation.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"Daly, Dan" <dan.daly@intel.com>,
	"Wang, Zhihong" <zhihong.wang@intel.com>,
	"Tan, Jianfeng" <jianfeng.tan@intel.com>,
	"Wang, Xiao W" <xiao.w.wang@intel.com>
Subject: Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware vhost backend
Date: Wed, 11 Apr 2018 10:08:32 +0800	[thread overview]
Message-ID: <56f8d47b-48d4-cd4c-6795-21e809efcb1b@redhat.com> (raw)
In-Reply-To: <D0158A423229094DA7ABF71CF2FA0DA34E9222F4@SHSMSX104.ccr.corp.intel.com>



On 2018-04-10 17:23, Liang, Cunming wrote:
>
>> -----Original Message-----
>> From: Paolo Bonzini [mailto:pbonzini@redhat.com]
>> Sent: Tuesday, April 10, 2018 3:52 PM
>> To: Bie, Tiwei <tiwei.bie@intel.com>; Jason Wang <jasowang@redhat.com>
>> Cc: mst@redhat.com; alex.williamson@redhat.com; ddutile@redhat.com;
>> Duyck, Alexander H <alexander.h.duyck@intel.com>; virtio-dev@lists.oasis-
>> open.org; linux-kernel@vger.kernel.org; kvm@vger.kernel.org;
>> virtualization@lists.linux-foundation.org; netdev@vger.kernel.org; Daly, Dan
>> <dan.daly@intel.com>; Liang, Cunming <cunming.liang@intel.com>; Wang,
>> Zhihong <zhihong.wang@intel.com>; Tan, Jianfeng <jianfeng.tan@intel.com>;
>> Wang, Xiao W <xiao.w.wang@intel.com>
>> Subject: Re: [virtio-dev] Re: [RFC] vhost: introduce mdev based hardware
>> vhost backend
>>
>> On 10/04/2018 06:57, Tiwei Bie wrote:
>>>> So you just move the abstraction layer from qemu to the kernel, and
>>>> you still need different drivers in the kernel for different device
>>>> interfaces of accelerators. This looks even more complex than leaving
>>>> it in qemu. As you said, another idea is to implement a userspace
>>>> vhost backend for accelerators, which seems easier and could co-work
>>>> with other parts of qemu without inventing new types of messages.
>>> I'm not quite sure. Do you think it's acceptable to add various
>>> vendor-specific hardware drivers in QEMU?
>> I think so.  We have vendor-specific quirks, and at some point there was an
>> idea of using quirks to implement (vendor-specific) live migration support for
>> assigned devices.
> Vendor-specific quirks for accessing VGA are a small portion. The other major portions are still handled by the guest driver.
>
> In this case, however, saying "various vendor-specific drivers in QEMU" means QEMU takes over and provides the entire user-space device drivers. Some parts are not even relevant to vhost; they are basic device-function enabling. Moreover, there could be different kinds of devices (network/block/...) under vhost. Regardless of the number of vendors or device types, the total LOC is not small.
>
> The idea is to keep this extra complexity out of QEMU, keeping the vhost adapter simple. As the vhost protocol is a de facto standard, it leverages kernel device drivers to provide the diversity. After changing QEMU once, it supports multi-vendor devices whose vendors naturally provide kernel drivers.

Let me clarify my question: it's not qemu vs kernel but userspace vs
kernel. It could be a library that is linked into qemu. Doing it in
userspace has the following obvious advantages:

- the attack surface is limited to userspace
- easier to maintain (compared to a kernel driver)
- easier to extend without introducing new userspace/kernel interfaces
- not tied to a specific operating system

If we want to do it in the kernel, we need to consider unifying the
code between the mdev device driver and the generic driver. For the net
driver, maybe we can even consider doing it on top of existing drivers.

>
> If QEMU is going to build a user-space driver framework there, we're open-minded about that, even leveraging DPDK as the underlying library. Looking forward to more comments from the community.

I'm doing this now by implementing vhost inside qemu IOThreads.
Hopefully I can post an RFC in a few months.

Thanks

> Steve
>
>> Paolo

