From: Stefan Hajnoczi <stefanha@redhat.com>
To: "Longpeng(Mike)" <longpeng2@huawei.com>
Cc: mst@redhat.com, jasowang@redhat.com, qemu-devel@nongnu.org,
	yechuan@huawei.com, xieyongji@bytedance.com,
	arei.gonglei@huawei.com, parav@nvidia.com, sgarzare@redhat.com
Subject: Re: [RFC] vhost-vdpa-net: add vhost-vdpa-net host device support
Date: Thu, 9 Dec 2021 09:16:58 +0000	[thread overview]
Message-ID: <YbHJivhCDvKo4eB0@stefanha-x1.localdomain> (raw)
In-Reply-To: <20211208052010.1719-1-longpeng2@huawei.com>

On Wed, Dec 08, 2021 at 01:20:10PM +0800, Longpeng(Mike) wrote:
> From: Longpeng <longpeng2@huawei.com>
> 
> Hi guys,
> 
> This patch introduces the vhost-vdpa-net device, which is inspired
> by vhost-user-blk and the proposed vhost-vdpa-blk device [1].
> 
> I've tested this patch on Huawei's offload card:
> ./x86_64-softmmu/qemu-system-x86_64 \
>     -device vhost-vdpa-net-pci,vdpa-dev=/dev/vhost-vdpa-0
> 
> For virtio hardware offloading, the most important requirement for us
> is to support live migration between offloading cards from different
> vendors. The combination of netdev and virtio-net seems too heavy; we
> prefer a lightweight way.
> 
> Maybe we could support both in the future? For example:
> 
> * Lightweight
>  Net: vhost-vdpa-net
>  Storage: vhost-vdpa-blk
> 
> * Heavy but more powerful
>  Net: netdev + virtio-net + vhost-vdpa
>  Storage: bdrv + virtio-blk + vhost-vdpa
> 
> [1] https://www.mail-archive.com/qemu-devel@nongnu.org/msg797569.html
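
For reference, the two approaches above map onto the QEMU command line
roughly as follows. This is only a sketch: the vhost-vdpa-net-pci device
and its vdpa-dev property come from this RFC, while the netdev syntax is
QEMU's existing vhost-vdpa network backend.

    # Lightweight: hand the vDPA char device straight to the new device
    qemu-system-x86_64 \
        -device vhost-vdpa-net-pci,vdpa-dev=/dev/vhost-vdpa-0

    # Heavy but more powerful: existing netdev + virtio-net + vhost-vdpa stack
    qemu-system-x86_64 \
        -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vdpa0 \
        -device virtio-net-pci,netdev=vdpa0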

Stefano presented a plan for vdpa-blk at KVM Forum 2021:
https://kvmforum2021.sched.com/event/ke3a/vdpa-blk-unified-hardware-and-software-offload-for-virtio-blk-stefano-garzarella-red-hat

It's closer to today's virtio-net + vhost-net approach than the
vhost-vdpa-blk device you have mentioned. The idea is to treat vDPA as
an offload feature rather than a completely separate code path that
needs to be maintained and tested. That way QEMU's block layer features
and live migration work with vDPA devices and re-use the virtio-blk
code. The key functionality that has not been implemented yet is a "fast
path" mechanism that allows the QEMU virtio-blk device's virtqueue to be
offloaded to vDPA.
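
To make the distinction concrete, the unified design keeps today's
all-software blockdev + virtio-blk configuration, sketched below (paths
and node names are illustrative). Under that plan, backing disk0 with a
vDPA device instead of a file would let QEMU offload the virtqueue via
the fast path while block-layer features and live migration keep
working; the vDPA-backed blockdev and the fast path themselves are not
implemented yet.

    qemu-system-x86_64 \
        -blockdev driver=file,node-name=file0,filename=/path/to/disk.img \
        -blockdev driver=raw,node-name=disk0,file=file0 \
        -device virtio-blk-pci,drive=disk0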

The unified vdpa-blk architecture should deliver the same performance
as the vhost-vdpa-blk device you mentioned but with more features, so I
wonder what aspects of the vhost-vdpa-blk idea are important to you?

QEMU already has vhost-user-blk, which takes a similar approach to the
vhost-vdpa-blk device you are proposing. I'm not against the
vhost-vdpa-blk approach in principle, but I would like to understand your
requirements and see if there is a way to collaborate on one vdpa-blk
implementation instead of dividing our efforts between two.
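
For comparison, a typical vhost-user-blk setup today looks roughly like
the sketch below; the socket path, memory size, and IDs are
illustrative. A vhost-vdpa-blk device along the lines of this proposal
would presumably look similar, with the vhost-user socket replaced by a
/dev/vhost-vdpa-N character device.

    qemu-system-x86_64 \
        -object memory-backend-memfd,id=mem0,size=4G,share=on \
        -numa node,memdev=mem0 \
        -chardev socket,id=vublk0,path=/var/run/vhost-user-blk.sock \
        -device vhost-user-blk-pci,chardev=vublk0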

Stefan



Thread overview: 31+ messages
2021-12-08  5:20 [RFC] vhost-vdpa-net: add vhost-vdpa-net host device support Longpeng(Mike) via
2021-12-08  6:27 ` Jason Wang
2021-12-11  5:23   ` longpeng2--- via
2021-12-13  3:23     ` Jason Wang
2021-12-14  0:15       ` longpeng2--- via
2021-12-08 19:05 ` Michael S. Tsirkin
2021-12-11  2:43   ` longpeng2--- via
2021-12-09  9:16 ` Stefan Hajnoczi [this message]
2021-12-09 15:55   ` Stefano Garzarella
2021-12-11  4:11     ` longpeng2--- via
2021-12-13 11:10       ` Stefano Garzarella
2021-12-13 15:16       ` Stefan Hajnoczi
2021-12-14  1:44         ` longpeng2--- via
2021-12-14 13:03           ` Stefan Hajnoczi
2021-12-11  3:00   ` longpeng2--- via
2021-12-12  9:30     ` Michael S. Tsirkin
2021-12-13  2:47       ` Jason Wang
2021-12-13 10:58         ` Stefano Garzarella
2021-12-13 15:14         ` Stefan Hajnoczi
2021-12-14  2:22           ` Jason Wang
2021-12-14 13:11             ` Stefan Hajnoczi
2021-12-15  3:18               ` Jason Wang
2021-12-15 10:07                 ` Stefan Hajnoczi
2021-12-16  3:01                   ` Jason Wang
2021-12-16  9:10                     ` Stefan Hajnoczi
2021-12-17  4:26                       ` Jason Wang
2021-12-17  8:35                         ` Stefan Hajnoczi
2021-12-20  2:48                           ` Jason Wang
2021-12-20  8:11                             ` Stefan Hajnoczi
2021-12-20  9:17                               ` longpeng2--- via
2021-12-20 14:05                                 ` Stefan Hajnoczi
