From: Jason Gunthorpe <jgg@nvidia.com>
To: Junji Wei <weijunji@bytedance.com>
Cc: dledford@redhat.com, mst@redhat.com, jasowang@redhat.com,
	yuval.shaia.ml@gmail.com, marcel.apfelbaum@gmail.com,
	cohuck@redhat.com, hare@suse.de, xieyongji@bytedance.com,
	chaiwen.cc@bytedance.com, linux-rdma@vger.kernel.org,
	virtualization@lists.linux-foundation.org, qemu-devel@nongnu.org
Subject: Re: [RFC 0/5] VirtIO RDMA
Date: Wed, 15 Sep 2021 10:43:01 -0300
Message-ID: <20210915134301.GA211485@nvidia.com>
In-Reply-To: <20210902130625.25277-1-weijunji@bytedance.com>

On Thu, Sep 02, 2021 at 09:06:20PM +0800, Junji Wei wrote:
> Hi all,
> 
> This RFC aims to reopen the discussion of Virtio RDMA.
> It is based on Yuval Shaia's RFC "VirtIO RDMA", which
> implemented a framework for Virtio RDMA and a simple
> control path (we are not sure whether Yuval Shaia has
> any further plans for it).
> 
> We have tried to extend this work and implement a simple
> data path and a complete control path. It currently works
> with SEND, RECV, and REG_MR in the kernel. This series
> includes a simple test module that can communicate with
> ibv_rc_pingpong from rdma-core.
> 
> While doing this work, we found some problems and would
> like to ask the community for suggestions:

These seem like serious problems! Shouldn't these be solved before
sending patches?

> 1. Each QP needs two VQs, but QEMU by default supports only
>    1024 VQs. I think it is possible to multiplex the VQs,
>    since cmd_post_send carries the qpn in the request.

QPs and CQs need predictable, fixed WQE sizes; I don't see how you
can reasonably expect to map them onto a shared queue.
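
To make the sizing problem concrete, a shared post queue would need
elements shaped roughly like this (a sketch with made-up names, not
the layout from this series):

struct shared_sge {
	__le64 addr;
	__le32 length;
	__le32 key;
};

/* If many QPs share one post queue, every element must carry the QP
 * number plus a variable-length SGE list, so the queue no longer has
 * a fixed, predictable element size -- it depends on each QP's
 * max_send_sge, which differs from QP to QP.
 */
struct shared_post_send_req {
	__le32 qpn;			/* demultiplexing key */
	__le32 num_sge;			/* bounded by that QP's max_send_sge */
	struct shared_sge sge[];	/* flexible array => variable WQE size */
};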

> 2. The virtio-rdma device's GID should be equal to the host
>    RDMA device's GID. This means that we cannot use the GID
>    cache in the RDMA subsystem. And theoretically the GID
>    should also match the device's netdev's IP address; how
>    can we deal with this conflict?

You have to follow the correct semantics: the GID flows from the
guest into the host and updates the host's GID table, not the other
way around.
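
That direction is straightforward to express in the guest driver;
something like this sketch (the virtio_rdma_* names and the control
command are hypothetical, the add_gid hook signature is the kernel's):

/* Sketch only: the guest's add_gid() callback pushes the new entry
 * down to the hypervisor, which programs it into the host device's
 * GID table.  Host -> guest is the wrong direction.
 */
static int virtio_rdma_add_gid(const struct ib_gid_attr *attr, void **context)
{
	struct virtio_rdma_dev *vdev = to_vdev(attr->device);

	/* hypothetical control-VQ command to the host */
	return virtio_rdma_cmd(vdev, VIRTIO_RDMA_CMD_ADD_GID,
			       attr->index, &attr->gid);
}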
 
> 3. How do we support a DMA MR? The verbs interface on the host
>    cannot support it, and it seems hard to pin the whole of
>    guest physical memory in QEMU.

Either you have to trap the FRWR in the hypervisor and pin the
memory, remap the MR, etc., or you have to pin the entire guest and
rely on something like memory windows to emulate FRWR.
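
In rough pseudo-C on the QEMU side, the first option would look
something like this (every name here is invented; it only shows the
shape of the trap-and-pin approach):

/* Trap the guest's REG_MR command, pin the guest pages behind it,
 * and back it with a real host MR via ibv_reg_mr().
 */
static int trap_guest_reg_mr(VirtIORdmaDev *dev, GuestRegMrCmd *cmd)
{
	/* invented helper: guest-physical -> host-virtual, pages pinned */
	void *hva = map_and_pin_guest_range(dev, cmd->guest_pa, cmd->length);

	if (!hva)
		return -EFAULT;

	dev->mrs[cmd->mr_handle] =
		ibv_reg_mr(dev->pd, hva, cmd->length,
			   IBV_ACCESS_LOCAL_WRITE |
			   IBV_ACCESS_REMOTE_READ |
			   IBV_ACCESS_REMOTE_WRITE);

	return dev->mrs[cmd->mr_handle] ? 0 : -ENOMEM;
}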
 
> 4. The FRMR API needs to set the key of an MR through
>    IB_WR_REG_MR, but it is impossible to change the key of
>    an MR using uverbs.

FRMR is more like memory windows in user space; you can't support it
using just regular MRs.
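
For reference, the user space analogue is a type 2 memory window,
which does give you a fresh rkey on every bind -- the property FRWR
needs. A host-side sketch (pd, base_mr, buf, and len are placeholders):

struct ibv_mw *mw = ibv_alloc_mw(pd, IBV_MW_TYPE_2);

struct ibv_send_wr wr = {
	.opcode = IBV_WR_BIND_MW,
	.bind_mw = {
		.mw   = mw,
		.rkey = ibv_inc_rkey(mw->rkey),	/* new rkey for each bind */
		.bind_info = {
			.mr     = base_mr,	/* ordinary pinned MR underneath */
			.addr   = (uint64_t)(uintptr_t)buf,
			.length = len,
			.mw_access_flags = IBV_ACCESS_REMOTE_READ |
					   IBV_ACCESS_REMOTE_WRITE,
		},
	},
};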

>    In our implementation, we change the key of the WR during
>    post_send, but this means the MR can only work with SEND and
>    RECV, since we cannot change the key on the remote side.

Yes, this is not a realistic solution.

> 5. GSI is not supported yet. And we think there is a problem:
>    when the host receives a GSI packet, it doesn't know which
>    device the packet belongs to.

Of course, GSI packets are not virtualized. You need to somehow
capture GSI messages for the entire GID that the guest is using. We
don't have any API to do this in userspace.

Jason
