From: Jason Wang <jasowang@redhat.com>
To: Eugenio Perez Martin <eperezma@redhat.com>
Cc: Eli Cohen <elic@nvidia.com>, Cindy Lu <lulu@redhat.com>,
	qemu-level <qemu-devel@nongnu.org>,
	lingshan.zhu@intel.com, Michael Tsirkin <mst@redhat.com>
Subject: Re: [PATCH 18/18] vhost-vdpa: multiqueue support
Date: Thu, 1 Jul 2021 16:15:34 +0800
Message-ID: <4a9981c4-be51-d221-0b11-0d41376b2b5b@redhat.com>
In-Reply-To: <CAJaqyWeT+VhXSzu9VA7UrJMFeOCUwNXUoN9-yWZzp9Rg4pBZWQ@mail.gmail.com>


On 2021/7/1 2:51 PM, Eugenio Perez Martin wrote:
> On Mon, Jun 21, 2021 at 6:18 AM Jason Wang <jasowang@redhat.com> wrote:
>> This patch implements the multiqueue support for vhost-vdpa. This is
>> done simply by reading the number of queue pairs from the config space
>> and initializing the datapath and control path net clients.
>>
>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>> ---
>>   hw/net/virtio-net.c |  3 +-
>>   net/vhost-vdpa.c    | 98 ++++++++++++++++++++++++++++++++++++++++-----
>>   2 files changed, 91 insertions(+), 10 deletions(-)
>>
>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>> index 5074b521cf..2c2ed98c0b 100644
>> --- a/hw/net/virtio-net.c
>> +++ b/hw/net/virtio-net.c
>> @@ -3370,7 +3370,8 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
>>
>>       n->max_ncs = MAX(n->nic_conf.peers.queues, 1);
>>
>> -    /* Figure out the datapath queue pairs since the bakcend could
>> +    /*
>> +     * Figure out the datapath queue pairs since the bakcend could
> If we are going to modify the comment we could s/bakcend/backend/.


Will fix.


>
>>        * provide control queue via peers as well.
>>        */
>>       if (n->nic_conf.peers.queues) {
>> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
>> index cc11b2ec40..048344b4bc 100644
>> --- a/net/vhost-vdpa.c
>> +++ b/net/vhost-vdpa.c
>> @@ -18,6 +18,7 @@
>>   #include "qemu/error-report.h"
>>   #include "qemu/option.h"
>>   #include "qapi/error.h"
>> +#include <linux/vhost.h>
>>   #include <sys/ioctl.h>
>>   #include <err.h>
>>   #include "standard-headers/linux/virtio_net.h"
>> @@ -52,6 +53,8 @@ const int vdpa_feature_bits[] = {
>>       VIRTIO_NET_F_HOST_UFO,
>>       VIRTIO_NET_F_MRG_RXBUF,
>>       VIRTIO_NET_F_MTU,
>> +    VIRTIO_NET_F_MQ,
>> +    VIRTIO_NET_F_CTRL_VQ,
>
> Hi!
>
> I'm not sure if QEMU is the one that must control this, but I cannot
> use the vdpa_sim of Linux 5.13 (i.e., without the control vq patches)
> with this series applied:
>
> [    3.967421] virtio_net virtio0: device advertises feature
> VIRTIO_NET_F_CTRL_RX but not VIRTIO_NET_F_CTRL_VQ
> [    3.968613] virtio_net: probe of virtio0 failed with error -22


Interesting, looks like a bug somewhere.

We never advertise CTRL_RX in the case of the simulator.
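
That error is raised by the guest driver, by the way: virtio-net
treats several features as usable only together with
VIRTIO_NET_F_CTRL_VQ and fails probe with -EINVAL (-22) when the
device offers one without the other. A rough sketch of that
consistency rule (simplified, not the exact driver code; the feature
bits come from <linux/virtio_net.h>):

    #include <stdbool.h>
    #include <stdint.h>
    #include <linux/virtio_net.h>

    /* Return true iff the offered feature set is self-consistent. */
    static bool ctrl_vq_deps_ok(uint64_t features)
    {
        /* Features that are serviced through the control virtqueue. */
        const uint64_t needs_ctrl_vq =
            (1ULL << VIRTIO_NET_F_CTRL_RX) |
            (1ULL << VIRTIO_NET_F_CTRL_VLAN) |
            (1ULL << VIRTIO_NET_F_GUEST_ANNOUNCE) |
            (1ULL << VIRTIO_NET_F_MQ) |
            (1ULL << VIRTIO_NET_F_CTRL_MAC_ADDR);

        if (features & (1ULL << VIRTIO_NET_F_CTRL_VQ)) {
            return true;
        }
        /* e.g. CTRL_RX offered without CTRL_VQ, as in the log above. */
        return !(features & needs_ctrl_vq);
    }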


>
> Did you mention it somewhere else and I missed it? Or is it actually
> a bug in the device? In the second case, I think we should still work
> around it in QEMU, because the old vdpasim_net with no
> VIRTIO_NET_F_CTRL_VQ still works fine without this patch.


Should be a bug, will have a look.
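
If it does turn out to be something worth tolerating on the QEMU side,
the workaround you describe could simply mask the dependent features
whenever the backend does not offer CTRL_VQ. A hypothetical sketch
(the function name is invented for illustration; it is not part of
this series):

    /*
     * Hide control-virtqueue-dependent features from a backend that
     * lacks VIRTIO_NET_F_CTRL_VQ, so the guest never sees an
     * inconsistent feature set.
     */
    static uint64_t vhost_vdpa_mask_ctrl_deps(uint64_t features)
    {
        if (!(features & (1ULL << VIRTIO_NET_F_CTRL_VQ))) {
            features &= ~((1ULL << VIRTIO_NET_F_CTRL_RX) |
                          (1ULL << VIRTIO_NET_F_CTRL_VLAN) |
                          (1ULL << VIRTIO_NET_F_GUEST_ANNOUNCE) |
                          (1ULL << VIRTIO_NET_F_MQ) |
                          (1ULL << VIRTIO_NET_F_CTRL_MAC_ADDR));
        }
        return features;
    }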

Thanks


>
> Thanks!
>
>>       VIRTIO_F_IOMMU_PLATFORM,
>>       VIRTIO_F_RING_PACKED,
>>       VIRTIO_NET_F_RSS,
>> @@ -82,7 +85,8 @@ static int vhost_vdpa_net_check_device_id(struct vhost_net *net)
>>       return ret;
>>   }
>>
>> -static int vhost_vdpa_add(NetClientState *ncs, void *be)
>> +static int vhost_vdpa_add(NetClientState *ncs, void *be, int qp_index,
>> +                          int nvqs)
>>   {
>>       VhostNetOptions options;
>>       struct vhost_net *net = NULL;
>> @@ -95,7 +99,7 @@ static int vhost_vdpa_add(NetClientState *ncs, void *be)
>>       options.net_backend = ncs;
>>       options.opaque      = be;
>>       options.busyloop_timeout = 0;
>> -    options.nvqs = 2;
>> +    options.nvqs = nvqs;
>>
>>       net = vhost_net_init(&options);
>>       if (!net) {
>> @@ -159,18 +163,28 @@ static NetClientInfo net_vhost_vdpa_info = {
>>   static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
>>                                              const char *device,
>>                                              const char *name,
>> -                                           int vdpa_device_fd)
>> +                                           int vdpa_device_fd,
>> +                                           int qp_index,
>> +                                           int nvqs,
>> +                                           bool is_datapath)
>>   {
>>       NetClientState *nc = NULL;
>>       VhostVDPAState *s;
>>       int ret = 0;
>>       assert(name);
>> -    nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device, name);
>> +    if (is_datapath) {
>> +        nc = qemu_new_net_client(&net_vhost_vdpa_info, peer, device,
>> +                                 name);
>> +    } else {
>> +        nc = qemu_new_net_control_client(&net_vhost_vdpa_info, peer,
>> +                                         device, name);
>> +    }
>>       snprintf(nc->info_str, sizeof(nc->info_str), TYPE_VHOST_VDPA);
>>       s = DO_UPCAST(VhostVDPAState, nc, nc);
>>
>>       s->vhost_vdpa.device_fd = vdpa_device_fd;
>> -    ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa);
>> +    s->vhost_vdpa.index = qp_index;
>> +    ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, qp_index, nvqs);
>>       if (ret) {
>>           qemu_del_net_client(nc);
>>           return NULL;
>> @@ -196,12 +210,52 @@ static int net_vhost_check_net(void *opaque, QemuOpts *opts, Error **errp)
>>       return 0;
>>   }
>>
>> +static int vhost_vdpa_get_max_qps(int fd, int *has_cvq, Error **errp)
>> +{
>> +    unsigned long config_size = offsetof(struct vhost_vdpa_config, buf);
>> +    struct vhost_vdpa_config *config;
>> +    __virtio16 *max_qps;
>> +    uint64_t features;
>> +    int ret;
>> +
>> +    ret = ioctl(fd, VHOST_GET_FEATURES, &features);
>> +    if (ret) {
>> +        error_setg(errp, "Fail to query features from vhost-vDPA device");
>> +        return ret;
>> +    }
>> +
>> +    if (features & (1 << VIRTIO_NET_F_CTRL_VQ)) {
>> +        *has_cvq = 1;
>> +    } else {
>> +        *has_cvq = 0;
>> +    }
>> +
>> +    if (features & (1 << VIRTIO_NET_F_MQ)) {
>> +        config = g_malloc0(config_size + sizeof(*max_qps));
>> +        config->off = offsetof(struct virtio_net_config, max_virtqueue_pairs);
>> +        config->len = sizeof(*max_qps);
>> +
>> +        ret = ioctl(fd, VHOST_VDPA_GET_CONFIG, config);
>> +        if (ret) {
>> +            error_setg(errp, "Fail to get config from vhost-vDPA device");
>> +            return -ret;
>> +        }
>> +
>> +        max_qps = (__virtio16 *)&config->buf;
>> +
>> +        return lduw_le_p(max_qps);
>> +    }
>> +
>> +    return 1;
>> +}
>> +
>>   int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
>>                           NetClientState *peer, Error **errp)
>>   {
>>       const NetdevVhostVDPAOptions *opts;
>>       int vdpa_device_fd;
>> -    NetClientState *nc;
>> +    NetClientState **ncs, *nc;
>> +    int qps, i, has_cvq = 0;
>>
>>       assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
>>       opts = &netdev->u.vhost_vdpa;
>> @@ -216,11 +270,37 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
>>           return -errno;
>>       }
>>
>> -    nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name, vdpa_device_fd);
>> -    if (!nc) {
>> +    qps = vhost_vdpa_get_max_qps(vdpa_device_fd, &has_cvq, errp);
>> +    if (qps < 0) {
>>           qemu_close(vdpa_device_fd);
>> -        return -1;
>> +        return qps;
>> +    }
>> +
>> +    ncs = g_malloc0(sizeof(*ncs) * qps);
>> +
>> +    for (i = 0; i < qps; i++) {
>> +        ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
>> +                                     vdpa_device_fd, i, 2, true);
>> +        if (!ncs[i])
>> +            goto err;
>>       }
>>
>> +    if (has_cvq) {
>> +        nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
>> +                                 vdpa_device_fd, i, 1, false);
>> +        if (!nc)
>> +            goto err;
>> +    }
>> +
>> +    g_free(ncs);
>>       return 0;
>> +
>> +err:
>> +    if (i) {
>> +        qemu_del_net_client(ncs[0]);
>> +    }
>> +    qemu_close(vdpa_device_fd);
>> +    g_free(ncs);
>> +
>> +    return -1;
>>   }
>> --
>> 2.25.1
>>
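
As an aside, the config space read in vhost_vdpa_get_max_qps() is easy
to reproduce from a standalone tool when checking what a given vDPA
device actually reports. A minimal sketch (the device path is an
assumption; note that it frees the config buffer, which the hunk above
appears to leak on both the success and the error path):

    #include <endian.h>
    #include <fcntl.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/vhost.h>
    #include <linux/virtio_net.h>

    int main(void)
    {
        uint64_t features;
        unsigned max_qps = 1;

        int fd = open("/dev/vhost-vdpa-0", O_RDWR); /* assumed path */
        if (fd < 0 || ioctl(fd, VHOST_GET_FEATURES, &features) < 0) {
            perror("vhost-vdpa");
            return 1;
        }

        if (features & (1ULL << VIRTIO_NET_F_MQ)) {
            /* Fetch the 16-bit max_virtqueue_pairs config field. */
            struct vhost_vdpa_config *cfg = calloc(1, sizeof(*cfg) + 2);
            cfg->off = offsetof(struct virtio_net_config,
                                max_virtqueue_pairs);
            cfg->len = 2;
            if (ioctl(fd, VHOST_VDPA_GET_CONFIG, cfg) == 0) {
                uint16_t v;
                memcpy(&v, cfg->buf, sizeof(v));
                max_qps = le16toh(v); /* config space is little-endian */
            }
            free(cfg);
        }

        printf("ctrl_vq %s, max %u queue pairs\n",
               features & (1ULL << VIRTIO_NET_F_CTRL_VQ) ? "yes" : "no",
               max_qps);
        close(fd);
        return 0;
    }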




Thread overview: 51+ messages
2021-06-21  4:16 [PATCH 00/18] vhost-vDPA multiqueue Jason Wang
2021-06-21  4:16 ` [PATCH 01/18] vhost_net: remove the meaningless assignment in vhost_net_start_one() Jason Wang
2021-06-21 11:45   ` Eli Cohen
2021-06-24  7:42     ` Jason Wang
2021-06-21  4:16 ` [PATCH 02/18] vhost: use unsigned int for nvqs Jason Wang
2021-06-21 11:46   ` Eli Cohen
2021-06-21  4:16 ` [PATCH 03/18] vhost_net: do not assume nvqs is always 2 Jason Wang
2021-06-23 14:49   ` Stefano Garzarella
2021-06-24  6:22   ` Eli Cohen
2021-06-24  7:42     ` Jason Wang
2021-06-21  4:16 ` [PATCH 04/18] vhost-vdpa: remove the unnecessary check in vhost_vdpa_add() Jason Wang
2021-06-23 14:53   ` Stefano Garzarella
2021-06-24  6:38     ` Eli Cohen
2021-06-24  7:46     ` Jason Wang
2021-06-21  4:16 ` [PATCH 05/18] vhost-vdpa: don't cleanup twice " Jason Wang
2021-06-23 14:56   ` Stefano Garzarella
2021-06-21  4:16 ` [PATCH 06/18] vhost-vdpa: fix leaking of vhost_net " Jason Wang
2021-06-23 15:00   ` Stefano Garzarella
2021-06-24  7:06     ` Eli Cohen
2021-06-24  7:10       ` Jason Wang
2021-06-24  7:32         ` Eli Cohen
2021-06-24  7:14     ` Eli Cohen
2021-06-24  7:41       ` Jason Wang
2021-06-21  4:16 ` [PATCH 07/18] vhost-vdpa: tweak the error label " Jason Wang
2021-06-23 15:03   ` Stefano Garzarella
2021-07-06  8:03     ` Jason Wang
2021-07-06  8:10       ` Jason Wang
2021-07-06  8:27         ` Stefano Garzarella
2021-07-06  8:28           ` Jason Wang
2021-06-21  4:16 ` [PATCH 08/18] vhost-vdpa: fix the wrong assertion in vhost_vdpa_init() Jason Wang
2021-06-23 15:04   ` Stefano Garzarella
2021-06-21  4:16 ` [PATCH 09/18] vhost-vdpa: remove the unncessary queue_index assignment Jason Wang
2021-06-23 15:05   ` Stefano Garzarella
2021-06-21  4:16 ` [PATCH 10/18] vhost-vdpa: open device fd in net_init_vhost_vdpa() Jason Wang
2021-06-23 15:07   ` Stefano Garzarella
2021-06-21  4:16 ` [PATCH 11/18] vhost-vdpa: classify one time request Jason Wang
2021-06-21  4:16 ` [PATCH 12/18] vhost-vdpa: prepare for the multiqueue support Jason Wang
2021-06-21  4:16 ` [PATCH 13/18] vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState * Jason Wang
2021-06-21  4:16 ` [PATCH 14/18] net: introduce control client Jason Wang
2021-06-21  4:16 ` [PATCH 15/18] vhost-net: control virtqueue support Jason Wang
2021-06-24  7:42   ` Eli Cohen
2021-06-24  7:44     ` Jason Wang
2021-06-30 17:33   ` Eugenio Perez Martin
2021-07-01  3:03     ` Jason Wang
2021-06-21  4:16 ` [PATCH 16/18] virito-net: use "qps" instead of "queues" when possible Jason Wang
2021-06-21  4:16 ` [PATCH 17/18] virtio-net: vhost control virtqueue support Jason Wang
2021-06-21  4:16 ` [PATCH 18/18] vhost-vdpa: multiqueue support Jason Wang
2021-07-01  6:51   ` Eugenio Perez Martin
2021-07-01  8:15     ` Jason Wang [this message]
2021-07-06  7:46     ` Jason Wang
2021-06-21  4:33 ` [PATCH 00/18] vhost-vDPA multiqueue no-reply
