From: Eugenio Perez Martin <eperezma@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>,
	qemu-level <qemu-devel@nongnu.org>, Peter Xu <peterx@redhat.com>,
	virtualization <virtualization@lists.linux-foundation.org>,
	Eli Cohen <eli@mellanox.com>, Eric Blake <eblake@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>, Cindy Lu <lulu@redhat.com>,
	"Fangyi \(Eric\)" <eric.fangyi@huawei.com>,
	Markus Armbruster <armbru@redhat.com>,
	yebiaoxiang@huawei.com, Liuxiangdong <liuxiangdong5@huawei.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Laurent Vivier <lvivier@redhat.com>,
	Parav Pandit <parav@mellanox.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Gautam Dawar <gdawar@xilinx.com>,
	Xiao W Wang <xiao.w.wang@intel.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Harpreet Singh Anand <hanand@xilinx.com>,
	Lingshan <lingshan.zhu@intel.com>
Subject: Re: [PATCH v5 15/15] vdpa: Add x-svq to NetdevVhostVDPAOptions
Date: Tue, 8 Mar 2022 09:24:05 +0100	[thread overview]
Message-ID: <CAJaqyWeAxjOtvtAD2Ow2MUXQpaBUbP21=CZ4g-S0pPizq_Az-g@mail.gmail.com> (raw)
In-Reply-To: <20220308030140-mutt-send-email-mst@kernel.org>

On Tue, Mar 8, 2022 at 9:02 AM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Tue, Mar 08, 2022 at 08:32:07AM +0100, Eugenio Perez Martin wrote:
> > On Tue, Mar 8, 2022 at 8:11 AM Michael S. Tsirkin <mst@redhat.com> wrote:
> > >
> > > On Mon, Mar 07, 2022 at 04:33:34PM +0100, Eugenio Pérez wrote:
> > > > Finally offering the possibility to enable SVQ from the command line.
> > > >
> > > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > > ---
> > > >  qapi/net.json    |  8 +++++++-
> > > >  net/vhost-vdpa.c | 48 ++++++++++++++++++++++++++++++++++++++++--------
> > > >  2 files changed, 47 insertions(+), 9 deletions(-)
> > > >
> > > > diff --git a/qapi/net.json b/qapi/net.json
> > > > index 7fab2e7cd8..d626fa441c 100644
> > > > --- a/qapi/net.json
> > > > +++ b/qapi/net.json
> > > > @@ -445,12 +445,18 @@
> > > >  # @queues: number of queues to be created for multiqueue vhost-vdpa
> > > >  #          (default: 1)
> > > >  #
> > > > +# @svq: Start device with (experimental) shadow virtqueue. (Since 7.0)
> > > > +#
> > > > +# Features:
> > > > +# @unstable: Member @svq is experimental.
> > > > +#
> > > >  # Since: 5.1
> > > >  ##
> > > >  { 'struct': 'NetdevVhostVDPAOptions',
> > > >    'data': {
> > > >      '*vhostdev':     'str',
> > > > -    '*queues':       'int' } }
> > > > +    '*queues':       'int',
> > > > +    '*svq':          {'type': 'bool', 'features' : [ 'unstable'] } } }
> > > >
> > > >  ##
> > > >  # @NetClientDriver:
> > >
> > > I think this should be x-svq same as other unstable features.
> > >
> >
> > I'm fine with both, but I was pointed to the other direction at [1] and [2].
> >
> > Thanks!
> >
> > [1] https://patchwork.kernel.org/project/qemu-devel/patch/20220302203012.3476835-15-eperezma@redhat.com/
> > [2] https://lore.kernel.org/qemu-devel/20220303185147.3605350-15-eperezma@redhat.com/
>
>
> I think what Markus didn't know is that a bunch of changes in
> behaviour will occur before we rename it to "svq".
> The rename is thus less of a bother and more of a bonus.
>

I'm totally fine with going back to x-svq. I'm not sure whether it's
more appropriate to use different parameters for the different modes
(svq=off, dynamic-svq=on) or different values of the same parameter
(svq=on vs svq=on_migration). Or something totally different.
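
Just to make the two shapes concrete, in QAPI terms the alternatives
would look roughly like this (the member and enum names here are
hypothetical, not part of this patch):

```
# Alternative 1: one boolean parameter per mode
'*svq':         { 'type': 'bool', 'features': [ 'unstable' ] },
'*dynamic-svq': { 'type': 'bool', 'features': [ 'unstable' ] }

# Alternative 2: a single parameter taking one of several modes
{ 'enum': 'SvqMode', 'data': [ 'off', 'on', 'on-migration' ] }
'*svq':         { 'type': 'SvqMode', 'features': [ 'unstable' ] }
```

Alternative 2 keeps the option surface small but bakes the mode list
into an enum; alternative 1 lets each mode be deprecated independently.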

My impression is that all of the changes are covered by @unstable,
but I can see the advantage of the x- prefix, since we have not come
to an agreement on it. I think this is the first time it has been
mentioned on the mailing list.

Do you want me to send a new series with x- prefix?

Thanks!
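
For reference, a minimal sketch of how the option would be used from
the command line (assuming the x- rename lands; the device node path
and netdev id are illustrative):

```shell
# Hypothetical invocation: enable shadow virtqueues on a vhost-vdpa netdev.
# /dev/vhost-vdpa-0 is an example path; x-svq assumes the proposed rename.
qemu-system-x86_64 \
    -netdev vhost-vdpa,id=vdpa0,vhostdev=/dev/vhost-vdpa-0,x-svq=on \
    -device virtio-net-pci,netdev=vdpa0
```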

> > > > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > > > index 1e9fe47c03..c827921654 100644
> > > > --- a/net/vhost-vdpa.c
> > > > +++ b/net/vhost-vdpa.c
> > > > @@ -127,7 +127,11 @@ err_init:
> > > >  static void vhost_vdpa_cleanup(NetClientState *nc)
> > > >  {
> > > >      VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
> > > > +    struct vhost_dev *dev = s->vhost_vdpa.dev;
> > > >
> > > > +    if (dev && dev->vq_index + dev->nvqs == dev->vq_index_end) {
> > > > +        g_clear_pointer(&s->vhost_vdpa.iova_tree, vhost_iova_tree_delete);
> > > > +    }
> > > >      if (s->vhost_net) {
> > > >          vhost_net_cleanup(s->vhost_net);
> > > >          g_free(s->vhost_net);
> > > > @@ -187,13 +191,23 @@ static NetClientInfo net_vhost_vdpa_info = {
> > > >          .check_peer_type = vhost_vdpa_check_peer_type,
> > > >  };
> > > >
> > > > +static int vhost_vdpa_get_iova_range(int fd,
> > > > +                                     struct vhost_vdpa_iova_range *iova_range)
> > > > +{
> > > > +    int ret = ioctl(fd, VHOST_VDPA_GET_IOVA_RANGE, iova_range);
> > > > +
> > > > +    return ret < 0 ? -errno : 0;
> > > > +}
> > > > +
> > > >  static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> > > > -                                           const char *device,
> > > > -                                           const char *name,
> > > > -                                           int vdpa_device_fd,
> > > > -                                           int queue_pair_index,
> > > > -                                           int nvqs,
> > > > -                                           bool is_datapath)
> > > > +                                       const char *device,
> > > > +                                       const char *name,
> > > > +                                       int vdpa_device_fd,
> > > > +                                       int queue_pair_index,
> > > > +                                       int nvqs,
> > > > +                                       bool is_datapath,
> > > > +                                       bool svq,
> > > > +                                       VhostIOVATree *iova_tree)
> > > >  {
> > > >      NetClientState *nc = NULL;
> > > >      VhostVDPAState *s;
> > > > @@ -211,6 +225,8 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> > > >
> > > >      s->vhost_vdpa.device_fd = vdpa_device_fd;
> > > >      s->vhost_vdpa.index = queue_pair_index;
> > > > +    s->vhost_vdpa.shadow_vqs_enabled = svq;
> > > > +    s->vhost_vdpa.iova_tree = iova_tree;
> > > >      ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
> > > >      if (ret) {
> > > >          qemu_del_net_client(nc);
> > > > @@ -266,6 +282,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> > > >      g_autofree NetClientState **ncs = NULL;
> > > >      NetClientState *nc;
> > > >      int queue_pairs, i, has_cvq = 0;
> > > > +    g_autoptr(VhostIOVATree) iova_tree = NULL;
> > > >
> > > >      assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> > > >      opts = &netdev->u.vhost_vdpa;
> > > > @@ -285,29 +302,44 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
> > > >          qemu_close(vdpa_device_fd);
> > > >          return queue_pairs;
> > > >      }
> > > > +    if (opts->svq) {
> > > > +        struct vhost_vdpa_iova_range iova_range;
> > > > +
> > > > +        if (has_cvq) {
> > > > +            error_setg(errp, "vdpa svq does not work with cvq");
> > > > +            goto err_svq;
> > > > +        }
> > > > +        vhost_vdpa_get_iova_range(vdpa_device_fd, &iova_range);
> > > > +        iova_tree = vhost_iova_tree_new(iova_range.first, iova_range.last);
> > > > +    }
> > > >
> > > >      ncs = g_malloc0(sizeof(*ncs) * queue_pairs);
> > > >
> > > >      for (i = 0; i < queue_pairs; i++) {
> > > >          ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
> > > > -                                     vdpa_device_fd, i, 2, true);
> > > > +                                     vdpa_device_fd, i, 2, true, opts->svq,
> > > > +                                     iova_tree);
> > > >          if (!ncs[i])
> > > >              goto err;
> > > >      }
> > > >
> > > >      if (has_cvq) {
> > > >          nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
> > > > -                                 vdpa_device_fd, i, 1, false);
> > > > +                                 vdpa_device_fd, i, 1, false, opts->svq,
> > > > +                                 iova_tree);
> > > >          if (!nc)
> > > >              goto err;
> > > >      }
> > > >
> > > > +    iova_tree = NULL;
> > > >      return 0;
> > > >
> > > >  err:
> > > >      if (i) {
> > > >          qemu_del_net_client(ncs[0]);
> > > >      }
> > > > +
> > > > +err_svq:
> > > >      qemu_close(vdpa_device_fd);
> > > >
> > > >      return -1;
> > > > --
> > > > 2.27.0
> > >
>



Thread overview: 60+ messages
2022-03-07 15:33 [PATCH v5 00/15] vDPA shadow virtqueue Eugenio Pérez
2022-03-07 15:33 ` [PATCH v5 01/15] vhost: Add VhostShadowVirtqueue Eugenio Pérez
2022-03-07 15:33 ` [PATCH v5 02/15] vhost: Add Shadow VirtQueue kick forwarding capabilities Eugenio Pérez
2022-03-07 15:33 ` [PATCH v5 03/15] vhost: Add Shadow VirtQueue call " Eugenio Pérez
2022-03-07 15:33 ` [PATCH v5 04/15] vhost: Add vhost_svq_valid_features to shadow vq Eugenio Pérez
2022-03-07 15:33 ` [PATCH v5 05/15] virtio: Add vhost_svq_get_vring_addr Eugenio Pérez
2022-03-07 15:33 ` [PATCH v5 06/15] vdpa: adapt vhost_ops callbacks to svq Eugenio Pérez
2022-03-07 15:33 ` [PATCH v5 07/15] vhost: Shadow virtqueue buffers forwarding Eugenio Pérez
2022-03-07 15:33 ` [PATCH v5 08/15] util: Add iova_tree_alloc_map Eugenio Pérez
2022-03-07 15:33 ` [PATCH v5 09/15] util: add iova_tree_find_iova Eugenio Pérez
2022-03-07 15:33 ` [PATCH v5 10/15] vhost: Add VhostIOVATree Eugenio Pérez
2022-03-07 15:33 ` [PATCH v5 11/15] vdpa: Add custom IOTLB translations to SVQ Eugenio Pérez
2022-03-07 15:33 ` [PATCH v5 12/15] vdpa: Adapt vhost_vdpa_get_vring_base " Eugenio Pérez
2022-03-07 15:33 ` [PATCH v5 13/15] vdpa: Never set log_base addr if SVQ is enabled Eugenio Pérez
2022-03-07 15:33 ` [PATCH v5 14/15] vdpa: Expose VHOST_F_LOG_ALL on SVQ Eugenio Pérez
2022-03-07 15:33 ` [PATCH v5 15/15] vdpa: Add x-svq to NetdevVhostVDPAOptions Eugenio Pérez
2022-03-08  7:11   ` Michael S. Tsirkin
2022-03-08  7:32     ` Eugenio Perez Martin
2022-03-08  7:33       ` Eugenio Perez Martin
2022-03-08  8:02       ` Michael S. Tsirkin
2022-03-08  8:24         ` Eugenio Perez Martin [this message]
2022-03-08 12:31           ` Michael S. Tsirkin
2022-03-08  9:29   ` Markus Armbruster
2022-03-08  6:03 ` [PATCH v5 00/15] vDPA shadow virtqueue Jason Wang
2022-03-08  7:11   ` Michael S. Tsirkin
2022-03-08  7:14     ` Jason Wang
2022-03-08  7:27       ` Michael S. Tsirkin
2022-03-08  7:34         ` Jason Wang
2022-03-08  7:55           ` Michael S. Tsirkin
2022-03-08  8:15             ` Eugenio Perez Martin
2022-03-08  8:19               ` Michael S. Tsirkin
2022-03-08  8:20             ` Jason Wang
2022-03-08 10:46               ` Michael S. Tsirkin
2022-03-08 13:23                 ` Jason Wang
2022-03-08 10:48               ` Michael S. Tsirkin
2022-03-08 11:37                 ` Eugenio Perez Martin
2022-03-08 12:16                   ` Michael S. Tsirkin
2022-03-08 13:56                     ` Eugenio Perez Martin
2022-03-09  3:38                     ` Jason Wang
2022-03-09  7:30                       ` Michael S. Tsirkin
2022-03-09  7:45                         ` Jason Wang
2022-03-08  7:49         ` Eugenio Perez Martin
