From: Hawkins Jiawei <yin31149@gmail.com>
To: jasowang@redhat.com, mst@redhat.com, eperezma@redhat.com
Cc: qemu-devel@nongnu.org, yin31149@gmail.com, leiyang@redhat.com,
18801353760@163.com
Subject: [PATCH v4 2/8] vdpa: Use iovec for vhost_vdpa_net_cvq_add()
Date: Tue, 29 Aug 2023 13:54:44 +0800
Message-ID: <5e090c2af922192f5897ba7072df4d9e4754e1e0.1693287885.git.yin31149@gmail.com>
In-Reply-To: <cover.1693287885.git.yin31149@gmail.com>
Next patches in this series will no longer perform an
immediate poll and check of the device's used buffers
for each CVQ state load command. Consequently, there
will be multiple buffers pending in the shadow
VirtQueue at once, so every control command must have
its own buffer.

To achieve this, this patch refactors
vhost_vdpa_net_cvq_add() to accept `struct iovec`
arguments, which decouples the control commands from
`s->cvq_cmd_out_buffer` and `s->status` and allows
them to use their own buffers.
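
As an illustration, a future caller that owns its buffers could
invoke the refactored helper roughly like this (a minimal sketch,
not part of this patch; `my_cmd_buf` and `my_status_buf` are
hypothetical caller-owned buffers already mapped for the device):

    const struct iovec out = {
        .iov_base = my_cmd_buf,
        .iov_len = out_len,
    };
    const struct iovec in = {
        .iov_base = my_status_buf,
        .iov_len = sizeof(virtio_net_ctrl_ack),
    };
    ssize_t r = vhost_vdpa_net_cvq_add(s, &out, 1, &in, 1);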
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
---
v4:
- split `in` into `vdpa_in` and `model_in` instead of reusing `in`
in vhost_vdpa_net_handle_ctrl_avail(), as suggested by Eugenio
v3: https://lore.kernel.org/all/b1d473772ec4bcb254ab0d12430c9b1efe758606.1689748694.git.yin31149@gmail.com/
net/vhost-vdpa.c | 39 ++++++++++++++++++++++-----------------
1 file changed, 22 insertions(+), 17 deletions(-)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 3acda8591a..a875767ee9 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -596,22 +596,14 @@ static void vhost_vdpa_net_cvq_stop(NetClientState *nc)
vhost_vdpa_net_client_stop(nc);
}
-static ssize_t vhost_vdpa_net_cvq_add(VhostVDPAState *s, size_t out_len,
- size_t in_len)
+static ssize_t vhost_vdpa_net_cvq_add(VhostVDPAState *s,
+ const struct iovec *out_sg, size_t out_num,
+ const struct iovec *in_sg, size_t in_num)
{
- /* Buffers for the device */
- const struct iovec out = {
- .iov_base = s->cvq_cmd_out_buffer,
- .iov_len = out_len,
- };
- const struct iovec in = {
- .iov_base = s->status,
- .iov_len = sizeof(virtio_net_ctrl_ack),
- };
VhostShadowVirtqueue *svq = g_ptr_array_index(s->vhost_vdpa.shadow_vqs, 0);
int r;
- r = vhost_svq_add(svq, &out, 1, &in, 1, NULL);
+ r = vhost_svq_add(svq, out_sg, out_num, in_sg, in_num, NULL);
if (unlikely(r != 0)) {
if (unlikely(r == -ENOSPC)) {
qemu_log_mask(LOG_GUEST_ERROR, "%s: No space on device queue\n",
@@ -637,6 +629,15 @@ static ssize_t vhost_vdpa_net_load_cmd(VhostVDPAState *s, uint8_t class,
.cmd = cmd,
};
size_t data_size = iov_size(data_sg, data_num);
+ /* Buffers for the device */
+ const struct iovec out = {
+ .iov_base = s->cvq_cmd_out_buffer,
+ .iov_len = sizeof(ctrl) + data_size,
+ };
+ const struct iovec in = {
+ .iov_base = s->status,
+ .iov_len = sizeof(*s->status),
+ };
assert(data_size < vhost_vdpa_net_cvq_cmd_page_len() - sizeof(ctrl));
@@ -647,8 +648,7 @@ static ssize_t vhost_vdpa_net_load_cmd(VhostVDPAState *s, uint8_t class,
iov_to_buf(data_sg, data_num, 0,
s->cvq_cmd_out_buffer + sizeof(ctrl), data_size);
- return vhost_vdpa_net_cvq_add(s, data_size + sizeof(ctrl),
- sizeof(virtio_net_ctrl_ack));
+ return vhost_vdpa_net_cvq_add(s, &out, 1, &in, 1);
}
static int vhost_vdpa_net_load_mac(VhostVDPAState *s, const VirtIONet *n)
@@ -1222,10 +1222,15 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
.iov_base = s->cvq_cmd_out_buffer,
};
/* in buffer used for device model */
- const struct iovec in = {
+ const struct iovec model_in = {
.iov_base = &status,
.iov_len = sizeof(status),
};
+ /* in buffer used for vdpa device */
+ const struct iovec vdpa_in = {
+ .iov_base = s->status,
+ .iov_len = sizeof(*s->status),
+ };
ssize_t dev_written = -EINVAL;
out.iov_len = iov_to_buf(elem->out_sg, elem->out_num, 0,
@@ -1259,7 +1264,7 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
goto out;
}
} else {
- dev_written = vhost_vdpa_net_cvq_add(s, out.iov_len, sizeof(status));
+ dev_written = vhost_vdpa_net_cvq_add(s, &out, 1, &vdpa_in, 1);
if (unlikely(dev_written < 0)) {
goto out;
}
@@ -1275,7 +1280,7 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
}
status = VIRTIO_NET_ERR;
- virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, &out, 1);
+ virtio_net_handle_ctrl_iov(svq->vdev, &model_in, 1, &out, 1);
if (status != VIRTIO_NET_OK) {
error_report("Bad CVQ processing in model");
}
--
2.25.1