From: Jason Wang <jasowang@redhat.com>
To: Eugenio Perez Martin <eperezma@redhat.com>
Cc: Laurent Vivier <lvivier@redhat.com>,
	Parav Pandit <parav@mellanox.com>, Cindy Lu <lulu@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	qemu-level <qemu-devel@nongnu.org>,
	Gautam Dawar <gdawar@xilinx.com>,
	Markus Armbruster <armbru@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	Harpreet Singh Anand <hanand@xilinx.com>,
	Xiao W Wang <xiao.w.wang@intel.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Eli Cohen <eli@mellanox.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Zhu Lingshan <lingshan.zhu@intel.com>,
	virtualization <virtualization@lists.linux-foundation.org>,
	Eric Blake <eblake@redhat.com>
Subject: Re: [PATCH 17/31] vdpa: adapt vhost_ops callbacks to svq
Date: Mon, 21 Feb 2022 15:15:03 +0800
Message-ID: <02f29b62-6656-ba2f-1475-251b16e0e978@redhat.com>
In-Reply-To: <CAJaqyWfEEg2PKgxBAFwYhF9LD1oDtwVYXSjHHnCbstT3dvL2GA@mail.gmail.com>


On 2022/2/18 01:13, Eugenio Perez Martin wrote:
> On Tue, Feb 8, 2022 at 4:58 AM Jason Wang <jasowang@redhat.com> wrote:
>>
>> On 2022/2/1 02:58, Eugenio Perez Martin wrote:
>>> On Sun, Jan 30, 2022 at 5:03 AM Jason Wang <jasowang@redhat.com> wrote:
>>>> On 2022/1/22 04:27, Eugenio Pérez wrote:
>>>>> First half of the buffers forwarding part, preparing vhost-vdpa
>>>>> callbacks to SVQ to offer it. QEMU cannot enable it at this moment, so
>>>>> this is effectively dead code at the moment, but it helps to reduce
>>>>> patch size.
>>>>>
>>>>> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
>>>>> ---
>>>>>     hw/virtio/vhost-shadow-virtqueue.h |   2 +-
>>>>>     hw/virtio/vhost-shadow-virtqueue.c |  21 ++++-
>>>>>     hw/virtio/vhost-vdpa.c             | 133 ++++++++++++++++++++++++++---
>>>>>     3 files changed, 143 insertions(+), 13 deletions(-)
>>>>>
>>>>> diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
>>>>> index 035207a469..39aef5ffdf 100644
>>>>> --- a/hw/virtio/vhost-shadow-virtqueue.h
>>>>> +++ b/hw/virtio/vhost-shadow-virtqueue.h
>>>>> @@ -35,7 +35,7 @@ size_t vhost_svq_device_area_size(const VhostShadowVirtqueue *svq);
>>>>>
>>>>>     void vhost_svq_stop(VhostShadowVirtqueue *svq);
>>>>>
>>>>> -VhostShadowVirtqueue *vhost_svq_new(void);
>>>>> +VhostShadowVirtqueue *vhost_svq_new(uint16_t qsize);
>>>>>
>>>>>     void vhost_svq_free(VhostShadowVirtqueue *vq);
>>>>>
>>>>> diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
>>>>> index f129ec8395..7c168075d7 100644
>>>>> --- a/hw/virtio/vhost-shadow-virtqueue.c
>>>>> +++ b/hw/virtio/vhost-shadow-virtqueue.c
>>>>> @@ -277,9 +277,17 @@ void vhost_svq_stop(VhostShadowVirtqueue *svq)
>>>>>     /**
>>>>>      * Creates vhost shadow virtqueue, and instruct vhost device to use the shadow
>>>>>      * methods and file descriptors.
>>>>> + *
>>>>> + * @qsize Shadow VirtQueue size
>>>>> + *
>>>>> + * Returns the new virtqueue or NULL.
>>>>> + *
>>>>> + * In case of error, reason is reported through error_report.
>>>>>      */
>>>>> -VhostShadowVirtqueue *vhost_svq_new(void)
>>>>> +VhostShadowVirtqueue *vhost_svq_new(uint16_t qsize)
>>>>>     {
>>>>> +    size_t desc_size = sizeof(vring_desc_t) * qsize;
>>>>> +    size_t device_size, driver_size;
>>>>>         g_autofree VhostShadowVirtqueue *svq = g_new0(VhostShadowVirtqueue, 1);
>>>>>         int r;
>>>>>
>>>>> @@ -300,6 +308,15 @@ VhostShadowVirtqueue *vhost_svq_new(void)
>>>>>         /* Placeholder descriptor, it should be deleted at set_kick_fd */
>>>>>         event_notifier_init_fd(&svq->svq_kick, INVALID_SVQ_KICK_FD);
>>>>>
>>>>> +    svq->vring.num = qsize;
>>>> I wonder if this is the best. E.g. some hardware can support up to 32K
>>>> queue size. So this will probably end up with:
>>>>
>>>> 1) SVQ use 32K queue size
>>>> 2) hardware queue uses 256
>>>>
>>> In that case SVQ vring queue size will be 32K and guest's vring can
>>> negotiate any number with SVQ equal or less than 32K,
>>
>> Sorry for being unclear; what I meant is actually:
>>
>> 1) SVQ uses 32K queue size
>>
>> 2) guest vq uses 256
>>
>> This looks like a burden that needs extra logic and may damage the
>> performance.
>>
> Still not getting this point.
>
> An available guest buffer, although contiguous in GPA/GVA, can expand
> into multiple buffers if it's not contiguous in qemu's VA (by the while
> loop in virtqueue_map_desc [1]). In that scenario it is better to have
> "plenty" of SVQ buffers.


Yes, but this case should be rare. So in this case we should deal with the
SVQ overrun, that is:

1) SVQ is full
2) guest VQ isn't

We need to:

1) check the available buffer slots
2) disable guest kick and wait for the used buffers

But it looks to me like the current code is not ready to deal with this case?
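
A minimal sketch of what that handling could look like (hypothetical
structure and helper names, not the current SVQ code): stop reacting to
guest kicks while the shadow ring is full, and resume forwarding once the
device returns used buffers:

#include <stdbool.h>
#include <stdint.h>

typedef struct SVQSketch {
    uint16_t num;              /* shadow ring size */
    uint16_t free_slots;       /* descriptors currently free in the SVQ */
    uint16_t guest_avail_idx;  /* last guest avail idx we processed */
    bool kick_enabled;         /* whether guest notifications are handled */
} SVQSketch;

/* 1) check the available buffer slots before exposing a chain */
static bool svq_try_add(SVQSketch *svq, uint16_t needed_slots)
{
    if (needed_slots > svq->free_slots) {
        /* 2) SVQ is full: disable guest kick and wait for used buffers */
        svq->kick_enabled = false;
        return false;
    }
    svq->free_slots -= needed_slots;
    /* ...write the chain into the shadow ring and notify the device... */
    return true;
}

/* Called when the device marks buffers as used. */
static void svq_reclaim_used(SVQSketch *svq, uint16_t reclaimed_slots)
{
    svq->free_slots += reclaimed_slots;
    if (!svq->kick_enabled && svq->free_slots > 0) {
        /* Room again: resume forwarding from the saved guest_avail_idx. */
        svq->kick_enabled = true;
    }
}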


>
> I'm ok if we decide to put an upper limit though, or if we decide not
> to handle this situation. But we would leave out valid virtio drivers.
> Maybe set a fixed upper limit (1024?)? Or add another parameter
> (x-svq-size-n=N)?
>
> If you mean we lose performance because memory gets more sparse I
> think the only possibility is to limit that way.


If the guest is not using 32K, having a 32K SVQ may put extra stress on
the cache, since we will end up with a pretty large working set.


>
>> And this can lead to another interesting situation:
>>
>> 1) SVQ uses 256
>>
>> 2) guest vq uses 1024
>>
>> Where a lot more SVQ logic is needed.
>>
> If we agree that a guest descriptor can expand into multiple SVQ
> descriptors, this should already be handled by the previous logic too.
>
> But this should only happen if qemu is launched with a "bad"
> cmdline, shouldn't it?


It seems this can happen when we use -device
virtio-net-pci,tx_queue_size=1024 with a 256-entry vp_vdpa device, at least?


>
> If I run that example with vp_vdpa, L0 qemu will happily accept 1024
> as a queue size [2]. But if the vdpa device maximum queue size is
> effectively 256, this will result in an error: we're not exposing it
> to the guest at any moment except through qemu's cmdline.
>
>>> including 256.
>>> Is that what you mean?
>>
>> I mean, it looks to me the logic will be much simpler if we just
>> allocate the shadow virtqueue with the size the guest can see (guest
>> vring).
>>
>> Then we don't need to think about whether the difference in queue size
>> can have any side effects.
>>
> I think that we cannot avoid that extra logic unless we force GPA to
> be contiguous in IOVA. If we are sure the guest's buffers cannot span
> more than one descriptor in SVQ, then yes, we can simplify things. If
> not, I think we are forced to carry all of it.


Yes, I agree, the code should be robust enough to handle any case.

Thanks


>
> But if we prove it I'm not opposed to simplifying things and making
> head at SVQ == head at guest.
>
> Thanks!
>
> [1] https://gitlab.com/qemu-project/qemu/-/blob/17e31340/hw/virtio/virtio.c#L1297
> [2] But that's not the whole story: I've been running with a limited
> number of tx descriptors because of virtio_net_max_tx_queue_size, which
> predates vdpa. I'll send a patch to also remove that limit.
>
>>> If by hardware queues you mean the guest's vring, I'm not sure why it is
>>> "probably 256". I'd say that in that case, with the virtio-net kernel
>>> driver, the ring size will be the same as the one the device exports, for
>>> example, won't it?
>>>
>>> The implementation should support any combination of sizes, but the
>>> ring size exposed to the guest is never bigger than the hardware one.
>>>
>>>> ? Or SVQ can stick to 256, but will this cause trouble if we want
>>>> to add event index support?
>>>>
>>> I think we should not have any problem with event idx. If you mean
>>> that the guest could mark more buffers available than SVQ vring's
>>> size, that should not happen because there must be fewer entries in the
>>> guest than in SVQ.
>>>
>>> But if I understood you correctly, a similar situation could happen if
>>> a guest's contiguous buffer is scattered across many of qemu's VA chunks.
>>> Even if that happened, the situation should be ok too: SVQ knows
>>> the guest's avail idx and, if SVQ is full, it will continue forwarding
>>> avail buffers when the device uses more buffers.
>>>
>>> Does that make sense to you?
>>
>> Yes.
>>
>> Thanks
>>
