From: Jason Wang <jasowang@redhat.com>
To: Eugenio Perez Martin <eperezma@redhat.com>
Cc: qemu-devel <qemu-devel@nongnu.org>
Subject: Re: [RFC PATCH v4 18/20] vhost: Add VhostIOVATree
Date: Thu, 21 Oct 2021 16:12:00 +0800	[thread overview]
Message-ID: <CACGkMEuSCSaF4LDUe+X+KFuxGupYDUuNTO=FQUo5oir9pM=b8w@mail.gmail.com> (raw)
In-Reply-To: <CAJaqyWeqK1tyHXuVub0G34s=tVPV9fq5BC7g5-Db4HAcZzyOTw@mail.gmail.com>

On Thu, Oct 21, 2021 at 3:03 PM Eugenio Perez Martin
<eperezma@redhat.com> wrote:
>
> On Thu, Oct 21, 2021 at 4:34 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> > On Wed, Oct 20, 2021 at 8:07 PM Eugenio Perez Martin
> > <eperezma@redhat.com> wrote:
> > >
> > > On Wed, Oct 20, 2021 at 11:01 AM Jason Wang <jasowang@redhat.com> wrote:
> > > >
> > > > On Wed, Oct 20, 2021 at 3:54 PM Eugenio Perez Martin
> > > > <eperezma@redhat.com> wrote:
> > > > >
> > > > > On Tue, Oct 19, 2021 at 11:23 AM Jason Wang <jasowang@redhat.com> wrote:
> > > > > >
> > > > > > On Tue, Oct 19, 2021 at 4:32 PM Jason Wang <jasowang@redhat.com> wrote:
> > > > > > >
> > > > > > >
> > > > > > > On 2021/10/1 3:06 PM, Eugenio Pérez wrote:
> > > > > > > > This tree is able to look up a translated address from an
> > > > > > > > IOVA address.
> > > > > > > >
> > > > > > > > At first glance it is similar to util/iova-tree. However, SVQ
> > > > > > > > working on devices with limited IOVA space needs more
> > > > > > > > capabilities, like allocating IOVA chunks or performing
> > > > > > > > reverse translations (qemu addresses to iova).
> > > > > > >
> > > > > > >
> > > > > > > I don't see any reverse translation used in the shadow code.
> > > > > > > Or is there anything I missed?
> > > > > >
> > > > > > Ok, it looks to me that it is used in the iova allocator. But I
> > > > > > think it's better to decouple it into an independent allocator
> > > > > > instead of the vhost iova tree.
> > > > > >
> > > > >
> > > > > Reverse translation is used every time a buffer is made available,
> > > > > since buffer contents are not copied; only the descriptors are
> > > > > copied to the SVQ vring.
> > > >
> > > > I may be missing something, but I didn't see the code? Qemu knows
> > > > the VA of the virtqueue, and the VA of the VQ is stored in the
> > > > VirtQueueElem?
> > > >
> > >
> > > It's used in patch 20/20; could that be the misunderstanding? The
> > > function calling it is vhost_svq_translate_addr.
> > >
> > > Qemu knows the VA of the buffer, but it must offer a valid SVQ iova
> > > to the device. That is the translation I mean.
> >
> > Ok, I get you. So if I understand correctly, what you did is:
> >
> > 1) allocate IOVA during region_add
> > 2) perform VA->IOVA reverse lookup in handle_kick
> >
> > This should be fine, but here're some suggestions:
> >
> > 1) remove the assert(map) in vhost_svq_translate_addr() since the guest
> > can add e.g. a BAR address
>
> Wouldn't VirtQueue block them in the virtqueue_pop /
> address_space_read_* functions? I'm fine with removing it, but I would
> say it adds value against coding errors.

I think not. Though these addresses were excluded in
vhost_vdpa_listener_skipped_section(), for the Qemu memory core they are
valid addresses. Qemu emulates how the hardware works (e.g. PCI p2p), so
DMA to a BAR is allowed.
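To make the failure mode concrete, here is a minimal sketch of such a
translate helper. The names and the flat-array map are hypothetical (the
real vhost_svq_translate_addr() works on a tree); the point is only the
error handling: return a sentinel for an unmapped address, such as a BAR
the guest put in a descriptor, and let the caller reject it instead of
asserting.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical flat map entry: one VA-contiguous region and its SVQ IOVA. */
typedef struct DMAMapSketch {
    uintptr_t va;
    uint64_t iova;
    size_t size;
} DMAMapSketch;

/* Return the SVQ IOVA for a guest buffer VA, or UINT64_MAX when the
 * address is not in the map.  The caller must treat UINT64_MAX as a bad
 * descriptor rather than a fatal error. */
static uint64_t svq_translate_va(const DMAMapSketch *maps, size_t n,
                                 uintptr_t va)
{
    for (size_t i = 0; i < n; i++) {
        if (va >= maps[i].va && va < maps[i].va + maps[i].size) {
            return maps[i].iova + (va - maps[i].va);
        }
    }
    return UINT64_MAX; /* not mapped: handle, don't assert */
}
```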

>
> > 2) we probably need a better name vhost_iova_tree_alloc(), maybe
> > "vhost_iova_tree_map_alloc()"
> >
>
> Ok I will change for the next version.
>
> > There's actually another method.
> >
> > 1) don't do IOVA/map allocation in region_add()
> > 2) do the allocation in handle_kick(), then we know the IOVA so no
> > reverse lookup
> >
> > The advantage is that this can work for the case of vIOMMU. And they
> > should perform the same:
> >
> > 1) your method avoids the iova allocation per sg
> > 2) my method avoids the reverse lookup per sg
> >
>
> It's somewhat doable, but at this moment we would be replacing a tree
> search with a linear insertion.
>
> I would say that the guest's IOVA -> qemu vaddr part works with no
> change for vIOMMU, since VirtQueue's virtqueue_pop already gives us the
> vaddr even in the case of vIOMMU.

So in this case:

1) listener gives us GPA->host IOVA (host IOVA is allocated per GPA)
2) virtqueue_pop gives us guest IOVA -> VA

We still need extra logic to look up the vIOMMU to get the guest IOVA ->
GPA mapping; only then can we know the host IOVA.

If we allocate after virtqueue_pop(), we can follow the same logic as
without vIOMMU: just allocate a host IOVA and all is done.
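As a sketch of that alternative (hypothetical names; a real allocator
would also honor the device's reported iova range and free entries when
buffers are used): allocating the host IOVA at avail time means the IOVA
is produced right next to the VA it maps, so no reverse lookup is needed
and the same path works with and without vIOMMU.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical bump allocator for host IOVA, bounded by the usable
 * range.  handle_kick would call this per buffer, map VA -> iova into
 * the device, and put the returned iova in the SVQ descriptor. */
typedef struct IOVAAllocSketch {
    uint64_t next;  /* next free host IOVA */
    uint64_t last;  /* end of the usable range (exclusive) */
} IOVAAllocSketch;

static bool iova_alloc_sketch(IOVAAllocSketch *a, uint64_t size,
                              uint64_t *iova)
{
    if (size == 0 || a->last - a->next < size) {
        return false;           /* range exhausted or bad size */
    }
    *iova = a->next;
    a->next += size;
    return true;
}
```

A bump allocator never reclaims, which is exactly why per-sg allocation
needs a smarter allocator to be competitive with the tree-based method.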

> The only change I would add for that case
> is the SVQ -> device map/unmapping part, so the device cannot access
> random addresses but only the exposed ones. I'm assuming that part is
> O(1).
>
> This way, we already have a tree with all the possible guest addresses,
> and we only need to look up its SVQ iova -> vaddr translation. This is
> an O(log(N)) operation,

Yes, but it requires traversing the vIOMMU page table, which should be
slower than our own iova tree?

> and read only, so it's
> easily parallelizable when we run each SVQ in its own thread (if
> needed).

Yes, this is because the host IOVA was allocated before by the memory listener.

> The only thing left is to expose that with an iommu miss
> (O(1)) and unmap it on used-buffer processing (also O(1)). The
> dominant operation remains VirtQueue's own lookup of the guest's
> IOVA -> GPA, which I'm assuming is already well optimized and
> will benefit from future optimizations, since qemu's memory system is
> frequently used.
>
> To optimize your use case we would need to add a custom (and smarter
> than the current) allocator to SVQ. I've been looking for ways to reuse
> glibc or similar in our own arenas, but with no luck. It will be code
> that SVQ needs to maintain by and for itself anyway.

The benefit is to have iova allocation separate from the tree.
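For illustration, a sketch of what such a decoupled allocator could look
like (hypothetical names, first-fit over a fixed array of free ranges):
it tracks only which IOVA ranges are free and knows nothing about VA
translations, which stay in the tree.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define MAX_RANGES 16

/* Hypothetical standalone range allocator.  A VhostIOVATree could call
 * it for allocation while keeping a single tree purely for
 * iova <-> vaddr translation. */
typedef struct RangeAllocSketch {
    struct { uint64_t start, end; } free[MAX_RANGES]; /* end exclusive */
    size_t n;
} RangeAllocSketch;

static bool range_alloc_sketch(RangeAllocSketch *a, uint64_t size,
                               uint64_t *iova)
{
    for (size_t i = 0; i < a->n; i++) {
        if (a->free[i].end - a->free[i].start >= size) {
            *iova = a->free[i].start;   /* first fit */
            a->free[i].start += size;
            return true;
        }
    }
    return false;                       /* no range large enough */
}
```

Seeding the free list from the range vhost-vDPA reports would also make
the allocator return only IOVAs the device supports, without a separate
post-allocation range check.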

>
> In either case it should not be hard to switch to your method, just a
> few call changes in the future, if we achieve a faster allocator.
>
> Would that make sense?

Yes, feel free to choose whichever method you wish or find simpler in the
next series.

>
> > >
> > > > >
> > > > > In the next revision I will send, all the limits are copied to
> > > > > the vhost iova tree, defined at its creation in
> > > > > vhost_iova_tree_new(). They are kept outside of util/iova-tree
> > > > > and only sent to the latter at allocation time.
> > > > >
> > > > > Since vhost_iova_tree has its own vhost_iova_tree_alloc(), which
> > > > > wraps iova_tree_alloc() [1], the limits could be kept in
> > > > > vhost-vdpa and made an argument of vhost_iova_tree_alloc(). But
> > > > > I'm not sure if that is what you are proposing or I'm missing
> > > > > something.
> > > >
> > > > If the reverse translation is only used in iova allocation, I meant to
> > > > split the logic of IOVA allocation itself.
> > > >
> > >
> > > Still don't understand it, sorry :). In SVQ setup we allocate an
> > > iova address for every guest GPA address its driver can use. After
> > > that there should be no allocation unless memory is hotplugged.
> > >
> > > So the limits are only needed precisely at allocation time. Not sure
> > > if that is what you mean here, but to first allocate and then check
> > > whether it is within the range could lead to false negatives, since
> > > there could be a valid range *in* the address space but the iova
> > > allocator returned us another range that fell outside it. How could
> > > we know the cause if the allocator is not using the range itself?
> >
> > See my above reply. And we can teach the iova allocator to return the
> > IOVA in the range that vhost-vDPA supports.
> >
>
> Ok,
>
> For the next series it will be that way. I'm pretty sure we are
> aligned in this part, but the lack of code in this series makes it
> very hard to discuss it :).

Fine. Let's see.

Thanks

>
> Thanks!
>
> > Thanks
> >
> > >
> > > > >
> > > > > Either way, I think it is harder to talk about this specific case
> > > > > without code, since this one still does not address the limits. Would
> > > > > you prefer me to send another RFC in WIP quality, with *not* all
> > > > > comments addressed? I would say that there is not a lot of pending
> > > > > work to send the next one, but it might be easier for all of us.
> > > >
> > > > I'd prefer to try to address them all, otherwise it's not easy to see
> > > > what is missing.
> > > >
> > >
> > > Got it, I will do it that way then!
> > >
> > > Thanks!
> > >
> > > > Thanks
> > > >
> > > > >
> > > > > Thanks!
> > > > >
> > > > > [1] This util/iova-tree method will be proposed in the next
> > > > > series, and vhost_iova_tree wraps it since it needs to keep both
> > > > > trees in sync: iova -> qemu vaddr for iova allocation, and the
> > > > > reverse one to translate available buffers.
> > > > >
> > > > > > Thanks
> > > > > >
> > > > >
> > > >
> > >
> >
>


