linux-kernel.vger.kernel.org archive mirror
* [PATCH] vhost_vdpa: fix unmap process in no-batch mode
@ 2023-04-10 15:01 Cindy Lu
  2023-04-11  3:09 ` Jason Wang
  0 siblings, 1 reply; 6+ messages in thread
From: Cindy Lu @ 2023-04-10 15:01 UTC (permalink / raw)
  To: lulu, jasowang, mst, kvm, linux-kernel, virtualization, netdev

In no-batch mode, the message sequence does not begin with
VHOST_IOTLB_BATCH_BEGIN, so the address space may not exist by the time
the unmap request arrives. Add VHOST_IOTLB_INVALIDATE to the message
types that look up (and allocate if needed) the vhost_vdpa_as, handling
it the same way as VHOST_IOTLB_UPDATE.
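
For reference, a no-batch unmap from userspace is just a single write of
the invalidate message to the vhost-vdpa fd. A minimal, untested sketch
(fd setup omitted; the asid field assumes a uapi header with ASID
support, and the iova/size values are only examples):

#include <string.h>
#include <unistd.h>
#include <linux/vhost_types.h>

/* Send one VHOST_IOTLB_INVALIDATE with no BATCH_BEGIN/END around it,
 * i.e. the "no-batch" flow described above.
 */
static int unmap_no_batch(int vdpa_fd, __u64 iova, __u64 size)
{
    struct vhost_msg_v2 msg;

    memset(&msg, 0, sizeof(msg));
    msg.type = VHOST_IOTLB_MSG_V2;
    msg.asid = 0;                        /* default address space */
    msg.iotlb.iova = iova;
    msg.iotlb.size = size;
    msg.iotlb.type = VHOST_IOTLB_INVALIDATE;

    /* No VHOST_IOTLB_BATCH_BEGIN was written before this, so this
     * invalidate may be the first message the kernel sees for the asid.
     */
    if (write(vdpa_fd, &msg, sizeof(msg)) != sizeof(msg))
        return -1;
    return 0;
}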

Signed-off-by: Cindy Lu <lulu@redhat.com>
---
 drivers/vhost/vdpa.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 7be9d9d8f01c..32636a02a0ab 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -1074,6 +1074,7 @@ static int vhost_vdpa_process_iotlb_msg(struct vhost_dev *dev, u32 asid,
 		goto unlock;
 
 	if (msg->type == VHOST_IOTLB_UPDATE ||
+	    msg->type == VHOST_IOTLB_INVALIDATE ||
 	    msg->type == VHOST_IOTLB_BATCH_BEGIN) {
 		as = vhost_vdpa_find_alloc_as(v, asid);
 		if (!as) {
-- 
2.34.3


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: [PATCH] vhost_vdpa: fix unmap process in no-batch mode
  2023-04-10 15:01 [PATCH] vhost_vdpa: fix unmap process in no-batch mode Cindy Lu
@ 2023-04-11  3:09 ` Jason Wang
  2023-04-11  7:28   ` Cindy Lu
  0 siblings, 1 reply; 6+ messages in thread
From: Jason Wang @ 2023-04-11  3:09 UTC (permalink / raw)
  To: Cindy Lu; +Cc: mst, kvm, linux-kernel, virtualization, netdev

On Mon, Apr 10, 2023 at 11:01 PM Cindy Lu <lulu@redhat.com> wrote:
>
> While using the no-batch mode, the process will not begin with
> VHOST_IOTLB_BATCH_BEGIN, so we need to add the
> VHOST_IOTLB_INVALIDATE to get vhost_vdpa_as, the process is the
> same as VHOST_IOTLB_UPDATE
>
> Signed-off-by: Cindy Lu <lulu@redhat.com>
> ---
>  drivers/vhost/vdpa.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> index 7be9d9d8f01c..32636a02a0ab 100644
> --- a/drivers/vhost/vdpa.c
> +++ b/drivers/vhost/vdpa.c
> @@ -1074,6 +1074,7 @@ static int vhost_vdpa_process_iotlb_msg(struct vhost_dev *dev, u32 asid,
>                 goto unlock;
>
>         if (msg->type == VHOST_IOTLB_UPDATE ||
> +           msg->type == VHOST_IOTLB_INVALIDATE ||

I'm not sure I get it here: invalidation doesn't need to create a new AS.

Or maybe you can post the userspace code that triggers this issue?

Thanks

>             msg->type == VHOST_IOTLB_BATCH_BEGIN) {
>                 as = vhost_vdpa_find_alloc_as(v, asid);
>                 if (!as) {
> --
> 2.34.3
>


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [PATCH] vhost_vdpa: fix unmap process in no-batch mode
  2023-04-11  3:09 ` Jason Wang
@ 2023-04-11  7:28   ` Cindy Lu
  2023-04-11  9:13     ` Jason Wang
  0 siblings, 1 reply; 6+ messages in thread
From: Cindy Lu @ 2023-04-11  7:28 UTC (permalink / raw)
  To: Jason Wang; +Cc: mst, kvm, linux-kernel, virtualization, netdev

On Tue, Apr 11, 2023 at 11:10 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Mon, Apr 10, 2023 at 11:01 PM Cindy Lu <lulu@redhat.com> wrote:
> >
> > While using the no-batch mode, the process will not begin with
> > VHOST_IOTLB_BATCH_BEGIN, so we need to add the
> > VHOST_IOTLB_INVALIDATE to get vhost_vdpa_as, the process is the
> > same as VHOST_IOTLB_UPDATE
> >
> > Signed-off-by: Cindy Lu <lulu@redhat.com>
> > ---
> >  drivers/vhost/vdpa.c | 1 +
> >  1 file changed, 1 insertion(+)
> >
> > diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> > index 7be9d9d8f01c..32636a02a0ab 100644
> > --- a/drivers/vhost/vdpa.c
> > +++ b/drivers/vhost/vdpa.c
> > @@ -1074,6 +1074,7 @@ static int vhost_vdpa_process_iotlb_msg(struct vhost_dev *dev, u32 asid,
> >                 goto unlock;
> >
> >         if (msg->type == VHOST_IOTLB_UPDATE ||
> > +           msg->type == VHOST_IOTLB_INVALIDATE ||
>
> I'm not sure I get here, invalidation doesn't need to create a new AS.
>
> Or maybe you can post the userspace code that can trigger this issue?
>
> Thanks
>
Sorry, I didn't write it clearly.
This issue can be reproduced with a vIOMMU in no-batch mode: when the
vIOMMU is enabled, QEMU flushes a large memory range by unmapping it,
and that range has not been mapped before, so the unmap fails:

qemu-system-x86_64: failed to write, fd=12, errno=14 (Bad address)
qemu-system-x86_64: vhost_vdpa_dma_unmap(0x7fa26d1dd190, 0x0,
0x80000000) = -5 (Bad address)
qemu-system-x86_64: failed to write, fd=12, errno=14 (Bad address)
....
In batch mode this operation begins with VHOST_IOTLB_BATCH_BEGIN, so it
doesn't hit this issue.
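
Roughly, the batch-mode sequence looks like this (sketch only; vdpa_fd
is assumed to be the vhost-vdpa device fd and the range is just an
example taken from the log above). The BATCH_BEGIN is what allocates
the AS before the invalidate is handled:

struct vhost_msg_v2 msg = { .type = VHOST_IOTLB_MSG_V2, .asid = 0 };

/* BATCH_BEGIN goes through vhost_vdpa_find_alloc_as(), so the address
 * space exists by the time the invalidate is processed.
 */
msg.iotlb.type = VHOST_IOTLB_BATCH_BEGIN;
write(vdpa_fd, &msg, sizeof(msg));

msg.iotlb.type = VHOST_IOTLB_INVALIDATE;
msg.iotlb.iova = 0x0;
msg.iotlb.size = 0x80000000;
write(vdpa_fd, &msg, sizeof(msg));

msg.iotlb.type = VHOST_IOTLB_BATCH_END;
write(vdpa_fd, &msg, sizeof(msg));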

Thanks
cindy
> >             msg->type == VHOST_IOTLB_BATCH_BEGIN) {
> >                 as = vhost_vdpa_find_alloc_as(v, asid);
> >                 if (!as) {
> > --
> > 2.34.3
> >
>


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [PATCH] vhost_vdpa: fix unmap process in no-batch mode
  2023-04-11  7:28   ` Cindy Lu
@ 2023-04-11  9:13     ` Jason Wang
  2023-04-12  6:31       ` Cindy Lu
  0 siblings, 1 reply; 6+ messages in thread
From: Jason Wang @ 2023-04-11  9:13 UTC (permalink / raw)
  To: Cindy Lu; +Cc: mst, kvm, linux-kernel, virtualization, netdev

On Tue, Apr 11, 2023 at 3:29 PM Cindy Lu <lulu@redhat.com> wrote:
>
> On Tue, Apr 11, 2023 at 11:10 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> > On Mon, Apr 10, 2023 at 11:01 PM Cindy Lu <lulu@redhat.com> wrote:
> > >
> > > While using the no-batch mode, the process will not begin with
> > > VHOST_IOTLB_BATCH_BEGIN, so we need to add the
> > > VHOST_IOTLB_INVALIDATE to get vhost_vdpa_as, the process is the
> > > same as VHOST_IOTLB_UPDATE
> > >
> > > Signed-off-by: Cindy Lu <lulu@redhat.com>
> > > ---
> > >  drivers/vhost/vdpa.c | 1 +
> > >  1 file changed, 1 insertion(+)
> > >
> > > diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> > > index 7be9d9d8f01c..32636a02a0ab 100644
> > > --- a/drivers/vhost/vdpa.c
> > > +++ b/drivers/vhost/vdpa.c
> > > @@ -1074,6 +1074,7 @@ static int vhost_vdpa_process_iotlb_msg(struct vhost_dev *dev, u32 asid,
> > >                 goto unlock;
> > >
> > >         if (msg->type == VHOST_IOTLB_UPDATE ||
> > > +           msg->type == VHOST_IOTLB_INVALIDATE ||
> >
> > I'm not sure I get here, invalidation doesn't need to create a new AS.
> >
> > Or maybe you can post the userspace code that can trigger this issue?
> >
> > Thanks
> >
> sorry I didn't write it clearly
> For this issue can reproduce in vIOMMU no-batch mode support because
> while the vIOMMU enabled, it will
> flash a large memory to unmap, and this memory are haven't been mapped
> before, so this unmapping will fail
>
> qemu-system-x86_64: failed to write, fd=12, errno=14 (Bad address)
> qemu-system-x86_64: vhost_vdpa_dma_unmap(0x7fa26d1dd190, 0x0,
> 0x80000000) = -5 (Bad address)

So if this is a simple unmap, which error condition did you hit in
vhost_vdpa_process_iotlb_msg()?

I think you need to trace it to see what happens (the relevant dispatch
is sketched below). For example:

1) does the code get past asid_to_iotlb()?
2) if not, ASID 0 has been deleted since all of its mappings have been unmapped

If ASID 0 has been completely unmapped, is there any reason we need to
unmap it again? And do we need to drop the vhost_vdpa_remove_as() call
from both

1) vhost_vdpa_unmap()
and
2) vhost_vdpa_process_iotlb_msg()
?
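
For reference, the dispatch being traced looks roughly like this
(abridged sketch of vhost_vdpa_process_iotlb_msg(), not the exact
upstream code):

if (msg->type == VHOST_IOTLB_UPDATE ||
    msg->type == VHOST_IOTLB_BATCH_BEGIN) {
    /* only these two types may allocate a new AS */
    as = vhost_vdpa_find_alloc_as(v, asid);
    if (!as) {
        r = -EINVAL;
        goto unlock;
    }
    iotlb = &as->iotlb;
} else {
    /* INVALIDATE only looks the AS up */
    iotlb = asid_to_iotlb(v, asid);
}

if (!iotlb) {
    /* AS already removed: the message fails here, and IIRC
     * vhost_chr_write_iter() reports any handler error to userspace
     * as -EFAULT, hence the "Bad address" in the QEMU log above.
     */
    r = -EINVAL;
    goto unlock;
}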

Thanks

> qemu-system-x86_64: failed to write, fd=12, errno=14 (Bad address)
> ....
> in batch mode this operation will begin with VHOST_IOTLB_BATCH_BEGIN,
> so don't have this issue
>
> Thanks
> cindy
> > >             msg->type == VHOST_IOTLB_BATCH_BEGIN) {
> > >                 as = vhost_vdpa_find_alloc_as(v, asid);
> > >                 if (!as) {
> > > --
> > > 2.34.3
> > >
> >
>


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [PATCH] vhost_vdpa: fix unmap process in no-batch mode
  2023-04-11  9:13     ` Jason Wang
@ 2023-04-12  6:31       ` Cindy Lu
  2023-04-12  6:37         ` Jason Wang
  0 siblings, 1 reply; 6+ messages in thread
From: Cindy Lu @ 2023-04-12  6:31 UTC (permalink / raw)
  To: Jason Wang; +Cc: mst, kvm, linux-kernel, virtualization, netdev

On Tue, Apr 11, 2023 at 5:14 PM Jason Wang <jasowang@redhat.com> wrote:
>
> On Tue, Apr 11, 2023 at 3:29 PM Cindy Lu <lulu@redhat.com> wrote:
> >
> > On Tue, Apr 11, 2023 at 11:10 AM Jason Wang <jasowang@redhat.com> wrote:
> > >
> > > On Mon, Apr 10, 2023 at 11:01 PM Cindy Lu <lulu@redhat.com> wrote:
> > > >
> > > > While using the no-batch mode, the process will not begin with
> > > > VHOST_IOTLB_BATCH_BEGIN, so we need to add the
> > > > VHOST_IOTLB_INVALIDATE to get vhost_vdpa_as, the process is the
> > > > same as VHOST_IOTLB_UPDATE
> > > >
> > > > Signed-off-by: Cindy Lu <lulu@redhat.com>
> > > > ---
> > > >  drivers/vhost/vdpa.c | 1 +
> > > >  1 file changed, 1 insertion(+)
> > > >
> > > > diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> > > > index 7be9d9d8f01c..32636a02a0ab 100644
> > > > --- a/drivers/vhost/vdpa.c
> > > > +++ b/drivers/vhost/vdpa.c
> > > > @@ -1074,6 +1074,7 @@ static int vhost_vdpa_process_iotlb_msg(struct vhost_dev *dev, u32 asid,
> > > >                 goto unlock;
> > > >
> > > >         if (msg->type == VHOST_IOTLB_UPDATE ||
> > > > +           msg->type == VHOST_IOTLB_INVALIDATE ||
> > >
> > > I'm not sure I get here, invalidation doesn't need to create a new AS.
> > >
> > > Or maybe you can post the userspace code that can trigger this issue?
> > >
> > > Thanks
> > >
> > sorry I didn't write it clearly
> > For this issue can reproduce in vIOMMU no-batch mode support because
> > while the vIOMMU enabled, it will
> > flash a large memory to unmap, and this memory are haven't been mapped
> > before, so this unmapping will fail
> >
> > qemu-system-x86_64: failed to write, fd=12, errno=14 (Bad address)
> > qemu-system-x86_64: vhost_vdpa_dma_unmap(0x7fa26d1dd190, 0x0,
> > 0x80000000) = -5 (Bad address)
>
> So if this is a simple unmap, which error condition had you met in
> vhost_vdpa_process_iotlb_msg()?
>
> I think you need to trace to see what happens. For example:
>
This happens when the vIOMMU is enabled and the vdpa-backed device in
the guest is bound to vfio-pci to run testpmd: QEMU unmaps the whole
memory range that was in use and then maps the IOMMU MR section. That
range is much larger than the memory actually mapped to the vdpa device
(what was mapped through the non-IOMMU MR).
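
The message order from QEMU at that point is roughly the following
(sketch only, reusing the no-batch vhost_msg_v2 setup; values are taken
from the log earlier in the thread):

/* 1) invalidate the whole old GPA range, with no batch markers; if the
 *    AS has already been removed, this is the write that fails.
 */
msg.iotlb.type = VHOST_IOTLB_INVALIDATE;
msg.iotlb.iova = 0x0;
msg.iotlb.size = 0x80000000;
write(vdpa_fd, &msg, sizeof(msg));

/* 2) then map the IOMMU MR section with one or more updates */
msg.iotlb.type = VHOST_IOTLB_UPDATE;
/* msg.iotlb.iova/size/uaddr/perm for the new mappings ... */
write(vdpa_fd, &msg, sizeof(msg));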

> 1) can the code pass asid_to_iotlb()
> 2) if not, ASID 0 has been deleted since all the mappings have been unmapped
>
> if ASID 0 has been completely unmap, any reason we need to unmap it
> again? And do we need to drop the vhost_vdpa_remove_as() from both
>

> 1) vhost_vdpa_unmap()
> and
> 2) vhost_vdpa_process_iotlb_msg()
> ?
>
> Thanks
>
The code does reach asid_to_iotlb(), but the iotlb it returns is NULL
in this situation, so vhost_vdpa_process_iotlb_msg() returns a failure,
and this causes the mapping in QEMU to fail.

thanks
cindy

> > qemu-system-x86_64: failed to write, fd=12, errno=14 (Bad address)
> > ....
> > in batch mode this operation will begin with VHOST_IOTLB_BATCH_BEGIN,
> > so don't have this issue
> >
> > Thanks
> > cindy
> > > >             msg->type == VHOST_IOTLB_BATCH_BEGIN) {
> > > >                 as = vhost_vdpa_find_alloc_as(v, asid);
> > > >                 if (!as) {
> > > > --
> > > > 2.34.3
> > > >
> > >
> >
>


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [PATCH] vhost_vdpa: fix unmap process in no-batch mode
  2023-04-12  6:31       ` Cindy Lu
@ 2023-04-12  6:37         ` Jason Wang
  0 siblings, 0 replies; 6+ messages in thread
From: Jason Wang @ 2023-04-12  6:37 UTC (permalink / raw)
  To: Cindy Lu; +Cc: mst, kvm, linux-kernel, virtualization, netdev

On Wed, Apr 12, 2023 at 2:32 PM Cindy Lu <lulu@redhat.com> wrote:
>
> On Tue, Apr 11, 2023 at 5:14 PM Jason Wang <jasowang@redhat.com> wrote:
> >
> > On Tue, Apr 11, 2023 at 3:29 PM Cindy Lu <lulu@redhat.com> wrote:
> > >
> > > On Tue, Apr 11, 2023 at 11:10 AM Jason Wang <jasowang@redhat.com> wrote:
> > > >
> > > > On Mon, Apr 10, 2023 at 11:01 PM Cindy Lu <lulu@redhat.com> wrote:
> > > > >
> > > > > While using the no-batch mode, the process will not begin with
> > > > > VHOST_IOTLB_BATCH_BEGIN, so we need to add the
> > > > > VHOST_IOTLB_INVALIDATE to get vhost_vdpa_as, the process is the
> > > > > same as VHOST_IOTLB_UPDATE
> > > > >
> > > > > Signed-off-by: Cindy Lu <lulu@redhat.com>
> > > > > ---
> > > > >  drivers/vhost/vdpa.c | 1 +
> > > > >  1 file changed, 1 insertion(+)
> > > > >
> > > > > diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> > > > > index 7be9d9d8f01c..32636a02a0ab 100644
> > > > > --- a/drivers/vhost/vdpa.c
> > > > > +++ b/drivers/vhost/vdpa.c
> > > > > @@ -1074,6 +1074,7 @@ static int vhost_vdpa_process_iotlb_msg(struct vhost_dev *dev, u32 asid,
> > > > >                 goto unlock;
> > > > >
> > > > >         if (msg->type == VHOST_IOTLB_UPDATE ||
> > > > > +           msg->type == VHOST_IOTLB_INVALIDATE ||
> > > >
> > > > I'm not sure I get here, invalidation doesn't need to create a new AS.
> > > >
> > > > Or maybe you can post the userspace code that can trigger this issue?
> > > >
> > > > Thanks
> > > >
> > > sorry I didn't write it clearly
> > > For this issue can reproduce in vIOMMU no-batch mode support because
> > > while the vIOMMU enabled, it will
> > > flash a large memory to unmap, and this memory are haven't been mapped
> > > before, so this unmapping will fail
> > >
> > > qemu-system-x86_64: failed to write, fd=12, errno=14 (Bad address)
> > > qemu-system-x86_64: vhost_vdpa_dma_unmap(0x7fa26d1dd190, 0x0,
> > > 0x80000000) = -5 (Bad address)
> >
> > So if this is a simple unmap, which error condition had you met in
> > vhost_vdpa_process_iotlb_msg()?
> >
> > I think you need to trace to see what happens. For example:
> >
> this happens when vIOMMU enable and vdpa binds to vfio-pci run testpmd
> the qemu will unmapped whole memory that was used and then mapped the
> iommu MR section

So it's a map after an unmap, not an invalidation?

> This memory much larger than the memory mapped to vdpa(this is what
> actually mapped to
> vdpa device in no-iommu MR)
>
> > 1) can the code pass asid_to_iotlb()
> > 2) if not, ASID 0 has been deleted since all the mappings have been unmapped
> >
> > if ASID 0 has been completely unmap, any reason we need to unmap it
> > again? And do we need to drop the vhost_vdpa_remove_as() from both
> >
>
> > 1) vhost_vdpa_unmap()
> > and
> > 2) vhost_vdpa_process_iotlb_msg()
> > ?
> >
> > Thanks
> >
> the code passed the asid_to_iotlb(), The iotlb is NULL at this situation
> and the vhost_vdpa_process_iotlb_msg will return fail. this will cause
> the mapping
>  in qemu fail

Yes, so what I meant:

Instead of freeing the AS in vhost_vdpa_unmap() or
vhost_vdpa_process_iotlb_msg() and then allocating it again here,

is it better to not remove the AS in those two functions even if
there are no maps left?
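
Something like this, conceptually (just a sketch of the idea, not a
patch against the tree):

/* If the AS is kept around after its last mapping is gone, a later
 * no-batch invalidate still finds it and simply has nothing to unmap,
 * instead of failing:
 */
iotlb = asid_to_iotlb(v, asid);     /* AS still present -> non-NULL */
if (iotlb) {
    /* assuming unmapping an already-empty range is not treated as an
     * error
     */
    vhost_vdpa_unmap(v, iotlb, msg->iova, msg->size);
}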

Thanks

>
> thanks
> cindy
>
> > > qemu-system-x86_64: failed to write, fd=12, errno=14 (Bad address)
> > > ....
> > > in batch mode this operation will begin with VHOST_IOTLB_BATCH_BEGIN,
> > > so don't have this issue
> > >
> > > Thanks
> > > cindy
> > > > >             msg->type == VHOST_IOTLB_BATCH_BEGIN) {
> > > > >                 as = vhost_vdpa_find_alloc_as(v, asid);
> > > > >                 if (!as) {
> > > > > --
> > > > > 2.34.3
> > > > >
> > > >
> > >
> >
>


^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2023-04-12  6:38 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-04-10 15:01 [PATCH] vhost_vdpa: fix unmap process in no-batch mode Cindy Lu
2023-04-11  3:09 ` Jason Wang
2023-04-11  7:28   ` Cindy Lu
2023-04-11  9:13     ` Jason Wang
2023-04-12  6:31       ` Cindy Lu
2023-04-12  6:37         ` Jason Wang
