virtualization.lists.linux-foundation.org archive mirror
* [PATCH 0/7] *** IRQ offloading for vDPA ***
@ 2020-07-12 14:52 Zhu Lingshan
  2020-07-15 13:43 ` Jason Wang
  0 siblings, 1 reply; 7+ messages in thread
From: Zhu Lingshan @ 2020-07-12 14:52 UTC (permalink / raw)
  To: mst, jasowang, alex.williamson, pbonzini, sean.j.christopherson,
	wanpengli
  Cc: virtualization, kvm, netdev, dan.daly, Zhu Lingshan

Hi All,

This series implements IRQ offloading for vhost_vdpa.

With the help of IRQ forwarding facilities such as posted
interrupts on x86, IRQ bypass can deliver interrupts to a
vCPU directly.

vDPA devices have dedicated hardware backends, much like
VFIO passthrough devices, so it is possible to set up IRQ
offloading (IRQ bypass) for vDPA devices and gain a
performance improvement.

In my testing, this feature saves 0.1 ms on average for a
ping between two VFs.
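
For background, here is a minimal sketch of the producer side of the
irqbypass manager; the wrapper struct and helper below (vdpa_vq_irq,
vdpa_vq_setup_irq_bypass) are illustrative assumptions, not the exact
code in this series. The virtqueue interrupt is registered as a producer
whose token is the virtqueue's call eventfd, so KVM's irqfd consumer
holding the same token can connect to it.

/*
 * Minimal sketch (illustrative names): register a vDPA virtqueue
 * interrupt as an irqbypass producer keyed by its call eventfd.
 */
#include <linux/irqbypass.h>
#include <linux/eventfd.h>

struct vdpa_vq_irq {
	struct irq_bypass_producer producer;
};

static int vdpa_vq_setup_irq_bypass(struct vdpa_vq_irq *vq_irq,
				    struct eventfd_ctx *call_ctx, int irq)
{
	vq_irq->producer.token = call_ctx;
	vq_irq->producer.irq = irq;
	/*
	 * If no consumer registered the same token, nothing connects and
	 * interrupts keep being delivered through the eventfd as before.
	 */
	return irq_bypass_register_producer(&vq_irq->producer);
}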


Zhu Lingshan (7):
  vhost: introduce vhost_call_ctx
  kvm/vfio: detect assigned device via irqbypass manager
  vhost_vdpa: implement IRQ offloading functions in vhost_vdpa
  vDPA: implement IRQ offloading helpers in vDPA core
  virtio_vdpa: init IRQ offloading function pointers to NULL.
  ifcvf: replace irq_request/free with helpers in vDPA core.
  irqbypass: do not start consumer or producer when failed to connect

 arch/x86/kvm/x86.c              | 10 ++++--
 drivers/vdpa/ifcvf/ifcvf_main.c | 11 +++---
 drivers/vdpa/vdpa.c             | 46 +++++++++++++++++++++++++
 drivers/vhost/Kconfig           |  1 +
 drivers/vhost/vdpa.c            | 75 +++++++++++++++++++++++++++++++++++++++--
 drivers/vhost/vhost.c           | 22 ++++++++----
 drivers/vhost/vhost.h           |  9 ++++-
 drivers/virtio/virtio_vdpa.c    |  2 ++
 include/linux/vdpa.h            | 11 ++++++
 virt/kvm/vfio.c                 |  2 --
 virt/lib/irqbypass.c            | 16 +++++----
 11 files changed, 181 insertions(+), 24 deletions(-)

-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH 0/7] *** IRQ offloading for vDPA ***
  2020-07-12 14:52 [PATCH 0/7] *** IRQ offloading for vDPA *** Zhu Lingshan
@ 2020-07-15 13:43 ` Jason Wang
       [not found]   ` <97032c51-3265-c94a-9ce1-f42fcc6d3075@intel.com>
  0 siblings, 1 reply; 7+ messages in thread
From: Jason Wang @ 2020-07-15 13:43 UTC (permalink / raw)
  To: Zhu Lingshan, mst, alex.williamson, pbonzini,
	sean.j.christopherson, wanpengli
  Cc: virtualization, kvm, netdev, dan.daly


On 2020/7/12 下午10:52, Zhu Lingshan wrote:
> Hi All,
>
> This series implements IRQ offloading for vhost_vdpa.
>
> With the help of IRQ forwarding facilities such as posted
> interrupts on x86, IRQ bypass can deliver interrupts to a
> vCPU directly.
>
> vDPA devices have dedicated hardware backends, much like
> VFIO passthrough devices, so it is possible to set up IRQ
> offloading (IRQ bypass) for vDPA devices and gain a
> performance improvement.
>
> In my testing, this feature saves 0.1 ms on average for a
> ping between two VFs.


Hi Lingshan:

During the virtio-networking meeting, Michael spotted two possible issues:

1) do we need a new uAPI to stop the IRQ offloading?
2) can an interrupt be lost during the eventfd context switch?

For 1) I think we probably do not; we can allocate an independent eventfd
which does not map to MSI-X. Then the consumer can't match the producer and
we fall back to eventfd-based IRQ delivery (a rough userspace sketch of this
follows below).
For 2) it looks to me like the guest should deal with IRQ synchronization
when masking or unmasking MSI-X vectors.
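
As a rough userspace illustration of 1) (the vhost-vdpa fd and queue index
below are assumptions for the sketch, not part of this series), QEMU could
simply hand vhost-vdpa a plain eventfd that is never registered as a KVM
irqfd, so no bypass consumer ever matches the producer:

#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

static int use_plain_call_eventfd(int vhost_vdpa_fd, unsigned int vq_index)
{
	struct vhost_vring_file file = {
		.index = vq_index,
		/* not registered via KVM_IRQFD, so no irqbypass consumer */
		.fd = eventfd(0, EFD_CLOEXEC),
	};

	if (file.fd < 0)
		return -1;

	/* interrupts fall back to the eventfd-based path */
	return ioctl(vhost_vdpa_fd, VHOST_SET_VRING_CALL, &file);
}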

What's your thought?

Thanks


>
>
> Zhu Lingshan (7):
>    vhost: introduce vhost_call_ctx
>    kvm/vfio: detect assigned device via irqbypass manager
>    vhost_vdpa: implement IRQ offloading functions in vhost_vdpa
>    vDPA: implement IRQ offloading helpers in vDPA core
>    virtio_vdpa: init IRQ offloading function pointers to NULL.
>    ifcvf: replace irq_request/free with helpers in vDPA core.
>    irqbypass: do not start consumer or producer when failed to connect
>
>   arch/x86/kvm/x86.c              | 10 ++++--
>   drivers/vdpa/ifcvf/ifcvf_main.c | 11 +++---
>   drivers/vdpa/vdpa.c             | 46 +++++++++++++++++++++++++
>   drivers/vhost/Kconfig           |  1 +
>   drivers/vhost/vdpa.c            | 75 +++++++++++++++++++++++++++++++++++++++--
>   drivers/vhost/vhost.c           | 22 ++++++++----
>   drivers/vhost/vhost.h           |  9 ++++-
>   drivers/virtio/virtio_vdpa.c    |  2 ++
>   include/linux/vdpa.h            | 11 ++++++
>   virt/kvm/vfio.c                 |  2 --
>   virt/lib/irqbypass.c            | 16 +++++----
>   11 files changed, 181 insertions(+), 24 deletions(-)
>

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH 0/7] *** IRQ offloading for vDPA ***
       [not found]   ` <97032c51-3265-c94a-9ce1-f42fcc6d3075@intel.com>
@ 2020-07-16  2:59     ` Jason Wang
       [not found]       ` <29ab6da8-ed8e-6b91-d658-f3d240543b29@intel.com>
  2020-07-16  6:13     ` Michael S. Tsirkin
  1 sibling, 1 reply; 7+ messages in thread
From: Jason Wang @ 2020-07-16  2:59 UTC (permalink / raw)
  To: Zhu, Lingshan, mst, alex.williamson, pbonzini,
	sean.j.christopherson, wanpengli
  Cc: virtualization, kvm, netdev, dan.daly


On 2020/7/16 上午9:39, Zhu, Lingshan wrote:
>
>
> On 7/15/2020 9:43 PM, Jason Wang wrote:
>>
>> On 2020/7/12 下午10:52, Zhu Lingshan wrote:
>>> Hi All,
>>>
>>> This series implements IRQ offloading for vhost_vdpa.
>>>
>>> With the help of IRQ forwarding facilities such as posted
>>> interrupts on x86, IRQ bypass can deliver interrupts to a
>>> vCPU directly.
>>>
>>> vDPA devices have dedicated hardware backends, much like
>>> VFIO passthrough devices, so it is possible to set up IRQ
>>> offloading (IRQ bypass) for vDPA devices and gain a
>>> performance improvement.
>>>
>>> In my testing, this feature saves 0.1 ms on average for a
>>> ping between two VFs.
>>
>>
>> Hi Lingshan:
>>
>> During the virtio-networking meeting, Michael spotted two possible issues:
>>
>> 1) do we need a new uAPI to stop the IRQ offloading?
>> 2) can an interrupt be lost during the eventfd context switch?
>>
>> For 1) I think we probably do not; we can allocate an independent
>> eventfd which does not map to MSI-X. Then the consumer can't match the
>> producer and we fall back to eventfd-based IRQ delivery.
> Hi Jason,
>
> I wonder why we would need to stop IRQ offloading, but if we do, a new uAPI
> would seem more intuitive to me. Why and who (user? QEMU?) should initiate
> this process, and on what basis would the decision be made?


The reason is that we may want to fall back to the software datapath for
some reason (e.g. software-assisted live migration). In that case we need
to intercept device writes to the used ring, so we cannot offload the
virtqueue interrupt.


>> For 2) it looks to me like the guest should deal with IRQ synchronization
>> when masking or unmasking MSI-X vectors.
> Agreed!


It's better to double-check this.

Thanks


>
> Thanks,
> BR
> Zhu Lingshan
>>
>> What's your thought?
>>
>> Thanks
>>
>>
>>>
>>>
>>> Zhu Lingshan (7):
>>>    vhost: introduce vhost_call_ctx
>>>    kvm/vfio: detect assigned device via irqbypass manager
>>>    vhost_vdpa: implement IRQ offloading functions in vhost_vdpa
>>>    vDPA: implement IRQ offloading helpers in vDPA core
>>>    virtio_vdpa: init IRQ offloading function pointers to NULL.
>>>    ifcvf: replace irq_request/free with helpers in vDPA core.
>>>    irqbypass: do not start consumer or producer when failed to connect
>>>
>>>   arch/x86/kvm/x86.c              | 10 ++++--
>>>   drivers/vdpa/ifcvf/ifcvf_main.c | 11 +++---
>>>   drivers/vdpa/vdpa.c             | 46 +++++++++++++++++++++++++
>>>   drivers/vhost/Kconfig           |  1 +
>>>   drivers/vhost/vdpa.c            | 75 
>>> +++++++++++++++++++++++++++++++++++++++--
>>>   drivers/vhost/vhost.c           | 22 ++++++++----
>>>   drivers/vhost/vhost.h           |  9 ++++-
>>>   drivers/virtio/virtio_vdpa.c    |  2 ++
>>>   include/linux/vdpa.h            | 11 ++++++
>>>   virt/kvm/vfio.c                 |  2 --
>>>   virt/lib/irqbypass.c            | 16 +++++----
>>>   11 files changed, 181 insertions(+), 24 deletions(-)
>>>
>>

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH 0/7] *** IRQ offloading for vDPA ***
       [not found]           ` <a319cba3-8b3d-8968-0fb7-48a1d34042bf@intel.com>
@ 2020-07-16  4:20             ` Jason Wang
  2020-07-16  6:15               ` Michael S. Tsirkin
  0 siblings, 1 reply; 7+ messages in thread
From: Jason Wang @ 2020-07-16  4:20 UTC (permalink / raw)
  To: Zhu, Lingshan, Michael S. Tsirkin, alex williamson, pbonzini,
	sean.j.christopherson, wanpengli
  Cc: virtualization, netdev, kvm, dan daly


On 2020/7/16 下午12:13, Zhu, Lingshan wrote:
>
>
> On 7/16/2020 12:02 PM, Jason Wang wrote:
>>
>> On 2020/7/16 上午11:59, Zhu, Lingshan wrote:
>>>
>>> On 7/16/2020 10:59 AM, Jason Wang wrote:
>>>>
>>>> On 2020/7/16 上午9:39, Zhu, Lingshan wrote:
>>>>>
>>>>>
>>>>> On 7/15/2020 9:43 PM, Jason Wang wrote:
>>>>>>
>>>>>> On 2020/7/12 下午10:52, Zhu Lingshan wrote:
>>>>>>> Hi All,
>>>>>>>
>>>>>>> This series implements IRQ offloading for vhost_vdpa.
>>>>>>>
>>>>>>> With the help of IRQ forwarding facilities such as posted
>>>>>>> interrupts on x86, IRQ bypass can deliver interrupts to a
>>>>>>> vCPU directly.
>>>>>>>
>>>>>>> vDPA devices have dedicated hardware backends, much like
>>>>>>> VFIO passthrough devices, so it is possible to set up IRQ
>>>>>>> offloading (IRQ bypass) for vDPA devices and gain a
>>>>>>> performance improvement.
>>>>>>>
>>>>>>> In my testing, this feature saves 0.1 ms on average for a
>>>>>>> ping between two VFs.
>>>>>>
>>>>>>
>>>>>> Hi Lingshan:
>>>>>>
>>>>>> During the virtio-networking meeting, Michael spotted two possible
>>>>>> issues:
>>>>>>
>>>>>> 1) do we need a new uAPI to stop the IRQ offloading?
>>>>>> 2) can an interrupt be lost during the eventfd context switch?
>>>>>>
>>>>>> For 1) I think we probably do not; we can allocate an independent
>>>>>> eventfd which does not map to MSI-X. Then the consumer can't match
>>>>>> the producer and we fall back to eventfd-based IRQ delivery.
>>>>> Hi Jason,
>>>>>
>>>>> I wonder why we would need to stop IRQ offloading, but if we do, a
>>>>> new uAPI would seem more intuitive to me. Why and who (user? QEMU?)
>>>>> should initiate this process, and on what basis would the decision
>>>>> be made?
>>>>
>>>>
>>>> The reason is that we may want to fall back to the software datapath
>>>> for some reason (e.g. software-assisted live migration). In that case
>>>> we need to intercept device writes to the used ring, so we cannot
>>>> offload the virtqueue interrupt.
>>> So add a VHOST_VDPA_STOP_IRQ_OFFLOADING? Then do we also need a
>>> VHOST_VDPA_START_IRQ_OFFLOADING and let userspace fully control
>>> this? Or is there a better approach?
>>
>>
>> Probably not; it's as simple as allocating another eventfd (but not an
>> irqfd) and passing it to vhost-vdpa. Then the offloading is disabled
>> since it doesn't have a consumer.
> OK, that sounds like QEMU's work, so there is no need to handle it in this series, right?


That's my understanding.

Thanks


>
> Thanks,
> BR
> Zhu Lingshan
>>
>> Thanks
>>
>>

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH 0/7] *** IRQ offloading for vDPA ***
       [not found]   ` <97032c51-3265-c94a-9ce1-f42fcc6d3075@intel.com>
  2020-07-16  2:59     ` Jason Wang
@ 2020-07-16  6:13     ` Michael S. Tsirkin
  1 sibling, 0 replies; 7+ messages in thread
From: Michael S. Tsirkin @ 2020-07-16  6:13 UTC (permalink / raw)
  To: Zhu, Lingshan
  Cc: Jason Wang, alex.williamson, pbonzini, sean.j.christopherson,
	wanpengli, virtualization, kvm, netdev, dan.daly

On Thu, Jul 16, 2020 at 09:39:17AM +0800, Zhu, Lingshan wrote:
> 
> On 7/15/2020 9:43 PM, Jason Wang wrote:
> 
> 
>     On 2020/7/12 下午10:52, Zhu Lingshan wrote:
> 
>         Hi All,
> 
>         This series implements IRQ offloading for vhost_vdpa.
> 
>         With the help of IRQ forwarding facilities such as posted
>         interrupts on x86, IRQ bypass can deliver interrupts to a
>         vCPU directly.
> 
>         vDPA devices have dedicated hardware backends, much like
>         VFIO passthrough devices, so it is possible to set up IRQ
>         offloading (IRQ bypass) for vDPA devices and gain a
>         performance improvement.
> 
>         In my testing, this feature saves 0.1 ms on average for a
>         ping between two VFs.
> 
> 
> 
>     Hi Lingshan:
> 
>     During the virtio-networking meeting, Michael spotted two possible issues:
> 
>     1) do we need a new uAPI to stop the IRQ offloading?
>     2) can an interrupt be lost during the eventfd context switch?
> 
>     For 1) I think we probably do not; we can allocate an independent eventfd
>     which does not map to MSI-X. Then the consumer can't match the producer and
>     we fall back to eventfd-based IRQ delivery.
> 
> Hi Jason,
> 
> I wonder why we would need to stop IRQ offloading, but if we do, a new uAPI would seem more intuitive to me.
> Why and who (user? QEMU?) should initiate this process, and on what basis would the decision be made?
> 
>     For 2) it looks to me like the guest should deal with IRQ synchronization when
>     masking or unmasking MSI-X vectors.
> 
> Agreed!

Well, we need to make sure that during a switch each interrupt is reported
*somewhere*: either as a direct IRQ or via the eventfd - and not lost.


> Thanks,
> BR
> Zhu Lingshan
> 
> 
>     What's your thought?
> 
>     Thanks
> 
> 
> 
> 
> 
>         Zhu Lingshan (7):
>            vhost: introduce vhost_call_ctx
>            kvm/vfio: detect assigned device via irqbypass manager
>            vhost_vdpa: implement IRQ offloading functions in vhost_vdpa
>            vDPA: implement IRQ offloading helpers in vDPA core
>            virtio_vdpa: init IRQ offloading function pointers to NULL.
>            ifcvf: replace irq_request/free with helpers in vDPA core.
>            irqbypass: do not start consumer or producer when failed to connect
> 
>           arch/x86/kvm/x86.c              | 10 ++++--
>           drivers/vdpa/ifcvf/ifcvf_main.c | 11 +++---
>           drivers/vdpa/vdpa.c             | 46 +++++++++++++++++++++++++
>           drivers/vhost/Kconfig           |  1 +
>           drivers/vhost/vdpa.c            | 75
>         +++++++++++++++++++++++++++++++++++++++--
>           drivers/vhost/vhost.c           | 22 ++++++++----
>           drivers/vhost/vhost.h           |  9 ++++-
>           drivers/virtio/virtio_vdpa.c    |  2 ++
>           include/linux/vdpa.h            | 11 ++++++
>           virt/kvm/vfio.c                 |  2 --
>           virt/lib/irqbypass.c            | 16 +++++----
>           11 files changed, 181 insertions(+), 24 deletions(-)
> 
> 
> 
> 

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH 0/7] *** IRQ offloading for vDPA ***
  2020-07-16  4:20             ` Jason Wang
@ 2020-07-16  6:15               ` Michael S. Tsirkin
  2020-07-16  8:06                 ` Jason Wang
  0 siblings, 1 reply; 7+ messages in thread
From: Michael S. Tsirkin @ 2020-07-16  6:15 UTC (permalink / raw)
  To: Jason Wang
  Cc: wanpengli, kvm, netdev, sean.j.christopherson, virtualization,
	pbonzini, Zhu, Lingshan

On Thu, Jul 16, 2020 at 12:20:09PM +0800, Jason Wang wrote:
> 
> On 2020/7/16 下午12:13, Zhu, Lingshan wrote:
> > 
> > 
> > On 7/16/2020 12:02 PM, Jason Wang wrote:
> > > 
> > > On 2020/7/16 上午11:59, Zhu, Lingshan wrote:
> > > > 
> > > > On 7/16/2020 10:59 AM, Jason Wang wrote:
> > > > > 
> > > > > On 2020/7/16 上午9:39, Zhu, Lingshan wrote:
> > > > > > 
> > > > > > 
> > > > > > On 7/15/2020 9:43 PM, Jason Wang wrote:
> > > > > > > 
> > > > > > > On 2020/7/12 下午10:52, Zhu Lingshan wrote:
> > > > > > > > Hi All,
> > > > > > > > 
> > > > > > > > This series implements IRQ offloading for vhost_vdpa.
> > > > > > > > 
> > > > > > > > With the help of IRQ forwarding facilities such as posted
> > > > > > > > interrupts on x86, IRQ bypass can deliver interrupts to a
> > > > > > > > vCPU directly.
> > > > > > > > 
> > > > > > > > vDPA devices have dedicated hardware backends, much like
> > > > > > > > VFIO passthrough devices, so it is possible to set up IRQ
> > > > > > > > offloading (IRQ bypass) for vDPA devices and gain a
> > > > > > > > performance improvement.
> > > > > > > > 
> > > > > > > > In my testing, this feature saves 0.1 ms on average for a
> > > > > > > > ping between two VFs.
> > > > > > > 
> > > > > > > 
> > > > > > > Hi Lingshan:
> > > > > > > 
> > > > > > > During the virtio-networking meeting, Michael spotted
> > > > > > > two possible issues:
> > > > > > > 
> > > > > > > 1) do we need a new uAPI to stop the IRQ offloading?
> > > > > > > 2) can an interrupt be lost during the eventfd context switch?
> > > > > > > 
> > > > > > > For 1) I think we probably do not; we can allocate an
> > > > > > > independent eventfd which does not map to MSI-X. Then
> > > > > > > the consumer can't match the producer and we
> > > > > > > fall back to eventfd-based IRQ delivery.
> > > > > > Hi Jason,
> > > > > > 
> > > > > > I wonder why we would need to stop IRQ offloading, but if
> > > > > > we do, a new uAPI would seem more intuitive to me.
> > > > > > Why and who (user? QEMU?) should initiate this
> > > > > > process, and on what basis would the decision
> > > > > > be made?
> > > > > 
> > > > > 
> > > > > The reason is that we may want to fall back to the software
> > > > > datapath for some reason (e.g. software-assisted live migration).
> > > > > In that case we need to intercept device writes to the used ring,
> > > > > so we cannot offload the virtqueue interrupt.
> > > > So add a VHOST_VDPA_STOP_IRQ_OFFLOADING? Then do we also need a
> > > > VHOST_VDPA_START_IRQ_OFFLOADING and let userspace fully
> > > > control this? Or is there a better approach?
> > > 
> > > 
> > > Probably not; it's as simple as allocating another eventfd (but not an
> > > irqfd) and passing it to vhost-vdpa. Then the offloading is disabled
> > > since it doesn't have a consumer.
> > OK, that sounds like QEMU's work, so there is no need to handle it in this series, right?
> 
> 
> That's my understanding.
> 
> Thanks

Let's confirm that a switch happens atomically, though, so that each interrupt
is sent either to the eventfd or to the guest directly.

> 
> > 
> > Thanks,
> > BR
> > Zhu Lingshan
> > > 
> > > Thanks
> > > 
> > > 

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH 0/7] *** IRQ offloading for vDPA ***
  2020-07-16  6:15               ` Michael S. Tsirkin
@ 2020-07-16  8:06                 ` Jason Wang
  0 siblings, 0 replies; 7+ messages in thread
From: Jason Wang @ 2020-07-16  8:06 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Zhu, Lingshan, alex williamson, pbonzini, sean.j.christopherson,
	wanpengli, virtualization, netdev, kvm, dan daly


On 2020/7/16 下午2:15, Michael S. Tsirkin wrote:
> On Thu, Jul 16, 2020 at 12:20:09PM +0800, Jason Wang wrote:
>> On 2020/7/16 下午12:13, Zhu, Lingshan wrote:
>>> On 7/16/2020 12:02 PM, Jason Wang wrote:
>>>> On 2020/7/16 上午11:59, Zhu, Lingshan wrote:
>>>>> On 7/16/2020 10:59 AM, Jason Wang wrote:
>>>>>> On 2020/7/16 上午9:39, Zhu, Lingshan wrote:
>>>>>>> On 7/15/2020 9:43 PM, Jason Wang wrote:
>>>>>>>> On 2020/7/12 下午10:52, Zhu Lingshan wrote:
>>>>>>>>> Hi All,
>>>>>>>>>
>>>>>>>>> This series implements IRQ offloading for vhost_vdpa.
>>>>>>>>>
>>>>>>>>> With the help of IRQ forwarding facilities such as posted
>>>>>>>>> interrupts on x86, IRQ bypass can deliver interrupts to a
>>>>>>>>> vCPU directly.
>>>>>>>>>
>>>>>>>>> vDPA devices have dedicated hardware backends, much like
>>>>>>>>> VFIO passthrough devices, so it is possible to set up IRQ
>>>>>>>>> offloading (IRQ bypass) for vDPA devices and gain a
>>>>>>>>> performance improvement.
>>>>>>>>>
>>>>>>>>> In my testing, this feature saves 0.1 ms on average for a
>>>>>>>>> ping between two VFs.
>>>>>>>> Hi Lingshan:
>>>>>>>>
>>>>>>>> During the virtio-networking meeting, Michael spotted
>>>>>>>> two possible issues:
>>>>>>>>
>>>>>>>> 1) do we need a new uAPI to stop the IRQ offloading?
>>>>>>>> 2) can an interrupt be lost during the eventfd context switch?
>>>>>>>>
>>>>>>>> For 1) I think we probably do not; we can allocate an
>>>>>>>> independent eventfd which does not map to MSI-X. Then
>>>>>>>> the consumer can't match the producer and we
>>>>>>>> fall back to eventfd-based IRQ delivery.
>>>>>>> Hi Jason,
>>>>>>>
>>>>>>> I wonder why we would need to stop IRQ offloading, but if
>>>>>>> we do, a new uAPI would seem more intuitive to me.
>>>>>>> Why and who (user? QEMU?) should initiate this
>>>>>>> process, and on what basis would the decision
>>>>>>> be made?
>>>>>> The reason is that we may want to fall back to the software
>>>>>> datapath for some reason (e.g. software-assisted live migration).
>>>>>> In that case we need to intercept device writes to the used ring,
>>>>>> so we cannot offload the virtqueue interrupt.
>>>>> So add a VHOST_VDPA_STOP_IRQ_OFFLOADING? Then do we also need a
>>>>> VHOST_VDPA_START_IRQ_OFFLOADING and let userspace fully
>>>>> control this? Or is there a better approach?
>>>> Probably not; it's as simple as allocating another eventfd (but not an
>>>> irqfd) and passing it to vhost-vdpa. Then the offloading is disabled
>>>> since it doesn't have a consumer.
>>> OK, that sounds like QEMU's work, so there is no need to handle it in this series, right?
>> That's my understanding.
>>
>> Thanks
> Let's confirm that a switch happens atomically, though, so that each interrupt
> is sent either to the eventfd or to the guest directly.


I think it's safe since:

1) we don't allocate or free the interrupt during the eventfd change
2) the IRTE is modified atomically through cmpxchg_double() in
modify_irte(), so the interrupt is always routed either to the eventfd or
to the posted-interrupt (PI) descriptor, never a mix of the two (see the
illustrative sketch below)
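
As a purely illustrative analogue of 2) above (this is not the IOMMU
driver's modify_irte(), just a userspace sketch of why a double-width
compare-and-exchange avoids torn updates): both halves of an IRTE-sized
entry are swapped in one step, so a concurrent interrupt observes either
the old routing or the new one, never a mix.

#include <stdbool.h>
#include <stdint.h>

/*
 * A 16-byte entry standing in for an IRTE; needs 16-byte CAS support
 * (e.g. x86-64 cmpxchg16b, compile with -mcx16).
 */
struct irte_like {
	uint64_t lo;
	uint64_t hi;
} __attribute__((aligned(16)));

static bool switch_routing(struct irte_like *entry,
			   struct irte_like expected, struct irte_like new_val)
{
	/* Succeeds only if the entry still holds the expected old value. */
	return __atomic_compare_exchange(entry, &expected, &new_val, false,
					 __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}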

Thanks


>

^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2020-07-16  8:06 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-07-12 14:52 [PATCH 0/7] *** IRQ offloading for vDPA *** Zhu Lingshan
2020-07-15 13:43 ` Jason Wang
     [not found]   ` <97032c51-3265-c94a-9ce1-f42fcc6d3075@intel.com>
2020-07-16  2:59     ` Jason Wang
     [not found]       ` <29ab6da8-ed8e-6b91-d658-f3d240543b29@intel.com>
     [not found]         ` <1e91d9dd-d787-beff-2c14-9c76ffc3b285@redhat.com>
     [not found]           ` <a319cba3-8b3d-8968-0fb7-48a1d34042bf@intel.com>
2020-07-16  4:20             ` Jason Wang
2020-07-16  6:15               ` Michael S. Tsirkin
2020-07-16  8:06                 ` Jason Wang
2020-07-16  6:13     ` Michael S. Tsirkin
