* [PATCH V3 0/6] IRQ offloading for vDPA
@ 2020-07-22  9:49 Zhu Lingshan
  2020-07-22  9:49 ` [PATCH V3 1/6] vhost: introduce vhost_vring_call Zhu Lingshan
  0 siblings, 1 reply; 4+ messages in thread
From: Zhu Lingshan @ 2020-07-22  9:49 UTC (permalink / raw)
  To: jasowang, alex.williamson, mst, pbonzini, sean.j.christopherson, wanpengli
  Cc: virtualization, netdev, kvm, Zhu Lingshan

This series implements IRQ offloading for vhost_vdpa.

By means of IRQ forwarding facilities such as posted interrupts on x86,
IRQ bypass can deliver interrupts to vCPUs directly. vDPA devices have
dedicated hardware backends, like VFIO passed-through devices, so it is
possible to set up IRQ offloading (IRQ bypass) for vDPA devices and gain
a performance improvement. In my testing, this feature saves 0.1 ms on
average in a ping between two VFs.

Changes from V2:
(1) Renamed struct vhost_call_ctx to vhost_vring_call.
(2) Added kvm_arch_end_assignment() in the del_producer() code path.
(3) Renamed the vDPA helpers to vdpa_devm_request_irq() and
    vdpa_devm_free_irq(), and improved their comments.
(4) Better comments for setup_vq_irq() and unsetup_vq_irq().
(5) In the vDPA VHOST_SET_VRING_CALL path, call vhost_vdpa_update_vq_irq()
    without checking producer.irq; the check moved into
    vhost_vdpa_update_vq_irq() so that it is protected by the spinlock.
(6) Added vhost_vdpa_clean_irq(), which unregisters the producers of the
    vqs in vhost_vdpa_release(). This is safe for the control vq.
(7) Minor improvements.

Changes from V1:
(1) Dropped the VFIO changes.
(3) Removed the KVM_HAVE_IRQ_BYPASS checks.
(4) Locking fixes.
(5) Simplified vhost_vdpa_update_vq_irq().
(6) Minor improvements.

Zhu Lingshan (6):
  vhost: introduce vhost_vring_call
  kvm: detect assigned device via irqbypass manager
  vDPA: implement vq IRQ allocate/free helpers in vDPA core
  vhost_vdpa: implement IRQ offloading in vhost_vdpa
  ifcvf: replace irq_request/free with vDPA helpers
  irqbypass: do not start cons/prod when failed connect

 arch/x86/kvm/x86.c              | 11 +++++-
 drivers/vdpa/ifcvf/ifcvf_main.c | 14 ++++---
 drivers/vdpa/vdpa.c             | 49 +++++++++++++++++++++++
 drivers/vhost/Kconfig           |  1 +
 drivers/vhost/vdpa.c            | 70 +++++++++++++++++++++++++++++++--
 drivers/vhost/vhost.c           | 22 ++++++++---
 drivers/vhost/vhost.h           |  9 ++++-
 include/linux/vdpa.h            | 13 ++++++
 virt/lib/irqbypass.c            | 16 +++++---
 9 files changed, 182 insertions(+), 23 deletions(-)

-- 
2.18.4
* [PATCH V3 1/6] vhost: introduce vhost_vring_call
  2020-07-22  9:49 [PATCH V3 0/6] IRQ offloading for vDPA Zhu Lingshan
@ 2020-07-22  9:49 ` Zhu Lingshan
  2020-07-22  9:59   ` Zhu Lingshan
  0 siblings, 1 reply; 4+ messages in thread
From: Zhu Lingshan @ 2020-07-22  9:49 UTC (permalink / raw)
  To: jasowang, alex.williamson, mst, pbonzini, sean.j.christopherson, wanpengli
  Cc: virtualization, netdev, kvm, Zhu Lingshan, lszhu, Zhu Lingshan

From: Zhu Lingshan <lingshan.zhu@intel.com>

This commit introduces struct vhost_vring_call, which replaces the raw
struct eventfd_ctx *call_ctx in struct vhost_virtqueue. Besides the
eventfd_ctx, it contains a spinlock and an irq_bypass_producer.

Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
Signed-off-by: lszhu <lszhu@localhost.localdomain>
Signed-off-by: Zhu Lingshan <lingshan.zhu@live.com>
---
 drivers/vhost/vdpa.c  |  4 ++--
 drivers/vhost/vhost.c | 22 ++++++++++++++++------
 drivers/vhost/vhost.h |  9 ++++++++-
 3 files changed, 26 insertions(+), 9 deletions(-)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index a54b60d6623f..df3cf386b0cd 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -96,7 +96,7 @@ static void handle_vq_kick(struct vhost_work *work)
 static irqreturn_t vhost_vdpa_virtqueue_cb(void *private)
 {
 	struct vhost_virtqueue *vq = private;
-	struct eventfd_ctx *call_ctx = vq->call_ctx;
+	struct eventfd_ctx *call_ctx = vq->call_ctx.ctx;
 
 	if (call_ctx)
 		eventfd_signal(call_ctx, 1);
@@ -382,7 +382,7 @@ static long vhost_vdpa_vring_ioctl(struct vhost_vdpa *v, unsigned int cmd,
 		break;
 
 	case VHOST_SET_VRING_CALL:
-		if (vq->call_ctx) {
+		if (vq->call_ctx.ctx) {
 			cb.callback = vhost_vdpa_virtqueue_cb;
 			cb.private = vq;
 		} else {
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index d7b8df3edffc..9f1a845a9302 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -298,6 +298,13 @@ static void vhost_vq_meta_reset(struct vhost_dev *d)
 		__vhost_vq_meta_reset(d->vqs[i]);
 }
 
+static void vhost_vring_call_reset(struct vhost_vring_call *call_ctx)
+{
+	call_ctx->ctx = NULL;
+	memset(&call_ctx->producer, 0x0, sizeof(struct irq_bypass_producer));
+	spin_lock_init(&call_ctx->ctx_lock);
+}
+
 static void vhost_vq_reset(struct vhost_dev *dev,
 			   struct vhost_virtqueue *vq)
 {
@@ -319,13 +326,13 @@ static void vhost_vq_reset(struct vhost_dev *dev,
 	vq->log_base = NULL;
 	vq->error_ctx = NULL;
 	vq->kick = NULL;
-	vq->call_ctx = NULL;
 	vq->log_ctx = NULL;
 	vhost_reset_is_le(vq);
 	vhost_disable_cross_endian(vq);
 	vq->busyloop_timeout = 0;
 	vq->umem = NULL;
 	vq->iotlb = NULL;
+	vhost_vring_call_reset(&vq->call_ctx);
 	__vhost_vq_meta_reset(vq);
 }
 
@@ -685,8 +692,8 @@ void vhost_dev_cleanup(struct vhost_dev *dev)
 			eventfd_ctx_put(dev->vqs[i]->error_ctx);
 		if (dev->vqs[i]->kick)
 			fput(dev->vqs[i]->kick);
-		if (dev->vqs[i]->call_ctx)
-			eventfd_ctx_put(dev->vqs[i]->call_ctx);
+		if (dev->vqs[i]->call_ctx.ctx)
+			eventfd_ctx_put(dev->vqs[i]->call_ctx.ctx);
 		vhost_vq_reset(dev, dev->vqs[i]);
 	}
 	vhost_dev_free_iovecs(dev);
@@ -1629,7 +1636,10 @@ long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *arg
 			r = PTR_ERR(ctx);
 			break;
 		}
-		swap(ctx, vq->call_ctx);
+
+		spin_lock(&vq->call_ctx.ctx_lock);
+		swap(ctx, vq->call_ctx.ctx);
+		spin_unlock(&vq->call_ctx.ctx_lock);
 		break;
 	case VHOST_SET_VRING_ERR:
 		if (copy_from_user(&f, argp, sizeof f)) {
@@ -2440,8 +2450,8 @@ static bool vhost_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
 void vhost_signal(struct vhost_dev *dev, struct vhost_virtqueue *vq)
 {
 	/* Signal the Guest tell them we used something up. */
-	if (vq->call_ctx && vhost_notify(dev, vq))
-		eventfd_signal(vq->call_ctx, 1);
+	if (vq->call_ctx.ctx && vhost_notify(dev, vq))
+		eventfd_signal(vq->call_ctx.ctx, 1);
 }
 EXPORT_SYMBOL_GPL(vhost_signal);
 
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index c8e96a095d3b..38eb1aa3b68d 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -13,6 +13,7 @@
 #include <linux/virtio_ring.h>
 #include <linux/atomic.h>
 #include <linux/vhost_iotlb.h>
+#include <linux/irqbypass.h>
 
 struct vhost_work;
 typedef void (*vhost_work_fn_t)(struct vhost_work *work);
@@ -60,6 +61,12 @@ enum vhost_uaddr_type {
 	VHOST_NUM_ADDRS = 3,
 };
 
+struct vhost_vring_call {
+	struct eventfd_ctx *ctx;
+	struct irq_bypass_producer producer;
+	spinlock_t ctx_lock;
+};
+
 /* The virtqueue structure describes a queue attached to a device. */
 struct vhost_virtqueue {
 	struct vhost_dev *dev;
@@ -72,7 +79,7 @@ struct vhost_virtqueue {
 	vring_used_t __user *used;
 	const struct vhost_iotlb_map *meta_iotlb[VHOST_NUM_ADDRS];
 	struct file *kick;
-	struct eventfd_ctx *call_ctx;
+	struct vhost_vring_call call_ctx;
 	struct eventfd_ctx *error_ctx;
 	struct eventfd_ctx *log_ctx;

-- 
2.18.4
* Re: [PATCH V3 1/6] vhost: introduce vhost_vring_call
  2020-07-22  9:49 ` [PATCH V3 1/6] vhost: introduce vhost_vring_call Zhu Lingshan
@ 2020-07-22  9:59 ` Zhu Lingshan
  0 siblings, 0 replies; 4+ messages in thread
From: Zhu Lingshan @ 2020-07-22  9:59 UTC (permalink / raw)
  To: Zhu Lingshan, jasowang, alex.williamson, mst, pbonzini, sean.j.christopherson, wanpengli
  Cc: virtualization, netdev, kvm, Zhu Lingshan, lszhu

Please ignore this patchset; it has incorrect metadata. I will resend
soon. Thanks!

On 7/22/2020 5:49 PM, Zhu Lingshan wrote:
> From: Zhu Lingshan <lingshan.zhu@intel.com>
>
> This commit introduces struct vhost_vring_call which replaced
> raw struct eventfd_ctx *call_ctx in struct vhost_virtqueue.
> Besides eventfd_ctx, it contains a spin lock and an
> irq_bypass_producer in its structure.
[...]

^ permalink raw reply	[flat|nested] 4+ messages in thread
* [PATCH V3 0/6] IRQ offloading for vDPA
@ 2020-07-22 10:08 Zhu Lingshan
  2020-07-22 10:08 ` [PATCH V3 1/6] vhost: introduce vhost_vring_call Zhu Lingshan
  0 siblings, 1 reply; 4+ messages in thread
From: Zhu Lingshan @ 2020-07-22 10:08 UTC (permalink / raw)
  To: jasowang, alex.williamson, mst, pbonzini, sean.j.christopherson, wanpengli
  Cc: virtualization, netdev, kvm, Zhu Lingshan

This series implements IRQ offloading for vhost_vdpa.

By means of IRQ forwarding facilities such as posted interrupts on x86,
IRQ bypass can deliver interrupts to vCPUs directly. vDPA devices have
dedicated hardware backends, like VFIO passed-through devices, so it is
possible to set up IRQ offloading (IRQ bypass) for vDPA devices and gain
a performance improvement. In my testing, this feature saves 0.1 ms on
average in a ping between two VFs.

Changes from V2:
(1) Renamed struct vhost_call_ctx to vhost_vring_call.
(2) Added kvm_arch_end_assignment() in the del_producer() code path.
(3) Renamed the vDPA helpers to vdpa_devm_request_irq() and
    vdpa_devm_free_irq(), and improved their comments.
(4) Better comments for setup_vq_irq() and unsetup_vq_irq().
(5) In the vDPA VHOST_SET_VRING_CALL path, call vhost_vdpa_update_vq_irq()
    without checking producer.irq; the check moved into
    vhost_vdpa_update_vq_irq() so that it is protected by the spinlock.
(6) Added vhost_vdpa_clean_irq(), which unregisters the producers of the
    vqs in vhost_vdpa_release(). This is safe for the control vq.
(7) Minor improvements.

Changes from V1:
(1) Dropped the VFIO changes.
(3) Removed the KVM_HAVE_IRQ_BYPASS checks.
(4) Locking fixes.
(5) Simplified vhost_vdpa_update_vq_irq().

Zhu Lingshan (6):
  vhost: introduce vhost_vring_call
  kvm: detect assigned device via irqbypass manager
  vDPA: implement vq IRQ allocate/free helpers in vDPA core
  vhost_vdpa: implement IRQ offloading in vhost_vdpa
  ifcvf: replace irq_request/free with vDPA helpers
  irqbypass: do not start cons/prod when failed connect

 arch/x86/kvm/x86.c              | 11 +++++-
 drivers/vdpa/ifcvf/ifcvf_main.c | 14 ++++---
 drivers/vdpa/vdpa.c             | 49 +++++++++++++++++++++++
 drivers/vhost/Kconfig           |  1 +
 drivers/vhost/vdpa.c            | 70 +++++++++++++++++++++++++++++++--
 drivers/vhost/vhost.c           | 22 ++++++++---
 drivers/vhost/vhost.h           |  9 ++++-
 include/linux/vdpa.h            | 13 ++++++
 virt/lib/irqbypass.c            | 16 +++++---
 9 files changed, 182 insertions(+), 23 deletions(-)

-- 
2.18.4
* [PATCH V3 1/6] vhost: introduce vhost_vring_call
  2020-07-22 10:08 [PATCH V3 0/6] IRQ offloading for vDPA Zhu Lingshan
@ 2020-07-22 10:08 ` Zhu Lingshan
  0 siblings, 0 replies; 4+ messages in thread
From: Zhu Lingshan @ 2020-07-22 10:08 UTC (permalink / raw)
  To: jasowang, alex.williamson, mst, pbonzini, sean.j.christopherson, wanpengli
  Cc: virtualization, netdev, kvm, Zhu Lingshan

This commit introduces struct vhost_vring_call, which replaces the raw
struct eventfd_ctx *call_ctx in struct vhost_virtqueue. Besides the
eventfd_ctx, it contains a spinlock and an irq_bypass_producer.

Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
---
 drivers/vhost/vdpa.c  |  4 ++--
 drivers/vhost/vhost.c | 22 ++++++++++++++++------
 drivers/vhost/vhost.h |  9 ++++++++-
 3 files changed, 26 insertions(+), 9 deletions(-)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index a54b60d6623f..df3cf386b0cd 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -96,7 +96,7 @@ static void handle_vq_kick(struct vhost_work *work)
 static irqreturn_t vhost_vdpa_virtqueue_cb(void *private)
 {
 	struct vhost_virtqueue *vq = private;
-	struct eventfd_ctx *call_ctx = vq->call_ctx;
+	struct eventfd_ctx *call_ctx = vq->call_ctx.ctx;
 
 	if (call_ctx)
 		eventfd_signal(call_ctx, 1);
@@ -382,7 +382,7 @@ static long vhost_vdpa_vring_ioctl(struct vhost_vdpa *v, unsigned int cmd,
 		break;
 
 	case VHOST_SET_VRING_CALL:
-		if (vq->call_ctx) {
+		if (vq->call_ctx.ctx) {
 			cb.callback = vhost_vdpa_virtqueue_cb;
 			cb.private = vq;
 		} else {
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index d7b8df3edffc..9f1a845a9302 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -298,6 +298,13 @@ static void vhost_vq_meta_reset(struct vhost_dev *d)
 		__vhost_vq_meta_reset(d->vqs[i]);
 }
 
+static void vhost_vring_call_reset(struct vhost_vring_call *call_ctx)
+{
+	call_ctx->ctx = NULL;
+	memset(&call_ctx->producer, 0x0, sizeof(struct irq_bypass_producer));
+	spin_lock_init(&call_ctx->ctx_lock);
+}
+
 static void vhost_vq_reset(struct vhost_dev *dev,
 			   struct vhost_virtqueue *vq)
 {
@@ -319,13 +326,13 @@ static void vhost_vq_reset(struct vhost_dev *dev,
 	vq->log_base = NULL;
 	vq->error_ctx = NULL;
 	vq->kick = NULL;
-	vq->call_ctx = NULL;
 	vq->log_ctx = NULL;
 	vhost_reset_is_le(vq);
 	vhost_disable_cross_endian(vq);
 	vq->busyloop_timeout = 0;
 	vq->umem = NULL;
 	vq->iotlb = NULL;
+	vhost_vring_call_reset(&vq->call_ctx);
 	__vhost_vq_meta_reset(vq);
 }
 
@@ -685,8 +692,8 @@ void vhost_dev_cleanup(struct vhost_dev *dev)
 			eventfd_ctx_put(dev->vqs[i]->error_ctx);
 		if (dev->vqs[i]->kick)
 			fput(dev->vqs[i]->kick);
-		if (dev->vqs[i]->call_ctx)
-			eventfd_ctx_put(dev->vqs[i]->call_ctx);
+		if (dev->vqs[i]->call_ctx.ctx)
+			eventfd_ctx_put(dev->vqs[i]->call_ctx.ctx);
 		vhost_vq_reset(dev, dev->vqs[i]);
 	}
 	vhost_dev_free_iovecs(dev);
@@ -1629,7 +1636,10 @@ long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *arg
 			r = PTR_ERR(ctx);
 			break;
 		}
-		swap(ctx, vq->call_ctx);
+
+		spin_lock(&vq->call_ctx.ctx_lock);
+		swap(ctx, vq->call_ctx.ctx);
+		spin_unlock(&vq->call_ctx.ctx_lock);
 		break;
 	case VHOST_SET_VRING_ERR:
 		if (copy_from_user(&f, argp, sizeof f)) {
@@ -2440,8 +2450,8 @@ static bool vhost_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
 void vhost_signal(struct vhost_dev *dev, struct vhost_virtqueue *vq)
 {
 	/* Signal the Guest tell them we used something up. */
-	if (vq->call_ctx && vhost_notify(dev, vq))
-		eventfd_signal(vq->call_ctx, 1);
+	if (vq->call_ctx.ctx && vhost_notify(dev, vq))
+		eventfd_signal(vq->call_ctx.ctx, 1);
 }
 EXPORT_SYMBOL_GPL(vhost_signal);
 
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index c8e96a095d3b..38eb1aa3b68d 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -13,6 +13,7 @@
 #include <linux/virtio_ring.h>
 #include <linux/atomic.h>
 #include <linux/vhost_iotlb.h>
+#include <linux/irqbypass.h>
 
 struct vhost_work;
 typedef void (*vhost_work_fn_t)(struct vhost_work *work);
@@ -60,6 +61,12 @@ enum vhost_uaddr_type {
 	VHOST_NUM_ADDRS = 3,
 };
 
+struct vhost_vring_call {
+	struct eventfd_ctx *ctx;
+	struct irq_bypass_producer producer;
+	spinlock_t ctx_lock;
+};
+
 /* The virtqueue structure describes a queue attached to a device. */
 struct vhost_virtqueue {
 	struct vhost_dev *dev;
@@ -72,7 +79,7 @@ struct vhost_virtqueue {
 	vring_used_t __user *used;
 	const struct vhost_iotlb_map *meta_iotlb[VHOST_NUM_ADDRS];
 	struct file *kick;
-	struct eventfd_ctx *call_ctx;
+	struct vhost_vring_call call_ctx;
 	struct eventfd_ctx *error_ctx;
 	struct eventfd_ctx *log_ctx;

-- 
2.18.4
end of thread, other threads:[~2020-07-22 10:08 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-07-22  9:49 [PATCH V3 0/6] IRQ offloading for vDPA Zhu Lingshan
2020-07-22  9:49 ` [PATCH V3 1/6] vhost: introduce vhost_vring_call Zhu Lingshan
2020-07-22  9:59   ` Zhu Lingshan
2020-07-22 10:08 [PATCH V3 0/6] IRQ offloading for vDPA Zhu Lingshan
2020-07-22 10:08 ` [PATCH V3 1/6] vhost: introduce vhost_vring_call Zhu Lingshan