virtualization.lists.linux-foundation.org archive mirror
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: wanpengli@tencent.com,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	sean.j.christopherson@intel.com,
	"virtualization@lists.linux-foundation.org"
	<virtualization@lists.linux-foundation.org>,
	pbonzini@redhat.com, "Zhu, Lingshan" <lingshan.zhu@intel.com>
Subject: Re: [PATCH 0/7] *** IRQ offloading for vDPA ***
Date: Thu, 16 Jul 2020 02:15:24 -0400	[thread overview]
Message-ID: <20200716021435-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <67c4c41d-9e95-2270-4acb-6f04668c34fa@redhat.com>

On Thu, Jul 16, 2020 at 12:20:09PM +0800, Jason Wang wrote:
> 
> On 2020/7/16 下午12:13, Zhu, Lingshan wrote:
> > 
> > 
> > On 7/16/2020 12:02 PM, Jason Wang wrote:
> > > 
> > > On 2020/7/16 上午11:59, Zhu, Lingshan wrote:
> > > > 
> > > > On 7/16/2020 10:59 AM, Jason Wang wrote:
> > > > > 
> > > > > On 2020/7/16 上午9:39, Zhu, Lingshan wrote:
> > > > > > 
> > > > > > 
> > > > > > On 7/15/2020 9:43 PM, Jason Wang wrote:
> > > > > > > 
> > > > > > > On 2020/7/12 下午10:52, Zhu Lingshan wrote:
> > > > > > > > Hi All,
> > > > > > > > 
> > > > > > > > This series intends to implement IRQ offloading for
> > > > > > > > vhost_vdpa.
> > > > > > > > 
> > > > > > > > With the help of IRQ forwarding facilities such as
> > > > > > > > posted interrupts on x86, irq bypass can deliver
> > > > > > > > interrupts to the vCPU directly.
> > > > > > > > 
> > > > > > > > vDPA devices have dedicated hardware backends, much
> > > > > > > > like VFIO passed-through devices, so it is possible
> > > > > > > > to set up IRQ offloading (irq bypass) for vDPA
> > > > > > > > devices and gain a performance improvement.
> > > > > > > > 
> > > > > > > > In my testing, with this feature, we can save 0.1ms
> > > > > > > > in a ping between two VFs on average.
> > > > > > > 
> > > > > > > 
> > > > > > > Hi Lingshan:
> > > > > > > 
> > > > > > > During the virtio-networking meeting, Michael spotted
> > > > > > > two possible issues:
> > > > > > > 
> > > > > > > 1) do we need a new uAPI to stop the irq offloading?
> > > > > > > 2) can an interrupt be lost during the eventfd ctx
> > > > > > > switch?
> > > > > > > 
> > > > > > > For 1) I think probably not: we can allocate an
> > > > > > > independent eventfd which does not map to MSI-X. Then
> > > > > > > the consumer can't match the producer and we fall
> > > > > > > back to eventfd-based irq delivery.
> > > > > > Hi Jason,
> > > > > > 
> > > > > > I wonder why we need to stop irq offloading, but if we
> > > > > > do, a new uAPI would seem more intuitive to me. Also,
> > > > > > who (the user? QEMU?) should initiate this process, and
> > > > > > on what basis would the decision be made?
> > > > > 
> > > > > 
> > > > > The reason is that we may want to fall back to the
> > > > > software datapath (e.g. software-assisted live
> > > > > migration). In that case we need to intercept device
> > > > > writes to the used ring, so we cannot offload the
> > > > > virtqueue interrupt.
> > > > So add a VHOST_VDPA_STOP_IRQ_OFFLOADING? Do we then need a
> > > > VHOST_VDPA_START_IRQ_OFFLOADING as well, letting userspace
> > > > fully control this? Or is there a better approach?
> > > 
> > > 
> > > Probably not; it's as simple as allocating another eventfd (but
> > > not an irqfd) and passing it to vhost-vdpa. The offloading is
> > > then disabled since it has no consumer.
> > OK, sounds like QEMU's job, so nothing to take care of in this series, right?
> 
> 
> That's my understanding.
> 
> Thanks

Let's confirm the switch happens atomically though, so that each
interrupt is sent either to the eventfd or to the guest directly.

> 
> > 
> > Thanks,
> > BR
> > Zhu Lingshan
> > > 
> > > Thanks
> > > 
> > > 


Thread overview: 7+ messages
2020-07-12 14:52 [PATCH 0/7] *** IRQ offloading for vDPA *** Zhu Lingshan
2020-07-15 13:43 ` Jason Wang
     [not found]   ` <97032c51-3265-c94a-9ce1-f42fcc6d3075@intel.com>
2020-07-16  2:59     ` Jason Wang
     [not found]       ` <29ab6da8-ed8e-6b91-d658-f3d240543b29@intel.com>
     [not found]         ` <1e91d9dd-d787-beff-2c14-9c76ffc3b285@redhat.com>
     [not found]           ` <a319cba3-8b3d-8968-0fb7-48a1d34042bf@intel.com>
2020-07-16  4:20             ` Jason Wang
2020-07-16  6:15               ` Michael S. Tsirkin [this message]
2020-07-16  8:06                 ` Jason Wang
2020-07-16  6:13     ` Michael S. Tsirkin
