KVM Archive on lore.kernel.org
From: Yan Zhao <yan.y.zhao@intel.com>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: "zhenyuw@linux.intel.com" <zhenyuw@linux.intel.com>,
	"intel-gvt-dev@lists.freedesktop.org" 
	<intel-gvt-dev@lists.freedesktop.org>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"pbonzini@redhat.com" <pbonzini@redhat.com>,
	"Tian, Kevin" <kevin.tian@intel.com>,
	"peterx@redhat.com" <peterx@redhat.com>
Subject: Re: [PATCH v2 2/2] drm/i915/gvt: substitute kvm_read/write_guest with vfio_dma_rw
Date: Thu, 16 Jan 2020 00:49:41 -0500
Message-ID: <20200116054941.GB1759@joy-OptiPlex-7040>
In-Reply-To: <20200115130651.29d7e9e0@w520.home>

On Thu, Jan 16, 2020 at 04:06:51AM +0800, Alex Williamson wrote:
> On Tue, 14 Jan 2020 22:54:55 -0500
> Yan Zhao <yan.y.zhao@intel.com> wrote:
> 
> > As a device model, it is better to read/write guest memory through the vfio
> > interface, so that vfio is able to maintain dirty info of device IOVAs.
> > 
> > Compared to kvm interfaces kvm_read/write_guest(), vfio_dma_rw() has ~600
> > cycles more overhead on average.
> > 
> > -------------------------------------
> > |    interface     | avg cpu cycles |
> > |-----------------------------------|
> > | kvm_write_guest  |     1554       |
> > | ----------------------------------|
> > | kvm_read_guest   |     707        |
> > |-----------------------------------|
> > | vfio_dma_rw(w)   |     2274       |
> > |-----------------------------------|
> > | vfio_dma_rw(r)   |     1378       |
> > -------------------------------------
> 
> In v1 you had:
> 
> -------------------------------------
> |    interface     | avg cpu cycles |
> |-----------------------------------|
> | kvm_write_guest  |     1546       |
> | ----------------------------------|
> | kvm_read_guest   |     686        |
> |-----------------------------------|
> | vfio_iova_rw(w)  |     2233       |
> |-----------------------------------|
> | vfio_iova_rw(r)  |     1262       |
> -------------------------------------
> 
> So the kvm numbers remained within +0.5-3% while the vfio numbers are
> now +1.8-9.2%.  I would have expected the algorithm change to at least
> not be worse for small accesses and be better for accesses crossing
> page boundaries.  Do you know what happened?
>
I only tested the four interfaces in GVT's environment, where most
guest memory accesses are smaller than one page.
The differing fluctuations are most likely caused by lock contention:
vfio_dma_rw contends for locks with other vfio accesses, which are
assumed to be frequent in the GVT case.

> > Comparison of benchmark scores is as below:
> > ------------------------------------------------------
> > |  avg score  | kvm_read/write_guest  | vfio_dma_rw  |
> > |----------------------------------------------------|
> > |   Glmark2   |         1284          |    1296      |
> > |----------------------------------------------------|
> > |  Lightsmark |         61.24         |    61.27     |
> > |----------------------------------------------------|
> > |  OpenArena  |         140.9         |    137.4     |
> > |----------------------------------------------------|
> > |   Heaven    |          671          |     670      |
> > ------------------------------------------------------
> > No obvious performance degradation was found.
> > 
> > Cc: Kevin Tian <kevin.tian@intel.com>
> > Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
> > ---
> >  drivers/gpu/drm/i915/gvt/kvmgt.c | 26 +++++++-------------------
> >  1 file changed, 7 insertions(+), 19 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
> > index bd79a9718cc7..17edc9a7ff05 100644
> > --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> > +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> > @@ -1966,31 +1966,19 @@ static int kvmgt_rw_gpa(unsigned long handle, unsigned long gpa,
> >  			void *buf, unsigned long len, bool write)
> >  {
> >  	struct kvmgt_guest_info *info;
> > -	struct kvm *kvm;
> > -	int idx, ret;
> > -	bool kthread = current->mm == NULL;
> > +	int ret;
> > +	struct intel_vgpu *vgpu;
> > +	struct device *dev;
> >  
> >  	if (!handle_valid(handle))
> >  		return -ESRCH;
> >  
> >  	info = (struct kvmgt_guest_info *)handle;
> > -	kvm = info->kvm;
> > -
> > -	if (kthread) {
> > -		if (!mmget_not_zero(kvm->mm))
> > -			return -EFAULT;
> > -		use_mm(kvm->mm);
> > -	}
> > -
> > -	idx = srcu_read_lock(&kvm->srcu);
> > -	ret = write ? kvm_write_guest(kvm, gpa, buf, len) :
> > -		      kvm_read_guest(kvm, gpa, buf, len);
> > -	srcu_read_unlock(&kvm->srcu, idx);
> > +	vgpu = info->vgpu;
> > +	dev = mdev_dev(vgpu->vdev.mdev);
> >  
> > -	if (kthread) {
> > -		unuse_mm(kvm->mm);
> > -		mmput(kvm->mm);
> > -	}
> > +	ret = write ? vfio_dma_rw(dev, gpa, buf, len, true) :
> > +			vfio_dma_rw(dev, gpa, buf, len, false);
> 
> As Paolo suggested previously, this can be simplified:
> 
> ret = vfio_dma_rw(dev, gpa, buf, len, write);
>
> >  
> >  	return ret;
> 
> Or even more simple, remove the ret variable:
> 
> return vfio_dma_rw(dev, gpa, buf, len, write);
> 
Oh, it seems I missed Paolo's mail. Will change it. Thank you!

Thanks
Yan
> 
> >  }
> 

Thread overview: 19+ messages
2020-01-15  3:41 [PATCH v2 0/2] use vfio_dma_rw to read/write IOVAs from CPU side Yan Zhao
2020-01-15  3:53 ` [PATCH v2 1/2] vfio: introduce vfio_dma_rw to read/write a range of IOVAs Yan Zhao
2020-01-15 20:06   ` Alex Williamson
2020-01-16  2:30     ` Mika Penttilä
2020-01-16  2:59       ` Alex Williamson
2020-01-16  3:15         ` Mika Penttilä
2020-01-16  3:58           ` Alex Williamson
2020-01-16  5:32     ` Yan Zhao
2020-01-15  3:54 ` [PATCH v2 2/2] drm/i915/gvt: substitute kvm_read/write_guest with vfio_dma_rw Yan Zhao
2020-01-15 20:06   ` Alex Williamson
2020-01-16  5:49     ` Yan Zhao [this message]
2020-01-16 15:37       ` Alex Williamson
2020-01-19 10:06         ` Yan Zhao
2020-01-20 20:01           ` Alex Williamson
2020-01-21  8:12             ` Yan Zhao
2020-01-21 16:51               ` Alex Williamson
2020-01-21 22:10                 ` Yan Zhao
2020-01-22  3:07                   ` Yan Zhao
2020-01-23 10:02                     ` Yan Zhao
