From: Alex Williamson <alex.williamson@redhat.com>
To: Yan Zhao <yan.y.zhao@intel.com>
Cc: zhenyuw@linux.intel.com, intel-gvt-dev@lists.freedesktop.org,
kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
pbonzini@redhat.com, kevin.tian@intel.com, peterx@redhat.com
Subject: Re: [PATCH v2 2/2] drm/i915/gvt: substitute kvm_read/write_guest with vfio_dma_rw
Date: Wed, 15 Jan 2020 13:06:51 -0700 [thread overview]
Message-ID: <20200115130651.29d7e9e0@w520.home> (raw)
In-Reply-To: <20200115035455.12417-1-yan.y.zhao@intel.com>
On Tue, 14 Jan 2020 22:54:55 -0500
Yan Zhao <yan.y.zhao@intel.com> wrote:
> As a device model, it is better to read/write guest memory through the
> vfio interface, so that vfio can maintain dirty info for device IOVAs.
>
> Compared to kvm interfaces kvm_read/write_guest(), vfio_dma_rw() has ~600
> cycles more overhead on average.
>
> -------------------------------------
> | interface | avg cpu cycles |
> |-----------------------------------|
> | kvm_write_guest | 1554 |
> | ----------------------------------|
> | kvm_read_guest | 707 |
> |-----------------------------------|
> | vfio_dma_rw(w) | 2274 |
> |-----------------------------------|
> | vfio_dma_rw(r) | 1378 |
> -------------------------------------
In v1 you had:
-------------------------------------
| interface | avg cpu cycles |
|-----------------------------------|
| kvm_write_guest | 1546 |
| ----------------------------------|
| kvm_read_guest | 686 |
|-----------------------------------|
| vfio_iova_rw(w) | 2233 |
|-----------------------------------|
| vfio_iova_rw(r) | 1262 |
-------------------------------------
So the kvm numbers remained within +0.5-3% while the vfio numbers are
now +1.8-9.2%. I would have expected the algorithm change to at least
not be worse for small accesses and be better for accesses crossing
page boundaries. Do you know what happened?
> Comparison of benchmark scores is as below:
> ------------------------------------------------------
> | avg score | kvm_read/write_guest | vfio_dma_rw |
> |----------------------------------------------------|
> | Glmark2 | 1284 | 1296 |
> |----------------------------------------------------|
> | Lightsmark | 61.24 | 61.27 |
> |----------------------------------------------------|
> | OpenArena | 140.9 | 137.4 |
> |----------------------------------------------------|
> | Heaven | 671 | 670 |
> ------------------------------------------------------
> No obvious performance degradation found.
>
> Cc: Kevin Tian <kevin.tian@intel.com>
> Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
> ---
> drivers/gpu/drm/i915/gvt/kvmgt.c | 26 +++++++-------------------
> 1 file changed, 7 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
> index bd79a9718cc7..17edc9a7ff05 100644
> --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> @@ -1966,31 +1966,19 @@ static int kvmgt_rw_gpa(unsigned long handle, unsigned long gpa,
> void *buf, unsigned long len, bool write)
> {
> struct kvmgt_guest_info *info;
> - struct kvm *kvm;
> - int idx, ret;
> - bool kthread = current->mm == NULL;
> + int ret;
> + struct intel_vgpu *vgpu;
> + struct device *dev;
>
> if (!handle_valid(handle))
> return -ESRCH;
>
> info = (struct kvmgt_guest_info *)handle;
> - kvm = info->kvm;
> -
> - if (kthread) {
> - if (!mmget_not_zero(kvm->mm))
> - return -EFAULT;
> - use_mm(kvm->mm);
> - }
> -
> - idx = srcu_read_lock(&kvm->srcu);
> - ret = write ? kvm_write_guest(kvm, gpa, buf, len) :
> - kvm_read_guest(kvm, gpa, buf, len);
> - srcu_read_unlock(&kvm->srcu, idx);
> + vgpu = info->vgpu;
> + dev = mdev_dev(vgpu->vdev.mdev);
>
> - if (kthread) {
> - unuse_mm(kvm->mm);
> - mmput(kvm->mm);
> - }
> + ret = write ? vfio_dma_rw(dev, gpa, buf, len, true) :
> + vfio_dma_rw(dev, gpa, buf, len, false);
As Paolo suggested previously, this can be simplified:
ret = vfio_dma_rw(dev, gpa, buf, len, write);
>
> return ret;
Or even more simple, remove the ret variable:
return vfio_dma_rw(dev, gpa, buf, len, write);
Thanks,
Alex
> }
Thread overview: 19+ messages
2020-01-15 3:41 [PATCH v2 0/2] use vfio_dma_rw to read/write IOVAs from CPU side Yan Zhao
2020-01-15 3:53 ` [PATCH v2 1/2] vfio: introduce vfio_dma_rw to read/write a range of IOVAs Yan Zhao
2020-01-15 20:06 ` Alex Williamson
2020-01-16 2:30 ` Mika Penttilä
2020-01-16 2:59 ` Alex Williamson
2020-01-16 3:15 ` Mika Penttilä
2020-01-16 3:58 ` Alex Williamson
2020-01-16 5:32 ` Yan Zhao
2020-01-15  3:54 ` [PATCH v2 2/2] drm/i915/gvt: substitute kvm_read/write_guest with vfio_dma_rw Yan Zhao
2020-01-15 20:06 ` Alex Williamson [this message]
2020-01-16 5:49 ` Yan Zhao
2020-01-16 15:37 ` Alex Williamson
2020-01-19 10:06 ` Yan Zhao
2020-01-20 20:01 ` Alex Williamson
2020-01-21 8:12 ` Yan Zhao
2020-01-21 16:51 ` Alex Williamson
2020-01-21 22:10 ` Yan Zhao
2020-01-22 3:07 ` Yan Zhao
2020-01-23 10:02 ` Yan Zhao