From: Cornelia Huck <cohuck@redhat.com>
To: "xuxiaoyang (C)" <xuxiaoyang2@huawei.com>,
	Eric Farman <farman@linux.ibm.com>
Cc: <linux-kernel@vger.kernel.org>, <kvm@vger.kernel.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	<kwankhede@nvidia.com>, <wu.wubin@huawei.com>,
	<maoming.maoming@huawei.com>, <xieyingtai@huawei.com>,
	<lizhengui@huawei.com>, <wubinfeng@huawei.com>,
	Zhenyu Wang <zhenyuw@linux.intel.com>,
	Zhi Wang <zhi.a.wang@intel.com>
Subject: Re: [PATCH v2] vfio iommu type1: Improve vfio_iommu_type1_pin_pages performance
Date: Wed, 9 Dec 2020 12:54:50 +0100
Message-ID: <20201209125450.3f5834ab.cohuck@redhat.com>
In-Reply-To: <4d58b74d-72bb-6473-9523-aeaa392a470e@huawei.com>

On Tue, 8 Dec 2020 21:55:53 +0800
"xuxiaoyang (C)" <xuxiaoyang2@huawei.com> wrote:

> On 2020/11/21 15:58, xuxiaoyang (C) wrote:
> > vfio_pin_pages() accepts an array of unrelated iova pfns and processes
> > each to return the physical pfn.  When dealing with large arrays of
> > contiguous iovas, vfio_iommu_type1_pin_pages is very inefficient because
> > it processes the array page by page.  In this case, we can divide the
> > iova pfn array into multiple contiguous ranges and optimize each range
> > as a whole.  For example, when the iova pfn array is {1,5,6,7,9}, it is
> > divided into three groups, {1}, {5,6,7} and {9}, for processing.  When
> > processing {5,6,7}, the number of calls to pin_user_pages_remote is
> > reduced from three to one.  For a single page, or for a large array of
> > discontiguous iovas, we still use vfio_pin_page_external, to limit the
> > performance loss caused by the refactoring.
> > 
> > Signed-off-by: Xiaoyang Xu <xuxiaoyang2@huawei.com>

(...)

> 
> hi Cornelia Huck, Eric Farman, Zhenyu Wang, Zhi Wang
> 
> vfio_pin_pages() accepts an array of unrelated iova pfns and processes
> each to return the physical pfn.  When dealing with large arrays of
> contiguous iovas, vfio_iommu_type1_pin_pages is very inefficient because
> it processes the array page by page.  In this case, we can divide the
> iova pfn array into multiple contiguous ranges and optimize each range
> as a whole.  I have a set of performance test data for reference.
> 
> Without the patch applied:
>                    1 page           512 pages
> no huge pages:     1638ns           223651ns
> THP:               1668ns           222330ns
> HugeTLB:           1526ns           208151ns
> 
> With the patch applied:
>                    1 page           512 pages
> no huge pages:     1735ns           167286ns
> THP:               1934ns           126900ns
> HugeTLB:           1713ns           102188ns
> 
> As Alex Williamson said, this patch lacks proof that it helps in the
> real world.  I would appreciate your opinions on it.

Looking at this from the vfio-ccw angle, I'm not sure how much this
would buy us, as we deal with IDAWs, which are designed so that they
can be non-contiguous. I guess this depends a lot on what the guest
does.

Eric, any opinion? Do you maybe also happen to have a test setup that
mimics workloads actually seen in the real world?
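
To make the splitting step concrete, here is a minimal, standalone
user-space sketch in C (an illustration of the idea only, not the
patch's kernel code; split_into_ranges() is a hypothetical helper).
It walks a pfn array and emits the maximal contiguous runs,
reproducing the {1}, {5,6,7}, {9} example from the patch description:

#include <stdio.h>
#include <stddef.h>

/*
 * Walk the iova pfn array and report each maximal run of consecutive
 * pfns.  In the patch, each such run could be pinned with a single
 * pin_user_pages_remote() call instead of one call per page.
 */
static void split_into_ranges(const unsigned long *pfns, size_t npage)
{
	size_t start = 0;

	for (size_t i = 1; i <= npage; i++) {
		/* Close the current run at a gap or at the end of the array. */
		if (i == npage || pfns[i] != pfns[i - 1] + 1) {
			printf("range: pfn %lu..%lu (%zu pages)\n",
			       pfns[start], pfns[i - 1], i - start);
			start = i;
		}
	}
}

int main(void)
{
	/* The example array from the patch description: {1,5,6,7,9}. */
	unsigned long pfns[] = { 1, 5, 6, 7, 9 };

	split_into_ranges(pfns, sizeof(pfns) / sizeof(pfns[0]));
	return 0;
}

This prints one line per range ({1}, {5,6,7}, {9}); the win comes from
the middle run needing one pin call for three pages.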


Thread overview: 8 messages
2020-11-21  7:58 [PATCH v2] vfio iommu type1: Improve vfio_iommu_type1_pin_pages performance xuxiaoyang (C)
2020-12-08 13:55 ` xuxiaoyang (C)
2020-12-09 11:54   ` Cornelia Huck [this message]
2020-12-09 14:42     ` Eric Farman
2020-12-10 13:56       ` xuxiaoyang (C)
2020-12-14 18:58         ` Eric Farman
2020-12-15 13:13           ` xuxiaoyang (C)
2020-12-10 13:54     ` xuxiaoyang (C)
