From: "xuxiaoyang (C)" <xuxiaoyang2@huawei.com>
To: Eric Farman <farman@linux.ibm.com>, Cornelia Huck <cohuck@redhat.com>
Cc: <linux-kernel@vger.kernel.org>, <kvm@vger.kernel.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	<kwankhede@nvidia.com>, <wu.wubin@huawei.com>,
	<maoming.maoming@huawei.com>, <xieyingtai@huawei.com>,
	<lizhengui@huawei.com>, <wubinfeng@huawei.com>,
	Zhenyu Wang <zhenyuw@linux.intel.com>,
	Zhi Wang <zhi.a.wang@intel.com>
Subject: Re: [PATCH v2] vfio iommu type1: Improve vfio_iommu_type1_pin_pages performance
Date: Thu, 10 Dec 2020 21:56:00 +0800
Message-ID: <a585357e-6796-7bf4-ef37-185617e2a865@huawei.com>
In-Reply-To: <9e37b8d9-3654-5b89-e3b4-5e6ede736320@linux.ibm.com>



On 2020/12/9 22:42, Eric Farman wrote:
> 
> 
> On 12/9/20 6:54 AM, Cornelia Huck wrote:
>> On Tue, 8 Dec 2020 21:55:53 +0800
>> "xuxiaoyang (C)" <xuxiaoyang2@huawei.com> wrote:
>>
>>> On 2020/11/21 15:58, xuxiaoyang (C) wrote:
>>>> vfio_pin_pages() accepts an array of unrelated iova pfns and processes
>>>> each to return the physical pfn.  When dealing with large arrays of
>>>> contiguous iovas, vfio_iommu_type1_pin_pages is very inefficient because
>>>> it processes the array page by page.  In this case, we can divide the
>>>> iova pfn array into multiple contiguous ranges and optimize each range.
>>>> For example, when the iova pfn array is {1,5,6,7,9}, it is divided into
>>>> three groups, {1}, {5,6,7}, {9}, for processing.  When processing
>>>> {5,6,7}, the number of calls to pin_user_pages_remote is reduced from
>>>> three to one.  For a single page, or a large array of discontiguous
>>>> iovas, we still use vfio_pin_page_external, to limit the performance
>>>> loss caused by the refactoring.
>>>>
>>>> Signed-off-by: Xiaoyang Xu <xuxiaoyang2@huawei.com>
>>
>> (...)
>>
>>>
>>> Hi Cornelia Huck, Eric Farman, Zhenyu Wang, Zhi Wang,
>>>
>>> vfio_pin_pages() accepts an array of unrelated iova pfns and processes
>>> each to return the physical pfn.  When dealing with large arrays of
>>> contiguous iovas, vfio_iommu_type1_pin_pages is very inefficient because
>>> it processes the array page by page.  In this case, we can divide the
>>> iova pfn array into multiple contiguous ranges and optimize each range.
>>> I have a set of performance test data for reference.
>>>
>>> Without the patch:
>>>                     1 page           512 pages
>>> no huge pages:     1638ns           223651ns
>>> THP:               1668ns           222330ns
>>> HugeTLB:           1526ns           208151ns
>>>
>>> With the patch:
>>>                     1 page           512 pages
>>> no huge pages:     1735ns           167286ns
>>> THP:               1934ns           126900ns
>>> HugeTLB:           1713ns           102188ns
>>>
>>> As Alex Williamson noted, this patch still lacks evidence that it
>>> helps real-world workloads.  I would value your opinions on this.
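>>>
>>> As a rough sketch of the grouping step (userspace, with illustrative
>>> names; not the actual kernel code), the iova pfn array is split into
>>> maximal runs of consecutive pfns, and each run becomes one batched
>>> pin call:
>>>
>>> #include <stdio.h>
>>> #include <stddef.h>
>>>
>>> struct pfn_range {
>>> 	unsigned long start;	/* first pfn in the run */
>>> 	size_t npages;		/* run length */
>>> };
>>>
>>> /* split pfns[0..n) into maximal runs of consecutive pfns */
>>> static size_t split_contiguous(const unsigned long *pfns, size_t n,
>>> 			       struct pfn_range *out)
>>> {
>>> 	size_t nranges = 0;
>>>
>>> 	for (size_t i = 0; i < n; ) {
>>> 		size_t j = i + 1;
>>>
>>> 		/* extend the run while the next pfn is consecutive */
>>> 		while (j < n && pfns[j] == pfns[j - 1] + 1)
>>> 			j++;
>>>
>>> 		out[nranges].start = pfns[i];
>>> 		out[nranges].npages = j - i;
>>> 		nranges++;
>>> 		i = j;
>>> 	}
>>> 	return nranges;
>>> }
>>>
>>> int main(void)
>>> {
>>> 	/* the example from the commit message: {1,5,6,7,9} */
>>> 	unsigned long pfns[] = { 1, 5, 6, 7, 9 };
>>> 	struct pfn_range ranges[5];
>>> 	size_t n = split_contiguous(pfns, 5, ranges);
>>>
>>> 	/* prints: [1 x1] [5 x3] [9 x1] -> 3 pin calls */
>>> 	for (size_t i = 0; i < n; i++)
>>> 		printf("[%lu x%zu] ", ranges[i].start, ranges[i].npages);
>>> 	printf("-> %zu pin calls\n", n);
>>> 	return 0;
>>> }
>>>
>>> From the numbers above, the batching cuts the 512-page case by
>>> roughly 25% (no huge pages), 43% (THP) and 51% (HugeTLB), at the
>>> cost of a small single-page regression.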
>>
>> Looking at this from the vfio-ccw angle, I'm not sure how much this
>> would buy us, as we deal with IDAWs, which are designed so that they
>> can be non-contiguous. I guess this depends a lot on what the guest
>> does.
> 
> This would be my concern too, but I don't have data off the top of my head to say one way or another...
> 
>>
>> Eric, any opinion? Do you maybe also happen to have a test setup that
>> mimics workloads actually seen in the real world?
>>
> 
> ...I do have some test setups, which I will try to get some data from in a couple of days. At the moment I've broken most of those setups while implementing some other things, and can't revert them yet. Will get back to this.
> 
> Eric

Thank you for your reply. Looking forward to your test data.
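
If it helps, here is a rough userspace sketch of one way per-call
averages like the ones above could be collected (illustrative only:
the workload function is a hypothetical stand-in, and the quoted
numbers were of course taken against the real pinning paths):

#include <stdio.h>
#include <time.h>

/* hypothetical stand-in for the operation under test */
static void pin_512_pages(void)
{
}

/* time a single call of fn() in nanoseconds */
static long long time_one_call(void (*fn)(void))
{
	struct timespec t0, t1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	fn();
	clock_gettime(CLOCK_MONOTONIC, &t1);
	return (t1.tv_sec - t0.tv_sec) * 1000000000LL +
	       (t1.tv_nsec - t0.tv_nsec);
}

int main(void)
{
	enum { ITERS = 1000 };
	long long total = 0;

	/* average over many iterations to smooth out noise */
	for (int i = 0; i < ITERS; i++)
		total += time_one_call(pin_512_pages);
	printf("avg %lld ns per call\n", total / ITERS);
	return 0;
}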

Regards,
Xu
