From: "Xuquan (Quan Xu)" <xuquan8@huawei.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Kevin Tian <kevin.tian@intel.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"George.Dunlap@eu.citrix.com" <George.Dunlap@eu.citrix.com>,
	"ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Fanhenglong <fanhenglong@huawei.com>,
	"anthony.perard@citrix.com" <anthony.perard@citrix.com>
Subject: Re: question: xen/qemu - mmio mapping issues for device pass-through
Date: Tue, 21 Mar 2017 01:53:42 +0000
Message-ID: <E0A769A898ADB6449596C41F51EF62C6AF06AA@SZXEMI506-MBX.china.huawei.com>
In-Reply-To: <58CF94250200007800144C35@prv-mh.provo.novell.com>

On March 20, 2017 3:35 PM, Jan Beulich wrote:
>>>> On 20.03.17 at 02:58, <xuquan8@huawei.com> wrote:
>> On March 16, 2017 11:32 PM, Jan Beulich wrote:
>>>>>> On 16.03.17 at 15:21, <xuquan8@huawei.com> wrote:
>>>> On March 16, 2017 10:06 PM, Jan Beulich wrote:
>>>>>>>> On 16.03.17 at 14:55, <xuquan8@huawei.com> wrote:
>>>>>> I try to pass through a device with a large 8G BAR, such as the
>>>>>> Nvidia M60 (note 1, PCIe info below). It takes about '__15 seconds__'
>>>>>> to update the 8G BAR in QEMU::xen_pt_region_update()..
>>>>>> Specifically, the time is spent in xc_domain_memory_mapping(), called
>>>>>> from xen_pt_region_update().
>>>>>>
>>>>>> Digging into xc_domain_memory_mapping(), I found it mainly calls
>>>>>> "do_domctl(…case XEN_DOMCTL_memory_mapping…)"
>>>>>> to map the MMIO region.. Of course, I found that this mapping
>>>>>> could take a while, per the code comment under 'case
>>>>>> XEN_DOMCTL_memory_mapping'.
>>>>>>
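[The slowness the code comment warns about comes from a restart pattern: the hypercall does a bounded batch of work per pass and uses continuations to resume. A toy model of that pattern (plain Python, not Xen code; the 64-frame batch size is an illustrative assumption, not Xen's actual limit):]

```python
# Toy model of how one logical MMIO mapping becomes many bounded passes:
# each "hypercall" maps at most `batch` 4 KiB frames, then a continuation
# re-enters the operation until the whole BAR is mapped.

def map_mmio(nr_frames, batch=64):
    """Simulate mapping nr_frames 4 KiB frames; return number of passes."""
    passes = 0
    done = 0
    while done < nr_frames:
        done += min(batch, nr_frames - done)  # one bounded pass of work
        passes += 1                           # continuation / re-entry
    return passes

# An 8 GiB BAR is 8 GiB / 4 KiB = 2,097,152 frames.
print(map_mmio((8 * 1024**3) // 4096))  # 32768 passes
```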
>>>>>> My questions:
>>>>>> 1. Could we make this MMIO region mapping quicker?
>>>>>
>>>>
>>>> Thanks for your quick reply.
>>>>
>>>>>Yes, e.g. by using large (2M or 1G) pages. This has been on my todo
>>>>>list for quite a while...
>>>>>
>>>>>> 2. if could not, does it limit by hardware performance?
>>>>>
>>>>>I'm afraid I don't understand the question. If you mean "Is it
>>>>>limited by hw performance", then no, see above. If you mean "Does it
>>>>>limit hw performance", then again no, I don't think so (other than
>>>>>the effect of having more IOMMU translation levels than really
>>>>>necessary for such a large region).
>>>>>
>>>>
>>>> Sorry, my question is "Is it limited by hw performance"...
>>>>
>>>> I am much confused. Why does this MMIO mapping take a while?
>>>> I guessed it takes a lot of time to set up the p2m / IOMMU entries.
>>>> That's why I asked "Is it limited by hw performance".
>>>
>>>Well, just count the number of page table entries and that of the
>>>resulting hypercall continuations. It's the sheer amount of work
>>>that's causing the slowness, together with the need for us to use
>>>continuations to be on the safe side. There may well be redundant TLB
>>>invalidations as well. Since we can do better (by using large
>>>pages) I wouldn't call this "limited by hw performance", but of course
>>>one may.
>>>
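[The entry counts behind "the sheer amount of work" can be worked out directly; a rough calculation for an 8 GiB BAR (pure arithmetic, ignoring the extra intermediate page-table levels and IOMMU flushes):]

```python
# Leaf p2m/IOMMU entries needed to map an 8 GiB BAR at each page size;
# the 4 KiB case is what the current 4 KiB-granular code path produces.
GIB = 1024 ** 3
bar = 8 * GIB
for name, page in [("4K", 4096), ("2M", 2 * 1024 ** 2), ("1G", GIB)]:
    print(name, bar // page)
# 4K: 2097152 entries, 2M: 4096, 1G: 8
```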
>>
>> I agree.
>> As far as I know, Xen & QEMU upstream don't support passing through
>> large BAR (PCIe BAR > 4G) devices, such as the Nvidia M60. However,
>> cloud providers may want to leverage this feature for machine learning, etc.
>> Is it on your TODO list?
>
>Is what on my todo list?

Support for passing through large BAR (PCIe BAR > 4G) devices..

> I was assuming large BAR handling to work so far
>(Konrad had done some adjustments there quite a while ago, from all I recall).
>


_IIRC_ what Konrad mentioned was using qemu-trad..


Quan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Thread overview: 10+ messages
2017-03-16 13:55 question: xen/qemu - mmio mapping issues for device pass-through Xuquan (Quan Xu)
2017-03-16 14:05 ` Jan Beulich
2017-03-16 14:21   ` Xuquan (Quan Xu)
2017-03-16 15:31     ` Jan Beulich
2017-03-20  1:58       ` Xuquan (Quan Xu)
2017-03-20  7:34         ` Jan Beulich
2017-03-21  1:53           ` Xuquan (Quan Xu) [this message]
2017-03-21  7:38             ` Jan Beulich
2017-03-21 13:18             ` Konrad Rzeszutek Wilk
2017-03-21 13:22               ` Venu Busireddy
