* question: xen/qemu - mmio mapping issues for device pass-through
@ 2017-03-16 13:55 Xuquan (Quan Xu)
  2017-03-16 14:05 ` Jan Beulich
  0 siblings, 1 reply; 10+ messages in thread
From: Xuquan (Quan Xu) @ 2017-03-16 13:55 UTC (permalink / raw)
  To: Stefano Stabellini, anthony.perard, ian.jackson, Jan Beulich,
	Kevin Tian, George.Dunlap
  Cc: Fanhenglong, xen-devel

Hello,

I am trying to pass through a device with a large 8G BAR, such as an NVIDIA M60 (note 1; PCIe info below). It takes about '__15 seconds__' to update the 8G BAR in QEMU::xen_pt_region_update().
Specifically, the time is spent in xc_domain_memory_mapping(), called from xen_pt_region_update().

Digging into xc_domain_memory_mapping(), I found that it mainly calls "do_domctl (…case XEN_DOMCTL_memory_mapping…)"
to map the MMIO region. Indeed, the code comment below 'case XEN_DOMCTL_memory_mapping' already notes that this mapping can take a while.
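
For reference, the call boils down to roughly the following (a minimal sketch I put together; the wrapper function and the 8G constants are mine, while xc_domain_memory_mapping() and DPCI_ADD_MAPPING are the libxc interface as I read it):

#include <xenctrl.h>

/* Sketch: map all machine frames of an 8G BAR into the guest physmap.
 * The libxc wrapper issues XEN_DOMCTL_memory_mapping under the hood.
 * An 8G BAR spans (8 << 30) >> 12 = 2,097,152 4K frames. */
static int map_8g_bar(xc_interface *xch, uint32_t domid,
                      unsigned long first_gfn, unsigned long first_mfn)
{
    unsigned long nr_mfns = (8UL << 30) >> XC_PAGE_SHIFT;

    /* DPCI_ADD_MAPPING adds the mapping; DPCI_REMOVE_MAPPING removes it. */
    return xc_domain_memory_mapping(xch, domid, first_gfn, first_mfn,
                                    nr_mfns, DPCI_ADD_MAPPING);
}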

My questions:
1. Could we make this MMIO region mapping quicker?
2. If not, does it limit by hardware performance?



-------------
Note 1:
lspci -v -s 86:00.0  (nvidia M60)
86:00.0 3D controller: NVIDIA Corporation Device 13f2 (rev a1)
        Subsystem: NVIDIA Corporation Device 115e
        Flags: bus master, fast devsel, latency 0, IRQ 7
        Memory at c8000000 (32-bit, non-prefetchable) [size=16M]
        Memory at 3f800000000 (64-bit, prefetchable) [size=8G]
        Memory at 3fa00000000 (64-bit, prefetchable) [size=32M]
        Capabilities: [60] Power Management version 3
        Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [78] Express Endpoint, MSI 00
        Capabilities: [100] Virtual Channel
        Capabilities: [250] Latency Tolerance Reporting
        Capabilities: [258] #1e
        Capabilities: [128] Power Budgeting <?>
        Capabilities: [420] Advanced Error Reporting
        Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
        Capabilities: [900] #19
        Kernel modules: nvidiafb


Thanks!!
-Quan

* Re: question: xen/qemu - mmio mapping issues for device pass-through
  2017-03-16 13:55 question: xen/qemu - mmio mapping issues for device pass-through Xuquan (Quan Xu)
@ 2017-03-16 14:05 ` Jan Beulich
  2017-03-16 14:21   ` Xuquan (Quan Xu)
  0 siblings, 1 reply; 10+ messages in thread
From: Jan Beulich @ 2017-03-16 14:05 UTC (permalink / raw)
  To: Xuquan (Quan Xu)
  Cc: Kevin Tian, Stefano Stabellini, George.Dunlap, ian.jackson,
	xen-devel, Fanhenglong, anthony.perard

>>> On 16.03.17 at 14:55, <xuquan8@huawei.com> wrote:
> I am trying to pass through a device with a large 8G BAR, such as an NVIDIA 
> M60 (note 1; PCIe info below). It takes about '__15 seconds__' to update the 
> 8G BAR in QEMU::xen_pt_region_update().
> Specifically, the time is spent in xc_domain_memory_mapping(), called from xen_pt_region_update().
> 
> Digging into xc_domain_memory_mapping(), I found that it mainly calls "do_domctl 
> (…case XEN_DOMCTL_memory_mapping…)" 
> to map the MMIO region. Indeed, the code comment below 
> 'case XEN_DOMCTL_memory_mapping' already notes that this mapping can take a while.
> 
> My questions:
> 1. Could we make this MMIO region mapping quicker?

Yes, e.g. by using large (2M or 1G) pages. This has been on my todo
list for quite a while...
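
To put rough numbers on it (simple arithmetic, assuming the whole 8G
region could be covered by the respective page size):

  8G / 4K = 2,097,152 mappings
  8G / 2M =     4,096 mappings
  8G / 1G =         8 mappings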

> 2. If not, does it limit by hardware performance?

I'm afraid I don't understand the question. If you mean "Is it
limited by hw performance", then no, see above. If you mean
"Does it limit hw performance", then again no, I don't think so
(other than the effect of having more IOMMU translation levels
than really necessary for such a large region).

Jan


* Re: question: xen/qemu - mmio mapping issues for device pass-through
  2017-03-16 14:05 ` Jan Beulich
@ 2017-03-16 14:21   ` Xuquan (Quan Xu)
  2017-03-16 15:31     ` Jan Beulich
  0 siblings, 1 reply; 10+ messages in thread
From: Xuquan (Quan Xu) @ 2017-03-16 14:21 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Kevin Tian, Stefano Stabellini, George.Dunlap, ian.jackson,
	xen-devel, Fanhenglong, anthony.perard

On March 16, 2017 10:06 PM, Jan Beulich wrote:
>>>> On 16.03.17 at 14:55, <xuquan8@huawei.com> wrote:
>> I am trying to pass through a device with a large 8G BAR, such as an
>> NVIDIA M60 (note 1; PCIe info below). It takes about '__15 seconds__' to
>> update the 8G BAR in QEMU::xen_pt_region_update().
>> Specifically, the time is spent in xc_domain_memory_mapping(), called from xen_pt_region_update().
>>
>> Digging into xc_domain_memory_mapping(), I found that it mainly calls
>> "do_domctl
>> (…case XEN_DOMCTL_memory_mapping…)"
>> to map the MMIO region. Indeed, the code comment below
>> 'case XEN_DOMCTL_memory_mapping' already notes that this mapping can take
>> a while.
>>
>> My questions:
>> 1. Could we make this MMIO region mapping quicker?
>

Thanks for your quick reply.

>Yes, e.g. by using large (2M or 1G) pages. This has been on my todo list for
>quite a while...
>
>> 2. If not, does it limit by hardware performance?
>
>I'm afraid I don't understand the question. If you mean "Is it limited by hw
>performance", then no, see above. If you mean "Does it limit hw performance",
>then again no, I don't think so (other than the effect of having more IOMMU
>translation levels than really necessary for such a large region).
>

Sorry, my question is "Is it limited by hw performance"...

I am rather confused: why does this MMIO mapping take so long?
My guess was that it takes a lot of time to set up the p2m / IOMMU entries. That's why I asked "Is it limited by hw performance".

Quan

* Re: question: xen/qemu - mmio mapping issues for device pass-through
  2017-03-16 14:21   ` Xuquan (Quan Xu)
@ 2017-03-16 15:31     ` Jan Beulich
  2017-03-20  1:58       ` Xuquan (Quan Xu)
  0 siblings, 1 reply; 10+ messages in thread
From: Jan Beulich @ 2017-03-16 15:31 UTC (permalink / raw)
  To: Xuquan (Quan Xu)
  Cc: Kevin Tian, Stefano Stabellini, George.Dunlap, ian.jackson,
	xen-devel, Fanhenglong, anthony.perard

>>> On 16.03.17 at 15:21, <xuquan8@huawei.com> wrote:
> On March 16, 2017 10:06 PM, Jan Beulich wrote:
>>>>> On 16.03.17 at 14:55, <xuquan8@huawei.com> wrote:
>>> I am trying to pass through a device with a large 8G BAR, such as an
>>> NVIDIA M60 (note 1; PCIe info below). It takes about '__15 seconds__' to
>>> update the 8G BAR in QEMU::xen_pt_region_update().
>>> Specifically, the time is spent in xc_domain_memory_mapping(), called from xen_pt_region_update().
>>>
>>> Digging into xc_domain_memory_mapping(), I found that it mainly calls
>>> "do_domctl
>>> (…case XEN_DOMCTL_memory_mapping…)"
>>> to map the MMIO region. Indeed, the code comment below
>>> 'case XEN_DOMCTL_memory_mapping' already notes that this mapping can
>>> take a while.
>>>
>>> My questions:
>>> 1. Could we make this MMIO region mapping quicker?
>>
> 
> Thanks for your quick reply.
> 
>>Yes, e.g. by using large (2M or 1G) pages. This has been on my todo list for
>>quite a while...
>>
>>> 2. If not, does it limit by hardware performance?
>>
>>I'm afraid I don't understand the question. If you mean "Is it limited by hw
>>performance", then no, see above. If you mean "Does it limit hw performance",
>>then again no, I don't think so (other than the effect of having more IOMMU
>>translation levels than really necessary for such a large region).
>>
> 
> Sorry, my question is "Is it limited by hw performance"...
> 
> I am rather confused: why does this MMIO mapping take so long?
> My guess was that it takes a lot of time to set up the p2m / IOMMU entries. 
> That's why I asked "Is it limited by hw performance".

Well, just count the number of page table entries and that of the
resulting hypercall continuations. It's the sheer amount of work
that's causing the slowness, together with the need for us to use
continuations to be on the safe side. There may well be redundant
TLB invalidations as well. Since we can do better (by using large
pages) I wouldn't call this "limited by hw performance", but of
course one may.
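
In outline, the handler does something like this (a simplified sketch, not
the literal Xen code; the batch size and the per-page helper are
illustrative only):

/* Simplified sketch: one p2m update plus one IOMMU mapping per 4K frame,
 * with periodic preemption checks that turn the domctl into a hypercall
 * continuation so it can be restarted where it left off. */
static int map_mmio_range(struct domain *d, unsigned long gfn,
                          unsigned long mfn, unsigned long nr_mfns)
{
    unsigned long i;
    int ret = 0;

    for ( i = 0; i < nr_mfns; i++ )
    {
        ret = map_one_mmio_page(d, gfn + i, mfn + i); /* hypothetical helper */
        if ( ret )
            break;

        if ( !(i & 0x3f) && hypercall_preempt_check() )
        {
            ret = -ERESTART; /* caller re-issues, resuming from here */
            break;
        }
    }
    return ret;
}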

Jan


* Re: question: xen/qemu - mmio mapping issues for device pass-through
  2017-03-16 15:31     ` Jan Beulich
@ 2017-03-20  1:58       ` Xuquan (Quan Xu)
  2017-03-20  7:34         ` Jan Beulich
  0 siblings, 1 reply; 10+ messages in thread
From: Xuquan (Quan Xu) @ 2017-03-20  1:58 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Kevin Tian, Stefano Stabellini, George.Dunlap, ian.jackson,
	xen-devel, Fanhenglong, anthony.perard

On March 16, 2017 11:32 PM, Jan Beulich wrote:
>>>> On 16.03.17 at 15:21, <xuquan8@huawei.com> wrote:
>> On March 16, 2017 10:06 PM, Jan Beulich wrote:
>>>>>> On 16.03.17 at 14:55, <xuquan8@huawei.com> wrote:
>>>> I am trying to pass through a device with a large 8G BAR, such as an
>>>> NVIDIA M60 (note 1; PCIe info below). It takes about '__15 seconds__' to
>>>> update the 8G BAR in QEMU::xen_pt_region_update().
>>>> Specifically, the time is spent in xc_domain_memory_mapping(), called
>>>> from xen_pt_region_update().
>>>>
>>>> Digging into xc_domain_memory_mapping(), I found that it mainly calls
>>>> "do_domctl
>>>> (…case XEN_DOMCTL_memory_mapping…)"
>>>> to map the MMIO region. Indeed, the code comment below
>>>> 'case XEN_DOMCTL_memory_mapping' already notes that this mapping
>>>> can take a while.
>>>>
>>>> My questions:
>>>> 1. Could we make this MMIO region mapping quicker?
>>>
>>
>> Thanks for your quick reply.
>>
>>>Yes, e.g. by using large (2M or 1G) pages. This has been on my todo
>>>list for quite a while...
>>>
>>>> 2. If not, does it limit by hardware performance?
>>>
>>>I'm afraid I don't understand the question. If you mean "Is it limited
>>>by hw performance", then no, see above. If you mean "Does it limit hw
>>>performance", then again no, I don't think so (other than the effect
>>>of having more IOMMU translation levels than really necessary for such
>>>a large region).
>>>
>>
>> Sorry, my question is "Is it limited by hw performance"...
>>
>> I am rather confused: why does this MMIO mapping take so long?
>> My guess was that it takes a lot of time to set up the p2m / IOMMU entries.
>> That's why I asked "Is it limited by hw performance".
>
>Well, just count the number of page table entries and that of the resulting
>hypercall continuations. It's the sheer amount of work that's causing the
>slowness, together with the need for us to use continuations to be on the safe
>side. There may well be redundant TLB invalidations as well. Since we can do
>better (by using large
>pages) I wouldn't call this "limited by hw performance", but of course one
>may.
>

I agree.
As far as I know, upstream Xen & QEMU do not support passing through a device with a large BAR (PCIe BAR > 4G), such as the NVIDIA M60.
However, cloud providers may want to leverage this feature for machine learning, etc.
Is it on your TODO list?

Quan

* Re: question: xen/qemu - mmio mapping issues for device pass-through
  2017-03-20  1:58       ` Xuquan (Quan Xu)
@ 2017-03-20  7:34         ` Jan Beulich
  2017-03-21  1:53           ` Xuquan (Quan Xu)
  0 siblings, 1 reply; 10+ messages in thread
From: Jan Beulich @ 2017-03-20  7:34 UTC (permalink / raw)
  To: Xuquan (Quan Xu)
  Cc: Kevin Tian, Stefano Stabellini, George.Dunlap, ian.jackson,
	xen-devel, Fanhenglong, anthony.perard

>>> On 20.03.17 at 02:58, <xuquan8@huawei.com> wrote:
> On March 16, 2017 11:32 PM, Jan Beulich wrote:
>>>>> On 16.03.17 at 15:21, <xuquan8@huawei.com> wrote:
>>> On March 16, 2017 10:06 PM, Jan Beulich wrote:
>>>>>>> On 16.03.17 at 14:55, <xuquan8@huawei.com> wrote:
>>>>> I am trying to pass through a device with a large 8G BAR, such as an
>>>>> NVIDIA M60 (note 1; PCIe info below). It takes about '__15 seconds__' to
>>>>> update the 8G BAR in QEMU::xen_pt_region_update().
>>>>> Specifically, the time is spent in xc_domain_memory_mapping(), called
>>>>> from xen_pt_region_update().
>>>>>
>>>>> Digging into xc_domain_memory_mapping(), I found that it mainly calls
>>>>> "do_domctl
>>>>> (…case XEN_DOMCTL_memory_mapping…)"
>>>>> to map the MMIO region. Indeed, the code comment below
>>>>> 'case XEN_DOMCTL_memory_mapping' already notes that this mapping
>>>>> can take a while.
>>>>>
>>>>> My questions:
>>>>> 1. Could we make this MMIO region mapping quicker?
>>>>
>>>
>>> Thanks for your quick reply.
>>>
>>>>Yes, e.g. by using large (2M or 1G) pages. This has been on my todo
>>>>list for quite a while...
>>>>
>>>>> 2. If not, does it limit by hardware performance?
>>>>
>>>>I'm afraid I don't understand the question. If you mean "Is it limited
>>>>by hw performance", then no, see above. If you mean "Does it limit hw
>>>>performance", then again no, I don't think so (other than the effect
>>>>of having more IOMMU translation levels than really necessary for such
>>>>a large region).
>>>>
>>>
>>> Sorry, my question is "Is it limited by hw performance"...
>>>
>>> I am rather confused: why does this MMIO mapping take so long?
>>> My guess was that it takes a lot of time to set up the p2m / IOMMU entries.
>>> That's why I asked "Is it limited by hw performance".
>>
>>Well, just count the number of page table entries and that of the resulting
>>hypercall continuations. It's the sheer amount of work that's causing the
>>slowness, together with the need for us to use continuations to be on the safe
>>side. There may well be redundant TLB invalidations as well. Since we can do
>>better (by using large
>>pages) I wouldn't call this "limited by hw performance", but of course one
>>may.
>>
> 
> I agree.
> As far as I know, upstream Xen & QEMU do not support passing through a device 
> with a large BAR (PCIe BAR > 4G), such as the NVIDIA M60.
> However, cloud providers may want to leverage this feature for machine learning, etc.
> Is it on your TODO list?

Is what on my todo list? I was assuming large BAR handling to work
so far (Konrad had done some adjustments there quite a while ago,
from all I recall).

Jan


* Re: question: xen/qemu - mmio mapping issues for device pass-through
  2017-03-20  7:34         ` Jan Beulich
@ 2017-03-21  1:53           ` Xuquan (Quan Xu)
  2017-03-21  7:38             ` Jan Beulich
  2017-03-21 13:18             ` Konrad Rzeszutek Wilk
  0 siblings, 2 replies; 10+ messages in thread
From: Xuquan (Quan Xu) @ 2017-03-21  1:53 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Kevin Tian, Stefano Stabellini, George.Dunlap, ian.jackson,
	xen-devel, Fanhenglong, anthony.perard

On March 20, 2017 3:35 PM, Jan Beulich wrote:
>>>> On 20.03.17 at 02:58, <xuquan8@huawei.com> wrote:
>> On March 16, 2017 11:32 PM, Jan Beulich wrote:
>>>>>> On 16.03.17 at 15:21, <xuquan8@huawei.com> wrote:
>>>> On March 16, 2017 10:06 PM, Jan Beulich wrote:
>>>>>>>> On 16.03.17 at 14:55, <xuquan8@huawei.com> wrote:
>>>>>> I am trying to pass through a device with a large 8G BAR, such as an
>>>>>> NVIDIA M60 (note 1; PCIe info below). It takes about '__15 seconds__' to
>>>>>> update the 8G BAR in QEMU::xen_pt_region_update().
>>>>>> Specifically, the time is spent in xc_domain_memory_mapping(), called
>>>>>> from xen_pt_region_update().
>>>>>>
>>>>>> Digging into xc_domain_memory_mapping(), I found that it mainly calls
>>>>>> "do_domctl
>>>>>> (…case XEN_DOMCTL_memory_mapping…)"
>>>>>> to map the MMIO region. Indeed, the code comment below
>>>>>> 'case XEN_DOMCTL_memory_mapping' already notes that this mapping
>>>>>> can take a while.
>>>>>>
>>>>>> My questions:
>>>>>> 1. Could we make this MMIO region mapping quicker?
>>>>>
>>>>
>>>> Thanks for your quick reply.
>>>>
>>>>>Yes, e.g. by using large (2M or 1G) pages. This has been on my todo
>>>>>list for quite a while...
>>>>>
>>>>>> 2. If not, does it limit by hardware performance?
>>>>>
>>>>>I'm afraid I don't understand the question. If you mean "Is it
>>>>>limited by hw performance", then no, see above. If you mean "Does it
>>>>>limit hw performance", then again no, I don't think so (other than
>>>>>the effect of having more IOMMU translation levels than really
>>>>>necessary for such a large region).
>>>>>
>>>>
>>>> Sorry, my question is "Is it limited by hw performance"...
>>>>
>>>> I am rather confused: why does this MMIO mapping take so long?
>>>> My guess was that it takes a lot of time to set up the p2m / IOMMU entries.
>>>> That's why I asked "Is it limited by hw performance".
>>>
>>>Well, just count the number of page table entries and that of the
>>>resulting hypercall continuations. It's the sheer amount of work
>>>that's causing the slowness, together with the need for us to use
>>>continuations to be on the safe side. There may well be redundant TLB
>>>invalidations as well. Since we can do better (by using large
>>>pages) I wouldn't call this "limited by hw performance", but of course
>>>one may.
>>>
>>
>> I agree.
>> As far as I know, upstream Xen & QEMU do not support passing through a
>> device with a large BAR (PCIe BAR > 4G), such as the NVIDIA M60. However,
>> cloud providers may want to leverage this feature for machine learning, etc.
>> Is it on your TODO list?
>
>Is what on my todo list?

Support for passing through a device with a large BAR (PCIe BAR > 4G).

> I was assuming large BAR handling to work so far
>(Konrad had done some adjustments there quite a while ago, from all I recall).
>


_iirc_, what Konrad mentioned was using qemu-trad.


Quan


* Re: question: xen/qemu - mmio mapping issues for device pass-through
  2017-03-21  1:53           ` Xuquan (Quan Xu)
@ 2017-03-21  7:38             ` Jan Beulich
  2017-03-21 13:18             ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 10+ messages in thread
From: Jan Beulich @ 2017-03-21  7:38 UTC (permalink / raw)
  To: Xuquan (Quan Xu)
  Cc: Kevin Tian, Stefano Stabellini, George.Dunlap, ian.jackson,
	xen-devel, Fanhenglong, anthony.perard

>>> On 21.03.17 at 02:53, <xuquan8@huawei.com> wrote:
> On March 20, 2017 3:35 PM, Jan Beulich wrote:
>>>>> On 20.03.17 at 02:58, <xuquan8@huawei.com> wrote:
>>> On March 16, 2017 11:32 PM, Jan Beulich wrote:
>>>>>>> On 16.03.17 at 15:21, <xuquan8@huawei.com> wrote:
>>>>> On March 16, 2017 10:06 PM, Jan Beulich wrote:
>>>>>>>>> On 16.03.17 at 14:55, <xuquan8@huawei.com> wrote:
>>>>>>> I am trying to pass through a device with a large 8G BAR, such as an
>>>>>>> NVIDIA M60 (note 1; PCIe info below). It takes about '__15 seconds__' to
>>>>>>> update the 8G BAR in QEMU::xen_pt_region_update().
>>>>>>> Specifically, the time is spent in xc_domain_memory_mapping(), called
>>>>>>> from xen_pt_region_update().
>>>>>>>
>>>>>>> Digging into xc_domain_memory_mapping(), I found that it mainly calls
>>>>>>> "do_domctl
>>>>>>> (…case XEN_DOMCTL_memory_mapping…)"
>>>>>>> to map the MMIO region. Indeed, the code comment below
>>>>>>> 'case XEN_DOMCTL_memory_mapping' already notes that this mapping
>>>>>>> can take a while.
>>>>>>>
>>>>>>> My questions:
>>>>>>> 1. Could we make this MMIO region mapping quicker?
>>>>>>
>>>>>
>>>>> Thanks for your quick reply.
>>>>>
>>>>>>Yes, e.g. by using large (2M or 1G) pages. This has been on my todo
>>>>>>list for quite a while...
>>>>>>
>>>>>>> 2. If not, does it limit by hardware performance?
>>>>>>
>>>>>>I'm afraid I don't understand the question. If you mean "Is it
>>>>>>limited by hw performance", then no, see above. If you mean "Does it
>>>>>>limit hw performance", then again no, I don't think so (other than
>>>>>>the effect of having more IOMMU translation levels than really
>>>>>>necessary for such a large region).
>>>>>>
>>>>>
>>>>> Sorry, my question is "Is it limited by hw performance"...
>>>>>
>>>>> I am rather confused: why does this MMIO mapping take so long?
>>>>> My guess was that it takes a lot of time to set up the p2m / IOMMU entries.
>>>>> That's why I asked "Is it limited by hw performance".
>>>>
>>>>Well, just count the number of page table entries and that of the
>>>>resulting hypercall continuations. It's the sheer amount of work
>>>>that's causing the slowness, together with the need for us to use
>>>>continuations to be on the safe side. There may well be redundant TLB
>>>>invalidations as well. Since we can do better (by using large
>>>>pages) I wouldn't call this "limited by hw performance", but of course
>>>>one may.
>>>>
>>>
>>> I agree.
>>> As far as I know, upstream Xen & QEMU do not support passing through a
>>> device with a large BAR (PCIe BAR > 4G), such as the NVIDIA M60. However,
>>> cloud providers may want to leverage this feature for machine learning, etc.
>>> Is it on your TODO list?
>>
>>Is what on my todo list?
> 
> Support for passing through a device with a large BAR (PCIe BAR > 4G).
> 
>> I was assuming large BAR handling to work so far
>>(Konrad had done some adjustments there quite a while ago, from all I 
> recall).
>>
> 
> 
> _iirc_, what Konrad mentioned was using qemu-trad.

Quite possible (albeit my memory says hvmloader), but the qemu
side (trad or upstream) isn't my realm anyway.

Jan


* Re: question: xen/qemu - mmio mapping issues for device pass-through
  2017-03-21  1:53           ` Xuquan (Quan Xu)
  2017-03-21  7:38             ` Jan Beulich
@ 2017-03-21 13:18             ` Konrad Rzeszutek Wilk
  2017-03-21 13:22               ` Venu Busireddy
  1 sibling, 1 reply; 10+ messages in thread
From: Konrad Rzeszutek Wilk @ 2017-03-21 13:18 UTC (permalink / raw)
  To: Xuquan (Quan Xu), Venu Busireddy
  Cc: Kevin Tian, Stefano Stabellini, George.Dunlap, ian.jackson,
	xen-devel, Fanhenglong, Jan Beulich, anthony.perard

. snip..
> Support for passing through a device with a large BAR (PCIe BAR > 4G).

Yes, it does work.
> 
> > I was assuming large BAR handling to work so far
> >(Konrad had done some adjustments there quite a while ago, from all I recall).
> >
> 
> 
> _iirc_, what Konrad mentioned was using qemu-trad.

Yes but we also did tests on qemu-xen and it worked. CCing Venu.

Venu, does passing in large BARs work with qemu-xen (aka 'xl')?


* Re: question: xen/qemu - mmio mapping issues for device pass-through
  2017-03-21 13:18             ` Konrad Rzeszutek Wilk
@ 2017-03-21 13:22               ` Venu Busireddy
  0 siblings, 0 replies; 10+ messages in thread
From: Venu Busireddy @ 2017-03-21 13:22 UTC (permalink / raw)
  To: Konrad Wilk, Xuquan (Quan Xu)
  Cc: Kevin Tian, Stefano Stabellini, George.Dunlap, ian.jackson,
	xen-devel, Fanhenglong, Jan Beulich, anthony.perard



> -----Original Message-----
> From: Konrad Rzeszutek Wilk
> Sent: Tuesday, March 21, 2017 08:19 AM
> To: Xuquan (Quan Xu); Venu Busireddy
> Cc: Jan Beulich; anthony.perard@citrix.com; George.Dunlap@eu.citrix.com;
> ian.jackson@eu.citrix.com; Fanhenglong; Kevin Tian; Stefano Stabellini;
> xen-devel@lists.xen.org
> Subject: Re: question: xen/qemu - mmio mapping issues for device pass-
> through
> 
> .. snip..
> > Support for passing through a device with a large BAR (PCIe BAR > 4G).
> 
> Yes it does work.
> >
> > > I was assuming large BAR handling to work so far
> > >(Konrad had done some adjustments there quite a while ago, from all I
> recall).
> > >
> >
> >
> > _iirc_, what Konrad mentioned was using qemu-trad.
> 
> Yes but we also did tests on qemu-xen and it worked. CCing Venu.
> 
> Venu, does passing in large BARs work with qemu-xen (aka 'xl')?

Sorry, I do not know the answer!

Venu


