linux-kernel.vger.kernel.org archive mirror
* [PATCH v2] KVM: Extend MAX_IRQ_ROUTES to 4096 for all archs
@ 2018-04-20  0:47 Wanpeng Li
  2018-04-20  7:15 ` Cornelia Huck
  0 siblings, 1 reply; 8+ messages in thread
From: Wanpeng Li @ 2018-04-20  0:47 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Radim Krčmář, Tonny Lu, Cornelia Huck

From: Wanpeng Li <wanpengli@tencent.com>

Our virtual machines use device assignment and are configured with
12 NVMe disks for high I/O performance. Each NVMe device has 129
MSI-X table entries:
Capabilities: [50] MSI-X: Enable+ Count=129 Masked-Vector table: BAR=0 offset=00002000
The Windows virtual machines fail to boot because an MSI routing table
entry is set up for every MSI-X table entry that the NVMe hardware
reports to the bus, and the total exceeds the current limit of 1024.
This patch extends KVM_MAX_IRQ_ROUTES to 4096 for all architectures;
it can be extended again in the future if needed.
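
A rough back-of-the-envelope check (assuming one MSI routing table
entry per MSI-X vector, as described above):

    12 NVMe devices x 129 MSI-X vectors = 1548 routing entries
    1548 > 1024 (the previous KVM_MAX_IRQ_ROUTES outside s390/arm64)
    1548 < 4096 (the new limit)

Once the table grows past KVM_MAX_IRQ_ROUTES, KVM_SET_GSI_ROUTING
rejects the routing table, which is what leaves the Windows guests
unable to boot.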

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Tonny Lu <tonnylu@tencent.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Tonny Lu <tonnylu@tencent.com>
---
v1 -> v2:
 * extend MAX_IRQ_ROUTES to 4096 for all archs 

 include/linux/kvm_host.h | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6930c63..0a5c299 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1045,13 +1045,7 @@ static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
 
 #ifdef CONFIG_HAVE_KVM_IRQ_ROUTING
 
-#ifdef CONFIG_S390
 #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...
-#elif defined(CONFIG_ARM64)
-#define KVM_MAX_IRQ_ROUTES 4096
-#else
-#define KVM_MAX_IRQ_ROUTES 1024
-#endif
 
 bool kvm_arch_can_set_irq_routing(struct kvm *kvm);
 int kvm_set_irq_routing(struct kvm *kvm,
-- 
2.7.4


* Re: [PATCH v2] KVM: Extend MAX_IRQ_ROUTES to 4096 for all archs
  2018-04-20  0:47 [PATCH v2] KVM: Extend MAX_IRQ_ROUTES to 4096 for all archs Wanpeng Li
@ 2018-04-20  7:15 ` Cornelia Huck
  2018-04-20 13:51   ` Wanpeng Li
  0 siblings, 1 reply; 8+ messages in thread
From: Cornelia Huck @ 2018-04-20  7:15 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: linux-kernel, kvm, Paolo Bonzini, Radim Krčmář,
	Tonny Lu, Christian Borntraeger, Janosch Frank

On Thu, 19 Apr 2018 17:47:28 -0700
Wanpeng Li <kernellwp@gmail.com> wrote:

> From: Wanpeng Li <wanpengli@tencent.com>
> 
> Our virtual machines use device assignment and are configured with
> 12 NVMe disks for high I/O performance. Each NVMe device has 129
> MSI-X table entries:
> Capabilities: [50] MSI-X: Enable+ Count=129 Masked-Vector table: BAR=0 offset=00002000
> The Windows virtual machines fail to boot because an MSI routing table
> entry is set up for every MSI-X table entry that the NVMe hardware
> reports to the bus, and the total exceeds the current limit of 1024.
> This patch extends KVM_MAX_IRQ_ROUTES to 4096 for all architectures;
> it can be extended again in the future if needed.
> 
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Radim Krčmář <rkrcmar@redhat.com>
> Cc: Tonny Lu <tonnylu@tencent.com>
> Cc: Cornelia Huck <cohuck@redhat.com>
> Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
> Signed-off-by: Tonny Lu <tonnylu@tencent.com>
> ---
> v1 -> v2:
>  * extend MAX_IRQ_ROUTES to 4096 for all archs 
> 
>  include/linux/kvm_host.h | 6 ------
>  1 file changed, 6 deletions(-)
> 
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 6930c63..0a5c299 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1045,13 +1045,7 @@ static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
>  
>  #ifdef CONFIG_HAVE_KVM_IRQ_ROUTING
>  
> -#ifdef CONFIG_S390
>  #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...

What about /* might need extension/rework in the future */ instead of
the FIXME?
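
For reference, a minimal sketch of what the line would then look like
(only the comment wording changes, the value stays at 4096):

#define KVM_MAX_IRQ_ROUTES 4096 /* might need extension/rework in the future */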

As far as I understand, 4096 should cover most architectures and the
sane end of s390 configurations, but will not be enough at the scarier
end of s390. (I'm not sure how much it matters in practice.)

Do we want to make this a tuneable in the future? Do some kind of
dynamic allocation? Not sure whether it is worth the trouble.
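
Just to make the tunable idea concrete, one option could be a module
parameter; purely a hypothetical sketch, the parameter name and
permissions are made up:

#include <linux/moduleparam.h>

/*
 * Hypothetical read-only tunable replacing the compile-time constant;
 * the KVM_SET_GSI_ROUTING path would then check the requested number
 * of routes against this value instead of KVM_MAX_IRQ_ROUTES.
 */
static unsigned int kvm_max_irq_routes = 4096;
module_param(kvm_max_irq_routes, uint, 0444);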

> -#elif defined(CONFIG_ARM64)
> -#define KVM_MAX_IRQ_ROUTES 4096
> -#else
> -#define KVM_MAX_IRQ_ROUTES 1024
> -#endif
>  
>  bool kvm_arch_can_set_irq_routing(struct kvm *kvm);
>  int kvm_set_irq_routing(struct kvm *kvm,


* Re: [PATCH v2] KVM: Extend MAX_IRQ_ROUTES to 4096 for all archs
  2018-04-20  7:15 ` Cornelia Huck
@ 2018-04-20 13:51   ` Wanpeng Li
  2018-04-20 14:21     ` Cornelia Huck
  0 siblings, 1 reply; 8+ messages in thread
From: Wanpeng Li @ 2018-04-20 13:51 UTC (permalink / raw)
  To: Cornelia Huck
  Cc: LKML, kvm, Paolo Bonzini, Radim Krčmář,
	Tonny Lu, Christian Borntraeger, Janosch Frank

2018-04-20 15:15 GMT+08:00 Cornelia Huck <cohuck@redhat.com>:
> On Thu, 19 Apr 2018 17:47:28 -0700
> Wanpeng Li <kernellwp@gmail.com> wrote:
>
>> From: Wanpeng Li <wanpengli@tencent.com>
>>
>> Our virtual machines use device assignment and are configured with
>> 12 NVMe disks for high I/O performance. Each NVMe device has 129
>> MSI-X table entries:
>> Capabilities: [50] MSI-X: Enable+ Count=129 Masked-Vector table: BAR=0 offset=00002000
>> The Windows virtual machines fail to boot because an MSI routing table
>> entry is set up for every MSI-X table entry that the NVMe hardware
>> reports to the bus, and the total exceeds the current limit of 1024.
>> This patch extends KVM_MAX_IRQ_ROUTES to 4096 for all architectures;
>> it can be extended again in the future if needed.
>>
>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>> Cc: Radim Krčmář <rkrcmar@redhat.com>
>> Cc: Tonny Lu <tonnylu@tencent.com>
>> Cc: Cornelia Huck <cohuck@redhat.com>
>> Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
>> Signed-off-by: Tonny Lu <tonnylu@tencent.com>
>> ---
>> v1 -> v2:
>>  * extend MAX_IRQ_ROUTES to 4096 for all archs
>>
>>  include/linux/kvm_host.h | 6 ------
>>  1 file changed, 6 deletions(-)
>>
>> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
>> index 6930c63..0a5c299 100644
>> --- a/include/linux/kvm_host.h
>> +++ b/include/linux/kvm_host.h
>> @@ -1045,13 +1045,7 @@ static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
>>
>>  #ifdef CONFIG_HAVE_KVM_IRQ_ROUTING
>>
>> -#ifdef CONFIG_S390
>>  #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...
>
> What about /* might need extension/rework in the future */ instead of
> the FIXME?

Yeah, I guess the maintainers can help to fix it when applying. :)

>
> As far as I understand, 4096 should cover most architectures and the
> sane end of s390 configurations, but will not be enough at the scarier
> end of s390. (I'm not sure how much it matters in practice.)
>
> Do we want to make this a tuneable in the future? Do some kind of
> dynamic allocation? Not sure whether it is worth the trouble.

I think we should keep it as it is for now.

Regards,
Wanpeng Li


* Re: [PATCH v2] KVM: Extend MAX_IRQ_ROUTES to 4096 for all archs
  2018-04-20 13:51   ` Wanpeng Li
@ 2018-04-20 14:21     ` Cornelia Huck
  2018-04-21  0:38       ` Wanpeng Li
  0 siblings, 1 reply; 8+ messages in thread
From: Cornelia Huck @ 2018-04-20 14:21 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: LKML, kvm, Paolo Bonzini, Radim Krčmář,
	Tonny Lu, Christian Borntraeger, Janosch Frank

On Fri, 20 Apr 2018 21:51:13 +0800
Wanpeng Li <kernellwp@gmail.com> wrote:

> 2018-04-20 15:15 GMT+08:00 Cornelia Huck <cohuck@redhat.com>:
> > On Thu, 19 Apr 2018 17:47:28 -0700
> > Wanpeng Li <kernellwp@gmail.com> wrote:
> >  
> >> From: Wanpeng Li <wanpengli@tencent.com>
> >>
> >> Our virtual machines use device assignment and are configured with
> >> 12 NVMe disks for high I/O performance. Each NVMe device has 129
> >> MSI-X table entries:
> >> Capabilities: [50] MSI-X: Enable+ Count=129 Masked-Vector table: BAR=0 offset=00002000
> >> The Windows virtual machines fail to boot because an MSI routing table
> >> entry is set up for every MSI-X table entry that the NVMe hardware
> >> reports to the bus, and the total exceeds the current limit of 1024.
> >> This patch extends KVM_MAX_IRQ_ROUTES to 4096 for all architectures;
> >> it can be extended again in the future if needed.
> >>
> >> Cc: Paolo Bonzini <pbonzini@redhat.com>
> >> Cc: Radim Krčmář <rkrcmar@redhat.com>
> >> Cc: Tonny Lu <tonnylu@tencent.com>
> >> Cc: Cornelia Huck <cohuck@redhat.com>
> >> Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
> >> Signed-off-by: Tonny Lu <tonnylu@tencent.com>
> >> ---
> >> v1 -> v2:
> >>  * extend MAX_IRQ_ROUTES to 4096 for all archs
> >>
> >>  include/linux/kvm_host.h | 6 ------
> >>  1 file changed, 6 deletions(-)
> >>
> >> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> >> index 6930c63..0a5c299 100644
> >> --- a/include/linux/kvm_host.h
> >> +++ b/include/linux/kvm_host.h
> >> @@ -1045,13 +1045,7 @@ static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
> >>
> >>  #ifdef CONFIG_HAVE_KVM_IRQ_ROUTING
> >>
> >> -#ifdef CONFIG_S390
> >>  #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...  
> >
> > What about /* might need extension/rework in the future */ instead of
> > the FIXME?  
> 
> Yeah, I guess the maintainers can help to fix it when applying. :)
> 
> >
> > As far as I understand, 4096 should cover most architectures and the
> > sane end of s390 configurations, but will not be enough at the scarier
> > end of s390. (I'm not sure how much it matters in practice.)
> >
> > Do we want to make this a tuneable in the future? Do some kind of
> > dynamic allocation? Not sure whether it is worth the trouble.  
> 
> I think we should keep it as it is for now.

My main question here is how long this is enough... the number of
virtqueues per device is up to 1K from the initial 64, which makes it
possible to hit the 4K limit with fewer virtio devices than before (on
s390, each virtqueue uses a routing table entry). OTOH, we don't want
giant tables everywhere just to accommodate s390.
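
To put rough numbers on that, assuming one routing table entry per
virtqueue on s390 as noted above:

    4 virtio devices x 1024 virtqueues = 4096 entries (already at the limit)
    64 virtio devices x 64 virtqueues  = 4096 entries with the earlier maximum

i.e. the per-device virtqueue growth cuts the number of devices needed
to hit the limit by a factor of 16.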

If the s390 maintainers tell me that nobody is doing the really insane
stuff, I'm happy as well :)


* Re: [PATCH v2] KVM: Extend MAX_IRQ_ROUTES to 4096 for all archs
  2018-04-20 14:21     ` Cornelia Huck
@ 2018-04-21  0:38       ` Wanpeng Li
  2018-04-23 11:50         ` Christian Borntraeger
  0 siblings, 1 reply; 8+ messages in thread
From: Wanpeng Li @ 2018-04-21  0:38 UTC (permalink / raw)
  To: Cornelia Huck
  Cc: LKML, kvm, Paolo Bonzini, Radim Krčmář,
	Tonny Lu, Christian Borntraeger, Janosch Frank

2018-04-20 22:21 GMT+08:00 Cornelia Huck <cohuck@redhat.com>:
> On Fri, 20 Apr 2018 21:51:13 +0800
> Wanpeng Li <kernellwp@gmail.com> wrote:
>
>> 2018-04-20 15:15 GMT+08:00 Cornelia Huck <cohuck@redhat.com>:
>> > On Thu, 19 Apr 2018 17:47:28 -0700
>> > Wanpeng Li <kernellwp@gmail.com> wrote:
>> >
>> >> From: Wanpeng Li <wanpengli@tencent.com>
>> >>
>> >> Our virtual machines use device assignment and are configured with
>> >> 12 NVMe disks for high I/O performance. Each NVMe device has 129
>> >> MSI-X table entries:
>> >> Capabilities: [50] MSI-X: Enable+ Count=129 Masked-Vector table: BAR=0 offset=00002000
>> >> The Windows virtual machines fail to boot because an MSI routing table
>> >> entry is set up for every MSI-X table entry that the NVMe hardware
>> >> reports to the bus, and the total exceeds the current limit of 1024.
>> >> This patch extends KVM_MAX_IRQ_ROUTES to 4096 for all architectures;
>> >> it can be extended again in the future if needed.
>> >>
>> >> Cc: Paolo Bonzini <pbonzini@redhat.com>
>> >> Cc: Radim Krčmář <rkrcmar@redhat.com>
>> >> Cc: Tonny Lu <tonnylu@tencent.com>
>> >> Cc: Cornelia Huck <cohuck@redhat.com>
>> >> Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
>> >> Signed-off-by: Tonny Lu <tonnylu@tencent.com>
>> >> ---
>> >> v1 -> v2:
>> >>  * extend MAX_IRQ_ROUTES to 4096 for all archs
>> >>
>> >>  include/linux/kvm_host.h | 6 ------
>> >>  1 file changed, 6 deletions(-)
>> >>
>> >> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
>> >> index 6930c63..0a5c299 100644
>> >> --- a/include/linux/kvm_host.h
>> >> +++ b/include/linux/kvm_host.h
>> >> @@ -1045,13 +1045,7 @@ static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
>> >>
>> >>  #ifdef CONFIG_HAVE_KVM_IRQ_ROUTING
>> >>
>> >> -#ifdef CONFIG_S390
>> >>  #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...
>> >
>> > What about /* might need extension/rework in the future */ instead of
>> > the FIXME?
>>
>> Yeah, I guess the maintainers can help to fix it when applying. :)
>>
>> >
>> > As far as I understand, 4096 should cover most architectures and the
>> > sane end of s390 configurations, but will not be enough at the scarier
>> > end of s390. (I'm not sure how much it matters in practice.)
>> >
>> > Do we want to make this a tuneable in the future? Do some kind of
>> > dynamic allocation? Not sure whether it is worth the trouble.
>>
>> I think we should keep it as it is for now.
>
> My main question here is how long this is enough... the number of
> virtqueues per device is up to 1K from the initial 64, which makes it
> possible to hit the 4K limit with fewer virtio devices than before (on
> s390, each virtqueue uses a routing table entry). OTOH, we don't want
> giant tables everywhere just to accommodate s390.

I suspect there is no real scenario that requires extending it further
for s390, since nobody has reported one.

> If the s390 maintainers tell me that nobody is doing the really insane
> stuff, I'm happy as well :)

Christian, any thoughts?

Regards,
Wanpeng Li


* Re: [PATCH v2] KVM: Extend MAX_IRQ_ROUTES to 4096 for all archs
  2018-04-21  0:38       ` Wanpeng Li
@ 2018-04-23 11:50         ` Christian Borntraeger
  2018-04-23 11:56           ` Wanpeng Li
  2018-04-23 11:57           ` Cornelia Huck
  0 siblings, 2 replies; 8+ messages in thread
From: Christian Borntraeger @ 2018-04-23 11:50 UTC (permalink / raw)
  To: Wanpeng Li, Cornelia Huck
  Cc: LKML, kvm, Paolo Bonzini, Radim Krčmář,
	Tonny Lu, Janosch Frank



On 04/21/2018 02:38 AM, Wanpeng Li wrote:
> 2018-04-20 22:21 GMT+08:00 Cornelia Huck <cohuck@redhat.com>:
>> On Fri, 20 Apr 2018 21:51:13 +0800
>> Wanpeng Li <kernellwp@gmail.com> wrote:
>>
>>> 2018-04-20 15:15 GMT+08:00 Cornelia Huck <cohuck@redhat.com>:
>>>> On Thu, 19 Apr 2018 17:47:28 -0700
>>>> Wanpeng Li <kernellwp@gmail.com> wrote:
>>>>
>>>>> From: Wanpeng Li <wanpengli@tencent.com>
>>>>>
>>>>> Our virtual machines use device assignment and are configured with
>>>>> 12 NVMe disks for high I/O performance. Each NVMe device has 129
>>>>> MSI-X table entries:
>>>>> Capabilities: [50] MSI-X: Enable+ Count=129 Masked-Vector table: BAR=0 offset=00002000
>>>>> The Windows virtual machines fail to boot because an MSI routing table
>>>>> entry is set up for every MSI-X table entry that the NVMe hardware
>>>>> reports to the bus, and the total exceeds the current limit of 1024.
>>>>> This patch extends KVM_MAX_IRQ_ROUTES to 4096 for all architectures;
>>>>> it can be extended again in the future if needed.
>>>>>
>>>>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>>>>> Cc: Radim Krčmář <rkrcmar@redhat.com>
>>>>> Cc: Tonny Lu <tonnylu@tencent.com>
>>>>> Cc: Cornelia Huck <cohuck@redhat.com>
>>>>> Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
>>>>> Signed-off-by: Tonny Lu <tonnylu@tencent.com>
>>>>> ---
>>>>> v1 -> v2:
>>>>>  * extend MAX_IRQ_ROUTES to 4096 for all archs
>>>>>
>>>>>  include/linux/kvm_host.h | 6 ------
>>>>>  1 file changed, 6 deletions(-)
>>>>>
>>>>> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
>>>>> index 6930c63..0a5c299 100644
>>>>> --- a/include/linux/kvm_host.h
>>>>> +++ b/include/linux/kvm_host.h
>>>>> @@ -1045,13 +1045,7 @@ static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
>>>>>
>>>>>  #ifdef CONFIG_HAVE_KVM_IRQ_ROUTING
>>>>>
>>>>> -#ifdef CONFIG_S390
>>>>>  #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...
>>>>
>>>> What about /* might need extension/rework in the future */ instead of
>>>> the FIXME?
>>>
>>> Yeah, I guess the maintainers can help to fix it when applying. :)
>>>
>>>>
>>>> As far as I understand, 4096 should cover most architectures and the
>>>> sane end of s390 configurations, but will not be enough at the scarier
>>>> end of s390. (I'm not sure how much it matters in practice.)
>>>>
>>>> Do we want to make this a tuneable in the future? Do some kind of
>>>> dynamic allocation? Not sure whether it is worth the trouble.
>>>
>>> I think we should keep it as it is for now.
>>
>> My main question here is how long this is enough... the number of
>> virtqueues per device is up to 1K from the initial 64, which makes it
>> possible to hit the 4K limit with fewer virtio devices than before (on
>> s390, each virtqueue uses a routing table entry). OTOH, we don't want
>> giant tables everywhere just to accommodate s390.
> 
> I suspect there is no real scenario that requires extending it further
> for s390, since nobody has reported one.
> 
>> If the s390 maintainers tell me that nobody is doing the really insane
>> stuff, I'm happy as well :)
> 
> Christian, any thoughts?

For now this patch is a no-op for s390 so as long as nobody complains today we are good.
If it turns out to be "not enough" we can then add a configurable number or whatever. 


* Re: [PATCH v2] KVM: Extend MAX_IRQ_ROUTES to 4096 for all archs
  2018-04-23 11:50         ` Christian Borntraeger
@ 2018-04-23 11:56           ` Wanpeng Li
  2018-04-23 11:57           ` Cornelia Huck
  1 sibling, 0 replies; 8+ messages in thread
From: Wanpeng Li @ 2018-04-23 11:56 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Cornelia Huck, LKML, kvm, Paolo Bonzini,
	Radim Krčmář,
	Tonny Lu, Janosch Frank

2018-04-23 19:50 GMT+08:00 Christian Borntraeger <borntraeger@de.ibm.com>:
>
>
> On 04/21/2018 02:38 AM, Wanpeng Li wrote:
>> 2018-04-20 22:21 GMT+08:00 Cornelia Huck <cohuck@redhat.com>:
>>> On Fri, 20 Apr 2018 21:51:13 +0800
>>> Wanpeng Li <kernellwp@gmail.com> wrote:
>>>
>>>> 2018-04-20 15:15 GMT+08:00 Cornelia Huck <cohuck@redhat.com>:
>>>>> On Thu, 19 Apr 2018 17:47:28 -0700
>>>>> Wanpeng Li <kernellwp@gmail.com> wrote:
>>>>>
>>>>>> From: Wanpeng Li <wanpengli@tencent.com>
>>>>>>
>>>>>> Our virtual machines use device assignment and are configured with
>>>>>> 12 NVMe disks for high I/O performance. Each NVMe device has 129
>>>>>> MSI-X table entries:
>>>>>> Capabilities: [50] MSI-X: Enable+ Count=129 Masked-Vector table: BAR=0 offset=00002000
>>>>>> The Windows virtual machines fail to boot because an MSI routing table
>>>>>> entry is set up for every MSI-X table entry that the NVMe hardware
>>>>>> reports to the bus, and the total exceeds the current limit of 1024.
>>>>>> This patch extends KVM_MAX_IRQ_ROUTES to 4096 for all architectures;
>>>>>> it can be extended again in the future if needed.
>>>>>>
>>>>>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>>>>>> Cc: Radim Krčmář <rkrcmar@redhat.com>
>>>>>> Cc: Tonny Lu <tonnylu@tencent.com>
>>>>>> Cc: Cornelia Huck <cohuck@redhat.com>
>>>>>> Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
>>>>>> Signed-off-by: Tonny Lu <tonnylu@tencent.com>
>>>>>> ---
>>>>>> v1 -> v2:
>>>>>>  * extend MAX_IRQ_ROUTES to 4096 for all archs
>>>>>>
>>>>>>  include/linux/kvm_host.h | 6 ------
>>>>>>  1 file changed, 6 deletions(-)
>>>>>>
>>>>>> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
>>>>>> index 6930c63..0a5c299 100644
>>>>>> --- a/include/linux/kvm_host.h
>>>>>> +++ b/include/linux/kvm_host.h
>>>>>> @@ -1045,13 +1045,7 @@ static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
>>>>>>
>>>>>>  #ifdef CONFIG_HAVE_KVM_IRQ_ROUTING
>>>>>>
>>>>>> -#ifdef CONFIG_S390
>>>>>>  #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...
>>>>>
>>>>> What about /* might need extension/rework in the future */ instead of
>>>>> the FIXME?
>>>>
>>>> Yeah, I guess the maintainers can help to fix it when applying. :)
>>>>
>>>>>
>>>>> As far as I understand, 4096 should cover most architectures and the
>>>>> sane end of s390 configurations, but will not be enough at the scarier
>>>>> end of s390. (I'm not sure how much it matters in practice.)
>>>>>
>>>>> Do we want to make this a tuneable in the future? Do some kind of
>>>>> dynamic allocation? Not sure whether it is worth the trouble.
>>>>
>>>> I think we should keep it as it is for now.
>>>
>>> My main question here is how long this is enough... the number of
>>> virtqueues per device is up to 1K from the initial 64, which makes it
>>> possible to hit the 4K limit with fewer virtio devices than before (on
>>> s390, each virtqueue uses a routing table entry). OTOH, we don't want
>>> giant tables everywhere just to accommodate s390.
>>
>> I suspect there is no real scenario that requires extending it further
>> for s390, since nobody has reported one.
>>
>>> If the s390 maintainers tell me that nobody is doing the really insane
>>> stuff, I'm happy as well :)
>>
>> Christian, any thoughts?
>
> For now this patch is a no-op for s390 so as long as nobody complains today we are good.
> If it turns out to be "not enough" we can then add a configurable number or whatever.

Thanks Christian. Paolo, could you pick this one up with the comment
changed to /* might need extension/rework in the future */ instead of
the FIXME, or do you need me to send out a new version? :)

Regards,
Wanpeng Li


* Re: [PATCH v2] KVM: Extend MAX_IRQ_ROUTES to 4096 for all archs
  2018-04-23 11:50         ` Christian Borntraeger
  2018-04-23 11:56           ` Wanpeng Li
@ 2018-04-23 11:57           ` Cornelia Huck
  1 sibling, 0 replies; 8+ messages in thread
From: Cornelia Huck @ 2018-04-23 11:57 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Wanpeng Li, LKML, kvm, Paolo Bonzini, Radim Krčmář,
	Tonny Lu, Janosch Frank

On Mon, 23 Apr 2018 13:50:48 +0200
Christian Borntraeger <borntraeger@de.ibm.com> wrote:

> On 04/21/2018 02:38 AM, Wanpeng Li wrote:
> > 2018-04-20 22:21 GMT+08:00 Cornelia Huck <cohuck@redhat.com>:  
> >> On Fri, 20 Apr 2018 21:51:13 +0800
> >> Wanpeng Li <kernellwp@gmail.com> wrote:
> >>  
> >>> 2018-04-20 15:15 GMT+08:00 Cornelia Huck <cohuck@redhat.com>:  
> >>>> On Thu, 19 Apr 2018 17:47:28 -0700
> >>>> Wanpeng Li <kernellwp@gmail.com> wrote:
> >>>>  
> >>>>> From: Wanpeng Li <wanpengli@tencent.com>
> >>>>>
> >>>>> Our virtual machines use device assignment and are configured with
> >>>>> 12 NVMe disks for high I/O performance. Each NVMe device has 129
> >>>>> MSI-X table entries:
> >>>>> Capabilities: [50] MSI-X: Enable+ Count=129 Masked-Vector table: BAR=0 offset=00002000
> >>>>> The Windows virtual machines fail to boot because an MSI routing table
> >>>>> entry is set up for every MSI-X table entry that the NVMe hardware
> >>>>> reports to the bus, and the total exceeds the current limit of 1024.
> >>>>> This patch extends KVM_MAX_IRQ_ROUTES to 4096 for all architectures;
> >>>>> it can be extended again in the future if needed.
> >>>>>
> >>>>> Cc: Paolo Bonzini <pbonzini@redhat.com>
> >>>>> Cc: Radim Krčmář <rkrcmar@redhat.com>
> >>>>> Cc: Tonny Lu <tonnylu@tencent.com>
> >>>>> Cc: Cornelia Huck <cohuck@redhat.com>
> >>>>> Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
> >>>>> Signed-off-by: Tonny Lu <tonnylu@tencent.com>
> >>>>> ---
> >>>>> v1 -> v2:
> >>>>>  * extend MAX_IRQ_ROUTES to 4096 for all archs
> >>>>>
> >>>>>  include/linux/kvm_host.h | 6 ------
> >>>>>  1 file changed, 6 deletions(-)
> >>>>>
> >>>>> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> >>>>> index 6930c63..0a5c299 100644
> >>>>> --- a/include/linux/kvm_host.h
> >>>>> +++ b/include/linux/kvm_host.h
> >>>>> @@ -1045,13 +1045,7 @@ static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
> >>>>>
> >>>>>  #ifdef CONFIG_HAVE_KVM_IRQ_ROUTING
> >>>>>
> >>>>> -#ifdef CONFIG_S390
> >>>>>  #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...  
> >>>>
> >>>> What about /* might need extension/rework in the future */ instead of
> >>>> the FIXME?  
> >>>
> >>> Yeah, I guess the maintainers can help to fix it when applying. :)
> >>>  
> >>>>
> >>>> As far as I understand, 4096 should cover most architectures and the
> >>>> sane end of s390 configurations, but will not be enough at the scarier
> >>>> end of s390. (I'm not sure how much it matters in practice.)
> >>>>
> >>>> Do we want to make this a tuneable in the future? Do some kind of
> >>>> dynamic allocation? Not sure whether it is worth the trouble.  
> >>>
> >>> I think we should keep it as it is for now.
> >>
> >> My main question here is how long this is enough... the number of
> >> virtqueues per device is up to 1K from the initial 64, which makes it
> >> possible to hit the 4K limit with fewer virtio devices than before (on
> >> s390, each virtqueue uses a routing table entry). OTOH, we don't want
> >> giant tables everywhere just to accommodate s390.  
> > 
> > I suspect there is no real scenario that requires extending it further
> > for s390, since nobody has reported one.
> >   
> >> If the s390 maintainers tell me that nobody is doing the really insane
> >> stuff, I'm happy as well :)  
> > 
> > Christian, any thoughts?  
> 
> For now this patch is a no-op for s390 so as long as nobody complains today we are good.
> If it turns out to be "not enough" we can then add a configurable number or whatever. 

OK, then let's deal with the problem once it shows up.

With the comment changed as suggested above,

Reviewed-by: Cornelia Huck <cohuck@redhat.com>


Thread overview: 8 messages
2018-04-20  0:47 [PATCH v2] KVM: Extend MAX_IRQ_ROUTES to 4096 for all archs Wanpeng Li
2018-04-20  7:15 ` Cornelia Huck
2018-04-20 13:51   ` Wanpeng Li
2018-04-20 14:21     ` Cornelia Huck
2018-04-21  0:38       ` Wanpeng Li
2018-04-23 11:50         ` Christian Borntraeger
2018-04-23 11:56           ` Wanpeng Li
2018-04-23 11:57           ` Cornelia Huck
