* [PATCH] hv: fix msi affinity when device requests all possible CPU's
@ 2017-06-28 23:22 Stephen Hemminger
       [not found] ` <DM5PR21MB0476CFFA739B74FBB3B6F42DA0D20@DM5PR21MB0476.namprd21.prod.outlook.com>
  2017-07-02 21:38 ` Bjorn Helgaas
  0 siblings, 2 replies; 7+ messages in thread
From: Stephen Hemminger @ 2017-06-28 23:22 UTC (permalink / raw)
  To: kys, bhelgaas; +Cc: linux-pci, devel, Stephen Hemminger

When an Intel 10G NIC (ixgbevf) is passed through to a Hyper-V guest with
SR-IOV, the driver requests affinity with all possible CPUs (0-239), even
though those CPUs are not online (and never will be). Because of this, the
device is unable to get its MSI interrupts set up correctly.

This was caused by the change in 4.12 that converted this affinity into
all possible CPUs (0-31), but the host then reports an error since this
is larger than the number of online CPUs.

Previously (up to 4.12-rc1) this worked because only online CPUs would
be put in the mask passed to the host.

This patch applies only to 4.12. The driver in linux-next needs a
different fix because of the changes to the PCI host protocol version.

Fixes: 433fcf6b7b31 ("PCI: hv: Specify CPU_AFFINITY_ALL for MSI affinity when >= 32 CPUs")
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
---
 drivers/pci/host/pci-hyperv.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/host/pci-hyperv.c b/drivers/pci/host/pci-hyperv.c
index 84936383e269..3cadfcca3ae9 100644
--- a/drivers/pci/host/pci-hyperv.c
+++ b/drivers/pci/host/pci-hyperv.c
@@ -900,10 +900,12 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
 	 * processors because Hyper-V only supports 64 in a guest.
 	 */
 	affinity = irq_data_get_affinity_mask(data);
+	cpumask_and(affinity, affinity, cpu_online_mask);
+
 	if (cpumask_weight(affinity) >= 32) {
 		int_pkt->int_desc.cpu_mask = CPU_AFFINITY_ALL;
 	} else {
-		for_each_cpu_and(cpu, affinity, cpu_online_mask) {
+		for_each_cpu(cpu, affinity) {
 			int_pkt->int_desc.cpu_mask |=
 				(1ULL << vmbus_cpu_number_to_vp_number(cpu));
 		}
-- 
2.11.0
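
For readers following along without the driver source, below is a minimal
user-space sketch of the masking logic this hunk changes. It is not kernel
code: the bool arrays stand in for struct cpumask, the value of
CPU_AFFINITY_ALL is assumed, and vp_number() is a hypothetical stand-in for
vmbus_cpu_number_to_vp_number(). The point is the ordering: the affinity is
clipped to online CPUs before the >= 32 weight check, so a 240-CPU request
on an 8-CPU guest yields a real VP mask instead of CPU_AFFINITY_ALL.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define MAX_CPUS         240
#define CPU_AFFINITY_ALL (~0ULL)   /* assumed sentinel meaning "all CPUs" */

/* Hypothetical stand-in for vmbus_cpu_number_to_vp_number(). */
static int vp_number(int cpu)
{
	return cpu;                /* identity mapping for the demo */
}

/* Number of set bits, like cpumask_weight(). */
static int weight(const bool *mask)
{
	int cpu, w = 0;

	for (cpu = 0; cpu < MAX_CPUS; cpu++)
		w += mask[cpu];
	return w;
}

static uint64_t compose_cpu_mask(bool *affinity, const bool *online)
{
	uint64_t cpu_mask = 0;
	int cpu;

	/* The fix: clip the requested affinity to online CPUs *before*
	 * deciding whether to fall back to CPU_AFFINITY_ALL. */
	for (cpu = 0; cpu < MAX_CPUS; cpu++)
		affinity[cpu] = affinity[cpu] && online[cpu];

	if (weight(affinity) >= 32)
		return CPU_AFFINITY_ALL;

	for (cpu = 0; cpu < MAX_CPUS; cpu++)
		if (affinity[cpu])
			cpu_mask |= 1ULL << vp_number(cpu);
	return cpu_mask;
}

int main(void)
{
	bool affinity[MAX_CPUS], online[MAX_CPUS];
	int cpu;

	/* The ixgbevf case: all 240 possible CPUs requested, 8 online. */
	for (cpu = 0; cpu < MAX_CPUS; cpu++) {
		affinity[cpu] = true;
		online[cpu]   = cpu < 8;
	}

	printf("cpu_mask = 0x%llx\n",
	       (unsigned long long)compose_cpu_mask(affinity, online));
	return 0;
}

Compiled as an ordinary C program this prints cpu_mask = 0xff. Without the
clipping step the weight would be 240, CPU_AFFINITY_ALL would be sent, and
the host would reject it as described in the commit message.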


* RE: [PATCH] hv: fix msi affinity when device requests all possible CPU's
       [not found] ` <DM5PR21MB0476CFFA739B74FBB3B6F42DA0D20@DM5PR21MB0476.namprd21.prod.outlook.com>
@ 2017-06-29 22:08   ` Jork Loeser
  2017-06-29 23:57     ` Stephen Hemminger
  0 siblings, 1 reply; 7+ messages in thread
From: Jork Loeser @ 2017-06-29 22:08 UTC (permalink / raw)
  To: stephen, KY Srinivasan, bhelgaas; +Cc: linux-pci, devel, Stephen Hemminger

> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Wednesday, June 28, 2017 4:22 PM
> To: KY Srinivasan <kys@microsoft.com>; bhelgaas@google.com
> Cc: linux-pci@vger.kernel.org; devel@linuxdriverproject.org; Stephen
> Hemminger <sthemmin@microsoft.com>
> Subject: [PATCH] hv: fix msi affinity when device requests all possible CPU's
>
> When an Intel 10G NIC (ixgbevf) is passed through to a Hyper-V guest with
> SR-IOV, the driver requests affinity with all possible CPUs (0-239), even
> though those CPUs are not online (and never will be). Because of this, the
> device is unable to get its MSI interrupts set up correctly.
>
> This was caused by the change in 4.12 that converted this affinity into
> all possible CPUs (0-31), but the host then reports an error since this
> is larger than the number of online CPUs.
>
> Previously (up to 4.12-rc1) this worked because only online CPUs would
> be put in the mask passed to the host.
>
> This patch applies only to 4.12. The driver in linux-next needs a
> different fix because of the changes to the PCI host protocol version.

The vPCI patch in linux-next has the issue fixed already.

Regards,
Jork


* RE: [PATCH] hv: fix msi affinity when device requests all possible CPU's
  2017-06-29 22:08   ` Jork Loeser
@ 2017-06-29 23:57     ` Stephen Hemminger
  0 siblings, 0 replies; 7+ messages in thread
From: Stephen Hemminger @ 2017-06-29 23:57 UTC (permalink / raw)
  To: Jork Loeser, stephen, KY Srinivasan, bhelgaas; +Cc: linux-pci, devel

Patch still needed for 4.12

-----Original Message-----
From: Jork Loeser
Sent: Thursday, June 29, 2017 3:08 PM
To: stephen@networkplumber.org; KY Srinivasan <kys@microsoft.com>;
bhelgaas@google.com
Cc: linux-pci@vger.kernel.org; devel@linuxdriverproject.org; Stephen
Hemminger <sthemmin@microsoft.com>
Subject: RE: [PATCH] hv: fix msi affinity when device requests all
possible CPU's

> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Wednesday, June 28, 2017 4:22 PM
> To: KY Srinivasan <kys@microsoft.com>; bhelgaas@google.com
> Cc: linux-pci@vger.kernel.org; devel@linuxdriverproject.org; Stephen
> Hemminger <sthemmin@microsoft.com>
> Subject: [PATCH] hv: fix msi affinity when device requests all possible CPU's
>
> When an Intel 10G NIC (ixgbevf) is passed through to a Hyper-V guest with
> SR-IOV, the driver requests affinity with all possible CPUs (0-239), even
> though those CPUs are not online (and never will be). Because of this, the
> device is unable to get its MSI interrupts set up correctly.
>
> This was caused by the change in 4.12 that converted this affinity into
> all possible CPUs (0-31), but the host then reports an error since this
> is larger than the number of online CPUs.
>
> Previously (up to 4.12-rc1) this worked because only online CPUs would
> be put in the mask passed to the host.
>
> This patch applies only to 4.12. The driver in linux-next needs a
> different fix because of the changes to the PCI host protocol version.

The vPCI patch in linux-next has the issue fixed already.

Regards,
Jork


* Re: [PATCH] hv: fix msi affinity when device requests all possible CPU's
  2017-06-28 23:22 [PATCH] hv: fix msi affinity when device requests all possible CPU's Stephen Hemminger
       [not found] ` <DM5PR21MB0476CFFA739B74FBB3B6F42DA0D20@DM5PR21MB0476.namprd21.prod.outlook.com>
@ 2017-07-02 21:38 ` Bjorn Helgaas
  2017-07-04 21:59   ` Stephen Hemminger
  1 sibling, 1 reply; 7+ messages in thread
From: Bjorn Helgaas @ 2017-07-02 21:38 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: kys, bhelgaas, linux-pci, devel, Stephen Hemminger

On Wed, Jun 28, 2017 at 04:22:04PM -0700, Stephen Hemminger wrote:
> When an Intel 10G NIC (ixgbevf) is passed through to a Hyper-V guest with
> SR-IOV, the driver requests affinity with all possible CPUs (0-239), even
> though those CPUs are not online (and never will be). Because of this, the
> device is unable to get its MSI interrupts set up correctly.
>
> This was caused by the change in 4.12 that converted this affinity into
> all possible CPUs (0-31), but the host then reports an error since this
> is larger than the number of online CPUs.
>
> Previously (up to 4.12-rc1) this worked because only online CPUs would
> be put in the mask passed to the host.
>
> This patch applies only to 4.12. The driver in linux-next needs a
> different fix because of the changes to the PCI host protocol version.

If Linus decides to postpone v4.12 a week, I can ask him to pull this.  But
I suspect he will release v4.12 today.  In that case, I don't know what to
do with this other than maybe send it to Greg for a -stable release.

> Fixes: 433fcf6b7b31 ("PCI: hv: Specify CPU_AFFINITY_ALL for MSI affinity when >= 32 CPUs")
> Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
> ---
>  drivers/pci/host/pci-hyperv.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/pci/host/pci-hyperv.c b/drivers/pci/host/pci-hyperv.c
> index 84936383e269..3cadfcca3ae9 100644
> --- a/drivers/pci/host/pci-hyperv.c
> +++ b/drivers/pci/host/pci-hyperv.c
> @@ -900,10 +900,12 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
>  	 * processors because Hyper-V only supports 64 in a guest.
>  	 */
>  	affinity = irq_data_get_affinity_mask(data);
> +	cpumask_and(affinity, affinity, cpu_online_mask);
> +
>  	if (cpumask_weight(affinity) >= 32) {
>  		int_pkt->int_desc.cpu_mask = CPU_AFFINITY_ALL;
>  	} else {
> -		for_each_cpu_and(cpu, affinity, cpu_online_mask) {
> +		for_each_cpu(cpu, affinity) {
>  			int_pkt->int_desc.cpu_mask |=
>  				(1ULL << vmbus_cpu_number_to_vp_number(cpu));
>  		}
> -- 
> 2.11.0
> 


* Re: [PATCH] hv: fix msi affinity when device requests all possible CPU's
  2017-07-02 21:38 ` Bjorn Helgaas
@ 2017-07-04 21:59   ` Stephen Hemminger
  2017-07-05 19:49     ` Bjorn Helgaas
  0 siblings, 1 reply; 7+ messages in thread
From: Stephen Hemminger @ 2017-07-04 21:59 UTC (permalink / raw)
  To: Bjorn Helgaas; +Cc: kys, bhelgaas, linux-pci, devel, Stephen Hemminger

On Sun, 2 Jul 2017 16:38:19 -0500
Bjorn Helgaas <helgaas@kernel.org> wrote:

> On Wed, Jun 28, 2017 at 04:22:04PM -0700, Stephen Hemminger wrote:
> > When an Intel 10G NIC (ixgbevf) is passed through to a Hyper-V guest with
> > SR-IOV, the driver requests affinity with all possible CPUs (0-239), even
> > though those CPUs are not online (and never will be). Because of this, the
> > device is unable to get its MSI interrupts set up correctly.
> >
> > This was caused by the change in 4.12 that converted this affinity into
> > all possible CPUs (0-31), but the host then reports an error since this
> > is larger than the number of online CPUs.
> >
> > Previously (up to 4.12-rc1) this worked because only online CPUs would
> > be put in the mask passed to the host.
> >
> > This patch applies only to 4.12. The driver in linux-next needs a
> > different fix because of the changes to the PCI host protocol version.
> 
> If Linus decides to postpone v4.12 a week, I can ask him to pull this.  But
> I suspect he will release v4.12 today.  In that case, I don't know what to
> do with this other than maybe send it to Greg for a -stable release.

Looks like this will have to be queued for 4.12 stable.


* Re: [PATCH] hv: fix msi affinity when device requests all possible CPU's
  2017-07-04 21:59   ` Stephen Hemminger
@ 2017-07-05 19:49     ` Bjorn Helgaas
  2017-07-05 20:07       ` Stephen Hemminger
  0 siblings, 1 reply; 7+ messages in thread
From: Bjorn Helgaas @ 2017-07-05 19:49 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: kys, bhelgaas, linux-pci, devel, Stephen Hemminger

On Tue, Jul 04, 2017 at 02:59:42PM -0700, Stephen Hemminger wrote:
> On Sun, 2 Jul 2017 16:38:19 -0500
> Bjorn Helgaas <helgaas@kernel.org> wrote:
> 
> > On Wed, Jun 28, 2017 at 04:22:04PM -0700, Stephen Hemminger wrote:
> > > When an Intel 10G NIC (ixgbevf) is passed through to a Hyper-V guest with
> > > SR-IOV, the driver requests affinity with all possible CPUs (0-239), even
> > > though those CPUs are not online (and never will be). Because of this, the
> > > device is unable to get its MSI interrupts set up correctly.
> > >
> > > This was caused by the change in 4.12 that converted this affinity into
> > > all possible CPUs (0-31), but the host then reports an error since this
> > > is larger than the number of online CPUs.
> > >
> > > Previously (up to 4.12-rc1) this worked because only online CPUs would
> > > be put in the mask passed to the host.
> > >
> > > This patch applies only to 4.12. The driver in linux-next needs a
> > > different fix because of the changes to the PCI host protocol version.
> > 
> > If Linus decides to postpone v4.12 a week, I can ask him to pull this.  But
> > I suspect he will release v4.12 today.  In that case, I don't know what to
> > do with this other than maybe send it to Greg for a -stable release.
> 
> Looks like this will have to be queued for 4.12 stable.

I assume you'll take care of this, right?  It sounds like there's nothing
to do for upstream because it needs a different fix.

Bjorn


* Re: [PATCH] hv: fix msi affinity when device requests all possible CPU's
  2017-07-05 19:49     ` Bjorn Helgaas
@ 2017-07-05 20:07       ` Stephen Hemminger
  0 siblings, 0 replies; 7+ messages in thread
From: Stephen Hemminger @ 2017-07-05 20:07 UTC (permalink / raw)
  To: Bjorn Helgaas; +Cc: kys, bhelgaas, linux-pci, devel, Stephen Hemminger

On Wed, 5 Jul 2017 14:49:33 -0500
Bjorn Helgaas <helgaas@kernel.org> wrote:

> On Tue, Jul 04, 2017 at 02:59:42PM -0700, Stephen Hemminger wrote:
> > On Sun, 2 Jul 2017 16:38:19 -0500
> > Bjorn Helgaas <helgaas@kernel.org> wrote:
> >   
> > > On Wed, Jun 28, 2017 at 04:22:04PM -0700, Stephen Hemminger wrote:  
> > > > When an Intel 10G NIC (ixgbevf) is passed through to a Hyper-V guest with
> > > > SR-IOV, the driver requests affinity with all possible CPUs (0-239), even
> > > > though those CPUs are not online (and never will be). Because of this, the
> > > > device is unable to get its MSI interrupts set up correctly.
> > > >
> > > > This was caused by the change in 4.12 that converted this affinity into
> > > > all possible CPUs (0-31), but the host then reports an error since this
> > > > is larger than the number of online CPUs.
> > > >
> > > > Previously (up to 4.12-rc1) this worked because only online CPUs would
> > > > be put in the mask passed to the host.
> > > >
> > > > This patch applies only to 4.12. The driver in linux-next needs a
> > > > different fix because of the changes to the PCI host protocol version.
> > > 
> > > If Linus decides to postpone v4.12 a week, I can ask him to pull this.  But
> > > I suspect he will release v4.12 today.  In that case, I don't know what to
> > > do with this other than maybe send it to Greg for a -stable release.  
> > 
> > Looks like this will have to be queued for 4.12 stable.  
> 
> I assume you'll take care of this, right?  It sounds like there's nothing
> to do for upstream because it needs a different fix.
> 
> Bjorn

Already fixed in linux-next. The code is different for the PCI host
protocol 1.2 version and never had the bug.

