* [PATCH] Only include online cpus in cpu_mask_to_apicid_flat
@ 2010-08-31  7:19 Yang, Sheng
  2010-08-31  8:55 ` Jan Beulich
  0 siblings, 1 reply; 7+ messages in thread
From: Yang, Sheng @ 2010-08-31  7:19 UTC (permalink / raw)
  To: Keir Fraser; +Cc: xen-devel

[-- Attachment #1: Type: Text/Plain, Size: 2177 bytes --]

The former fix to __bind_irq_vector() ensures the destination field in the 
IOAPIC redirection table is valid by limiting cpu_mask to online CPUs. But 
there is another issue: some interrupts (timer and serial) want to be delivered 
to all online CPUs using LPR (lowest-priority) delivery mode, and at the time 
__bind_irq_vector() is called only one CPU (the BSP) is online, so those 
interrupts would always be delivered to the BSP. That works, but it is not the 
best solution.

In fact, setup_ioapic_dest() is called to reprogram the IOAPIC redirection 
table according to irq_cfg->cpu_mask after SMP initialization is done. So I 
think the better choice is to keep the original value in irq_cfg->cpu_mask and 
just make sure the value we write to the IOAPIC redirection table is valid. 
Modifying cpu_mask_to_apicid_flat() therefore seems like the better idea.

This patch reverts the fix in __bind_irq_vector() and modifies 
cpu_mask_to_apicid_flat() to include only online CPUs.
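
To illustrate with hypothetical numbers (not taken from any real boot; in flat 
mode each of the first eight CPUs owns one bit of the logical destination 
byte): suppose cpumask contains CPUs 0-3 but only CPUs 0-1 are online. Then

    cpus_addr(cpumask)[0] & 0xFF;                                /* old: 0x0f, includes offline CPUs 2-3 */
    cpus_addr(cpumask)[0] & cpus_addr(cpu_online_map)[0] & 0xFF; /* new: 0x03, online CPUs only */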

--
regards
Yang, Sheng

diff --git a/xen/arch/x86/genapic/delivery.c b/xen/arch/x86/genapic/delivery.c
--- a/xen/arch/x86/genapic/delivery.c
+++ b/xen/arch/x86/genapic/delivery.c
@@ -36,9 +36,10 @@
 	return cpu_online_map;
 }

+/* Cover only online cpus */
 unsigned int cpu_mask_to_apicid_flat(cpumask_t cpumask)
 {
-	return cpus_addr(cpumask)[0]&0xFF;
+	return cpus_addr(cpumask)[0] & cpus_addr(cpu_online_map)[0] & 0xFF;
 }

 /*
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -86,14 +86,14 @@
     cpus_and(online_mask, cpu_mask, cpu_online_map);
     if (cpus_empty(online_mask))
         return -EINVAL;
-    if ((cfg->vector == vector) && cpus_equal(cfg->cpu_mask, online_mask))
+    if ((cfg->vector == vector) && cpus_equal(cfg->cpu_mask, cpu_mask))
         return 0;
     if (cfg->vector != IRQ_VECTOR_UNASSIGNED)
         return -EBUSY;
     for_each_cpu_mask(cpu, online_mask)
         per_cpu(vector_irq, cpu)[vector] = irq;
     cfg->vector = vector;
-    cfg->cpu_mask = online_mask;
+    cfg->cpu_mask = cpu_mask;
     irq_status[irq] = IRQ_USED;
     if (IO_APIC_IRQ(irq))
         irq_vector[irq] = vector;



* Re: [PATCH] Only include online cpus in cpu_mask_to_apicid_flat
  2010-08-31  7:19 [PATCH] Only include online cpus in cpu_mask_to_apicid_flat Yang, Sheng
@ 2010-08-31  8:55 ` Jan Beulich
  2010-08-31 10:46   ` Keir Fraser
  0 siblings, 1 reply; 7+ messages in thread
From: Jan Beulich @ 2010-08-31  8:55 UTC (permalink / raw)
  To: Keir Fraser, Sheng Yang; +Cc: xen-devel

>>> On 31.08.10 at 09:19, "Yang, Sheng" <sheng.yang@intel.com> wrote:
> The former fix to __bind_irq_vector() ensures the destination field in the
> IOAPIC redirection table is valid by limiting cpu_mask to online CPUs. But
> there is another issue: some interrupts (timer and serial) want to be
> delivered to all online CPUs using LPR (lowest-priority) delivery mode, and
> at the time __bind_irq_vector() is called only one CPU (the BSP) is online,
> so those interrupts would always be delivered to the BSP. That works, but it
> is not the best solution.
> 
> In fact, setup_ioapic_dest() is called to reprogram the IOAPIC redirection
> table according to irq_cfg->cpu_mask after SMP initialization is done. So I
> think the better choice is to keep the original value in irq_cfg->cpu_mask
> and just make sure the value we write to the IOAPIC redirection table is
> valid. Modifying cpu_mask_to_apicid_flat() therefore seems like the better
> idea.

Why would you need to modify only this function, but not the other
variants? If a CPU in the passed in mask can be offline, then
first_cpu() (as used in the other variants) can return an offline CPU,
and you don't want to program such into an RTE.
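
For reference, the phys variant in question currently reads as follows (as it 
appears in the unmodified context of the follow-up patch later in this thread):

    unsigned int cpu_mask_to_apicid_phys(cpumask_t cpumask)
    {
        /* As we are using single CPU as destination, pick only one CPU here */
        return cpu_physical_id(first_cpu(cpumask));
    }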

> This patch reverts the fix in __bind_irq_vector() and modifies
> cpu_mask_to_apicid_flat() to include only online CPUs.

Jan


* Re: [PATCH] Only include online cpus in cpu_mask_to_apicid_flat
  2010-08-31  8:55 ` Jan Beulich
@ 2010-08-31 10:46   ` Keir Fraser
  2010-09-01  3:39     ` Yang, Sheng
  0 siblings, 1 reply; 7+ messages in thread
From: Keir Fraser @ 2010-08-31 10:46 UTC (permalink / raw)
  To: Jan Beulich, Sheng Yang; +Cc: xen-devel

On 31/08/2010 09:55, "Jan Beulich" <JBeulich@novell.com> wrote:

>> In fact, setup_ioapic_dest() is called to reprogram the IOAPIC redirection
>> table according to irq_cfg->cpu_mask after SMP initialization is done. So I
>> think the better choice is to keep the original value in irq_cfg->cpu_mask
>> and just make sure the value we write to the IOAPIC redirection table is
>> valid. Modifying cpu_mask_to_apicid_flat() therefore seems like the better
>> idea.
> 
> Why would you need to modify only this function, but not the other
> variants? If a CPU in the passed in mask can be offline, then
> first_cpu() (as used in the other variants) can return an offline CPU,
> and you don't want to program such into an RTE.

Indeed, also all other assignments to irq_cfg->cpu_mask include only online
CPUs, so the current code is only being consistent in that respect. And in
the general case (even if not specifically for IRQ0) that is important
because IDT vectors are not allocated on offline CPUs, and so we could
otherwise end up with CPUs coming online and finding they are in multiple
irq_cfg's with the same vector! Also the PIT is usually disabled after boot
on Xen and so it being restricted to only CPU0 would really not matter. I
think we should leave the code as is.

 -- Keir


* Re: [PATCH] Only include online cpus in cpu_mask_to_apicid_flat
  2010-08-31 10:46   ` Keir Fraser
@ 2010-09-01  3:39     ` Yang, Sheng
  2010-09-01  9:14       ` Jan Beulich
  0 siblings, 1 reply; 7+ messages in thread
From: Yang, Sheng @ 2010-09-01  3:39 UTC (permalink / raw)
  To: Keir Fraser; +Cc: xen-devel, Jan Beulich

[-- Attachment #1: Type: Text/Plain, Size: 1951 bytes --]

On Tuesday 31 August 2010 18:46:28 Keir Fraser wrote:
> On 31/08/2010 09:55, "Jan Beulich" <JBeulich@novell.com> wrote:
> >> In fact, setup_ioapic_dest() is called to reprogram the IOAPIC
> >> redirection table according to irq_cfg->cpu_mask after SMP initialization
> >> is done. So I think the better choice is to keep the original value in
> >> irq_cfg->cpu_mask and just make sure the value we write to the IOAPIC
> >> redirection table is valid. Modifying cpu_mask_to_apicid_flat() therefore
> >> seems like the better idea.
> > 
> > Why would you need to modify only this function, but not the other
> > variants? If a CPU in the passed in mask can be offline, then
> > first_cpu() (as used in the other variants) can return an offline CPU,
> > and you don't want to program such into an RTE.

Yes, here is the patch with the other variants modified as well.
> 
> Indeed, also all other assignments to irq_cfg->cpu_mask include only online
> CPUs, so the current code is only being consistent in that respect. 

After reading the code, I think it may not be that consistent. For example, it 
seems set_desc_affinity() and __clear_irq_vector() (as well as many other 
functions) do not assume that the CPUs contained in cfg->cpu_mask are all online.

> And in
> the general case (even if not specifically for IRQ0) that is important
> because IDT vectors are not allocated on offline CPUs, and so we could
> otherwise end up with CPUs coming online and finding they are in multiple
> irq_cfg's with the same vector!

I think this still applies, because we still don't allocate vectors for offline 
CPUs; when we allocate vectors, we check cpu_online_map.
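
For reference, the check being referred to is the one already visible at the 
top of the __bind_irq_vector() hunk quoted in the patches above:

    cpus_and(online_mask, cpu_mask, cpu_online_map);
    if (cpus_empty(online_mask))
        return -EINVAL;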

> Also the PIT is usually disabled after boot
> on Xen and so it being restricted to only CPU0 would really not matter. I
> think we should leave the code as is.

But the HPET, which replaces the PIT and uses IRQ0, would still be in use.

--
regards
Yang, Sheng

> 
>  -- Keir

[-- Attachment #2: dest_fix.patch --]
[-- Type: text/x-patch, Size: 2260 bytes --]

diff --git a/xen/arch/x86/genapic/delivery.c b/xen/arch/x86/genapic/delivery.c
--- a/xen/arch/x86/genapic/delivery.c
+++ b/xen/arch/x86/genapic/delivery.c
@@ -36,9 +36,10 @@
 	return cpu_online_map;
 } 
 
+/* Cover only online cpus */
 unsigned int cpu_mask_to_apicid_flat(cpumask_t cpumask)
 {
-	return cpus_addr(cpumask)[0]&0xFF;
+	return cpus_addr(cpumask)[0] & cpus_addr(cpu_online_map)[0] & 0xFF;
 }
 
 /*
@@ -71,6 +72,11 @@
 
 unsigned int cpu_mask_to_apicid_phys(cpumask_t cpumask)
 {
+	int cpu;
 	/* As we are using single CPU as destination, pick only one CPU here */
-	return cpu_physical_id(first_cpu(cpumask));
+	for_each_cpu_mask(cpu, cpumask) {
+		if (cpu_online(cpu))
+			break;
+	}
+	return cpu_physical_id(cpu);
 }
diff --git a/xen/arch/x86/genapic/x2apic.c b/xen/arch/x86/genapic/x2apic.c
--- a/xen/arch/x86/genapic/x2apic.c
+++ b/xen/arch/x86/genapic/x2apic.c
@@ -101,12 +101,26 @@
 
 unsigned int cpu_mask_to_apicid_x2apic_phys(cpumask_t cpumask)
 {
-    return cpu_physical_id(first_cpu(cpumask));
+    int cpu;
+
+    for_each_cpu_mask( cpu, cpumask ) 
+    {
+        if ( cpu_online(cpu) )
+            break;
+    }
+    return cpu_physical_id(cpu);
 }
 
 unsigned int cpu_mask_to_apicid_x2apic_cluster(cpumask_t cpumask)
 {
-    return cpu_2_logical_apicid[first_cpu(cpumask)];
+    int cpu;
+
+    for_each_cpu_mask( cpu, cpumask ) 
+    {
+        if ( cpu_online(cpu) )
+            break;
+    }
+    return cpu_2_logical_apicid[cpu];
 }
 
 static void __send_IPI_mask_x2apic(
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -86,14 +86,14 @@
     cpus_and(online_mask, cpu_mask, cpu_online_map);
     if (cpus_empty(online_mask))
         return -EINVAL;
-    if ((cfg->vector == vector) && cpus_equal(cfg->cpu_mask, online_mask))
+    if ((cfg->vector == vector) && cpus_equal(cfg->cpu_mask, cpu_mask))
         return 0;
     if (cfg->vector != IRQ_VECTOR_UNASSIGNED) 
         return -EBUSY;
     for_each_cpu_mask(cpu, online_mask)
         per_cpu(vector_irq, cpu)[vector] = irq;
     cfg->vector = vector;
-    cfg->cpu_mask = online_mask;
+    cfg->cpu_mask = cpu_mask;
     irq_status[irq] = IRQ_USED;
     if (IO_APIC_IRQ(irq))
         irq_vector[irq] = vector;


* Re: [PATCH] Only include online cpus in cpu_mask_to_apicid_flat
  2010-09-01  3:39     ` Yang, Sheng
@ 2010-09-01  9:14       ` Jan Beulich
  2010-09-01 10:02         ` Yang, Sheng
  0 siblings, 1 reply; 7+ messages in thread
From: Jan Beulich @ 2010-09-01  9:14 UTC (permalink / raw)
  To: Sheng Yang; +Cc: xen-devel, Keir Fraser

>>> On 01.09.10 at 05:39, "Yang, Sheng" <sheng.yang@intel.com> wrote:
> Yes, here is the patch with the other variants modified as well.

If indeed an adjustment like this is needed, then this (and other similar
instances)

>@@ -71,6 +72,11 @@
> 
> unsigned int cpu_mask_to_apicid_phys(cpumask_t cpumask)
> {
>+	int cpu;
> 	/* As we are using single CPU as destination, pick only one CPU here */
>-	return cpu_physical_id(first_cpu(cpumask));
>+	for_each_cpu_mask(cpu, cpumask) {
>+		if (cpu_online(cpu))
>+			break;
>+	}
>+	return cpu_physical_id(cpu);
> }

is insufficient: you need to handle the case where you don't find any
online CPU in the mask (at least by adding a respective BUG_ON()).
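
A minimal sketch of that suggestion (assuming the iterator leaves cpu at 
NR_CPUS when the loop runs to completion without finding an online CPU, as the 
usual cpumask iterator does):

    unsigned int cpu_mask_to_apicid_phys(cpumask_t cpumask)
    {
        int cpu;

        for_each_cpu_mask(cpu, cpumask) {
            if (cpu_online(cpu))
                break;
        }
        /* No online CPU found in the mask -- never program that into an RTE. */
        BUG_ON(cpu >= NR_CPUS);
        return cpu_physical_id(cpu);
    }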

But I tend to agree with Keir that this shouldn't be done here -
these functions are simple accessors, which shouldn't enforce
any policy. Higher level code, if it doesn't already, should be
adjusted to never allow offline CPUs to slip through.
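
One concrete instance of such a higher-level adjustment appears later in this 
thread, in the serial-IRQ setup in smpboot.c:

    -        irq_cfg[irq].cpu_mask = (cpumask_t)CPU_MASK_ALL;
    +        irq_cfg[irq].cpu_mask = cpu_online_map;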

Jan


* Re: [PATCH] Only include online cpus in cpu_mask_to_apicid_flat
  2010-09-01  9:14       ` Jan Beulich
@ 2010-09-01 10:02         ` Yang, Sheng
  2010-09-06  5:56           ` Yang, Sheng
  0 siblings, 1 reply; 7+ messages in thread
From: Yang, Sheng @ 2010-09-01 10:02 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Keir Fraser

[-- Attachment #1: Type: Text/Plain, Size: 1218 bytes --]

On Wednesday 01 September 2010 17:14:52 Jan Beulich wrote:
> >>> On 01.09.10 at 05:39, "Yang, Sheng" <sheng.yang@intel.com> wrote:
> > Yes, here is the patch with the other variants modified as well.
> 
> If indeed an adjustment like this is needed, then this (and other similar
> instances)
> 
> >@@ -71,6 +72,11 @@
> > 
> > unsigned int cpu_mask_to_apicid_phys(cpumask_t cpumask)
> > {
> >+	int cpu;
> > 	/* As we are using single CPU as destination, pick only one CPU here */
> >-	return cpu_physical_id(first_cpu(cpumask));
> >+	for_each_cpu_mask(cpu, cpumask) {
> >+		if (cpu_online(cpu))
> >+			break;
> >+	}
> >+	return cpu_physical_id(cpu);
> > }
> 
> is insufficient: you need to handle the case where you don't
> find any online CPU in the mask (at least by adding a respective
> BUG_ON()).

Yes, BUG_ON() is needed.
> 
> But I tend to agree with Keir that this shouldn't be done here -
> these functions are simple accessors, which shouldn't enforce
> any policy. Higher level code, if it doesn't already, should be
> adjusted to never allow offline CPUs to slip through.

Well, I think it's acceptable to add a wrapper function for it. So how about 
this one?

--
regards
Yang, Sheng

> 
> Jan

[-- Attachment #2: dest_fix.patch --]
[-- Type: text/x-patch, Size: 3703 bytes --]

diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -490,7 +490,7 @@
     cpus_copy(desc->affinity, mask);
     cpus_and(dest_mask, desc->affinity, cfg->cpu_mask);
 
-    return cpu_mask_to_apicid(dest_mask);
+    return cpu_mask_to_apicid_online(dest_mask);
 }
 
 static void
@@ -1003,7 +1003,7 @@
             }
             cfg = irq_cfg(irq);
             SET_DEST(entry.dest.dest32, entry.dest.logical.logical_dest,
-                cpu_mask_to_apicid(cfg->cpu_mask));
+                cpu_mask_to_apicid_online(cfg->cpu_mask));
             spin_lock_irqsave(&ioapic_lock, flags);
             io_apic_write(apic, 0x11+2*pin, *(((int *)&entry)+1));
             io_apic_write(apic, 0x10+2*pin, *(((int *)&entry)+0));
@@ -1038,7 +1038,7 @@
     entry.dest_mode = INT_DEST_MODE;
     entry.mask = 0;					/* unmask IRQ now */
     SET_DEST(entry.dest.dest32, entry.dest.logical.logical_dest,
-        cpu_mask_to_apicid(TARGET_CPUS));
+        cpu_mask_to_apicid_online(TARGET_CPUS));
     entry.delivery_mode = INT_DELIVERY_MODE;
     entry.polarity = 0;
     entry.trigger = 0;
@@ -2253,7 +2253,7 @@
     entry.delivery_mode = INT_DELIVERY_MODE;
     entry.dest_mode = INT_DEST_MODE;
     SET_DEST(entry.dest.dest32, entry.dest.logical.logical_dest,
-        cpu_mask_to_apicid(TARGET_CPUS));
+        cpu_mask_to_apicid_online(TARGET_CPUS));
     entry.trigger = edge_level;
     entry.polarity = active_high_low;
     entry.mask  = 1;
@@ -2446,7 +2446,7 @@
     rte.vector = cfg->vector;
 
     SET_DEST(rte.dest.dest32, rte.dest.logical.logical_dest,
-        cpu_mask_to_apicid(cfg->cpu_mask));
+        cpu_mask_to_apicid_online(cfg->cpu_mask));
 
     io_apic_write(apic, 0x10 + 2 * pin, *(((int *)&rte) + 0));
     io_apic_write(apic, 0x11 + 2 * pin, *(((int *)&rte) + 1));
@@ -2584,3 +2584,12 @@
     printk(XENLOG_INFO "IRQ limits: %u GSI, %u MSI/MSI-X\n",
            nr_irqs_gsi, nr_irqs - nr_irqs_gsi);
 }
+
+unsigned int cpu_mask_to_apicid_online(cpumask_t cpu_mask)
+{
+    cpumask_t dest;
+    cpus_and(dest, cpu_mask, cpu_online_map);
+    BUG_ON(cpus_empty(dest));
+    return cpu_mask_to_apicid(dest);
+}
+
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -86,14 +86,14 @@
     cpus_and(online_mask, cpu_mask, cpu_online_map);
     if (cpus_empty(online_mask))
         return -EINVAL;
-    if ((cfg->vector == vector) && cpus_equal(cfg->cpu_mask, online_mask))
+    if ((cfg->vector == vector) && cpus_equal(cfg->cpu_mask, cpu_mask))
         return 0;
     if (cfg->vector != IRQ_VECTOR_UNASSIGNED) 
         return -EBUSY;
     for_each_cpu_mask(cpu, online_mask)
         per_cpu(vector_irq, cpu)[vector] = irq;
     cfg->vector = vector;
-    cfg->cpu_mask = online_mask;
+    cfg->cpu_mask = cpu_mask;
     irq_status[irq] = IRQ_USED;
     if (IO_APIC_IRQ(irq))
         irq_vector[irq] = vector;
diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -133,7 +133,7 @@
 
     if ( vector ) {
 
-        dest = cpu_mask_to_apicid(domain);
+        dest = cpu_mask_to_apicid_online(domain);
 
         msg->address_hi = MSI_ADDR_BASE_HI;
         msg->address_lo =
diff --git a/xen/include/asm-x86/genapic.h b/xen/include/asm-x86/genapic.h
--- a/xen/include/asm-x86/genapic.h
+++ b/xen/include/asm-x86/genapic.h
@@ -52,6 +52,8 @@
 extern const struct genapic apic_x2apic_phys;
 extern const struct genapic apic_x2apic_cluster;
 
+unsigned int cpu_mask_to_apicid_online(cpumask_t cpu_mask);
+
 void init_apic_ldr_flat(void);
 void clustered_apic_check_flat(void);
 cpumask_t target_cpus_flat(void);


* Re: [PATCH] Only include online cpus in cpu_mask_to_apicid_flat
  2010-09-01 10:02         ` Yang, Sheng
@ 2010-09-06  5:56           ` Yang, Sheng
  0 siblings, 0 replies; 7+ messages in thread
From: Yang, Sheng @ 2010-09-06  5:56 UTC (permalink / raw)
  To: Keir Fraser; +Cc: xen-devel, Jan Beulich

[-- Attachment #1: Type: Text/Plain, Size: 2152 bytes --]

On Wednesday 01 September 2010 18:02:13 Yang, Sheng wrote:
> On Wednesday 01 September 2010 17:14:52 Jan Beulich wrote:
> > >>> On 01.09.10 at 05:39, "Yang, Sheng" <sheng.yang@intel.com> wrote:
> > > Yes, here is the patch with the other variants modified as well.
> > 
> > If indeed an adjustment like this is needed, then this (and other similar
> > instances)
> > 
> > >@@ -71,6 +72,11 @@
> > > 
> > > unsigned int cpu_mask_to_apicid_phys(cpumask_t cpumask)
> > > {
> > >+	int cpu;
> > > 	/* As we are using single CPU as destination, pick only one CPU here */
> > >-	return cpu_physical_id(first_cpu(cpumask));
> > >+	for_each_cpu_mask(cpu, cpumask) {
> > >+		if (cpu_online(cpu))
> > >+			break;
> > >+	}
> > >+	return cpu_physical_id(cpu);
> > > }
> > 
> > is insufficient: you need to handle the case where you don't
> > find any online CPU in the mask (at least by adding a respective
> > BUG_ON()).
> 
> Yes, BUG_ON() is needed.
> 
> > But I tend to agree with Keir that this shouldn't be done here -
> > these functions are simple accessors, which shouldn't enforce
> > any policy. Higher level code, if it doesn't already, should be
> > adjusted to never allow offline CPUs to slip through.
> 
> Well, I think it's acceptable to add a wrapper function for it. So how about
> this one?

Keir & Jan, what do you think about this patchset?

If you still think we should never allow offline CPUs in cpu_mask, then we at 
least need one patch to fix the serial port IRQ's cpu_mask, which was 
CPU_MASK_ALL (this fix would also result in the serial interrupt being 
delivered to CPU0).

--
regards
Yang, Sheng

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -1015,7 +1015,7 @@
         irq_vector[irq] = FIRST_HIPRIORITY_VECTOR + seridx + 1;
         per_cpu(vector_irq, cpu)[FIRST_HIPRIORITY_VECTOR + seridx + 1] = irq;
         irq_cfg[irq].vector = FIRST_HIPRIORITY_VECTOR + seridx + 1;
-        irq_cfg[irq].cpu_mask = (cpumask_t)CPU_MASK_ALL;
+        irq_cfg[irq].cpu_mask = cpu_online_map;
     }

     /* IPI for cleanuping vectors after irq move */

