* [PATCH] asm,x86: Set max CPUs to 512 instead of 256.
@ 2015-01-22 16:52 Konrad Rzeszutek Wilk
2015-01-22 17:04 ` [PATCH] asm, x86: " Andrew Cooper
2015-01-23 11:25 ` Jan Beulich
0 siblings, 2 replies; 5+ messages in thread
From: Konrad Rzeszutek Wilk @ 2015-01-22 16:52 UTC (permalink / raw)
To: xen-devel, JBeulich, andrew.cooper3; +Cc: Konrad Rzeszutek Wilk
Contemporary servers now sport 480 or more CPUs. We should raise
the default CPU count accordingly, so such machines work out of the
box without distros having to resort to the 'max_phys_cpus' override.
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
xen/include/asm-x86/config.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
index 2fbd68d..d450696 100644
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -64,7 +64,7 @@
#ifdef MAX_PHYS_CPUS
#define NR_CPUS MAX_PHYS_CPUS
#else
-#define NR_CPUS 256
+#define NR_CPUS 512
#endif
/* Linkage for x86 */
--
2.1.0
* Re: [PATCH] asm, x86: Set max CPUs to 512 instead of 256.
2015-01-22 16:52 [PATCH] asm,x86: Set max CPUs to 512 instead of 256 Konrad Rzeszutek Wilk
@ 2015-01-22 17:04 ` Andrew Cooper
2015-01-22 19:03 ` Konrad Rzeszutek Wilk
2015-01-22 20:04 ` Konrad Rzeszutek Wilk
2015-01-23 11:25 ` Jan Beulich
1 sibling, 2 replies; 5+ messages in thread
From: Andrew Cooper @ 2015-01-22 17:04 UTC (permalink / raw)
To: Konrad Rzeszutek Wilk, xen-devel, JBeulich
On 22/01/15 16:52, Konrad Rzeszutek Wilk wrote:
> Contemporary servers now sport 480 or more CPUs. We should raise
> the default CPU count accordingly, so such machines work out of the
> box without distros having to resort to the 'max_phys_cpus' override.
>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
/me would really like to try something that big out, but I have not had
the opportunity yet to hit the 256 limit.
I wonder which variables grow as a result of this change. We might want
to see about making more things dynamically allocated after reading the
ACPI tables, if we can.
~Andrew
> ---
> xen/include/asm-x86/config.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
> index 2fbd68d..d450696 100644
> --- a/xen/include/asm-x86/config.h
> +++ b/xen/include/asm-x86/config.h
> @@ -64,7 +64,7 @@
> #ifdef MAX_PHYS_CPUS
> #define NR_CPUS MAX_PHYS_CPUS
> #else
> -#define NR_CPUS 256
> +#define NR_CPUS 512
> #endif
>
> /* Linkage for x86 */
* Re: [PATCH] asm, x86: Set max CPUs to 512 instead of 256.
2015-01-22 17:04 ` [PATCH] asm, x86: " Andrew Cooper
@ 2015-01-22 19:03 ` Konrad Rzeszutek Wilk
2015-01-22 20:04 ` Konrad Rzeszutek Wilk
1 sibling, 0 replies; 5+ messages in thread
From: Konrad Rzeszutek Wilk @ 2015-01-22 19:03 UTC (permalink / raw)
To: Andrew Cooper; +Cc: xen-devel, JBeulich
On Thu, Jan 22, 2015 at 05:04:12PM +0000, Andrew Cooper wrote:
> On 22/01/15 16:52, Konrad Rzeszutek Wilk wrote:
> > Contemporary servers now sport 480 or more CPUs. We should raise
> > the default CPU count accordingly, so such machines work out of the
> > box without distros having to resort to the 'max_phys_cpus' override.
> >
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>
> /me would really like to try something that big out, but I have not had
> the opportunity yet to hit the 256 limit.
>
> I wonder which variables grow as a result of this change. We might want
> to see about making more things dynamically allocated after reading the
> ACPI tables, if we can.
I am not sure that is possible, as there are a lot of DEFINE_PER_CPU
variables, which cannot grow.
The structures that grow are:
struct cpumask
struct kernel_param
struct rangeset
struct csched2_runqueue_data
struct csched2_private
struct rt_vcpu
struct stopmachine_data
struct free_ptr
struct rcu_data
struct physid_mask
struct acpi_table_header
struct calibration_rendezvous
struct bug_frame
(for fun see attached diff of pahole between 256 and 512 CPUs)
>
> ~Andrew
>
> > ---
> > xen/include/asm-x86/config.h | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
> > index 2fbd68d..d450696 100644
> > --- a/xen/include/asm-x86/config.h
> > +++ b/xen/include/asm-x86/config.h
> > @@ -64,7 +64,7 @@
> > #ifdef MAX_PHYS_CPUS
> > #define NR_CPUS MAX_PHYS_CPUS
> > #else
> > -#define NR_CPUS 256
> > +#define NR_CPUS 512
> > #endif
> >
> > /* Linkage for x86 */
>
[-- Attachment #2: 256vs512 --]
[-- Type: text/plain, Size: 9672 bytes --]
--- 256 2015-01-22 14:01:55.200283080 -0500
+++ 512 2015-01-22 13:59:08.588811566 -0500
@@ -1,8 +1,8 @@
struct cpumask {
- long unsigned int bits[4]; /* 0 32 */
+ long unsigned int bits[8]; /* 0 64 */
+ /* --- cacheline 1 boundary (64 bytes) --- */
- /* size: 32, cachelines: 1, members: 1 */
- /* last cacheline: 32 bytes */
+ /* size: 64, cachelines: 1, members: 1 */
};
struct kernel_param {
const char * name; /* 0 8 */
@@ -4853,10 +4853,11 @@
long unsigned int start; /* 8 8 */
long unsigned int per_cpu_sz; /* 16 8 */
long unsigned int rem; /* 24 8 */
- cpumask_t cpus; /* 32 32 */
- /* --- cacheline 1 boundary (64 bytes) --- */
+ cpumask_t cpus; /* 32 64 */
+ /* --- cacheline 1 boundary (64 bytes) was 32 bytes ago --- */
- /* size: 64, cachelines: 1, members: 5 */
+ /* size: 96, cachelines: 2, members: 5 */
+ /* last cacheline: 32 bytes */
};
struct rangeset {
struct list_head rangeset_list; /* 0 16 */
@@ -4968,46 +4969,49 @@
struct csched2_runqueue_data {
int id; /* 0 4 */
spinlock_t lock; /* 4 4 */
- cpumask_t active; /* 8 32 */
- struct list_head runq; /* 40 16 */
- struct list_head svc; /* 56 16 */
+ cpumask_t active; /* 8 64 */
/* --- cacheline 1 boundary (64 bytes) was 8 bytes ago --- */
- unsigned int max_weight; /* 72 4 */
+ struct list_head runq; /* 72 16 */
+ struct list_head svc; /* 88 16 */
+ unsigned int max_weight; /* 104 4 */
/* XXX 4 bytes hole, try to pack */
- cpumask_t idle; /* 80 32 */
- cpumask_t tickled; /* 112 32 */
- /* --- cacheline 2 boundary (128 bytes) was 16 bytes ago --- */
- int load; /* 144 4 */
+ cpumask_t idle; /* 112 64 */
+ /* --- cacheline 2 boundary (128 bytes) was 48 bytes ago --- */
+ cpumask_t tickled; /* 176 64 */
+ /* --- cacheline 3 boundary (192 bytes) was 48 bytes ago --- */
+ int load; /* 240 4 */
/* XXX 4 bytes hole, try to pack */
- s_time_t load_last_update; /* 152 8 */
- s_time_t avgload; /* 160 8 */
- s_time_t b_avgload; /* 168 8 */
+ s_time_t load_last_update; /* 248 8 */
+ /* --- cacheline 4 boundary (256 bytes) --- */
+ s_time_t avgload; /* 256 8 */
+ s_time_t b_avgload; /* 264 8 */
- /* size: 176, cachelines: 3, members: 12 */
- /* sum members: 168, holes: 2, sum holes: 8 */
- /* last cacheline: 48 bytes */
+ /* size: 272, cachelines: 5, members: 12 */
+ /* sum members: 264, holes: 2, sum holes: 8 */
+ /* last cacheline: 16 bytes */
};
struct csched2_private {
spinlock_t lock; /* 0 4 */
/* XXX 4 bytes hole, try to pack */
- cpumask_t initialized; /* 8 32 */
- struct list_head sdom; /* 40 16 */
- int runq_map[256]; /* 56 1024 */
- /* --- cacheline 16 boundary (1024 bytes) was 56 bytes ago --- */
- cpumask_t active_queues; /* 1080 32 */
- /* --- cacheline 17 boundary (1088 bytes) was 24 bytes ago --- */
- struct csched2_runqueue_data rqd[256]; /* 1112 45056 */
- /* --- cacheline 721 boundary (46144 bytes) was 24 bytes ago --- */
- int load_window_shift; /* 46168 4 */
+ cpumask_t initialized; /* 8 64 */
+ /* --- cacheline 1 boundary (64 bytes) was 8 bytes ago --- */
+ struct list_head sdom; /* 72 16 */
+ int runq_map[512]; /* 88 2048 */
+ /* --- cacheline 33 boundary (2112 bytes) was 24 bytes ago --- */
+ cpumask_t active_queues; /* 2136 64 */
+ /* --- cacheline 34 boundary (2176 bytes) was 24 bytes ago --- */
+ struct csched2_runqueue_data rqd[512]; /* 2200 139264 */
+ /* --- cacheline 2210 boundary (141440 bytes) was 24 bytes ago --- */
+ int load_window_shift; /* 141464 4 */
- /* size: 46176, cachelines: 722, members: 7 */
- /* sum members: 46168, holes: 1, sum holes: 4 */
+ /* size: 141472, cachelines: 2211, members: 7 */
+ /* sum members: 141464, holes: 1, sum holes: 4 */
/* padding: 4 */
/* last cacheline: 32 bytes */
};
@@ -5149,12 +5153,12 @@
struct list_head sdom; /* 8 16 */
struct list_head runq; /* 24 16 */
struct list_head depletedq; /* 40 16 */
- cpumask_t tickled; /* 56 32 */
- /* --- cacheline 1 boundary (64 bytes) was 24 bytes ago --- */
+ cpumask_t tickled; /* 56 64 */
+ /* --- cacheline 1 boundary (64 bytes) was 56 bytes ago --- */
- /* size: 88, cachelines: 2, members: 5 */
- /* sum members: 84, holes: 1, sum holes: 4 */
- /* last cacheline: 24 bytes */
+ /* size: 120, cachelines: 2, members: 5 */
+ /* sum members: 116, holes: 1, sum holes: 4 */
+ /* last cacheline: 56 bytes */
};
struct rt_vcpu {
struct list_head q_elem; /* 0 16 */
@@ -5232,11 +5236,12 @@
/* XXX 4 bytes hole, try to pack */
- cpumask_t selected; /* 24 32 */
+ cpumask_t selected; /* 24 64 */
+ /* --- cacheline 1 boundary (64 bytes) was 24 bytes ago --- */
- /* size: 56, cachelines: 1, members: 4 */
- /* sum members: 52, holes: 1, sum holes: 4 */
- /* last cacheline: 56 bytes */
+ /* size: 88, cachelines: 2, members: 4 */
+ /* sum members: 84, holes: 1, sum holes: 4 */
+ /* last cacheline: 24 bytes */
};
struct stopmachine_data {
unsigned int nr_cpus; /* 0 4 */
@@ -5702,13 +5707,13 @@
struct vcpu * vcpu; /* 16 8 */
void * esp; /* 24 8 */
char * stack; /* 32 8 */
- cpumask_t saved_affinity; /* 40 32 */
- /* --- cacheline 1 boundary (64 bytes) was 8 bytes ago --- */
- unsigned int wakeup_cpu; /* 72 4 */
+ cpumask_t saved_affinity; /* 40 64 */
+ /* --- cacheline 1 boundary (64 bytes) was 40 bytes ago --- */
+ unsigned int wakeup_cpu; /* 104 4 */
- /* size: 80, cachelines: 2, members: 6 */
+ /* size: 112, cachelines: 2, members: 6 */
/* padding: 4 */
- /* last cacheline: 16 bytes */
+ /* last cacheline: 48 bytes */
};
struct free_ptr {
struct bhdr * prev; /* 0 8 */
@@ -5774,11 +5779,12 @@
/* XXX 4 bytes hole, try to pack */
- cpumask_t cpumask; /* 136 32 */
+ cpumask_t cpumask; /* 136 64 */
+ /* --- cacheline 3 boundary (192 bytes) was 8 bytes ago --- */
/* size: 256, cachelines: 4, members: 5 */
- /* sum members: 56, holes: 2, sum holes: 112 */
- /* padding: 88 */
+ /* sum members: 88, holes: 2, sum holes: 112 */
+ /* padding: 56 */
};
struct rcu_data {
long int quiescbatch; /* 0 8 */
@@ -7630,10 +7636,10 @@
/* last cacheline: 8 bytes */
};
struct physid_mask {
- long unsigned int mask[16]; /* 0 128 */
- /* --- cacheline 2 boundary (128 bytes) --- */
+ long unsigned int mask[32]; /* 0 256 */
+ /* --- cacheline 4 boundary (256 bytes) --- */
- /* size: 128, cachelines: 2, members: 1 */
+ /* size: 256, cachelines: 4, members: 1 */
};
struct acpi_table_header {
char signature[4]; /* 0 4 */
@@ -10672,17 +10678,18 @@
/* last cacheline: 24 bytes */
};
struct calibration_rendezvous {
- cpumask_t cpu_calibration_map; /* 0 32 */
- atomic_t semaphore; /* 32 4 */
+ cpumask_t cpu_calibration_map; /* 0 64 */
+ /* --- cacheline 1 boundary (64 bytes) --- */
+ atomic_t semaphore; /* 64 4 */
/* XXX 4 bytes hole, try to pack */
- s_time_t master_stime; /* 40 8 */
- u64 master_tsc_stamp; /* 48 8 */
+ s_time_t master_stime; /* 72 8 */
+ u64 master_tsc_stamp; /* 80 8 */
- /* size: 56, cachelines: 1, members: 4 */
- /* sum members: 52, holes: 1, sum holes: 4 */
- /* last cacheline: 56 bytes */
+ /* size: 88, cachelines: 2, members: 4 */
+ /* sum members: 84, holes: 1, sum holes: 4 */
+ /* last cacheline: 24 bytes */
};
struct bug_frame {
int loc_disp:24; /* 0: 8 4 */
* Re: [PATCH] asm, x86: Set max CPUs to 512 instead of 256.
2015-01-22 17:04 ` [PATCH] asm, x86: " Andrew Cooper
2015-01-22 19:03 ` Konrad Rzeszutek Wilk
@ 2015-01-22 20:04 ` Konrad Rzeszutek Wilk
1 sibling, 0 replies; 5+ messages in thread
From: Konrad Rzeszutek Wilk @ 2015-01-22 20:04 UTC (permalink / raw)
To: Andrew Cooper; +Cc: xen-devel, JBeulich
On Thu, Jan 22, 2015 at 05:04:12PM +0000, Andrew Cooper wrote:
> On 22/01/15 16:52, Konrad Rzeszutek Wilk wrote:
> > Contemporary servers now sport 480 or more CPUs. We should raise
> > the default CPU count accordingly, so such machines work out of the
> > box without distros having to resort to the 'max_phys_cpus' override.
> >
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>
> /me would really like to try something that big out, but I have not had
> the opportunity yet to hit the 256 limit.
Here is what bloat-o-meter says (256 vs 512):
add/remove: 5/0 grow/shrink: 118/25 up/down: 230953/-711 (230242)
function old new delta
_csched2_priv 46176 141472 +95296
cpu_data 65536 131072 +65536
irq_stat 32768 65536 +32768
cpu_msrs 4096 8192 +4096
cpu_bit_bitmap 2080 4160 +2080
x86_acpiid_to_apicid 2048 4096 +2048
stack_base 2048 4096 +2048
saved_lvtpc 2048 4096 +2048
region 4096 6144 +2048
processor_powers 2048 4096 +2048
processor_pminfo 2048 4096 +2048
node_to_cpumask 2048 4096 +2048
idt_tables 2048 4096 +2048
idle_vcpu 2048 4096 +2048
cpufreq_drv_data 2048 4096 +2048
__per_cpu_offset 2048 4096 +2048
x86_cpu_to_apicid 1024 2048 +1024
prev_nmi_count 1024 2048 +1024
core_parking_cpunum 1024 2048 +1024
apicid_to_node 1024 2048 +1024
apic_version 1024 2048 +1024
cpu_to_node 256 512 +256
sched_move_domain 940 1105 +165
sched_init_vcpu 614 774 +160
phys_id_present_map 128 256 +128
phys_cpu_present_map 128 256 +128
apic_id_map 128 256 +128
cpu_disable_scheduler 596 711 +115
rcu_start_batch.clone - 106 +106
setup_IO_APIC 5553 5657 +104
init_one_irq_desc 205 307 +102
destroy_irq 347 435 +88
init_trace_bufs 160 240 +80
cpumask_clear - 80 +80
scrub_heap_pages 1843 1910 +67
init_IRQ 310 376 +66
set_nr_cpu_ids 101 160 +59
csched2_schedule 3006 3063 +57
__get_page_type 5663 5720 +57
do_domctl 6753 6808 +55
__cpu_disable 577 628 +51
domain_update_node_affinity 498 547 +49
alloc_heap_pages 1746 1794 +48
runq_tickle 1302 1349 +47
check_wakeup_from_wait 251 290 +39
cpumask_copy - 38 +38
cpumask_and - 38 +38
waiting_to_crash 32 64 +32
tsc_sync_cpu_mask 32 64 +32
tsc_check_cpumask 32 64 +32
tb_cpu_mask 32 64 +32
read_clocks_cpumask 32 64 +32
pit_broadcast_mask 32 64 +32
per_cpu__batch_mask 32 64 +32
mce_fatal_cpus 32 64 +32
init_mask 32 64 +32
frozen_cpus 32 64 +32
flush_cpumask 32 64 +32
dump_execstate_mask 32 64 +32
crash_saved_cpus 32 64 +32
cpupool_locked_cpus 32 64 +32
cpupool_free_cpus 32 64 +32
cpuidle_mwait_flags 32 64 +32
cpu_sibling_setup_map 32 64 +32
cpu_present_map 32 64 +32
cpu_online_map 32 64 +32
cpu_initialized 32 64 +32
call_data 56 88 +32
alloc_vcpu 685 717 +32
_rt_priv 88 120 +32
context_switch 4030 4056 +26
update_clusterinfo 298 322 +24
powernow_cpufreq_target 526 550 +24
arch_init_one_irq_desc 124 142 +18
smp_prepare_cpus 485 501 +16
send_IPI_mask_x2apic_cluster 445 461 +16
nmi_mce_softirq 178 194 +16
irq_move_cleanup_interrupt 632 648 +16
handle_hpet_broadcast 460 476 +16
csched_init 433 449 +16
csched_balance_cpumask 159 175 +16
cpu_smpboot_callback 621 637 +16
acpi_cpufreq_target 799 815 +16
_csched_cpu_pick 1358 1374 +16
__runq_pick 312 328 +16
__do_update_va_mapping 987 1003 +16
cpufreq_add_cpu 1238 1250 +12
xenctl_bitmap_to_cpumask 119 129 +10
csched_alloc_pdata 434 443 +9
shadow_alloc 794 802 +8
p2m_init_one 337 345 +8
msi_cpu_callback 121 129 +8
move_masked_irq 122 130 +8
invalidate_shadow_ldt 345 353 +8
init_irq_data 278 286 +8
hpet_broadcast_init 1072 1080 +8
find_non_smt 355 363 +8
desc_guest_eoi 243 251 +8
csched2_dump 401 409 +8
irq_guest_eoi_timer_fn 390 397 +7
core_parking_power 628 635 +7
core_parking_performance 628 635 +7
ept_p2m_init 160 166 +6
cpu_raise_softirq_batch_finish 205 211 +6
vcpu_reset 232 237 +5
__assign_irq_vector 1061 1066 +5
vcpu_set_affinity 225 229 +4
smp_scrub_heap_pages 435 439 +4
set_desc_affinity 216 220 +4
nr_cpumask_bits - 4 +4
mod_l4_entry 1235 1239 +4
irq_set_affinity 53 57 +4
csched2_vcpu_wake 337 341 +4
csched2_vcpu_insert 280 284 +4
timer_interrupt 338 341 +3
free_domain_pirqs 138 140 +2
vcpu_set_hard_affinity 138 139 +1
smp_call_function 144 145 +1
sedf_pick_cpu 163 164 +1
new_tlbflush_clock_period 102 103 +1
cpuidle_wakeup_mwait 165 166 +1
call_rcu 220 221 +1
alloc_cpu_id 84 85 +1
rt_init 164 163 -1
prepare_to_wait 493 492 -1
time_calibration 89 87 -2
enable_nonboot_cpus 183 180 -3
arch_memory_op 2632 2629 -3
vcpumask_to_pcpumask 495 491 -4
irq_complete_move 160 155 -5
smp_intr_init 250 244 -6
csched_vcpu_wake 1159 1153 -6
on_selected_cpus 226 218 -8
msi_compose_msg 343 335 -8
fixup_irqs 693 685 -8
dump_registers 253 245 -8
clear_irq_vector 560 552 -8
numa_initmem_init 374 365 -9
bind_irq_vector 469 457 -12
stop_machine_run 642 627 -15
map_ldt_shadow_page 719 703 -16
__pirq_guest_unbind 658 642 -16
cpupool_create 425 407 -18
shadow_write_p2m_entry 1015 988 -27
rcu_process_callbacks 493 438 -55
cpu_quiet.clone 151 62 -89
do_mmuext_op 7023 6848 -175
io_apic_get_unique_id 794 586 -208
* Re: [PATCH] asm, x86: Set max CPUs to 512 instead of 256.
2015-01-22 16:52 [PATCH] asm,x86: Set max CPUs to 512 instead of 256 Konrad Rzeszutek Wilk
2015-01-22 17:04 ` [PATCH] asm, x86: " Andrew Cooper
@ 2015-01-23 11:25 ` Jan Beulich
1 sibling, 0 replies; 5+ messages in thread
From: Jan Beulich @ 2015-01-23 11:25 UTC (permalink / raw)
To: Konrad Rzeszutek Wilk; +Cc: andrew.cooper3, xen-devel
>>> On 22.01.15 at 17:52, <konrad.wilk@oracle.com> wrote:
> Contemporary servers now sport 480 or more CPUs. We should raise
> the default CPU count accordingly, so such machines work out of the
> box without distros having to resort to the 'max_phys_cpus' override.
I do not agree with this reasoning. Distro builds get set up once,
and will want to control the number of CPUs they build for anyway
(rather than taking whatever we default to). A reason for such a
change would be if a meaningful percentage of all systems Xen gets
run on is that big, which I heavily doubt. In fact I think 256 is
already too large as a default (and in my own patch set I routinely
lower this to BITS_PER_LONG, having a separate 4095-CPU config
that I actively test all the time).
Jan