From: Andi Kleen <ak@suse.de>
To: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Cc: mingo@elte.hu, nickpiggin@yahoo.com.au,
	linux-kernel@vger.kernel.org, rohit.seth@intel.com,
	asit.k.mallick@intel.com
Subject: Re: [Patch] sched: new sched domain for representing multi-core
Date: Fri, 27 Jan 2006 05:42:11 +0100
Message-ID: <200601270542.12404.ak@suse.de>
In-Reply-To: <20060126015132.A8521@unix-os.sc.intel.com>

On Thursday 26 January 2006 10:51, Siddha, Suresh B wrote:

With this patch, does Ingo's new distance-checking code in the scheduler
automatically discover all the relevant distances?

> +#ifdef CONFIG_SMP
> +	unsigned int cpu = (c == &boot_cpu_data) ? 0 : (c - cpu_data);
> +#endif

Wouldn't it be better to just put that information into cpuinfo_x86?
We already have too many per-CPU arrays.
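
Something in this direction, say (just a sketch -- the field name is made
up, it is not something the patch or the current code has):

	/* hypothetical new member of struct cpuinfo_x86 */
	unsigned int cpu_index;		/* index of this CPU in cpu_data[] */

	/* filled in once when cpu_data[] is set up, so callers can do */
	unsigned int cpu = c->cpu_index;

instead of open-coding (c - cpu_data) with a special case for
boot_cpu_data.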


> +int cpu_llc_id[NR_CPUS] __read_mostly = {[0 ... NR_CPUS-1] = BAD_APICID};

This needs a comment on what an LLC (last level cache) actually is.
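
Even something as short as

	/*
	 * Last level cache (LLC) id of each logical CPU: CPUs that report
	 * the same id share their last level cache.
	 */
	int cpu_llc_id[NR_CPUS] __read_mostly = {[0 ... NR_CPUS-1] = BAD_APICID};

would do (the wording is only a suggestion).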

> +
>  /* representing HT siblings of each logical CPU */
>  cpumask_t cpu_sibling_map[NR_CPUS] __read_mostly;
>  EXPORT_SYMBOL(cpu_sibling_map);
> @@ -84,6 +86,8 @@ EXPORT_SYMBOL(cpu_core_map);
>  cpumask_t cpu_online_map __read_mostly;
>  EXPORT_SYMBOL(cpu_online_map);
>  
> +cpumask_t cpu_llc_shared_map[NR_CPUS] __read_mostly;

Ditto.

> +u8 cpu_llc_id[NR_CPUS] __read_mostly = {[0 ... NR_CPUS-1] = BAD_APICID};

This could be __cpuinitdata, no?

Actually it would be better to pass this information to smpboot.c in some
other way than adding more and more arrays like this.  It's only needed
for the current CPU, because for the others the information is in
cpu_llc_shared_map.

Perhaps SMP boot up should pass around a pointer to temporary data like this?
Or discover it in smpboot.c with a function call?
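
E.g. something in this direction (just a sketch -- the helper name is made
up and the actual detection is left out):

	/* hypothetical helper in the cpu setup code */
	static int cpu_llc_id_of(struct cpuinfo_x86 *c)
	{
		/*
		 * Real code would derive this from the CPUID cache leaves,
		 * as the patch already does when it fills the array;
		 * BAD_APICID here is only a placeholder return value.
		 */
		return BAD_APICID;
	}

smpboot.c would then call it for the CPU being brought up, instead of
reading yet another NR_CPUS-sized array.  I'm not attached to the exact
shape; the point is just to avoid more global arrays.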

> -#ifdef CONFIG_SCHED_SMT
> +#if defined(CONFIG_SCHED_SMT)
>  		sd = &per_cpu(cpu_domains, i);
> +#elif defined(CONFIG_SCHED_MC)

elif? What happens when there are both shared caches and SMT?

> +		sd = &per_cpu(core_domains, i);
>  #else
>  		sd = &per_cpu(phys_domains, i);
>  #endif
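
When both CONFIG_SCHED_SMT and CONFIG_SCHED_MC are enabled I'd expect the
two levels to be chained, not one of them dropped.  Roughly like the
existing SMT chaining in build_sched_domains (abbreviated, span/groups
setup left out; SD_MC_INIT stands for whatever initializer your patch
introduces):

	sd = &per_cpu(phys_domains, i);		/* package level */
	*sd = SD_CPU_INIT;
#ifdef CONFIG_SCHED_MC
	p = sd;
	sd = &per_cpu(core_domains, i);		/* CPUs sharing a LLC */
	*sd = SD_MC_INIT;
	sd->parent = p;
#endif
#ifdef CONFIG_SCHED_SMT
	p = sd;
	sd = &per_cpu(cpu_domains, i);		/* SMT siblings, lowest level */
	*sd = SD_SIBLING_INIT;
	sd->parent = p;
#endif

Then the #if/#elif above picks the lowest existing level, which is fine --
but only if the MC domain really gets set up in the both-enabled case.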


-Andi

