From: "Nakajima, Jun" <jun.nakajima@intel.com>
To: Ian Pratt <Ian.Pratt@eu.citrix.com>,
	"Kamble, Nitin A" <nitin.a.kamble@intel.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: RE: Host Numa informtion in dom0
Date: Tue, 9 Feb 2010 14:56:45 -0800	[thread overview]
Message-ID: <54B2EB610B7F1340BB6A0D4CA04A4F1012BABB2E@orsmsx505.amr.corp.intel.com> (raw)
In-Reply-To: <4FA716B1526C7C4DB0375C6DADBC4EA342D83D9EBC@LONPMAILBOX01.citrite.net>

Ian Pratt wrote on Fri, 5 Feb 2010 at 09:39:09:

>>    Attached is the patch which exposes the host numa information to
>> dom0. With the patch "xm info" command now also gives the cpu topology &
>> host numa information. This will be later used to build guest numa
>> support.
>> 
>> The patch basically changes physinfo sysctl, and adds topology_info &
>> numa_info sysctls, and also changes the python & libxc code
>> accordingly.
> 
> 
> It would be good to have a discussion about how we should expose NUMA
> information to guests.
> 
> I believe we can control the desired allocation of memory from nodes and
> creation of guest NUMA tables using VCPU affinity masks combined with a
> new boolean option to enable exposure of NUMA information to guests.
> 

I agree. 
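
As background, the data the patch exposes would be consumed from dom0 roughly
as in the sketch below. This is illustrative only: it assumes the new sysctls
are surfaced through the xen.lowlevel.xc Python bindings as topology_info()
and numa_info() alongside the existing physinfo(), and the printed fields are
just examples.

    # Illustrative sketch of reading host topology/NUMA data from dom0.
    # Assumes the patch adds topology_info()/numa_info() next to the
    # existing physinfo() call in the xc bindings.
    from xen.lowlevel import xc

    handle = xc.xc()
    phys = handle.physinfo()        # existing sysctl, extended by the patch
    topo = handle.topology_info()   # assumed: per-CPU core/socket/node map
    numa = handle.numa_info()       # assumed: per-node memory and distances

    print "nr_nodes:", phys["nr_nodes"]
    print "topology:", topo
    print "numa    :", numa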

> For each guest VCPU, we should inspect its affinity mask to see which
> nodes the VCPU is able to run on, thus building a set of 'allowed node'
> masks. We should then compare all the 'allowed node' masks to see how
> many unique node masks there are -- this corresponds to the number of
> NUMA nodes that we wish to expose to the guest if this guest has NUMA
> enabled. We would apportion the guest's pseudo-physical memory equally
> between these virtual NUMA nodes.
> 

Right.
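
To make that grouping step concrete, here is a rough sketch (illustrative
Python, not taken from any patch; the affinity masks and the cpu-to-node
mapping are assumed inputs):

    # Derive virtual NUMA nodes from per-VCPU affinity masks.
    # 'affinity' maps vcpu -> set of physical CPUs it may run on;
    # 'node_of_cpu' maps physical CPU -> physical node number.
    def virtual_nodes(affinity, node_of_cpu, guest_mem_mb):
        # Build the 'allowed node' mask for each VCPU.
        allowed = {}
        for vcpu, cpus in affinity.items():
            allowed[vcpu] = frozenset(node_of_cpu[c] for c in cpus)

        # Each unique allowed-node mask becomes one virtual NUMA node.
        unique_masks = sorted(set(allowed.values()), key=sorted)

        # Apportion the guest's pseudo-physical memory equally between them.
        mem_per_vnode = guest_mem_mb // len(unique_masks)
        return [(mask, mem_per_vnode) for mask in unique_masks]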

> If guest NUMA is disabled, we just use a single node mask which is the
> union of the per-VCPU node masks.
> 
> Where allowed node masks span more than one physical node, we should
> allocate memory to the guest's virtual node by pseudo randomly striping
> memory allocations (in 2MB chunks) from across the specified physical
> nodes. [pseudo random is probably better than round robin]

Do we really want to support this? I don't think the allowed node masks should span more than one physical NUMA node. We also need to consider I/O devices.
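
For reference, the striping Ian describes would amount to something like the
following sketch (illustrative only; the returned placement list stands in
for whatever the hypervisor's allocator would actually do):

    # Pseudo-randomly stripe a virtual node's memory, in 2MB chunks,
    # across the physical nodes in its allowed mask.
    import random

    CHUNK_MB = 2

    def stripe_allocation(vnode_mem_mb, allowed_phys_nodes, seed=0):
        rng = random.Random(seed)           # deterministic, for repeatability
        nodes = sorted(allowed_phys_nodes)
        # One physical node chosen per 2MB chunk, pseudo-randomly rather
        # than round-robin, as proposed above.
        return [rng.choice(nodes) for _ in range(vnode_mem_mb // CHUNK_MB)]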

> 
> Make sense? I can provide some worked examples.
> 

Examples are appreciated.

Thanks,
Jun
___
Intel Open Source Technology Center


Thread overview: 14+ messages
     [not found] <AcqhN4DFeKntxDxZTHGQrdKilppA4Q==>
2010-01-29 23:05 ` Host Numa informtion in dom0 Kamble, Nitin A
2010-01-30  8:09   ` Keir Fraser
2010-02-01  2:21     ` Kamble, Nitin A
2010-02-01 10:23   ` Andre Przywara
2010-02-01 17:53     ` Dulloor
2010-02-01 21:39       ` Andre Przywara
2010-02-01 23:21         ` Kamble, Nitin A
2010-02-05 17:39   ` Ian Pratt
2010-02-05 20:33     ` Dan Magenheimer
2010-02-09 22:03       ` Nakajima, Jun
2010-02-10  3:25         ` Dan Magenheimer
2010-02-09 22:56     ` Nakajima, Jun [this message]
2010-02-11 15:21       ` Ian Pratt
2010-05-26 17:31   ` Bruce Edge
