From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Cc: ufimtseva@gmail.com, andrew.cooper3@citrix.com,
	dario.faggioli@citrix.com, ian.jackson@eu.citrix.com,
	xen-devel@lists.xen.org, JBeulich@suse.com
Subject: Re: [PATCH v4 15/21] libxc: allocate memory with vNUMA information for HVM guest
Date: Wed, 28 Jan 2015 16:36:56 +0000
Message-ID: <1422463016.5187.58.camel@citrix.com>
In-Reply-To: <1422011632-22018-16-git-send-email-wei.liu2@citrix.com>

On Fri, 2015-01-23 at 11:13 +0000, Wei Liu wrote:
> The algorithm is more or less the same as the one used for PV guest.

Any reason the code can't be shared then? :-D
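
(If it can, a shared helper might look roughly like the sketch below.
Entirely untested: the helper name is made up, I'm guessing the
xc_vnuma_info fields from the hunks in this patch, and the superpage
handling is omitted for brevity.)

/* Hypothetical shared helper for the PV and HVM build paths:
 * populate guest memory one vnode at a time, pinning each chunk
 * to the physical node backing that vnode. */
static int populate_vnuma(xc_interface *xch, uint32_t domid,
                          unsigned int base_memflags,
                          const struct xc_vnuma_info *vnuma,
                          unsigned int nr_vnodes,
                          xen_pfn_t *page_array)
{
    unsigned long cur_pages = 0;
    unsigned int i;

    for ( i = 0; i < nr_vnodes; i++ )
    {
        unsigned int memflags = base_memflags;

        if ( vnuma[i].pnode != XC_VNUMA_NO_NODE )
            memflags |= XENMEMF_exact_node(vnuma[i].pnode);

        /* 4k pages only; the real code would try superpages first. */
        if ( xc_domain_populate_physmap_exact(xch, domid,
                                              vnuma[i].pages, 0,
                                              memflags,
                                              &page_array[cur_pages]) )
            return -1;

        cur_pages += vnuma[i].pages;
    }

    return 0;
}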
>  
> +    if ( nr_pages > target_pages )
> +        memflags |= XENMEMF_populate_on_demand;

OOI, how do vNUMA and PoD interact? Do you prefer to fill a node as
much as possible (leaving other nodes entirely PoD), or to balance
the PoD pages between nodes?

I think the former?

I suspect this depends a lot on the guest behaviour, so there probably
isn't a right answer. Are there corner cases where this might go
wrong, though?
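
Just to make "balance" concrete: I'd imagine each vnode keeping a
proportional share of the shortfall as PoD-backed pages, something
like the helper below. Illustrative only, there is no such code in
this patch (and, as it turns out further down, the combination is
simply refused).

/* How many of this vnode's pages would stay PoD-backed rather than
 * populated, if the PoD shortfall were spread across vnodes in
 * proportion to their size. */
static unsigned long pod_share(unsigned long node_pages,
                               unsigned long total_pages,
                               unsigned long shortfall)
{
    return shortfall * node_pages / total_pages;
}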

> +
> +    if ( args->nr_vnuma_info == 0 )
> +    {
> +        /* Build dummy vnode information */
> +        dummy_vnuma_info.vnode = 0;
> +        dummy_vnuma_info.pnode = XC_VNUMA_NO_NODE;
> +        dummy_vnuma_info.pages = args->mem_size >> PAGE_SHIFT;
> +        args->nr_vnuma_info = 1;
> +        args->vnuma_info = &dummy_vnuma_info;

You could have done this in the PV case too; I nearly suggested it
there, then realised I already had, so I figured there was a reason
not to?
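
i.e. the PV path could grow the same fallback, along these lines
(the xc_dom_image field names are my guess at what this series adds,
I haven't checked):

struct xc_vnuma_info dummy;

if ( dom->nr_vnuma_info == 0 )
{
    /* Pretend there is a single vnode covering all of memory,
     * with no physical node preference. */
    dummy.vnode = 0;
    dummy.pnode = XC_VNUMA_NO_NODE;
    dummy.pages = dom->total_pages;
    dom->nr_vnuma_info = 1;
    dom->vnuma_info = &dummy; /* must stay live while allocating */
}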

> +    }
> +    else
> +    {
> +        if ( nr_pages > target_pages )
> +        {
> +            PERROR("Cannot enable vNUMA and PoD at the same time");

And there's the answer to my question above ;-)

> +            goto error_out;
>  
> +    for ( i = 0; i < args->nr_vnuma_info; i++ )
>      {

Reindenting in a precursor patch made this diff a lot easier to read,
thanks.

>              if ( count > max_pages )
> @@ -388,19 +440,20 @@ static int setup_guest(xc_interface *xch,
>                  unsigned long nr_extents = count >> SUPERPAGE_1GB_SHIFT;
>                  xen_pfn_t sp_extents[nr_extents];
>  
> -                for ( i = 0; i < nr_extents; i++ )
> -                    sp_extents[i] =
> -                        page_array[cur_pages+(i<<SUPERPAGE_1GB_SHIFT)];
> +                for ( j = 0; j < nr_extents; j++ )
> +                    sp_extents[j] =
> +                        page_array[cur_pages+(j<<SUPERPAGE_1GB_SHIFT)];

You might consider s/i/j/ in a precursor patch too. Or, given that the
scope of the outermost loop is quite large, a more specific name than
i (like vnid) might be more appropriate.
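
i.e. the hunk would then read something like (sketch only):

for ( vnid = 0; vnid < args->nr_vnuma_info; vnid++ )
{
    /* ... per-vnode population as in the quoted hunk ... */

    for ( i = 0; i < nr_extents; i++ )
        sp_extents[i] =
            page_array[cur_pages + (i << SUPERPAGE_1GB_SHIFT)];
}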

Ian.
