From: Elena Ufimtseva <ufimtseva@gmail.com>
To: Wei Liu <wei.liu2@citrix.com>
Cc: Keir Fraser <keir@xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Matt Wilson <msw@linux.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Li Yechen <lccycc123@gmail.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [PATCH v6 01/10] xen: vnuma topology and subop hypercalls
Date: Sun, 20 Jul 2014 09:16:11 -0400	[thread overview]
Message-ID: <CAEr7rXifuLnHECG6pCBThgRwzv2K92pSgRaPp95m51Y+GoVSpw@mail.gmail.com> (raw)
In-Reply-To: <20140718103034.GA7142@zion.uk.xensource.com>

On Fri, Jul 18, 2014 at 6:30 AM, Wei Liu <wei.liu2@citrix.com> wrote:
> On Fri, Jul 18, 2014 at 01:50:00AM -0400, Elena Ufimtseva wrote:
> [...]
>> +/*
>> + * Allocate memory and construct one vNUMA node,
>> + * set default parameters, assign all memory and
>> + * vcpus to this node, set distance to 10.
>> + */
>> +static long vnuma_fallback(const struct domain *d,
>> +                          struct vnuma_info **vnuma)
>> +{
>> +    struct vnuma_info *v;
>> +    long ret;
>> +
>> +
>> +    /* Will not destroy vNUMA here, destroy before calling this. */
>> +    if ( vnuma && *vnuma )
>> +        return -EINVAL;
>> +
>> +    v = *vnuma;
>> +    ret = vnuma_alloc(&v, 1, d->max_vcpus, 1);
>> +    if ( ret )
>> +        return ret;
>> +
>> +    v->vmemrange[0].start = 0;
>> +    v->vmemrange[0].end = d->max_pages << PAGE_SHIFT;
>> +    v->vdistance[0] = 10;
>> +    v->vnode_to_pnode[0] = NUMA_NO_NODE;
>> +    memset(v->vcpu_to_vnode, 0, d->max_vcpus);
>> +    v->nr_vnodes = 1;
>> +
>> +    *vnuma = v;
>> +
>> +    return 0;
>> +}
>> +
>
> I have a question about this strategy. Is there any reason to choose to
> fall back to this one node? In that case the toolstack will have a
> different view of the guest than the hypervisor: the toolstack still
> thinks the guest has several nodes while it actually has only one. That
> can cause problems when migrating the guest. Consider this: the
> toolstack on the remote end still builds two nodes, because that's what
> it knows, then the guest, which originally ended up with one node,
> notices the change in the underlying memory topology and crashes.
>
> IMHO we should just fail in this case. It's not that common to fail a
> small array allocation anyway. This approach can also save you from
> writing this function. :-)

I see, and agree. :)

Do you mean fail as in not setting any vNUMA for the domain at all? If
yes, that sort of contradicts the statement 'every PV domain has at
least one vNUMA node'. Would it be reasonable, on a failed
xc_domain_setvnuma call from libxl, to fall back to one node in the
toolstack as well?
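
For illustration, a rough sketch of what such a toolstack-side one-node
fallback could look like. The types and names here are made up and
simplified for the sketch -- they are not the real Xen/libxl structures
-- but the behaviour mirrors the fallback in the patch: all memory and
all vcpus go to node 0, with local distance 10.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical, simplified stand-in for the vNUMA info discussed
 * above -- not the real Xen/libxl structures. */
#define SKETCH_MAX_VCPUS 8

struct sketch_vnuma {
    unsigned int nr_vnodes;
    uint64_t mem_start, mem_end;               /* single vmemrange */
    unsigned int vdistance;                    /* 1x1 distance table */
    unsigned int vcpu_to_vnode[SKETCH_MAX_VCPUS];
    unsigned int nr_vcpus;
};

/* Collapse a failed multi-node request into the one-node topology the
 * hypervisor-side fallback builds: all memory on one range, all vcpus
 * mapped to vnode 0, local distance 10. */
static void vnuma_one_node_fallback(struct sketch_vnuma *v,
                                    uint64_t max_bytes,
                                    unsigned int nr_vcpus)
{
    v->nr_vnodes = 1;
    v->mem_start = 0;
    v->mem_end = max_bytes;
    v->vdistance = 10;                          /* local distance */
    v->nr_vcpus = nr_vcpus;
    /* Every vcpu maps to vnode 0. */
    memset(v->vcpu_to_vnode, 0, sizeof(v->vcpu_to_vnode));
}
```

Doing this in the toolstack would at least keep the toolstack's and the
hypervisor's views consistent, which is the migration concern above.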

>
>> +/*
>> + * Construct vNUMA topology from the u_vnuma struct and return
>> + * it in dst.
>> + */
> [...]
>> +
>> +    /* On copy failure, fall back to one vNUMA node and return success. */
>> +    ret = 0;
>> +
>> +    if ( copy_from_guest(v->vdistance, u_vnuma->vdistance, dist_size) )
>> +        goto vnuma_onenode;
>> +    if ( copy_from_guest(v->vmemrange, u_vnuma->vmemrange, nr_vnodes) )
>> +        goto vnuma_onenode;
>> +    if ( copy_from_guest(v->vcpu_to_vnode, u_vnuma->vcpu_to_vnode,
>> +        d->max_vcpus) )
>> +        goto vnuma_onenode;
>> +    if ( copy_from_guest(v->vnode_to_pnode, u_vnuma->vnode_to_pnode,
>> +        nr_vnodes) )
>> +        goto vnuma_onenode;
>> +
>> +    v->nr_vnodes = nr_vnodes;
>> +    *dst = v;
>> +
>> +    return ret;
>> +
>> +vnuma_onenode:
>> +    vnuma_destroy(v);
>> +    return vnuma_fallback(d, dst);
>> +}
>> +
>>  long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>  {
>>      long ret = 0;
>> @@ -967,6 +1105,35 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>      }
>>      break;
>>
> [...]
>> +/*
>> + * vNUMA topology specifies the vNUMA node count, distance table,
>> + * memory ranges and vcpu-to-vnode mapping provided to guests.
>> + * The XENMEM_get_vnumainfo hypercall expects the guest to supply
>> + * nr_vnodes and nr_vcpus to indicate the sizes of the buffers it
>> + * provides. After the guest structures are filled in, nr_vnodes
>> + * and nr_vcpus are copied back to the guest.
>> + */
>> +struct vnuma_topology_info {
>> +    /* IN */
>> +    domid_t domid;
>> +    /* IN/OUT */
>> +    unsigned int nr_vnodes;
>> +    unsigned int nr_vcpus;
>> +    /* OUT */
>> +    union {
>> +        XEN_GUEST_HANDLE(uint) h;
>> +        uint64_t pad;
>> +    } vdistance;
>> +    union {
>> +        XEN_GUEST_HANDLE(uint) h;
>> +        uint64_t pad;
>> +    } vcpu_to_vnode;
>> +    union {
>> +        XEN_GUEST_HANDLE(vmemrange_t) h;
>> +        uint64_t pad;
>> +    } vmemrange;
>
> Why do you need to use union? The other interface you introduce in this
> patch doesn't use union.

This one is to make sure the structure has the same size for 32-bit and
64-bit guests.
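
A small illustration of the point (with a made-up stand-in for the real
guest-handle type): a guest handle is pointer-sized, so on its own it
would be 4 bytes in a 32-bit struct and 8 bytes in a 64-bit one. Putting
it in a union with a uint64_t pads the field to 8 bytes either way, so
the hypercall struct has a single layout.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for a guest handle: pointer-sized, so its
 * size differs between 32-bit and 64-bit builds. */
typedef struct { void *p; } fake_guest_handle;

/* The union forces the field to occupy 8 bytes on both ABIs. */
union padded_handle {
    fake_guest_handle h;
    uint64_t pad;
};
```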


>
> Wei.



-- 
Elena

