From: "Jan Beulich" <JBeulich@suse.com>
To: Elena Ufimtseva <ufimtseva@gmail.com>
Cc: Keir Fraser <keir@xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Li Yechen <lccycc123@gmail.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Matt Wilson <msw@linux.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [PATCH v6 01/10] xen: vnuma topology and subop hypercalls
Date: Fri, 25 Jul 2014 08:33:45 +0100	[thread overview]
Message-ID: <53D224790200007800025D27@mail.emea.novell.com> (raw)
In-Reply-To: <CAEr7rXjaj7ZL0n=6d+_j-qxvCf2ZTOdjdK6kZyxxHNf=E4DPHw@mail.gmail.com>

>>> On 25.07.14 at 06:52, <ufimtseva@gmail.com> wrote:
> On Wed, Jul 23, 2014 at 10:06 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 18.07.14 at 07:50, <ufimtseva@gmail.com> wrote:
>>> +static int vnuma_alloc(struct vnuma_info **vnuma,
>>> +                       unsigned int nr_vnodes,
>>> +                       unsigned int nr_vcpus,
>>> +                       unsigned int dist_size)
>>> +{
>>> +    struct vnuma_info *v;
>>> +
>>> +    if ( vnuma && *vnuma )
>>> +        return -EINVAL;
>>> +
>>> +    v = *vnuma;
>>> +    /*
>>> +     * Check whether any of the xmallocs exceeds PAGE_SIZE.
>>> +     * If yes, consider it an error for now.
>>> +     */
>>> +    if ( nr_vnodes > PAGE_SIZE / sizeof(nr_vnodes)       ||
>>> +        nr_vcpus > PAGE_SIZE / sizeof(nr_vcpus)          ||
>>> +        nr_vnodes > PAGE_SIZE / sizeof(struct vmemrange) ||
>>> +        dist_size > PAGE_SIZE / sizeof(dist_size) )
>>
>> Three of the four checks are rather bogus - the types of the
>> variables just happen to match the types of the respective
>> array elements. Best to switch all of them to sizeof(*v->...).
>> Plus I'm not sure about the dist_size check - in its current shape
>> it's redundant with the nr_vnodes one (and really the function
>> parameter seems pointless, i.e. could be calculated here), and
>> it's questionable whether limiting that table against PAGE_SIZE
>> isn't too restrictive. Also indentation seems broken here.
> 
> I agree on limiting the distance table memory allocation.
> With the current interface, the nr_vnodes size check effectively caps
> the vdistance table dimension at 256, i.e. the maximum number of vNUMA
> nodes. The vdistance table would then need an allocation of two 4K
> pages. Would that be viewed as a potential candidate for the list of
> affected hypercalls in XSA-77?

That list isn't permitted to be extended, so the multi-page allocation
needs to be avoided. And a 2-page allocation wouldn't mean a
security problem (after all the allocation still has a deterministic upper
bound) - it's a functionality one. Just allocate separate pages and
vmap() them.

>>> +
>>> +        /*
>>> +         * guest passes nr_vnodes and nr_vcpus thus
>>> +         * we know how much memory guest has allocated.
>>> +         */
>>> +        if ( copy_from_guest(&topology, arg, 1) ||
>>> +            guest_handle_is_null(topology.vmemrange.h) ||
>>> +            guest_handle_is_null(topology.vdistance.h) ||
>>> +            guest_handle_is_null(topology.vcpu_to_vnode.h) )
>>> +            return -EFAULT;
>>> +
>>> +        if ( (d = rcu_lock_domain_by_any_id(topology.domid)) == NULL )
>>> +            return -ESRCH;
>>> +
>>> +        rc = -EOPNOTSUPP;
>>> +        if ( d->vnuma == NULL )
>>> +            goto vnumainfo_out;
>>> +
>>> +        if ( d->vnuma->nr_vnodes == 0 )
>>> +            goto vnumainfo_out;
>>
>> Can this second condition validly (other than due to a race) be true if
>> the first one wasn't? (And of course there's synchronization missing
>> here, to avoid the race.)
> 
> My idea was to use the pair of domain_lock and
> rcu_lock_domain_by_any_id to avoid that race.

rcu_lock_domain_by_any_id() only guarantees the domain to not
go away under your feet. It means nothing towards a racing
update of d->vnuma.

> I used domain_lock in the domctl hypercall when the pointer to a
> domain's vnuma is being set.

Right, but only protecting the writer side isn't providing any
synchronization.

> XENMEM_get_vnumainfo reads the values and holds the reader lock of
> that domain.
> As setting vnuma happens once, when the domain boots, domain_lock
> seemed to be OK here.
> Would a spinlock be more appropriate here?

Or an rw lock (if no lockless mechanism can be found).

Jan

