* XENMEM_maximum_gpfn return value
From: Jan Beulich @ 2013-02-25 14:33 UTC
  To: xen-devel

Since this value comes from a shared-info field, it has to be assumed
that it may hold any value. In particular, our kernels don't set up this
field at all when running as Dom0 with kexec disabled (along with not
setting up the P2M frame lists, as that would simply be wasted memory
in that case).

So, with this being a guest-provided value, we apparently have two
problems: do_memory_op()'s "rc" variable is only of type "int", thus
potentially truncating the result of domain_get_maximum_gpfn()
(considering that we now support 16TB of memory, an 8TB+ guest ought
to be possible, and such a guest would have a max GPFN with 32
significant bits). And zero or a very large number returned by the
latter will generally be mistaken for an error code by the caller of
the hypercall.
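
To illustrate the truncation (a minimal, self-contained sketch with a
made-up value; this is not the actual Xen code):

    #include <stdio.h>

    /* An 8TB guest with 4KB pages has 2^31 frames, so its maximum
     * GPFN needs 32 significant bits and no longer fits a signed int. */
    static long domain_get_maximum_gpfn_sketch(void)
    {
        return 0x80000000L;  /* illustrative max GPFN of such a guest */
    }

    int main(void)
    {
        int rc = domain_get_maximum_gpfn_sketch();   /* truncated */
        long lrc = domain_get_maximum_gpfn_sketch(); /* "rc" as long */
        /* Typically prints "int: -2147483648, long: 2147483648" -
         * the truncated value looks like a huge negative error. */
        printf("int: %d, long: %ld\n", rc, lrc);
        return 0;
    }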

So, along with promoting "rc" to long, I'm considering enforcing a
sane lower bound on domain_get_maximum_gpfn()'s return value,
using d->tot_pages (on the assumption that each page has a
representation in the physical map, and hence the physical map
can't reasonably be smaller than this). That would also take care
of the subtraction of 1 done there for PV guests, which converts a
zero field into -1 (i.e. -EPERM). Similarly, a sane upper bound
ought to be enforced (to avoid collisions with the -E error range).
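
Roughly along these lines (a sketch only - the helper name and
MAX_ERRNO here are made up, and the cap value is arbitrary):

    /* Clamp a guest-reported max GPFN to a sane range; assumes
     * tot_pages > 0. */
    #define MAX_ERRNO 4095  /* assumed extent of the -E error range */

    static long sanitize_max_gpfn(unsigned long reported,
                                  unsigned long tot_pages)
    {
        /* Lower bound: every page the domain owns needs a slot in
         * the physical map, so its size can't be below tot_pages. */
        if ( reported < tot_pages - 1 )
            reported = tot_pages - 1;

        /* Upper bound: values at or above -MAX_ERRNO (viewed as
         * unsigned) would collide with the -E error range. */
        if ( reported >= (unsigned long)-MAX_ERRNO )
            reported = (unsigned long)-MAX_ERRNO - 1;

        return (long)reported;
    }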

Other thoughts? Is such a behavioral change acceptable for an
existing interface?

Jan


* Re: XENMEM_maximum_gpfn return value
From: Tim Deegan @ 2013-02-26 15:27 UTC
  To: Jan Beulich; +Cc: xen-devel

At 14:33 +0000 on 25 Feb (1361802812), Jan Beulich wrote:
> Since this value comes from a shared-info field, it has to be assumed
> that it may hold any value. In particular, our kernels don't set up this
> field at all when running as Dom0 with kexec disabled (along with not
> setting up the P2M frame lists, as that would simply be wasted memory
> in that case).
> 
> So, with this being a guest-provided value, we apparently have two
> problems: do_memory_op()'s "rc" variable is only of type "int", thus
> potentially truncating the result of domain_get_maximum_gpfn()
> (considering that we now support 16TB of memory, an 8TB+ guest ought
> to be possible, and such a guest would have a max GPFN with 32
> significant bits). And zero or a very large number returned by the
> latter will generally be mistaken for an error code by the caller of
> the hypercall.
> 
> So, along with promoting "rc" to long, I'm considering enforcing a
> sane lower bound on domain_get_maximum_gpfn()'s return value,
> using d->tot_pages (on the assumption that each page has a
> representation in the physical map, and hence the physical map
> can't reasonably be smaller than this). That would also take care
> of the subtraction of 1 done there for PV guests, which converts a
> zero field into -1 (i.e. -EPERM). Similarly, a sane upper bound
> ought to be enforced (to avoid collisions with the -E error range).
> 
> Other thoughts? Is such a behavioral change acceptable for an
> existing interface?

The new behaviour seems sensible, but I'm a little worried about
changing the behaviour of an existing call.  I'd be inclined to add a
new hypercall that DTRT and leave the old one in place, deprecated.
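
Something like this, say (entirely hypothetical - the new name and
both subop numbers here are invented for illustration):

    /* Sketch for the public interface (not real definitions): */
    #define XENMEM_maximum_gpfn   14  /* existing subop, deprecated */
    #define XENMEM_maximum_gpfn2  28  /* new subop: long-sized,
                                         sanitized result */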

Tim.


* Re: XENMEM_maximum_gpfn return value
From: Jan Beulich @ 2013-02-27 10:49 UTC
  To: Tim Deegan; +Cc: xen-devel

>>> On 26.02.13 at 16:27, Tim Deegan <tim@xen.org> wrote:
> At 14:33 +0000 on 25 Feb (1361802812), Jan Beulich wrote:
>> [...]
> 
> The new behaviour seems sensible, but I'm a little worried about
> changing the behaviour of an existing call.  I'd be inclined to add a
> new hypercall that DTRT and leave the old one in place, deprecated.

So I think this should be two steps then: fix the current call (so
that it no longer returns -1 (== -EPERM) when the field is zero, and
no longer truncates big values), and, should it ever turn out
necessary, add a new one returning a sanitized value.
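
In code terms, step one would look roughly like this for the PV path
(simplified, with hypothetical names):

    /* Return a long, and don't let a zero shared-info field turn
     * into -1 (== -EPERM). */
    static long pv_maximum_gpfn_sketch(unsigned long shared_max_pfn)
    {
        /* Guest never filled the field in (e.g. Dom0 with kexec
         * disabled): report 0 instead of underflowing to -1. */
        if ( shared_max_pfn == 0 )
            return 0;

        /* The field holds the number of P2M entries, so the
         * highest GPFN is one less. */
        return (long)(shared_max_pfn - 1);
    }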

I'll send a patch for the first part in a minute, but I'll leave the
second step open, as I think the context in which the problem was
observed (xen-mceinj) is itself in need of fixing (i.e. it should not
be using this call).

Jan

