Xen-Devel Archive on lore.kernel.org
* [Xen-devel] [PATCH] x86/vvmx: Fix nested virt on VMCS-Shadow capable hardware
@ 2019-07-30 14:42 Andrew Cooper
  2019-08-05 12:52 ` Jan Beulich
  2019-08-23  2:37 ` Tian, Kevin
  0 siblings, 2 replies; 5+ messages in thread
From: Andrew Cooper @ 2019-07-30 14:42 UTC (permalink / raw)
  To: Xen-devel
  Cc: Kevin Tian, Jan Beulich, Wei Liu, Andrew Cooper, Jun Nakajima,
	Roger Pau Monné

c/s e9986b0dd "x86/vvmx: Simplify per-CPU memory allocations" had the wrong
indirection on its pointer check in nvmx_cpu_up_prepare(), causing the
VMCS-shadowing buffer never to be allocated.  Fix it.

This in turn results in a massive quantity of logspam, as every virtual
vmentry/exit hits both gdprintk()s in the *_bulk() functions.

Switch these to using printk_once().  The size of the buffer is chosen at
compile time, so complaining about it repeatedly is of no benefit.

Finally, drop the runtime NULL pointer checks.  It is not terribly appropriate
to be repeatedly checking infrastructure which is set up from start-of-day,
and in this case, actually hid the above bug.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 332623d006..f38f3a9930 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -43,7 +43,7 @@ int nvmx_cpu_up_prepare(unsigned int cpu)
     uint64_t **vvmcs_buf;
 
     if ( cpu_has_vmx_vmcs_shadowing &&
-         (vvmcs_buf = &per_cpu(vvmcs_buf, cpu)) == NULL )
+         *(vvmcs_buf = &per_cpu(vvmcs_buf, cpu)) == NULL )
     {
         void *ptr = xzalloc_array(uint64_t, VMCS_BUF_SIZE);
 
@@ -922,11 +922,10 @@ static void vvmcs_to_shadow_bulk(struct vcpu *v, unsigned int n,
     if ( !cpu_has_vmx_vmcs_shadowing )
         goto fallback;
 
-    if ( !value || n > VMCS_BUF_SIZE )
+    if ( n > VMCS_BUF_SIZE )
     {
-        gdprintk(XENLOG_DEBUG, "vmcs sync fall back to non-bulk mode, "
-                 "buffer: %p, buffer size: %d, fields number: %d.\n",
-                 value, VMCS_BUF_SIZE, n);
+        printk_once(XENLOG_ERR "%pv VMCS sync too many fields %u\n",
+                    v, n);
         goto fallback;
     }
 
@@ -962,11 +961,10 @@ static void shadow_to_vvmcs_bulk(struct vcpu *v, unsigned int n,
     if ( !cpu_has_vmx_vmcs_shadowing )
         goto fallback;
 
-    if ( !value || n > VMCS_BUF_SIZE )
+    if ( n > VMCS_BUF_SIZE )
     {
-        gdprintk(XENLOG_DEBUG, "vmcs sync fall back to non-bulk mode, "
-                 "buffer: %p, buffer size: %d, fields number: %d.\n",
-                 value, VMCS_BUF_SIZE, n);
+        printk_once(XENLOG_ERR "%pv VMCS sync too many fields %u\n",
+                    v, n);
         goto fallback;
     }
 
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* Re: [Xen-devel] [PATCH] x86/vvmx: Fix nested virt on VMCS-Shadow capable hardware
  2019-07-30 14:42 [Xen-devel] [PATCH] x86/vvmx: Fix nested virt on VMCS-Shadow capable hardware Andrew Cooper
@ 2019-08-05 12:52 ` Jan Beulich
  2019-08-05 13:14   ` Andrew Cooper
  2019-08-23  2:37 ` Tian, Kevin
  1 sibling, 1 reply; 5+ messages in thread
From: Jan Beulich @ 2019-08-05 12:52 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Xen-devel, Kevin Tian, Wei Liu, Jun Nakajima, Roger Pau Monné

On 30.07.2019 16:42, Andrew Cooper wrote:
> c/s e9986b0dd "x86/vvmx: Simplify per-CPU memory allocations" had the wrong
> indirection on its pointer check in nvmx_cpu_up_prepare(), causing the
> VMCS-shadowing buffer never to be allocated.  Fix it.
> 
> This in turn results in a massive quantity of logspam, as every virtual
> vmentry/exit hits both gdprintk()s in the *_bulk() functions.

The "in turn" here applies to the original bug (which gets fixed here)
aiui, i.e. there isn't any log spam with the fix in place anymore, is
there? If so, ...

> Switch these to using printk_once().  The size of the buffer is chosen at
> compile time, so complaining about it repeatedly is of no benefit.

... I'm not sure I'd agree with this move: Why would it be of interest
only the first time that we (would have) overrun the buffer? After all
it's not only the compile time choice of buffer size that matters here,
but also the runtime aspect of what value "n" has got passed into the
functions. If this is on the assumption that we'd want to know merely
of the fact, not how often it occurs, then I'd think this ought to
remain a debugging printk().

> Finally, drop the runtime NULL pointer checks.  It is not terribly appropriate
> to be repeatedly checking infrastructure which is set up from start-of-day,
> and in this case, actually hid the above bug.

I don't see how the repeated checking would have hidden any bug: Due
to the lack of the extra indirection the pointer would have remained
NULL, and hence the log message would have appeared (as also
mentioned above) _until_ you had fixed the indirection mistake. (This
isn't to say that I'm against dropping the check; I'd just like to
understand the why.)

> @@ -922,11 +922,10 @@ static void vvmcs_to_shadow_bulk(struct vcpu *v, unsigned int n,
>       if ( !cpu_has_vmx_vmcs_shadowing )
>           goto fallback;
>   
> -    if ( !value || n > VMCS_BUF_SIZE )
> +    if ( n > VMCS_BUF_SIZE )
>       {
> -        gdprintk(XENLOG_DEBUG, "vmcs sync fall back to non-bulk mode, "
> -                 "buffer: %p, buffer size: %d, fields number: %d.\n",
> -                 value, VMCS_BUF_SIZE, n);
> +        printk_once(XENLOG_ERR "%pv VMCS sync too many fields %u\n",
> +                    v, n);
>           goto fallback;
>       }
>   
> @@ -962,11 +961,10 @@ static void shadow_to_vvmcs_bulk(struct vcpu *v, unsigned int n,
>       if ( !cpu_has_vmx_vmcs_shadowing )
>           goto fallback;
>   
> -    if ( !value || n > VMCS_BUF_SIZE )
> +    if ( n > VMCS_BUF_SIZE )
>       {
> -        gdprintk(XENLOG_DEBUG, "vmcs sync fall back to non-bulk mode, "
> -                 "buffer: %p, buffer size: %d, fields number: %d.\n",
> -                 value, VMCS_BUF_SIZE, n);
> +        printk_once(XENLOG_ERR "%pv VMCS sync too many fields %u\n",
> +                    v, n);

Would you mind taking the opportunity to also disambiguate the two
log messages, so that from observing one it is clear which instance
it was that got triggered?

Jan

* Re: [Xen-devel] [PATCH] x86/vvmx: Fix nested virt on VMCS-Shadow capable hardware
  2019-08-05 12:52 ` Jan Beulich
@ 2019-08-05 13:14   ` Andrew Cooper
  2019-08-05 13:32     ` Jan Beulich
  0 siblings, 1 reply; 5+ messages in thread
From: Andrew Cooper @ 2019-08-05 13:14 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Xen-devel, Kevin Tian, Wei Liu, Jun Nakajima, Roger Pau Monné

On 05/08/2019 13:52, Jan Beulich wrote:
> On 30.07.2019 16:42, Andrew Cooper wrote:
>> c/s e9986b0dd "x86/vvmx: Simplify per-CPU memory allocations" had the wrong
>> indirection on its pointer check in nvmx_cpu_up_prepare(), causing the
>> VMCS-shadowing buffer never to be allocated.  Fix it.
>>
>> This in turn results in a massive quantity of logspam, as every virtual
>> vmentry/exit hits both gdprintk()s in the *_bulk() functions.
> The "in turn" here applies to the original bug (which gets fixed here)
> aiui,

Correct.

>  i.e. there isn't any log spam with the fix in place anymore, is
> there?

Incorrect, because...

>  If so, ...
>
>> Switch these to using printk_once().  The size of the buffer is chosen at
>> compile time, so complaining about it repeatedly is of no benefit.
> ... I'm not sure I'd agree with this move: Why would it be of interest
> only the first time that we (would have) overrun the buffer?

... we will either never overrun it, or overrun it on every virtual
vmentry/exit.

> After all
> it's not only the compile time choice of buffer size that matters here,
> but also the runtime aspect of what value "n" has got passed into the
> functions.

The few choices of "n" are fixed at compile time as well, which is why...

> If this is on the assumption that we'd want to know merely
> of the fact, not how often it occurs, then I'd think this ought to
> remain a debugging printk().

... this still ends up as a completely unusable system, when the problem
occurs.

>
>> Finally, drop the runtime NULL pointer checks.  It is not terribly appropriate
>> to be repeatedly checking infrastructure which is set up from start-of-day,
>> and in this case, actually hid the above bug.
> I don't see how the repeated checking would have hidden any bug:

Without this check, Xen would have crashed with a NULL dereference on the
original change, and highlighted the fact that the change was totally
broken.

I didn't spot the issue because it was tested with a release build,
which is another reason why the replacement printk() is deliberately
not debug-time-only.

>  Due
> to the lack of the extra indirection the pointer would have remained
> NULL, and hence the log message would have appeared (as also
> mentioned above) _until_ you had fixed the indirection mistake. (This
> isn't to mean I'm against dropping the check, I'd just like to
> understand the why.)
>
>> @@ -922,11 +922,10 @@ static void vvmcs_to_shadow_bulk(struct vcpu *v, unsigned int n,
>>       if ( !cpu_has_vmx_vmcs_shadowing )
>>           goto fallback;
>>   
>> -    if ( !value || n > VMCS_BUF_SIZE )
>> +    if ( n > VMCS_BUF_SIZE )
>>       {
>> -        gdprintk(XENLOG_DEBUG, "vmcs sync fall back to non-bulk mode, "
>> -                 "buffer: %p, buffer size: %d, fields number: %d.\n",
>> -                 value, VMCS_BUF_SIZE, n);
>> +        printk_once(XENLOG_ERR "%pv VMCS sync too many fields %u\n",
>> +                    v, n);
>>           goto fallback;
>>       }
>>   
>> @@ -962,11 +961,10 @@ static void shadow_to_vvmcs_bulk(struct vcpu *v, unsigned int n,
>>       if ( !cpu_has_vmx_vmcs_shadowing )
>>           goto fallback;
>>   
>> -    if ( !value || n > VMCS_BUF_SIZE )
>> +    if ( n > VMCS_BUF_SIZE )
>>       {
>> -        gdprintk(XENLOG_DEBUG, "vmcs sync fall back to non-bulk mode, "
>> -                 "buffer: %p, buffer size: %d, fields number: %d.\n",
>> -                 value, VMCS_BUF_SIZE, n);
>> +        printk_once(XENLOG_ERR "%pv VMCS sync too many fields %u\n",
>> +                    v, n);
> Would you mind taking the opportunity and also disambiguate the two
> log messages, so that from observing one it is clear which instance
> it was that got triggered?

I'm really not sure it matters.  The use of these functions is
symmetric, so in practice you'll get both printk()s in very quick
succession.  Also, the required fix is the same whichever one fires first.

I'm also not convinced that any of this logic is going to survive a
concerted effort to get nested-virt usable.

~Andrew


* Re: [Xen-devel] [PATCH] x86/vvmx: Fix nested virt on VMCS-Shadow capable hardware
  2019-08-05 13:14   ` Andrew Cooper
@ 2019-08-05 13:32     ` Jan Beulich
  0 siblings, 0 replies; 5+ messages in thread
From: Jan Beulich @ 2019-08-05 13:32 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Xen-devel, Kevin Tian, Wei Liu, Jun Nakajima, Roger Pau Monné

On 05.08.2019 15:14, Andrew Cooper wrote:
> On 05/08/2019 13:52, Jan Beulich wrote:
>> On 30.07.2019 16:42, Andrew Cooper wrote:
>>> c/s e9986b0dd "x86/vvmx: Simplify per-CPU memory allocations" had the wrong
>>> indirection on its pointer check in nvmx_cpu_up_prepare(), causing the
>>> VMCS-shadowing buffer never to be allocated.  Fix it.
>>>
>>> This in turn results in a massive quantity of logspam, as every virtual
>>> vmentry/exit hits both gdprintk()s in the *_bulk() functions.
>> The "in turn" here applies to the original bug (which gets fixed here)
>> aiui,
> 
> Correct.
> 
>>   i.e. there isn't any log spam with the fix in place anymore, is
>> there?
> 
> Incorrect, because...
> 
>>   If so, ...
>>
>>> Switch these to using printk_once().  The size of the buffer is chosen at
>>> compile time, so complaining about it repeatedly is of no benefit.
>> ... I'm not sure I'd agree with this move: Why would it be of interest
>> only the first time that we (would have) overrun the buffer?
> 
> ... we will either never overrun it, or overrun it on every virtual
> vmentry/exit.
> 
>> After all
>> it's not only the compile time choice of buffer size that matters here,
>> but also the runtime aspect of what value "n" has got passed into the
>> functions.
> 
> The few choices of "n" are fixed at compile time as well, which is why...

Oh - I should have looked at the callers. It's all ARRAY_SIZE(),
indeed.

>> If this is on the assumption that we'd want to know merely
>> of the fact, not how often it occurs, then I'd think this ought to
>> remain a debugging printk().
> 
> ... this still ends up as a completely unusable system, when the problem
> occurs.
> 
>>
>>> Finally, drop the runtime NULL pointer checks.  It is not terribly appropriate
>>> to be repeatedly checking infrastructure which is set up from start-of-day,
>>> and in this case, actually hid the above bug.
>> I don't see how the repeated checking would have hidden any bug:
> 
> Without this check, Xen would have crashed with a NULL deference on the
> original change, and highlighted the fact that the change was totally
> broken.
> 
> I didn't spot the issue because it was tested with a release build,
> which is another reason why the replacement printk() is deliberately not
> a debug-time-only.

Taking this as a basis, there shouldn't be any debug-only printk()s.

Anyway, with your explanations
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan

* Re: [Xen-devel] [PATCH] x86/vvmx: Fix nested virt on VMCS-Shadow capable hardware
  2019-07-30 14:42 [Xen-devel] [PATCH] x86/vvmx: Fix nested virt on VMCS-Shadow capable hardware Andrew Cooper
  2019-08-05 12:52 ` Jan Beulich
@ 2019-08-23  2:37 ` Tian, Kevin
  1 sibling, 0 replies; 5+ messages in thread
From: Tian, Kevin @ 2019-08-23  2:37 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Nakajima, Jun, Wei Liu, Jan Beulich, Roger Pau Monné

> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: Tuesday, July 30, 2019 10:43 PM
> 
> c/s e9986b0dd "x86/vvmx: Simplify per-CPU memory allocations" had the wrong
> indirection on its pointer check in nvmx_cpu_up_prepare(), causing the
> VMCS-shadowing buffer never to be allocated.  Fix it.
> 
> This in turn results in a massive quantity of logspam, as every virtual
> vmentry/exit hits both gdprintk()s in the *_bulk() functions.
> 
> Switch these to using printk_once().  The size of the buffer is chosen at
> compile time, so complaining about it repeatedly is of no benefit.
> 
> Finally, drop the runtime NULL pointer checks.  It is not terribly appropriate
> to be repeatedly checking infrastructure which is set up from start-of-day,
> and in this case, actually hid the above bug.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>
