* [PATCH] KVM: selftests: Detect max PA width from cpuid
@ 2019-08-26  7:57 Peter Xu
  2019-08-26  8:11 ` Thomas Huth
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Peter Xu @ 2019-08-26  7:57 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: peterx, Paolo Bonzini, Andrew Jones, Radim Krčmář,
	Thomas Huth

The dirty_log_test is failing on some old machines like Xeon E3-1220
with tripple faults when writting to the tracked memory region:

  Test iterations: 32, interval: 10 (ms)
  Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
  guest physical test memory offset: 0x7fbffef000
  ==== Test Assertion Failure ====
  dirty_log_test.c:138: false
  pid=6137 tid=6139 - Success
     1  0x0000000000401ca1: vcpu_worker at dirty_log_test.c:138
     2  0x00007f3dd9e392dd: ?? ??:0
     3  0x00007f3dd9b6a132: ?? ??:0
  Invalid guest sync status: exit_reason=SHUTDOWN

It's because previously we moved the testing memory region from a
static place (1G) to the top of the system's physical address space,
meanwhile we stick to 39 bits PA for all the x86_64 machines.  That's
not true for machines like Xeon E3-1220 where it only supports 36.

Let's unbreak this test by dynamically detect PA width from CPUID
0x80000008.  Meanwhile, even allow kvm_get_supported_cpuid_index() to
fail.  I don't know whether that could be useful because I think
0x80000008 should be there for all x86_64 hosts, but I also think it's
not really helpful to assert in the kvm_get_supported_cpuid_index().

Fixes: b442324b581556e
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Andrew Jones <drjones@redhat.com>
CC: Radim Krčmář <rkrcmar@redhat.com>
CC: Thomas Huth <thuth@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 tools/testing/selftests/kvm/dirty_log_test.c  | 22 +++++++++++++------
 .../selftests/kvm/lib/x86_64/processor.c      |  3 ---
 2 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index ceb52b952637..111592f3a1d7 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -274,18 +274,26 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 	DEBUG("Testing guest mode: %s\n", vm_guest_mode_string(mode));
 
 #ifdef __x86_64__
-	/*
-	 * FIXME
-	 * The x86_64 kvm selftests framework currently only supports a
-	 * single PML4 which restricts the number of physical address
-	 * bits we can change to 39.
-	 */
-	guest_pa_bits = 39;
+	{
+		struct kvm_cpuid_entry2 *entry;
+
+		entry = kvm_get_supported_cpuid_entry(0x80000008);
+		/*
+		 * Supported PA width can be smaller than 52 even if
+		 * we're with VM_MODE_P52V48_4K mode.  Fetch it from
+		 * the host to update the default value (SDM 4.1.4).
+		 */
+		if (entry)
+			guest_pa_bits = entry->eax & 0xff;
+		else
+			guest_pa_bits = 32;
+	}
 #endif
 #ifdef __aarch64__
 	if (guest_pa_bits != 40)
 		type = KVM_VM_TYPE_ARM_IPA_SIZE(guest_pa_bits);
 #endif
+	printf("Supported guest physical address width: %d\n", guest_pa_bits);
 	max_gfn = (1ul << (guest_pa_bits - guest_page_shift)) - 1;
 	guest_page_size = (1ul << guest_page_shift);
 	/*
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 6cb34a0fa200..9de2fd310ac8 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -760,9 +760,6 @@ kvm_get_supported_cpuid_index(uint32_t function, uint32_t index)
 			break;
 		}
 	}
-
-	TEST_ASSERT(entry, "Guest CPUID entry not found: (EAX=%x, ECX=%x).",
-		    function, index);
 	return entry;
 }
 
-- 
2.21.0
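
For reference, the value the patch reads from entry->eax can be
cross-checked against the host with a small user-space program.  The
following is only an illustrative sketch, not part of the patch; it
assumes a GCC/Clang toolchain that provides <cpuid.h> and decodes EAX
of leaf 0x80000008 the same way the patch does (bits 7:0 = physical
address width, bits 15:8 = linear address width):

/* Sketch only: query CPUID leaf 0x80000008 from user space. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
		/* Leaf not reported; the patch falls back to 32 PA bits. */
		printf("CPUID 0x80000008 not supported\n");
		return 1;
	}

	printf("host PA bits: %u, VA bits: %u\n",
	       eax & 0xff, (eax >> 8) & 0xff);
	return 0;
}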


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH] KVM: selftests: Detect max PA width from cpuid
  2019-08-26  7:57 [PATCH] KVM: selftests: Detect max PA width from cpuid Peter Xu
@ 2019-08-26  8:11 ` Thomas Huth
  2019-08-26  8:18   ` Peter Xu
  2019-08-26  8:25 ` Vitaly Kuznetsov
  2019-08-26 11:09 ` Andrew Jones
  2 siblings, 1 reply; 10+ messages in thread
From: Thomas Huth @ 2019-08-26  8:11 UTC (permalink / raw)
  To: Peter Xu, linux-kernel, kvm
  Cc: Paolo Bonzini, Andrew Jones, Radim Krčmář

On 26/08/2019 09.57, Peter Xu wrote:
> The dirty_log_test is failing on some old machines like Xeon E3-1220
> with tripple faults when writting to the tracked memory region:
> 
>   Test iterations: 32, interval: 10 (ms)
>   Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
>   guest physical test memory offset: 0x7fbffef000
>   ==== Test Assertion Failure ====
>   dirty_log_test.c:138: false
>   pid=6137 tid=6139 - Success
>      1  0x0000000000401ca1: vcpu_worker at dirty_log_test.c:138
>      2  0x00007f3dd9e392dd: ?? ??:0
>      3  0x00007f3dd9b6a132: ?? ??:0
>   Invalid guest sync status: exit_reason=SHUTDOWN
> 
> It's because previously we moved the testing memory region from a
> static place (1G) to the top of the system's physical address space,
> meanwhile we stick to 39 bits PA for all the x86_64 machines.  That's
> not true for machines like Xeon E3-1220 where it only supports 36.
> 
> Let's unbreak this test by dynamically detect PA width from CPUID
> 0x80000008.  Meanwhile, even allow kvm_get_supported_cpuid_index() to
> fail.  I don't know whether that could be useful because I think
> 0x80000008 should be there for all x86_64 hosts, but I also think it's
> not really helpful to assert in the kvm_get_supported_cpuid_index().
[...]
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index 6cb34a0fa200..9de2fd310ac8 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -760,9 +760,6 @@ kvm_get_supported_cpuid_index(uint32_t function, uint32_t index)
>  			break;
>  		}
>  	}
> -
> -	TEST_ASSERT(entry, "Guest CPUID entry not found: (EAX=%x, ECX=%x).",
> -		    function, index);
>  	return entry;
>  }

You should also adjust the comment of the function. It currently says
"Never returns NULL". Now it can return NULL.

And maybe add a TEST_ASSERT() to the other callers instead, which do not
expect a NULL to be returned?
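
A minimal sketch of that suggestion at a call site, assuming the
selftest helpers in this tree (kvm_get_supported_cpuid_entry() from
processor.h, TEST_ASSERT() from test_util.h); host_pa_bits() is a
hypothetical helper, not a hunk from the patch:

#include "test_util.h"
#include "kvm_util.h"
#include "processor.h"

/* A caller that cannot tolerate a missing leaf asserts locally now
 * that the library helper is allowed to return NULL. */
unsigned int host_pa_bits(void)
{
	struct kvm_cpuid_entry2 *entry;

	entry = kvm_get_supported_cpuid_entry(0x80000008);
	TEST_ASSERT(entry, "CPUID leaf 0x80000008 not reported by KVM");

	return entry->eax & 0xff;	/* EAX[7:0] = MAXPHYADDR */
}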

 Thomas

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH] KVM: selftests: Detect max PA width from cpuid
  2019-08-26  8:11 ` Thomas Huth
@ 2019-08-26  8:18   ` Peter Xu
  0 siblings, 0 replies; 10+ messages in thread
From: Peter Xu @ 2019-08-26  8:18 UTC (permalink / raw)
  To: Thomas Huth
  Cc: linux-kernel, kvm, Paolo Bonzini, Andrew Jones,
	Radim Krčmář

On Mon, Aug 26, 2019 at 10:11:34AM +0200, Thomas Huth wrote:
> On 26/08/2019 09.57, Peter Xu wrote:
> > The dirty_log_test is failing on some old machines like Xeon E3-1220
> > with tripple faults when writting to the tracked memory region:
> > 
> >   Test iterations: 32, interval: 10 (ms)
> >   Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
> >   guest physical test memory offset: 0x7fbffef000
> >   ==== Test Assertion Failure ====
> >   dirty_log_test.c:138: false
> >   pid=6137 tid=6139 - Success
> >      1  0x0000000000401ca1: vcpu_worker at dirty_log_test.c:138
> >      2  0x00007f3dd9e392dd: ?? ??:0
> >      3  0x00007f3dd9b6a132: ?? ??:0
> >   Invalid guest sync status: exit_reason=SHUTDOWN
> > 
> > It's because previously we moved the testing memory region from a
> > static place (1G) to the top of the system's physical address space,
> > meanwhile we stick to 39 bits PA for all the x86_64 machines.  That's
> > not true for machines like Xeon E3-1220 where it only supports 36.
> > 
> > Let's unbreak this test by dynamically detect PA width from CPUID
> > 0x80000008.  Meanwhile, even allow kvm_get_supported_cpuid_index() to
> > fail.  I don't know whether that could be useful because I think
> > 0x80000008 should be there for all x86_64 hosts, but I also think it's
> > not really helpful to assert in the kvm_get_supported_cpuid_index().
> [...]
> > diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> > index 6cb34a0fa200..9de2fd310ac8 100644
> > --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> > +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> > @@ -760,9 +760,6 @@ kvm_get_supported_cpuid_index(uint32_t function, uint32_t index)
> >  			break;
> >  		}
> >  	}
> > -
> > -	TEST_ASSERT(entry, "Guest CPUID entry not found: (EAX=%x, ECX=%x).",
> > -		    function, index);
> >  	return entry;
> >  }
> 
> You should also adjust the comment of the function. It currently says
> "Never returns NULL". Now it can return NULL.

Yeh that's better.

> 
> And maybe add a TEST_ASSERT() to the other callers instead, which do not
> expect a NULL to be returned?

I think it's fine, because it's the same as moving the assert from here
to the callers: when the caller uses entry->xxx it'll assert. :)

Thanks,

-- 
Peter Xu

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH] KVM: selftests: Detect max PA width from cpuid
  2019-08-26  7:57 [PATCH] KVM: selftests: Detect max PA width from cpuid Peter Xu
  2019-08-26  8:11 ` Thomas Huth
@ 2019-08-26  8:25 ` Vitaly Kuznetsov
  2019-08-26 10:47   ` Peter Xu
  2019-08-26 11:09 ` Andrew Jones
  2 siblings, 1 reply; 10+ messages in thread
From: Vitaly Kuznetsov @ 2019-08-26  8:25 UTC (permalink / raw)
  To: Peter Xu
  Cc: Paolo Bonzini, Andrew Jones, Radim Krčmář,
	Thomas Huth, linux-kernel, kvm

Peter Xu <peterx@redhat.com> writes:

> The dirty_log_test is failing on some old machines like Xeon E3-1220
> with tripple faults when writting to the tracked memory region:

s,writting,writing,

>
>   Test iterations: 32, interval: 10 (ms)
>   Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
>   guest physical test memory offset: 0x7fbffef000
>   ==== Test Assertion Failure ====
>   dirty_log_test.c:138: false
>   pid=6137 tid=6139 - Success
>      1  0x0000000000401ca1: vcpu_worker at dirty_log_test.c:138
>      2  0x00007f3dd9e392dd: ?? ??:0
>      3  0x00007f3dd9b6a132: ?? ??:0
>   Invalid guest sync status: exit_reason=SHUTDOWN
>

This patch breaks on my AMD machine with

# cpuid -1 -l 0x80000008
CPU:
   Physical Address and Linear Address Size (0x80000008/eax):
      maximum physical address bits         = 0x30 (48)
      maximum linear (virtual) address bits = 0x30 (48)
      maximum guest physical address bits   = 0x0 (0)


Pre-patch:

# ./dirty_log_test 
Test iterations: 32, interval: 10 (ms)
Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
guest physical test memory offset: 0x7fbffef000
Dirtied 139264 pages
Total bits checked: dirty (135251), clear (7991709), track_next (29789)

Post-patch:

# ./dirty_log_test 
Test iterations: 32, interval: 10 (ms)
Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
Supported guest physical address width: 48
guest physical test memory offset: 0xffffbffef000
==== Test Assertion Failure ====
  dirty_log_test.c:141: false
  pid=77983 tid=77985 - Success
     1	0x0000000000401d12: vcpu_worker at dirty_log_test.c:138
     2	0x00007f636374358d: ?? ??:0
     3	0x00007f63636726a2: ?? ??:0
  Invalid guest sync status: exit_reason=SHUTDOWN



> It's because previously we moved the testing memory region from a
> static place (1G) to the top of the system's physical address space,
> meanwhile we stick to 39 bits PA for all the x86_64 machines.  That's
> not true for machines like Xeon E3-1220 where it only supports 36.
>
> Let's unbreak this test by dynamically detect PA width from CPUID
> 0x80000008.  Meanwhile, even allow kvm_get_supported_cpuid_index() to
> fail.  I don't know whether that could be useful because I think
> 0x80000008 should be there for all x86_64 hosts, but I also think it's
> not really helpful to assert in the kvm_get_supported_cpuid_index().
>
> Fixes: b442324b581556e
> CC: Paolo Bonzini <pbonzini@redhat.com>
> CC: Andrew Jones <drjones@redhat.com>
> CC: Radim Krčmář <rkrcmar@redhat.com>
> CC: Thomas Huth <thuth@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  tools/testing/selftests/kvm/dirty_log_test.c  | 22 +++++++++++++------
>  .../selftests/kvm/lib/x86_64/processor.c      |  3 ---
>  2 files changed, 15 insertions(+), 10 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
> index ceb52b952637..111592f3a1d7 100644
> --- a/tools/testing/selftests/kvm/dirty_log_test.c
> +++ b/tools/testing/selftests/kvm/dirty_log_test.c
> @@ -274,18 +274,26 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
>  	DEBUG("Testing guest mode: %s\n", vm_guest_mode_string(mode));
>  
>  #ifdef __x86_64__
> -	/*
> -	 * FIXME
> -	 * The x86_64 kvm selftests framework currently only supports a
> -	 * single PML4 which restricts the number of physical address
> -	 * bits we can change to 39.
> -	 */
> -	guest_pa_bits = 39;
> +	{
> +		struct kvm_cpuid_entry2 *entry;
> +
> +		entry = kvm_get_supported_cpuid_entry(0x80000008);
> +		/*
> +		 * Supported PA width can be smaller than 52 even if
> +		 * we're with VM_MODE_P52V48_4K mode.  Fetch it from
> +		 * the host to update the default value (SDM 4.1.4).
> +		 */
> +		if (entry)
> +			guest_pa_bits = entry->eax & 0xff;
> +		else
> +			guest_pa_bits = 32;
> +	}
>  #endif
>  #ifdef __aarch64__
>  	if (guest_pa_bits != 40)
>  		type = KVM_VM_TYPE_ARM_IPA_SIZE(guest_pa_bits);
>  #endif
> +	printf("Supported guest physical address width: %d\n", guest_pa_bits);
>  	max_gfn = (1ul << (guest_pa_bits - guest_page_shift)) - 1;
>  	guest_page_size = (1ul << guest_page_shift);
>  	/*
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index 6cb34a0fa200..9de2fd310ac8 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -760,9 +760,6 @@ kvm_get_supported_cpuid_index(uint32_t function, uint32_t index)
>  			break;
>  		}
>  	}
> -
> -	TEST_ASSERT(entry, "Guest CPUID entry not found: (EAX=%x, ECX=%x).",
> -		    function, index);
>  	return entry;
>  }

-- 
Vitaly

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH] KVM: selftests: Detect max PA width from cpuid
  2019-08-26  8:25 ` Vitaly Kuznetsov
@ 2019-08-26 10:47   ` Peter Xu
  2019-08-26 11:34     ` Vitaly Kuznetsov
  2019-08-26 11:39     ` Peter Xu
  0 siblings, 2 replies; 10+ messages in thread
From: Peter Xu @ 2019-08-26 10:47 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Paolo Bonzini, Andrew Jones, Radim Krčmář,
	Thomas Huth, linux-kernel, kvm

On Mon, Aug 26, 2019 at 10:25:55AM +0200, Vitaly Kuznetsov wrote:
> Peter Xu <peterx@redhat.com> writes:
> 
> > The dirty_log_test is failing on some old machines like Xeon E3-1220
> > with tripple faults when writting to the tracked memory region:
> 
> s,writting,writing,
> 
> >
> >   Test iterations: 32, interval: 10 (ms)
> >   Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
> >   guest physical test memory offset: 0x7fbffef000
> >   ==== Test Assertion Failure ====
> >   dirty_log_test.c:138: false
> >   pid=6137 tid=6139 - Success
> >      1  0x0000000000401ca1: vcpu_worker at dirty_log_test.c:138
> >      2  0x00007f3dd9e392dd: ?? ??:0
> >      3  0x00007f3dd9b6a132: ?? ??:0
> >   Invalid guest sync status: exit_reason=SHUTDOWN
> >
> 
> This patch breaks on my AMD machine with
> 
> # cpuid -1 -l 0x80000008
> CPU:
>    Physical Address and Linear Address Size (0x80000008/eax):
>       maximum physical address bits         = 0x30 (48)
>       maximum linear (virtual) address bits = 0x30 (48)
>       maximum guest physical address bits   = 0x0 (0)
> 
> 
> Pre-patch:
> 
> # ./dirty_log_test 
> Test iterations: 32, interval: 10 (ms)
> Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
> guest physical test memory offset: 0x7fbffef000
> Dirtied 139264 pages
> Total bits checked: dirty (135251), clear (7991709), track_next (29789)
> 
> Post-patch:
> 
> # ./dirty_log_test 
> Test iterations: 32, interval: 10 (ms)
> Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
> Supported guest physical address width: 48
> guest physical test memory offset: 0xffffbffef000
> ==== Test Assertion Failure ====
>   dirty_log_test.c:141: false
>   pid=77983 tid=77985 - Success
>      1	0x0000000000401d12: vcpu_worker at dirty_log_test.c:138
>      2	0x00007f636374358d: ?? ??:0
>      3	0x00007f63636726a2: ?? ??:0
>   Invalid guest sync status: exit_reason=SHUTDOWN

Vitaly,

Are you using shadow paging?  If so, could you try NPT=off?

I finally found an AMD host and I also found that it's passing with
shadow MMU mode, which is strange.  If so, I would suspect it's a real
bug in the AMD NPT path, but I'd like to see whether it's also
happening on your side.

Thanks,

-- 
Peter Xu

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH] KVM: selftests: Detect max PA width from cpuid
  2019-08-26  7:57 [PATCH] KVM: selftests: Detect max PA width from cpuid Peter Xu
  2019-08-26  8:11 ` Thomas Huth
  2019-08-26  8:25 ` Vitaly Kuznetsov
@ 2019-08-26 11:09 ` Andrew Jones
  2019-08-26 11:22   ` Peter Xu
  2 siblings, 1 reply; 10+ messages in thread
From: Andrew Jones @ 2019-08-26 11:09 UTC (permalink / raw)
  To: Peter Xu
  Cc: linux-kernel, kvm, Paolo Bonzini, Radim Krčmář,
	Thomas Huth

On Mon, Aug 26, 2019 at 03:57:28PM +0800, Peter Xu wrote:
> The dirty_log_test is failing on some old machines like Xeon E3-1220
> with tripple faults when writting to the tracked memory region:
> 
>   Test iterations: 32, interval: 10 (ms)
>   Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
>   guest physical test memory offset: 0x7fbffef000
>   ==== Test Assertion Failure ====
>   dirty_log_test.c:138: false
>   pid=6137 tid=6139 - Success
>      1  0x0000000000401ca1: vcpu_worker at dirty_log_test.c:138
>      2  0x00007f3dd9e392dd: ?? ??:0
>      3  0x00007f3dd9b6a132: ?? ??:0
>   Invalid guest sync status: exit_reason=SHUTDOWN
> 
> It's because previously we moved the testing memory region from a
> static place (1G) to the top of the system's physical address space,
> meanwhile we stick to 39 bits PA for all the x86_64 machines.  That's
> not true for machines like Xeon E3-1220 where it only supports 36.
> 
> Let's unbreak this test by dynamically detect PA width from CPUID
> 0x80000008.  Meanwhile, even allow kvm_get_supported_cpuid_index() to
> fail.  I don't know whether that could be useful because I think
> 0x80000008 should be there for all x86_64 hosts, but I also think it's
> not really helpful to assert in the kvm_get_supported_cpuid_index().
> 
> Fixes: b442324b581556e
> CC: Paolo Bonzini <pbonzini@redhat.com>
> CC: Andrew Jones <drjones@redhat.com>
> CC: Radim Krčmář <rkrcmar@redhat.com>
> CC: Thomas Huth <thuth@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  tools/testing/selftests/kvm/dirty_log_test.c  | 22 +++++++++++++------
>  .../selftests/kvm/lib/x86_64/processor.c      |  3 ---
>  2 files changed, 15 insertions(+), 10 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
> index ceb52b952637..111592f3a1d7 100644
> --- a/tools/testing/selftests/kvm/dirty_log_test.c
> +++ b/tools/testing/selftests/kvm/dirty_log_test.c
> @@ -274,18 +274,26 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
>  	DEBUG("Testing guest mode: %s\n", vm_guest_mode_string(mode));
>  
>  #ifdef __x86_64__
> -	/*
> -	 * FIXME
> -	 * The x86_64 kvm selftests framework currently only supports a
> -	 * single PML4 which restricts the number of physical address
> -	 * bits we can change to 39.
> -	 */
> -	guest_pa_bits = 39;
> +	{
> +		struct kvm_cpuid_entry2 *entry;
> +
> +		entry = kvm_get_supported_cpuid_entry(0x80000008);
> +		/*
> +		 * Supported PA width can be smaller than 52 even if
> +		 * we're with VM_MODE_P52V48_4K mode.  Fetch it from

It seems like x86_64 should create modes that actually work, rather than
always using one named 'P52', but then needing to probe for the actual
number of supported physical bits. Indeed testing all x86_64 supported
modes, like aarch64 does, would even make more sense in this test.


> +		 * the host to update the default value (SDM 4.1.4).
> +		 */
> +		if (entry)
> +			guest_pa_bits = entry->eax & 0xff;

Are we sure > 39 bits will work with this test framework? I can't
recall what led me to the FIXME above, other than things not working.
It seems I was convinced we couldn't have more bits due to how pml4's
were allocated, but maybe I misinterpreted it.
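
As a quick numeric aside (illustrative only, not selftest code): with
4K pages and 4-level paging, 39 bits is what a single PML4 *entry*
covers, while a full PML4 *table* of 512 entries reaches 48 bits, so
going beyond 39 PA bits only needs more PML4 entries, not more tables:

#include <stdio.h>

int main(void)
{
	const unsigned int page_shift = 12;	/* 4K pages */
	const unsigned int bits_per_level = 9;	/* 512 entries per table */
	const char *entry[] = { "PTE", "PDE", "PDPTE", "PML4E" };
	unsigned int level;

	/* Address bits covered by a single entry at each paging level. */
	for (level = 0; level < 4; level++)
		printf("one %-5s maps 2^%u bytes\n",
		       entry[level], page_shift + level * bits_per_level);

	/* A whole PML4 table therefore spans 39 + 9 = 48 address bits. */
	printf("a full PML4 table maps 2^%u bytes\n",
	       page_shift + 4 * bits_per_level);
	return 0;
}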

> +		else
> +			guest_pa_bits = 32;
> +	}
>  #endif
>  #ifdef __aarch64__
>  	if (guest_pa_bits != 40)
>  		type = KVM_VM_TYPE_ARM_IPA_SIZE(guest_pa_bits);
>  #endif
> +	printf("Supported guest physical address width: %d\n", guest_pa_bits);
>  	max_gfn = (1ul << (guest_pa_bits - guest_page_shift)) - 1;
>  	guest_page_size = (1ul << guest_page_shift);
>  	/*
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index 6cb34a0fa200..9de2fd310ac8 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -760,9 +760,6 @@ kvm_get_supported_cpuid_index(uint32_t function, uint32_t index)
>  			break;
>  		}
>  	}
> -
> -	TEST_ASSERT(entry, "Guest CPUID entry not found: (EAX=%x, ECX=%x).",
> -		    function, index);
>  	return entry;
>  }
>  
> -- 
> 2.21.0
> 

Thanks,
drew

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH] KVM: selftests: Detect max PA width from cpuid
  2019-08-26 11:09 ` Andrew Jones
@ 2019-08-26 11:22   ` Peter Xu
  2019-08-26 11:43     ` Andrew Jones
  0 siblings, 1 reply; 10+ messages in thread
From: Peter Xu @ 2019-08-26 11:22 UTC (permalink / raw)
  To: Andrew Jones
  Cc: linux-kernel, kvm, Paolo Bonzini, Radim Krčmář,
	Thomas Huth

On Mon, Aug 26, 2019 at 01:09:58PM +0200, Andrew Jones wrote:
> On Mon, Aug 26, 2019 at 03:57:28PM +0800, Peter Xu wrote:
> > The dirty_log_test is failing on some old machines like Xeon E3-1220
> > with tripple faults when writting to the tracked memory region:
> > 
> >   Test iterations: 32, interval: 10 (ms)
> >   Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
> >   guest physical test memory offset: 0x7fbffef000
> >   ==== Test Assertion Failure ====
> >   dirty_log_test.c:138: false
> >   pid=6137 tid=6139 - Success
> >      1  0x0000000000401ca1: vcpu_worker at dirty_log_test.c:138
> >      2  0x00007f3dd9e392dd: ?? ??:0
> >      3  0x00007f3dd9b6a132: ?? ??:0
> >   Invalid guest sync status: exit_reason=SHUTDOWN
> > 
> > It's because previously we moved the testing memory region from a
> > static place (1G) to the top of the system's physical address space,
> > meanwhile we stick to 39 bits PA for all the x86_64 machines.  That's
> > not true for machines like Xeon E3-1220 where it only supports 36.
> > 
> > Let's unbreak this test by dynamically detect PA width from CPUID
> > 0x80000008.  Meanwhile, even allow kvm_get_supported_cpuid_index() to
> > fail.  I don't know whether that could be useful because I think
> > 0x80000008 should be there for all x86_64 hosts, but I also think it's
> > not really helpful to assert in the kvm_get_supported_cpuid_index().
> > 
> > Fixes: b442324b581556e
> > CC: Paolo Bonzini <pbonzini@redhat.com>
> > CC: Andrew Jones <drjones@redhat.com>
> > CC: Radim Krčmář <rkrcmar@redhat.com>
> > CC: Thomas Huth <thuth@redhat.com>
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> > ---
> >  tools/testing/selftests/kvm/dirty_log_test.c  | 22 +++++++++++++------
> >  .../selftests/kvm/lib/x86_64/processor.c      |  3 ---
> >  2 files changed, 15 insertions(+), 10 deletions(-)
> > 
> > diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
> > index ceb52b952637..111592f3a1d7 100644
> > --- a/tools/testing/selftests/kvm/dirty_log_test.c
> > +++ b/tools/testing/selftests/kvm/dirty_log_test.c
> > @@ -274,18 +274,26 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
> >  	DEBUG("Testing guest mode: %s\n", vm_guest_mode_string(mode));
> >  
> >  #ifdef __x86_64__
> > -	/*
> > -	 * FIXME
> > -	 * The x86_64 kvm selftests framework currently only supports a
> > -	 * single PML4 which restricts the number of physical address
> > -	 * bits we can change to 39.
> > -	 */
> > -	guest_pa_bits = 39;
> > +	{
> > +		struct kvm_cpuid_entry2 *entry;
> > +
> > +		entry = kvm_get_supported_cpuid_entry(0x80000008);
> > +		/*
> > +		 * Supported PA width can be smaller than 52 even if
> > +		 * we're with VM_MODE_P52V48_4K mode.  Fetch it from
> 
> It seems like x86_64 should create modes that actually work, rather than
> always using one named 'P52', but then needing to probe for the actual
> number of supported physical bits. Indeed testing all x86_64 supported
> modes, like aarch64 does, would even make more sense in this test.

Should be true.  I'll think it over again...

> 
> 
> > +		 * the host to update the default value (SDM 4.1.4).
> > +		 */
> > +		if (entry)
> > +			guest_pa_bits = entry->eax & 0xff;
> 
> Are we sure > 39 bits will work with this test framework? I can't
> recall what led me to the FIXME above, other than things not working.
> It seems I was convinced we couldn't have more bits due to how pml4's
> were allocated, but maybe I misinterpreted it.

As mentioned in the IRC - I think I've got a "success case" of
that... :)  Please see below:

virtlab423:~ $ lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              16
On-line CPU(s) list: 0-15
Thread(s) per core:  1
Core(s) per socket:  8
Socket(s):           2
NUMA node(s):        2
Vendor ID:           GenuineIntel
CPU family:          6
Model:               63
Model name:          Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz
Stepping:            2
CPU MHz:             2597.168
BogoMIPS:            5194.31
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            256K
L3 cache:            20480K
NUMA node0 CPU(s):   0,2,4,6,8,10,12,14
NUMA node1 CPU(s):   1,3,5,7,9,11,13,15
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm arat pln pts
virtlab423:~ $ ./dirty_log_test 
Test iterations: 32, interval: 10 (ms)
Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
Supported guest physical address width: 46
guest physical test memory offset: 0x3fffbffef000
Dirtied 216064 pages
Total bits checked: dirty (204841), clear (7922119), track_next (60730)

So on the E5-2640 above I got PA width==46 and it worked well.  Does
this mean that 39 bits is not really a PA restriction anywhere?  That
also matches the fact that if we look into virt_pg_map() it's indeed
allocating PML4 entries as needed rather than having only one.
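
The test memory offsets quoted in this thread follow directly from
guest_pa_bits.  A standalone sketch (not a verbatim copy of
dirty_log_test.c; it assumes the test's constants of roughly 1G of 4K
guest pages plus 16 and a 4K host page size) reproduces them for 39,
46 and 48 PA bits:

#include <stdio.h>

int main(void)
{
	const unsigned int pa_bits[] = { 36, 39, 46, 48 };
	const unsigned long page_shift = 12;
	const unsigned long page_size = 1ul << page_shift;
	/* A little more than 1G of guest-page-sized pages, as in the test. */
	const unsigned long num_pages = (1ul << (30 - page_shift)) + 16;
	int i;

	for (i = 0; i < 4; i++) {
		unsigned long max_gfn = (1ul << (pa_bits[i] - page_shift)) - 1;
		unsigned long offset = (max_gfn - num_pages) * page_size;

		offset &= ~(page_size - 1);	/* host page alignment */
		printf("PA-bits:%u -> test memory offset 0x%lx\n",
		       pa_bits[i], offset);
	}
	return 0;
}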

Thanks,

-- 
Peter Xu

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH] KVM: selftests: Detect max PA width from cpuid
  2019-08-26 10:47   ` Peter Xu
@ 2019-08-26 11:34     ` Vitaly Kuznetsov
  2019-08-26 11:39     ` Peter Xu
  1 sibling, 0 replies; 10+ messages in thread
From: Vitaly Kuznetsov @ 2019-08-26 11:34 UTC (permalink / raw)
  To: Peter Xu
  Cc: Paolo Bonzini, Andrew Jones, Radim Krčmář,
	Thomas Huth, linux-kernel, kvm

Peter Xu <peterx@redhat.com> writes:

> On Mon, Aug 26, 2019 at 10:25:55AM +0200, Vitaly Kuznetsov wrote:
>> Peter Xu <peterx@redhat.com> writes:
>> 
>> > The dirty_log_test is failing on some old machines like Xeon E3-1220
>> > with tripple faults when writting to the tracked memory region:
>> 
>> s,writting,writing,
>> 
>> >
>> >   Test iterations: 32, interval: 10 (ms)
>> >   Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
>> >   guest physical test memory offset: 0x7fbffef000
>> >   ==== Test Assertion Failure ====
>> >   dirty_log_test.c:138: false
>> >   pid=6137 tid=6139 - Success
>> >      1  0x0000000000401ca1: vcpu_worker at dirty_log_test.c:138
>> >      2  0x00007f3dd9e392dd: ?? ??:0
>> >      3  0x00007f3dd9b6a132: ?? ??:0
>> >   Invalid guest sync status: exit_reason=SHUTDOWN
>> >
>> 
>> This patch breaks on my AMD machine with
>> 
>> # cpuid -1 -l 0x80000008
>> CPU:
>>    Physical Address and Linear Address Size (0x80000008/eax):
>>       maximum physical address bits         = 0x30 (48)
>>       maximum linear (virtual) address bits = 0x30 (48)
>>       maximum guest physical address bits   = 0x0 (0)
>> 
>> 
>> Pre-patch:
>> 
>> # ./dirty_log_test 
>> Test iterations: 32, interval: 10 (ms)
>> Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
>> guest physical test memory offset: 0x7fbffef000
>> Dirtied 139264 pages
>> Total bits checked: dirty (135251), clear (7991709), track_next (29789)
>> 
>> Post-patch:
>> 
>> # ./dirty_log_test 
>> Test iterations: 32, interval: 10 (ms)
>> Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
>> Supported guest physical address width: 48
>> guest physical test memory offset: 0xffffbffef000
>> ==== Test Assertion Failure ====
>>   dirty_log_test.c:141: false
>>   pid=77983 tid=77985 - Success
>>      1	0x0000000000401d12: vcpu_worker at dirty_log_test.c:138
>>      2	0x00007f636374358d: ?? ??:0
>>      3	0x00007f63636726a2: ?? ??:0
>>   Invalid guest sync status: exit_reason=SHUTDOWN
>
> Vitaly,
>
> Are you using shadow paging?  If so, could you try NPT=off?
>

Yep,

test passes with shadow paging, fails when NPT is enabled.


> I finally found an AMD host and I also found that it's passing with
> shadow MMU mode, which is strange.  If so, I would suspect it's a real
> bug in the AMD NPT path, but I'd like to see whether it's also
> happening on your side.

Sounds like a bug indeed.

-- 
Vitaly

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH] KVM: selftests: Detect max PA width from cpuid
  2019-08-26 10:47   ` Peter Xu
  2019-08-26 11:34     ` Vitaly Kuznetsov
@ 2019-08-26 11:39     ` Peter Xu
  1 sibling, 0 replies; 10+ messages in thread
From: Peter Xu @ 2019-08-26 11:39 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Paolo Bonzini, Andrew Jones, Radim Krčmář,
	Thomas Huth, linux-kernel, kvm

On Mon, Aug 26, 2019 at 06:47:57PM +0800, Peter Xu wrote:
> On Mon, Aug 26, 2019 at 10:25:55AM +0200, Vitaly Kuznetsov wrote:
> > Peter Xu <peterx@redhat.com> writes:
> > 
> > > The dirty_log_test is failing on some old machines like Xeon E3-1220
> > > with tripple faults when writting to the tracked memory region:
> > 
> > s,writting,writing,
> > 
> > >
> > >   Test iterations: 32, interval: 10 (ms)
> > >   Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
> > >   guest physical test memory offset: 0x7fbffef000
> > >   ==== Test Assertion Failure ====
> > >   dirty_log_test.c:138: false
> > >   pid=6137 tid=6139 - Success
> > >      1  0x0000000000401ca1: vcpu_worker at dirty_log_test.c:138
> > >      2  0x00007f3dd9e392dd: ?? ??:0
> > >      3  0x00007f3dd9b6a132: ?? ??:0
> > >   Invalid guest sync status: exit_reason=SHUTDOWN
> > >
> > 
> > This patch breaks on my AMD machine with
> > 
> > # cpuid -1 -l 0x80000008
> > CPU:
> >    Physical Address and Linear Address Size (0x80000008/eax):
> >       maximum physical address bits         = 0x30 (48)
> >       maximum linear (virtual) address bits = 0x30 (48)
> >       maximum guest physical address bits   = 0x0 (0)
> > 
> > 
> > Pre-patch:
> > 
> > # ./dirty_log_test 
> > Test iterations: 32, interval: 10 (ms)
> > Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
> > guest physical test memory offset: 0x7fbffef000
> > Dirtied 139264 pages
> > Total bits checked: dirty (135251), clear (7991709), track_next (29789)
> > 
> > Post-patch:
> > 
> > # ./dirty_log_test 
> > Test iterations: 32, interval: 10 (ms)
> > Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
> > Supported guest physical address width: 48
> > guest physical test memory offset: 0xffffbffef000
> > ==== Test Assertion Failure ====
> >   dirty_log_test.c:141: false
> >   pid=77983 tid=77985 - Success
> >      1	0x0000000000401d12: vcpu_worker at dirty_log_test.c:138
> >      2	0x00007f636374358d: ?? ??:0
> >      3	0x00007f63636726a2: ?? ??:0
> >   Invalid guest sync status: exit_reason=SHUTDOWN
> 
> Vitaly,
> 
> Are you using shadow paging?  If so, could you try NPT=off?

Sorry, it should be s/shadow paging/NPT/...

[root@hp-dl385g10-10 peter]# ./dirty_log_test 
Test iterations: 32, interval: 10 (ms)
Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
Supported guest physical address width: 48
guest physical test memory offset: 0xffffbffef000
==== Test Assertion Failure ====
  dirty_log_test.c:138: false
  pid=5433 tid=5436 - Success
     1  0x0000000000401cc1: vcpu_worker at dirty_log_test.c:138
     2  0x00007f18977992dd: ?? ??:0
     3  0x00007f18974ca132: ?? ??:0
  Invalid guest sync status: exit_reason=SHUTDOWN

[root@hp-dl385g10-10 peter]# modprobe -r kvm_amd
[root@hp-dl385g10-10 peter]# modprobe kvm_amd npt=0
[root@hp-dl385g10-10 peter]# ./dirty_log_test 
Test iterations: 32, interval: 10 (ms)
Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
Supported guest physical address width: 48
guest physical test memory offset: 0xffffbffef000
Dirtied 102400 pages
Total bits checked: dirty (99021), clear (8027939), track_next (23425)

> 
> I finally found an AMD host and I also found that it's passing with
> shadow MMU mode, which is strange.  If so, I would suspect it's a real
> bug in the AMD NPT path, but I'd like to see whether it's also
> happening on your side.
> 
> Thanks,

-- 
Peter Xu

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH] KVM: selftests: Detect max PA width from cpuid
  2019-08-26 11:22   ` Peter Xu
@ 2019-08-26 11:43     ` Andrew Jones
  0 siblings, 0 replies; 10+ messages in thread
From: Andrew Jones @ 2019-08-26 11:43 UTC (permalink / raw)
  To: Peter Xu
  Cc: linux-kernel, kvm, Paolo Bonzini, Radim Krčmář,
	Thomas Huth

On Mon, Aug 26, 2019 at 07:22:44PM +0800, Peter Xu wrote:
> On Mon, Aug 26, 2019 at 01:09:58PM +0200, Andrew Jones wrote:
> > On Mon, Aug 26, 2019 at 03:57:28PM +0800, Peter Xu wrote:
> > > The dirty_log_test is failing on some old machines like Xeon E3-1220
> > > with tripple faults when writting to the tracked memory region:
> > > 
> > >   Test iterations: 32, interval: 10 (ms)
> > >   Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
> > >   guest physical test memory offset: 0x7fbffef000
> > >   ==== Test Assertion Failure ====
> > >   dirty_log_test.c:138: false
> > >   pid=6137 tid=6139 - Success
> > >      1  0x0000000000401ca1: vcpu_worker at dirty_log_test.c:138
> > >      2  0x00007f3dd9e392dd: ?? ??:0
> > >      3  0x00007f3dd9b6a132: ?? ??:0
> > >   Invalid guest sync status: exit_reason=SHUTDOWN
> > > 
> > > It's because previously we moved the testing memory region from a
> > > static place (1G) to the top of the system's physical address space,
> > > meanwhile we stick to 39 bits PA for all the x86_64 machines.  That's
> > > not true for machines like Xeon E3-1220 where it only supports 36.
> > > 
> > > Let's unbreak this test by dynamically detect PA width from CPUID
> > > 0x80000008.  Meanwhile, even allow kvm_get_supported_cpuid_index() to
> > > fail.  I don't know whether that could be useful because I think
> > > 0x80000008 should be there for all x86_64 hosts, but I also think it's
> > > not really helpful to assert in the kvm_get_supported_cpuid_index().
> > > 
> > > Fixes: b442324b581556e
> > > CC: Paolo Bonzini <pbonzini@redhat.com>
> > > CC: Andrew Jones <drjones@redhat.com>
> > > CC: Radim Krčmář <rkrcmar@redhat.com>
> > > CC: Thomas Huth <thuth@redhat.com>
> > > Signed-off-by: Peter Xu <peterx@redhat.com>
> > > ---
> > >  tools/testing/selftests/kvm/dirty_log_test.c  | 22 +++++++++++++------
> > >  .../selftests/kvm/lib/x86_64/processor.c      |  3 ---
> > >  2 files changed, 15 insertions(+), 10 deletions(-)
> > > 
> > > diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
> > > index ceb52b952637..111592f3a1d7 100644
> > > --- a/tools/testing/selftests/kvm/dirty_log_test.c
> > > +++ b/tools/testing/selftests/kvm/dirty_log_test.c
> > > @@ -274,18 +274,26 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
> > >  	DEBUG("Testing guest mode: %s\n", vm_guest_mode_string(mode));
> > >  
> > >  #ifdef __x86_64__
> > > -	/*
> > > -	 * FIXME
> > > -	 * The x86_64 kvm selftests framework currently only supports a
> > > -	 * single PML4 which restricts the number of physical address
> > > -	 * bits we can change to 39.
> > > -	 */
> > > -	guest_pa_bits = 39;
> > > +	{
> > > +		struct kvm_cpuid_entry2 *entry;
> > > +
> > > +		entry = kvm_get_supported_cpuid_entry(0x80000008);
> > > +		/*
> > > +		 * Supported PA width can be smaller than 52 even if
> > > +		 * we're with VM_MODE_P52V48_4K mode.  Fetch it from
> > 
> > It seems like x86_64 should create modes that actually work, rather than
> > always using one named 'P52', but then needing to probe for the actual
> > number of supported physical bits. Indeed testing all x86_64 supported
> > modes, like aarch64 does, would even make more sense in this test.
> 
> Should be true.  I'll think it over again...
> 
> > 
> > 
> > > +		 * the host to update the default value (SDM 4.1.4).
> > > +		 */
> > > +		if (entry)
> > > +			guest_pa_bits = entry->eax & 0xff;
> > 
> > Are we sure > 39 bits will work with this test framework? I can't
> > recall what led me to the FIXME above, other than things not working.
> > It seems I was convinced we couldn't have more bits due to how pml4's
> > were allocated, but maybe I misinterpreted it.
> 
> As mentioned in the IRC - I think I've got a "success case" of
> that... :)  Please see below:
> 
> virtlab423:~ $ lscpu
> Architecture:        x86_64
> CPU op-mode(s):      32-bit, 64-bit
> Byte Order:          Little Endian
> CPU(s):              16
> On-line CPU(s) list: 0-15
> Thread(s) per core:  1
> Core(s) per socket:  8
> Socket(s):           2
> NUMA node(s):        2
> Vendor ID:           GenuineIntel
> CPU family:          6
> Model:               63
> Model name:          Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz
> Stepping:            2
> CPU MHz:             2597.168
> BogoMIPS:            5194.31
> Virtualization:      VT-x
> L1d cache:           32K
> L1i cache:           32K
> L2 cache:            256K
> L3 cache:            20480K
> NUMA node0 CPU(s):   0,2,4,6,8,10,12,14
> NUMA node1 CPU(s):   1,3,5,7,9,11,13,15
> Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm arat pln pts
> virtlab423:~ $ ./dirty_log_test 
> Test iterations: 32, interval: 10 (ms)
> Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
> Supported guest physical address width: 46
> guest physical test memory offset: 0x3fffbffef000
> Dirtied 216064 pages
> Total bits checked: dirty (204841), clear (7922119), track_next (60730)
> 
> So on the E5-2640 above I got PA width==46 and it worked well.  Does
> this mean that 39 bits is not really a PA restriction anywhere?  That
> also matches the fact that if we look into virt_pg_map() it's indeed
> allocating PML4 entries as needed rather than having only one.
>

Yup, that looks good to me.

Thanks,
drew 

^ permalink raw reply	[flat|nested] 10+ messages in thread

