kvm.vger.kernel.org archive mirror
* kvm/queue demand paging test and s390
@ 2020-03-10 16:54 Christian Borntraeger
  2020-03-10 16:59 ` David Hildenbrand
  2020-03-10 17:27 ` Andrew Jones
  0 siblings, 2 replies; 7+ messages in thread
From: Christian Borntraeger @ 2020-03-10 16:54 UTC (permalink / raw)
  To: Ben Gardon
  Cc: kvm list, Paolo Bonzini, Andrew Jones, Janosch Frank,
	David Hildenbrand, Cornelia Huck

For s390 the guest memory size must be 1M aligned. I need something like the following to make this work:

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index c1e326d3ed7f..f85ec3f01a35 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -164,6 +164,10 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus,
        pages += ((2 * vcpus * vcpu_memory_bytes) >> PAGE_SHIFT_4K) /
                 PTES_PER_4K_PT;
        pages = vm_adjust_num_guest_pages(mode, pages);
+#ifdef __s390x__
+       /* s390 requires 1M aligned guest sizes */
+       pages = (pages + 255) & ~0xff;
+#endif
 
        pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
 

any better idea how to do that?
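
For context, with the 4 KiB guest page size used here a 1 MiB segment is
256 pages, which is where the 255 / ~0xff pair comes from. A minimal
sketch of the same arithmetic:

 	/* 1 MiB / 4 KiB = 256 pages per segment, so round up to a multiple of 256 */
 	pages = (pages + 256 - 1) & ~(256 - 1);	/* identical to (pages + 255) & ~0xff */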


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: kvm/queue demand paging test and s390
  2020-03-10 16:54 kvm/queue demand paging test and s390 Christian Borntraeger
@ 2020-03-10 16:59 ` David Hildenbrand
  2020-03-10 17:27 ` Andrew Jones
  1 sibling, 0 replies; 7+ messages in thread
From: David Hildenbrand @ 2020-03-10 16:59 UTC (permalink / raw)
  To: Christian Borntraeger, Ben Gardon
  Cc: kvm list, Paolo Bonzini, Andrew Jones, Janosch Frank, Cornelia Huck

On 10.03.20 17:54, Christian Borntraeger wrote:
> For s390 the guest memory size must be 1M aligned. I need something like the following to make this work:
> 
> diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
> index c1e326d3ed7f..f85ec3f01a35 100644
> --- a/tools/testing/selftests/kvm/demand_paging_test.c
> +++ b/tools/testing/selftests/kvm/demand_paging_test.c
> @@ -164,6 +164,10 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus,
>         pages += ((2 * vcpus * vcpu_memory_bytes) >> PAGE_SHIFT_4K) /
>                  PTES_PER_4K_PT;
>         pages = vm_adjust_num_guest_pages(mode, pages);
> +#ifdef __s390x__
> +       /* s390 requires 1M aligned guest sizes */
> +       pages = (pages + 255) & ~0xff;
> +#endif
>  
>         pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
>  
> 
> any better idea how to do that?
> 

Without an ALIGN_UP() and/or a PAGES_PER_SEGMENT helper, this won't get
any nicer, I guess.
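
For illustration, with helpers along those lines (ALIGN_UP and
PAGES_PER_SEGMENT are hypothetical names here, not existing selftests
definitions) the hunk could read:

 #define PAGES_PER_SEGMENT	(0x100000 / 4096)		/* 256 4 KiB pages per 1 MiB segment */
 #define ALIGN_UP(x, a)		(((x) + (a) - 1) & ~((a) - 1))	/* 'a' must be a power of two */

 #ifdef __s390x__
 	/* s390 requires 1M aligned guest sizes */
 	pages = ALIGN_UP(pages, PAGES_PER_SEGMENT);
 #endif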

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: kvm/queue demand paging test and s390
  2020-03-10 16:54 kvm/queue demand paging test and s390 Christian Borntraeger
  2020-03-10 16:59 ` David Hildenbrand
@ 2020-03-10 17:27 ` Andrew Jones
  2020-03-10 20:18   ` Christian Borntraeger
  1 sibling, 1 reply; 7+ messages in thread
From: Andrew Jones @ 2020-03-10 17:27 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Ben Gardon, kvm list, Paolo Bonzini, Janosch Frank,
	David Hildenbrand, Cornelia Huck

On Tue, Mar 10, 2020 at 05:54:59PM +0100, Christian Borntraeger wrote:
> For s390 the guest memory size must be 1M aligned. I need something like the following to make this work:
> 
> diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
> index c1e326d3ed7f..f85ec3f01a35 100644
> --- a/tools/testing/selftests/kvm/demand_paging_test.c
> +++ b/tools/testing/selftests/kvm/demand_paging_test.c
> @@ -164,6 +164,10 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus,
>         pages += ((2 * vcpus * vcpu_memory_bytes) >> PAGE_SHIFT_4K) /
>                  PTES_PER_4K_PT;
>         pages = vm_adjust_num_guest_pages(mode, pages);
> +#ifdef __s390x__
> +       /* s390 requires 1M aligned guest sizes */
> +       pages = (pages + 255) & ~0xff;
> +#endif
>  
>         pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
>  
> 
> any better idea how to do that?
>

For this one we could patch[*] vm_adjust_num_guest_pages(). That would
also allow the one on line 382, and another one at dirty_log_test.c:300
to be hidden.

I'd also like to add a

 unsigned int vm_calc_num_guest_pages(enum vm_guest_mode mode, size_t size)
 {
      unsigned int n;
      n = DIV_ROUND_UP(size, vm_guest_mode_params[mode].page_size);
      return vm_adjust_num_guest_pages(mode, n);
 }

to the num-pages API we've recently put in kvm/queue. If we do, then we
could add an #ifdef __s390x__ to that as well.
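
The appeal is that callers could then size a region in bytes and get back
an already-adjusted (and, with the s390 #ifdef, already 1M-rounded) page
count. A rough sketch of a caller, with identifiers borrowed from the
existing dirty_log/demand_paging tests, only to show the intent:

 	guest_num_pages = vm_calc_num_guest_pages(mode, vcpus * vcpu_memory_bytes);
 	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, guest_test_phys_mem,
 				    TEST_MEM_SLOT_INDEX, guest_num_pages, 0);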

Thanks,
drew


[*]

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index fc84da4b72d4..9569b21eed26 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -261,7 +261,13 @@ unsigned int vm_num_guest_pages(enum vm_guest_mode mode, unsigned int num_host_p
 static inline unsigned int
 vm_adjust_num_guest_pages(enum vm_guest_mode mode, unsigned int num_guest_pages)
 {
-       return vm_num_guest_pages(mode, vm_num_host_pages(mode, num_guest_pages));
+       unsigned int n;
+       n = vm_num_guest_pages(mode, vm_num_host_pages(mode, num_guest_pages));
+#ifdef __s390x__
+       /* s390 requires 1M aligned guest sizes */
+       n = (n + 255) & ~0xff;
+#endif
+       return n;
 }
 
 struct kvm_userspace_memory_region *


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: kvm/queue demand paging test and s390
  2020-03-10 17:27 ` Andrew Jones
@ 2020-03-10 20:18   ` Christian Borntraeger
  2020-03-11  6:26     ` Andrew Jones
  0 siblings, 1 reply; 7+ messages in thread
From: Christian Borntraeger @ 2020-03-10 20:18 UTC (permalink / raw)
  To: Andrew Jones
  Cc: Ben Gardon, kvm list, Paolo Bonzini, Janosch Frank,
	David Hildenbrand, Cornelia Huck



On 10.03.20 18:27, Andrew Jones wrote:
> On Tue, Mar 10, 2020 at 05:54:59PM +0100, Christian Borntraeger wrote:
>> For s390 the guest memory size must be 1M aligned. I need something like the following to make this work:
>>
>> diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
>> index c1e326d3ed7f..f85ec3f01a35 100644
>> --- a/tools/testing/selftests/kvm/demand_paging_test.c
>> +++ b/tools/testing/selftests/kvm/demand_paging_test.c
>> @@ -164,6 +164,10 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus,
>>         pages += ((2 * vcpus * vcpu_memory_bytes) >> PAGE_SHIFT_4K) /
>>                  PTES_PER_4K_PT;
>>         pages = vm_adjust_num_guest_pages(mode, pages);
>> +#ifdef __s390x__
>> +       /* s390 requires 1M aligned guest sizes */
>> +       pages = (pages + 255) & ~0xff;
>> +#endif
>>  
>>         pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
>>  
>>
>> any better idea how to do that?
>>
> 
> For this one we could patch[*] vm_adjust_num_guest_pages(). That would
> also allow the one on line 382, and another one at dirty_log_test.c:300
> to be hidden.

I tried that first but then I ran into several other asserts that checked for
num_pages = vm_adjust_num_guest_pages(num_pages)

See kvm_util.c:     TEST_ASSERT(vm_adjust_num_guest_pages(vm->mode, npages) == npages

So it seems like a bigger rework is necessary to avoid this little hack :-/

> 
> I'd also like to add a
> 
>  unsigned int vm_calc_num_guest_pages(enum vm_guest_mode mode, size_t size)
>  {
>       unsigned int n;
>       n = DIV_ROUND_UP(size, vm_guest_mode_params[mode].page_size);
>       return vm_adjust_num_guest_pages(mode, n);
>  }
> 
> to the num-pages API we've recently put in kvm/queue. If we do, then we
> could add an #ifdef __s390x__ to that as well.
> 
> Thanks,
> drew
> 
> 
> [*]
> 
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index fc84da4b72d4..9569b21eed26 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -261,7 +261,13 @@ unsigned int vm_num_guest_pages(enum vm_guest_mode mode, unsigned int num_host_p
>  static inline unsigned int
>  vm_adjust_num_guest_pages(enum vm_guest_mode mode, unsigned int num_guest_pages)
>  {
> -       return vm_num_guest_pages(mode, vm_num_host_pages(mode, num_guest_pages));
> +       unsigned int n;
> +       n = vm_num_guest_pages(mode, vm_num_host_pages(mode, num_guest_pages));
> +#ifdef __s390x__
> +       /* s390 requires 1M aligned guest sizes */
> +       n = (n + 255) & ~0xff;
> +#endif
> +       return n;
>  }
>  
>  struct kvm_userspace_memory_region *
> 


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: kvm/queue demand paging test and s390
  2020-03-10 20:18   ` Christian Borntraeger
@ 2020-03-11  6:26     ` Andrew Jones
  2020-03-12  8:00       ` Christian Borntraeger
  0 siblings, 1 reply; 7+ messages in thread
From: Andrew Jones @ 2020-03-11  6:26 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Ben Gardon, kvm list, Paolo Bonzini, Janosch Frank,
	David Hildenbrand, Cornelia Huck

On Tue, Mar 10, 2020 at 09:18:16PM +0100, Christian Borntraeger wrote:
> 
> 
> On 10.03.20 18:27, Andrew Jones wrote:
> > On Tue, Mar 10, 2020 at 05:54:59PM +0100, Christian Borntraeger wrote:
> >> For s390 the guest memory size must be 1M aligned. I need something like the following to make this work:
> >>
> >> diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
> >> index c1e326d3ed7f..f85ec3f01a35 100644
> >> --- a/tools/testing/selftests/kvm/demand_paging_test.c
> >> +++ b/tools/testing/selftests/kvm/demand_paging_test.c
> >> @@ -164,6 +164,10 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus,
> >>         pages += ((2 * vcpus * vcpu_memory_bytes) >> PAGE_SHIFT_4K) /
> >>                  PTES_PER_4K_PT;
> >>         pages = vm_adjust_num_guest_pages(mode, pages);
> >> +#ifdef __s390x__
> >> +       /* s390 requires 1M aligned guest sizes */
> >> +       pages = (pages + 255) & ~0xff;
> >> +#endif
> >>  
> >>         pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
> >>  
> >>
> >> any better idea how to do that?
> >>
> > 
> > For this one we could patch[*] vm_adjust_num_guest_pages(). That would
> > also allow the one on line 382, and another one at dirty_log_test.c:300
> > to be hidden.
> 
> I tried that first but then I ran into several other asserts that checked for
> num_pages = vm_adjust_num_guest_pages(num_pages)
> 
> See kvm_util.c:     TEST_ASSERT(vm_adjust_num_guest_pages(vm->mode, npages) == npages
> 
> So it seems like a bigger rework is necessary to avoid this little hack :-/

There's just this one other assert, and it'll only fire if the number of
guest pages isn't selected correctly. One just has to make sure the
number is always selected correctly, or do

 adjusted_num_pages = vm_adjust_num_guest_pages(mode, guessed_num_pages);
 vm_userspace_mem_region_add(..., adjusted_num_pages, ...);

to ensure it. If we patch vm_adjust_num_guest_pages() as suggested below
then the assert should never fire when the number is already correct,
because vm_adjust_num_guest_pages() doesn't change an already correct
number, i.e.

 adjusted_num_pages == vm_adjust_num_guest_pages(mode, adjusted_num_pages)

If an assert is firing after making that change, then I wonder if not
all s390 memregions are 1M aligned?
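
A quick way to see why the kvm_util.c assert stays quiet once the helper
is patched: rounding up to a 256-page multiple is idempotent, so a second
adjustment is a no-op (illustrative check only, not code to merge):

 	unsigned int guessed = 1000;	/* not a multiple of 256 */
 	unsigned int adjusted = vm_adjust_num_guest_pages(mode, guessed);	/* 1024 on s390 */

 	/* so TEST_ASSERT(vm_adjust_num_guest_pages(mode, npages) == npages) holds */
 	assert(vm_adjust_num_guest_pages(mode, adjusted) == adjusted);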

Thanks,
drew

> > diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> > index fc84da4b72d4..9569b21eed26 100644
> > --- a/tools/testing/selftests/kvm/include/kvm_util.h
> > +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> > @@ -261,7 +261,13 @@ unsigned int vm_num_guest_pages(enum vm_guest_mode mode, unsigned int num_host_p
> >  static inline unsigned int
> >  vm_adjust_num_guest_pages(enum vm_guest_mode mode, unsigned int num_guest_pages)
> >  {
> > -       return vm_num_guest_pages(mode, vm_num_host_pages(mode, num_guest_pages));
> > +       unsigned int n;
> > +       n = vm_num_guest_pages(mode, vm_num_host_pages(mode, num_guest_pages));
> > +#ifdef __s390x__
> > +       /* s390 requires 1M aligned guest sizes */
> > +       n = (n + 255) & ~0xff;
> > +#endif
> > +       return n;
> >  }
> >  
> >  struct kvm_userspace_memory_region *
> > 
> 


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: kvm/queue demand paging test and s390
  2020-03-11  6:26     ` Andrew Jones
@ 2020-03-12  8:00       ` Christian Borntraeger
  2020-03-12  8:08         ` Christian Borntraeger
  0 siblings, 1 reply; 7+ messages in thread
From: Christian Borntraeger @ 2020-03-12  8:00 UTC (permalink / raw)
  To: Andrew Jones
  Cc: Ben Gardon, kvm list, Paolo Bonzini, Janosch Frank,
	David Hildenbrand, Cornelia Huck



On 11.03.20 07:26, Andrew Jones wrote:
> On Tue, Mar 10, 2020 at 09:18:16PM +0100, Christian Borntraeger wrote:
>>
>>
>> On 10.03.20 18:27, Andrew Jones wrote:
>>> On Tue, Mar 10, 2020 at 05:54:59PM +0100, Christian Borntraeger wrote:
>>>> For s390 the guest memory size must be 1M aligned. I need something like the following to make this work:
>>>>
>>>> diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
>>>> index c1e326d3ed7f..f85ec3f01a35 100644
>>>> --- a/tools/testing/selftests/kvm/demand_paging_test.c
>>>> +++ b/tools/testing/selftests/kvm/demand_paging_test.c
>>>> @@ -164,6 +164,10 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus,
>>>>         pages += ((2 * vcpus * vcpu_memory_bytes) >> PAGE_SHIFT_4K) /
>>>>                  PTES_PER_4K_PT;
>>>>         pages = vm_adjust_num_guest_pages(mode, pages);
>>>> +#ifdef __s390x__
>>>> +       /* s390 requires 1M aligned guest sizes */
>>>> +       pages = (pages + 255) & ~0xff;
>>>> +#endif
>>>>  
>>>>         pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
>>>>  
>>>>
>>>> any better idea how to do that?
>>>>
>>>
>>> For this one we could patch[*] vm_adjust_num_guest_pages(). That would
>>> also allow the one on line 382, and another one at dirty_log_test.c:300
>>> to be hidden.
>>
>> I tried that first but then I ran into several other asserts that checked for
>> num_pages = vm_adjust_num_guest_pages(num_pages)
>>
>> See kvm_util.c:     TEST_ASSERT(vm_adjust_num_guest_pages(vm->mode, npages) == npages
>>
>> So it seems like a bigger rework is necessary to avoid this little hack :-/
> 
> There's just this one other assert, and it'll only fire if the number of
> guest pages isn't selected correctly. One just has to make sure the
> number is always selected correctly, or do
> 
>  adjusted_num_pages = vm_adjust_num_guest_pages(mode, guessed_num_pages);
>  vm_userspace_mem_region_add(..., adjusted_num_pages, ...);
> 
> to ensure it. If we patch vm_adjust_num_guest_pages() as suggested below
> then the assert should never fire when the number is already correct,
> because vm_adjust_num_guest_pages() doesn't change an already correct
> number, i.e.
> 
>  adjusted_num_pages == vm_adjust_num_guest_pages(mode, adjusted_num_pages)
> 
> If an assert is firing after making that change, then I wonder if not
> all s390 memregions are 1M aligned?

I just checked your patch and it seems to work fine. No idea what I did wrong
in my test. Can you respin this as a proper patch against kvm/queue?


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: kvm/queue demand paging test and s390
  2020-03-12  8:00       ` Christian Borntraeger
@ 2020-03-12  8:08         ` Christian Borntraeger
  0 siblings, 0 replies; 7+ messages in thread
From: Christian Borntraeger @ 2020-03-12  8:08 UTC (permalink / raw)
  To: Andrew Jones
  Cc: Ben Gardon, kvm list, Paolo Bonzini, Janosch Frank,
	David Hildenbrand, Cornelia Huck



On 12.03.20 09:00, Christian Borntraeger wrote:
> 
> 
> On 11.03.20 07:26, Andrew Jones wrote:
>> On Tue, Mar 10, 2020 at 09:18:16PM +0100, Christian Borntraeger wrote:
>>>
>>>
>>> On 10.03.20 18:27, Andrew Jones wrote:
>>>> On Tue, Mar 10, 2020 at 05:54:59PM +0100, Christian Borntraeger wrote:
>>>>> For s390 the guest memory size must be 1M aligned. I need something like the following to make this work:
>>>>>
>>>>> diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
>>>>> index c1e326d3ed7f..f85ec3f01a35 100644
>>>>> --- a/tools/testing/selftests/kvm/demand_paging_test.c
>>>>> +++ b/tools/testing/selftests/kvm/demand_paging_test.c
>>>>> @@ -164,6 +164,10 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus,
>>>>>         pages += ((2 * vcpus * vcpu_memory_bytes) >> PAGE_SHIFT_4K) /
>>>>>                  PTES_PER_4K_PT;
>>>>>         pages = vm_adjust_num_guest_pages(mode, pages);
>>>>> +#ifdef __s390x__
>>>>> +       /* s390 requires 1M aligned guest sizes */
>>>>> +       pages = (pages + 255) & ~0xff;
>>>>> +#endif
>>>>>  
>>>>>         pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
>>>>>  
>>>>>
>>>>> any better idea how to do that?
>>>>>
>>>>
>>>> For this one we could patch[*] vm_adjust_num_guest_pages(). That would
>>>> also allow the one on line 382, and another one at dirty_log_test.c:300
>>>> to be hidden.
>>>
>>> I tried that first but then I ran into several other asserts that checked for
>>> num_pages = vm_adjust_num_guest_pages(num_pages)
>>>
>>> See kvm_util.c:     TEST_ASSERT(vm_adjust_num_guest_pages(vm->mode, npages) == npages
>>>
>>> So it seems like a bigger rework is necessary to avoid this little hack :-/
>>
>> There's just this one other assert, and it'll only fire if the number of
>> guest pages isn't selected correctly. One just has to make sure the
>> number is always selected correctly, or do
>>
>>  adjusted_num_pages = vm_adjust_num_guest_pages(mode, guessed_num_pages);
>>  vm_userspace_mem_region_add(..., adjusted_num_pages, ...);
>>
>> to ensure it. If we patch vm_adjust_num_guest_pages() as suggested below
>> then the assert should never fire when the number is already correct,
>> because vm_adjust_num_guest_pages() doesn't change an already correct
>> number, i.e.
>>
>>  adjusted_num_pages == vm_adjust_num_guest_pages(mode, adjusted_num_pages)
>>
>> If an assert is firing after making that change, then I wonder if not
>> all s390 memregions are 1M aligned?
> 
> I just checked your patch and it seems to work fine. No idea what I did wrong
> in my test. Can you respin this as a proper patch against kvm/queue?

And yes, we can then remove the following:


diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index c1e326d3ed7f..ae086c5dc118 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -378,10 +378,6 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
        guest_num_pages = (vcpus * vcpu_memory_bytes) / guest_page_size;
        guest_num_pages = vm_adjust_num_guest_pages(mode, guest_num_pages);
 
-#ifdef __s390x__
-       /* Round up to multiple of 1M (segment size) */
-       guest_num_pages = (guest_num_pages + 0xff) & ~0xffUL;
-#endif
        /*
         * If there should be more memory in the guest test region than there
         * can be pages in the guest, it will definitely cause problems.
diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 518a94a7a8b5..aaa6ff361f52 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -296,10 +296,6 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
        guest_num_pages = (1ul << (DIRTY_MEM_BITS -
                                   vm_get_page_shift(vm))) + 3;
        guest_num_pages = vm_adjust_num_guest_pages(mode, guest_num_pages);
-#ifdef __s390x__
-       /* Round up to multiple of 1M (segment size) */
-       guest_num_pages = (guest_num_pages + 0xff) & ~0xffUL;
-#endif
        host_page_size = getpagesize();
        host_num_pages = vm_num_host_pages(mode, guest_num_pages);
 


^ permalink raw reply related	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2020-03-12  8:08 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-03-10 16:54 kvm/queue demand paging test and s390 Christian Borntraeger
2020-03-10 16:59 ` David Hildenbrand
2020-03-10 17:27 ` Andrew Jones
2020-03-10 20:18   ` Christian Borntraeger
2020-03-11  6:26     ` Andrew Jones
2020-03-12  8:00       ` Christian Borntraeger
2020-03-12  8:08         ` Christian Borntraeger
