* [PATCH v2] xen/spinlock: Don't use pvqspinlock if only 1 vCPU
@ 2018-07-19 21:39 Waiman Long
  2018-07-19 21:54 ` Davidlohr Bueso
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Waiman Long @ 2018-07-19 21:39 UTC (permalink / raw)
  To: Boris Ostrovsky, Juergen Gross, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin
  Cc: x86, xen-devel, linux-kernel, Konrad Rzeszutek Wilk, Waiman Long

On a VM with only 1 vCPU, the locking fast paths will always be
successful. In this case, there is no need to use the the PV qspinlock
code which has higher overhead on the unlock side than the native
qspinlock code.

The xen_pvspin veriable is also turned off in this 1 vCPU case to
eliminate unneeded pvqspinlock initialization in xen_init_lock_cpu()
which is run after xen_init_spinlocks().

Signed-off-by: Waiman Long <longman@redhat.com>
---
 arch/x86/xen/spinlock.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index cd97a62..973f10e 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -130,6 +130,10 @@ void xen_uninit_lock_cpu(int cpu)
 void __init xen_init_spinlocks(void)
 {

+	/* Don't need to use pvqspinlock code if there is only 1 vCPU. */
+	if (num_possible_cpus() == 1)
+		xen_pvspin = false;
+
 	if (!xen_pvspin) {
 		printk(KERN_DEBUG "xen: PV spinlocks disabled\n");
 		return;
--
1.8.3.1

^ permalink raw reply related	[flat|nested] 10+ messages in thread
* Re: [PATCH v2] xen/spinlock: Don't use pvqspinlock if only 1 vCPU
  2018-07-19 21:39 [PATCH v2] xen/spinlock: Don't use pvqspinlock if only 1 vCPU Waiman Long
@ 2018-07-19 21:54 ` Davidlohr Bueso
  2018-07-19 22:02   ` Waiman Long
  2018-07-19 22:35 ` Boris Ostrovsky
  2018-07-31 17:05 ` Boris Ostrovsky
  2 siblings, 1 reply; 10+ messages in thread
From: Davidlohr Bueso @ 2018-07-19 21:54 UTC (permalink / raw)
  To: Waiman Long
  Cc: Boris Ostrovsky, Juergen Gross, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, x86, xen-devel, linux-kernel, Konrad Rzeszutek Wilk

On Thu, 19 Jul 2018, Waiman Long wrote:

>On a VM with only 1 vCPU, the locking fast paths will always be
>successful. In this case, there is no need to use the the PV qspinlock
>code which has higher overhead on the unlock side than the native
>qspinlock code.
>
>The xen_pvspin veriable is also turned off in this 1 vCPU case to
>eliminate unneeded pvqspinlock initialization in xen_init_lock_cpu()
>which is run after xen_init_spinlocks().

Wouldn't kvm also want this?

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index a37bda38d205..95aceb692010 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -457,7 +457,8 @@ static void __init sev_map_percpu_data(void)
 static void __init kvm_smp_prepare_cpus(unsigned int max_cpus)
 {
 	native_smp_prepare_cpus(max_cpus);
-	if (kvm_para_has_hint(KVM_HINTS_REALTIME))
+	if (num_possible_cpus() == 1 ||
+	    kvm_para_has_hint(KVM_HINTS_REALTIME))
 		static_branch_disable(&virt_spin_lock_key);
 }

^ permalink raw reply related	[flat|nested] 10+ messages in thread
* Re: [PATCH v2] xen/spinlock: Don't use pvqspinlock if only 1 vCPU
  2018-07-19 21:54 ` Davidlohr Bueso
@ 2018-07-19 22:02   ` Waiman Long
  2018-07-23  3:31     ` Wanpeng Li
  0 siblings, 1 reply; 10+ messages in thread
From: Waiman Long @ 2018-07-19 22:02 UTC (permalink / raw)
  To: Davidlohr Bueso
  Cc: Boris Ostrovsky, Juergen Gross, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, x86, xen-devel, linux-kernel, Konrad Rzeszutek Wilk

On 07/19/2018 05:54 PM, Davidlohr Bueso wrote:
> On Thu, 19 Jul 2018, Waiman Long wrote:
>
>> On a VM with only 1 vCPU, the locking fast paths will always be
>> successful. In this case, there is no need to use the the PV qspinlock
>> code which has higher overhead on the unlock side than the native
>> qspinlock code.
>>
>> The xen_pvspin veriable is also turned off in this 1 vCPU case to
>> eliminate unneeded pvqspinlock initialization in xen_init_lock_cpu()
>> which is run after xen_init_spinlocks().
>
> Wouldn't kvm also want this?
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index a37bda38d205..95aceb692010 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -457,7 +457,8 @@ static void __init sev_map_percpu_data(void)
>  static void __init kvm_smp_prepare_cpus(unsigned int max_cpus)
>  {
>  	native_smp_prepare_cpus(max_cpus);
> -	if (kvm_para_has_hint(KVM_HINTS_REALTIME))
> +	if (num_possible_cpus() == 1 ||
> +	    kvm_para_has_hint(KVM_HINTS_REALTIME))
>  		static_branch_disable(&virt_spin_lock_key);
>  }

That doesn't really matter as the slowpath will never get executed in
the 1 vCPU case.

-Longman

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: [PATCH v2] xen/spinlock: Don't use pvqspinlock if only 1 vCPU
  2018-07-19 22:02   ` Waiman Long
@ 2018-07-23  3:31     ` Wanpeng Li
  2018-07-23  4:42       ` Davidlohr Bueso
  2018-07-23 13:33       ` Waiman Long
  0 siblings, 2 replies; 10+ messages in thread
From: Wanpeng Li @ 2018-07-23  3:31 UTC (permalink / raw)
  To: Waiman Long, Paolo Bonzini, Radim Krcmar
  Cc: Davidlohr Bueso, Boris Ostrovsky, Juergen Gross, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, the arch/x86 maintainers, xen-devel,
	LKML, Konrad Rzeszutek Wilk

On Fri, 20 Jul 2018 at 06:03, Waiman Long <longman@redhat.com> wrote:
>
> On 07/19/2018 05:54 PM, Davidlohr Bueso wrote:
> > On Thu, 19 Jul 2018, Waiman Long wrote:
> >
> >> On a VM with only 1 vCPU, the locking fast paths will always be
> >> successful. In this case, there is no need to use the the PV qspinlock
> >> code which has higher overhead on the unlock side than the native
> >> qspinlock code.
> >>
> >> The xen_pvspin veriable is also turned off in this 1 vCPU case to
> >> eliminate unneeded pvqspinlock initialization in xen_init_lock_cpu()
> >> which is run after xen_init_spinlocks().
> >
> > Wouldn't kvm also want this?
> >
> > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> > index a37bda38d205..95aceb692010 100644
> > --- a/arch/x86/kernel/kvm.c
> > +++ b/arch/x86/kernel/kvm.c
> > @@ -457,7 +457,8 @@ static void __init sev_map_percpu_data(void)
> >  static void __init kvm_smp_prepare_cpus(unsigned int max_cpus)
> >  {
> >  	native_smp_prepare_cpus(max_cpus);
> > -	if (kvm_para_has_hint(KVM_HINTS_REALTIME))
> > +	if (num_possible_cpus() == 1 ||
> > +	    kvm_para_has_hint(KVM_HINTS_REALTIME))
> >  		static_branch_disable(&virt_spin_lock_key);
> >  }
>
> That doesn't really matter as the slowpath will never get executed in
> the 1 vCPU case.

So this is not needed in kvm tree?
https://git.kernel.org/pub/scm/virt/kvm/kvm.git/commit/?h=queue&id=3a792199004ec335346cc607d62600a399a7ee02

Regards,
Wanpeng Li

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: [PATCH v2] xen/spinlock: Don't use pvqspinlock if only 1 vCPU
  2018-07-23  3:31     ` Wanpeng Li
@ 2018-07-23  4:42       ` Davidlohr Bueso
  2018-07-23  4:50         ` Davidlohr Bueso
  2018-07-23 13:52         ` Waiman Long
  1 sibling, 2 replies; 10+ messages in thread
From: Davidlohr Bueso @ 2018-07-23  4:42 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: Waiman Long, Paolo Bonzini, Radim Krcmar, Boris Ostrovsky,
	Juergen Gross, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	the arch/x86 maintainers, xen-devel, LKML, Konrad Rzeszutek Wilk

On Mon, 23 Jul 2018, Wanpeng Li wrote:

>On Fri, 20 Jul 2018 at 06:03, Waiman Long <longman@redhat.com> wrote:
>>
>> On 07/19/2018 05:54 PM, Davidlohr Bueso wrote:
>> > On Thu, 19 Jul 2018, Waiman Long wrote:
>> >
>> >> On a VM with only 1 vCPU, the locking fast paths will always be
>> >> successful. In this case, there is no need to use the the PV qspinlock
>> >> code which has higher overhead on the unlock side than the native
>> >> qspinlock code.
>> >>
>> >> The xen_pvspin veriable is also turned off in this 1 vCPU case to

s/veriable
variable

>> >> eliminate unneeded pvqspinlock initialization in xen_init_lock_cpu()
>> >> which is run after xen_init_spinlocks().
>> >
>> > Wouldn't kvm also want this?
>> >
>> > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>> > index a37bda38d205..95aceb692010 100644
>> > --- a/arch/x86/kernel/kvm.c
>> > +++ b/arch/x86/kernel/kvm.c
>> > @@ -457,7 +457,8 @@ static void __init sev_map_percpu_data(void)
>> >  static void __init kvm_smp_prepare_cpus(unsigned int max_cpus)
>> >  {
>> >  	native_smp_prepare_cpus(max_cpus);
>> > -	if (kvm_para_has_hint(KVM_HINTS_REALTIME))
>> > +	if (num_possible_cpus() == 1 ||
>> > +	    kvm_para_has_hint(KVM_HINTS_REALTIME))
>> >  		static_branch_disable(&virt_spin_lock_key);
>> >  }
>>
>> That doesn't really matter as the slowpath will never get executed in
>> the 1 vCPU case.

How does this differ from xen, then? I mean, same principle applies.

>
>So this is not needed in kvm tree?
>https://git.kernel.org/pub/scm/virt/kvm/kvm.git/commit/?h=queue&id=3a792199004ec335346cc607d62600a399a7ee02

Hmm I would think that my patch would be more appropriate as it actually
does what the comment says.

Thanks,
Davidlohr

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: [PATCH v2] xen/spinlock: Don't use pvqspinlock if only 1 vCPU
  2018-07-23  4:42       ` Davidlohr Bueso
@ 2018-07-23  4:50         ` Davidlohr Bueso
  0 siblings, 0 replies; 10+ messages in thread
From: Davidlohr Bueso @ 2018-07-23  4:50 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: Waiman Long, Paolo Bonzini, Radim Krcmar, Boris Ostrovsky,
	Juergen Gross, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	the arch/x86 maintainers, xen-devel, LKML, Konrad Rzeszutek Wilk

On Sun, 22 Jul 2018, Davidlohr Bueso wrote:

>On Mon, 23 Jul 2018, Wanpeng Li wrote:
>
>>On Fri, 20 Jul 2018 at 06:03, Waiman Long <longman@redhat.com> wrote:
>>>
>>>On 07/19/2018 05:54 PM, Davidlohr Bueso wrote:
>>>> On Thu, 19 Jul 2018, Waiman Long wrote:
>>>>
>>>>> On a VM with only 1 vCPU, the locking fast paths will always be
>>>>> successful. In this case, there is no need to use the the PV qspinlock
>>>>> code which has higher overhead on the unlock side than the native
>>>>> qspinlock code.
>>>>>
>>>>> The xen_pvspin veriable is also turned off in this 1 vCPU case to
>
>s/veriable
>variable
>
>>>>> eliminate unneeded pvqspinlock initialization in xen_init_lock_cpu()
>>>>> which is run after xen_init_spinlocks().
>>>>
>>>> Wouldn't kvm also want this?
>>>>
>>>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>>>> index a37bda38d205..95aceb692010 100644
>>>> --- a/arch/x86/kernel/kvm.c
>>>> +++ b/arch/x86/kernel/kvm.c
>>>> @@ -457,7 +457,8 @@ static void __init sev_map_percpu_data(void)
>>>>  static void __init kvm_smp_prepare_cpus(unsigned int max_cpus)
>>>>  {
>>>>  	native_smp_prepare_cpus(max_cpus);
>>>> -	if (kvm_para_has_hint(KVM_HINTS_REALTIME))
>>>> +	if (num_possible_cpus() == 1 ||
>>>> +	    kvm_para_has_hint(KVM_HINTS_REALTIME))
>>>>  		static_branch_disable(&virt_spin_lock_key);
>>>>  }
>>>
>>>That doesn't really matter as the slowpath will never get executed in
>>>the 1 vCPU case.
>
>How does this differ from xen, then? I mean, same principle applies.
>
>>
>>So this is not needed in kvm tree?
>>https://git.kernel.org/pub/scm/virt/kvm/kvm.git/commit/?h=queue&id=3a792199004ec335346cc607d62600a399a7ee02
>
>Hmm I would think that my patch would be more appropriate as it actually
>does what the comment says.

Both would be needed actually yes, but also disabling the
virt_spin_lock_key would be more robust imo.

Thanks,
Davidlohr

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: [PATCH v2] xen/spinlock: Don't use pvqspinlock if only 1 vCPU
  2018-07-23  4:42       ` Davidlohr Bueso
  2018-07-23  4:50         ` Davidlohr Bueso
@ 2018-07-23 13:52         ` Waiman Long
  1 sibling, 0 replies; 10+ messages in thread
From: Waiman Long @ 2018-07-23 13:52 UTC (permalink / raw)
  To: Davidlohr Bueso, Wanpeng Li
  Cc: Paolo Bonzini, Radim Krcmar, Boris Ostrovsky, Juergen Gross,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	the arch/x86 maintainers, xen-devel, LKML, Konrad Rzeszutek Wilk

On 07/23/2018 12:42 AM, Davidlohr Bueso wrote:
> On Mon, 23 Jul 2018, Wanpeng Li wrote:
>
>> On Fri, 20 Jul 2018 at 06:03, Waiman Long <longman@redhat.com> wrote:
>>>
>>> On 07/19/2018 05:54 PM, Davidlohr Bueso wrote:
>>> > On Thu, 19 Jul 2018, Waiman Long wrote:
>>> >
>>> >> On a VM with only 1 vCPU, the locking fast paths will always be
>>> >> successful. In this case, there is no need to use the the PV qspinlock
>>> >> code which has higher overhead on the unlock side than the native
>>> >> qspinlock code.
>>> >>
>>> >> The xen_pvspin veriable is also turned off in this 1 vCPU case to
>
> s/veriable
> variable
>
>>> >> eliminate unneeded pvqspinlock initialization in xen_init_lock_cpu()
>>> >> which is run after xen_init_spinlocks().
>>> >
>>> > Wouldn't kvm also want this?
>>> >
>>> > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>>> > index a37bda38d205..95aceb692010 100644
>>> > --- a/arch/x86/kernel/kvm.c
>>> > +++ b/arch/x86/kernel/kvm.c
>>> > @@ -457,7 +457,8 @@ static void __init sev_map_percpu_data(void)
>>> >  static void __init kvm_smp_prepare_cpus(unsigned int max_cpus)
>>> >  {
>>> >  	native_smp_prepare_cpus(max_cpus);
>>> > -	if (kvm_para_has_hint(KVM_HINTS_REALTIME))
>>> > +	if (num_possible_cpus() == 1 ||
>>> > +	    kvm_para_has_hint(KVM_HINTS_REALTIME))
>>> >  		static_branch_disable(&virt_spin_lock_key);
>>> >  }
>>>
>>> That doesn't really matter as the slowpath will never get executed in
>>> the 1 vCPU case.
>
> How does this differ from xen, then? I mean, same principle applies.

I am not saying this patch is wrong. I am just saying that it is not
necessary. In the xen case, they have a single variable that controls
if pvqspinlock should be used and turn off all the knobs accordingly.
There is no such equivalent in kvm. We had talked about that in the
past, but didn't come to a conclusion.

In the 1 vCPU case, the most important thing is to not use the
pvqspinlock unlock path, which adds unneeded runtime overhead. The
others just have a slight boot time overhead. For me, they are
optional. So I don't bother to add code to explicitly turn them off,
as the result will be the same with or without them.

>
>> So this is not needed in kvm tree?
>> https://git.kernel.org/pub/scm/virt/kvm/kvm.git/commit/?h=queue&id=3a792199004ec335346cc607d62600a399a7ee02
>
> Hmm I would think that my patch would be more appropriate as it actually
> does what the comment says.

The static key controls the behavior of the locking slowpath, which
will not be executed at all. So it is essentially a no-op.

-Longman

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: [PATCH v2] xen/spinlock: Don't use pvqspinlock if only 1 vCPU
  2018-07-23  3:31     ` Wanpeng Li
  2018-07-23  4:42       ` Davidlohr Bueso
@ 2018-07-23 13:33       ` Waiman Long
  1 sibling, 0 replies; 10+ messages in thread
From: Waiman Long @ 2018-07-23 13:33 UTC (permalink / raw)
  To: Wanpeng Li, Paolo Bonzini, Radim Krcmar
  Cc: Davidlohr Bueso, Boris Ostrovsky, Juergen Gross, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, the arch/x86 maintainers, xen-devel,
	LKML, Konrad Rzeszutek Wilk

On 07/22/2018 11:31 PM, Wanpeng Li wrote:
> On Fri, 20 Jul 2018 at 06:03, Waiman Long <longman@redhat.com> wrote:
>> On 07/19/2018 05:54 PM, Davidlohr Bueso wrote:
>>> On Thu, 19 Jul 2018, Waiman Long wrote:
>>>
>>>> On a VM with only 1 vCPU, the locking fast paths will always be
>>>> successful. In this case, there is no need to use the the PV qspinlock
>>>> code which has higher overhead on the unlock side than the native
>>>> qspinlock code.
>>>>
>>>> The xen_pvspin veriable is also turned off in this 1 vCPU case to
>>>> eliminate unneeded pvqspinlock initialization in xen_init_lock_cpu()
>>>> which is run after xen_init_spinlocks().
>>> Wouldn't kvm also want this?
>>>
>>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>>> index a37bda38d205..95aceb692010 100644
>>> --- a/arch/x86/kernel/kvm.c
>>> +++ b/arch/x86/kernel/kvm.c
>>> @@ -457,7 +457,8 @@ static void __init sev_map_percpu_data(void)
>>>  static void __init kvm_smp_prepare_cpus(unsigned int max_cpus)
>>>  {
>>>  	native_smp_prepare_cpus(max_cpus);
>>> -	if (kvm_para_has_hint(KVM_HINTS_REALTIME))
>>> +	if (num_possible_cpus() == 1 ||
>>> +	    kvm_para_has_hint(KVM_HINTS_REALTIME))
>>>  		static_branch_disable(&virt_spin_lock_key);
>>>  }
>> That doesn't really matter as the slowpath will never get executed in
>> the 1 vCPU case.
> So this is not needed in kvm tree?
> https://git.kernel.org/pub/scm/virt/kvm/kvm.git/commit/?h=queue&id=3a792199004ec335346cc607d62600a399a7ee02
>
> Regards,
> Wanpeng Li

With only 1 vCPU, the slowpath will not be executed. If it ever were,
that would be a deadlock. So we don't need to explicitly disable the
static key here.

-Longman

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: [PATCH v2] xen/spinlock: Don't use pvqspinlock if only 1 vCPU
  2018-07-19 21:39 [PATCH v2] xen/spinlock: Don't use pvqspinlock if only 1 vCPU Waiman Long
  2018-07-19 21:54 ` Davidlohr Bueso
@ 2018-07-19 22:35 ` Boris Ostrovsky
  2018-07-31 17:05 ` Boris Ostrovsky
  2 siblings, 0 replies; 10+ messages in thread
From: Boris Ostrovsky @ 2018-07-19 22:35 UTC (permalink / raw)
  To: Waiman Long, Juergen Gross, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin
  Cc: x86, xen-devel, linux-kernel, Konrad Rzeszutek Wilk

On 07/19/2018 05:39 PM, Waiman Long wrote:
> On a VM with only 1 vCPU, the locking fast paths will always be
> successful. In this case, there is no need to use the the PV qspinlock
> code which has higher overhead on the unlock side than the native
> qspinlock code.
>
> The xen_pvspin veriable is also turned off in this 1 vCPU case to
> eliminate unneeded pvqspinlock initialization in xen_init_lock_cpu()
> which is run after xen_init_spinlocks().
>
> Signed-off-by: Waiman Long <longman@redhat.com>

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

> ---
>  arch/x86/xen/spinlock.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
> index cd97a62..973f10e 100644
> --- a/arch/x86/xen/spinlock.c
> +++ b/arch/x86/xen/spinlock.c
> @@ -130,6 +130,10 @@ void xen_uninit_lock_cpu(int cpu)
>  void __init xen_init_spinlocks(void)
>  {
>
> +	/* Don't need to use pvqspinlock code if there is only 1 vCPU. */
> +	if (num_possible_cpus() == 1)
> +		xen_pvspin = false;
> +
>  	if (!xen_pvspin) {
>  		printk(KERN_DEBUG "xen: PV spinlocks disabled\n");
>  		return;

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: [PATCH v2] xen/spinlock: Don't use pvqspinlock if only 1 vCPU
  2018-07-19 21:39 [PATCH v2] xen/spinlock: Don't use pvqspinlock if only 1 vCPU Waiman Long
  2018-07-19 21:54 ` Davidlohr Bueso
  2018-07-19 22:35 ` Boris Ostrovsky
@ 2018-07-31 17:05 ` Boris Ostrovsky
  2 siblings, 0 replies; 10+ messages in thread
From: Boris Ostrovsky @ 2018-07-31 17:05 UTC (permalink / raw)
  To: Waiman Long, Juergen Gross, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin
  Cc: x86, xen-devel, linux-kernel, Konrad Rzeszutek Wilk

On 07/19/2018 05:39 PM, Waiman Long wrote:
> On a VM with only 1 vCPU, the locking fast paths will always be
> successful. In this case, there is no need to use the the PV qspinlock
> code which has higher overhead on the unlock side than the native
> qspinlock code.
>
> The xen_pvspin veriable is also turned off in this 1 vCPU case to
> eliminate unneeded pvqspinlock initialization in xen_init_lock_cpu()
> which is run after xen_init_spinlocks().
>
> Signed-off-by: Waiman Long <longman@redhat.com>

Applied to for-linus-4.19.

-boris

^ permalink raw reply	[flat|nested] 10+ messages in thread
end of thread, other threads:[~2018-07-31 17:05 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-07-19 21:39 [PATCH v2] xen/spinlock: Don't use pvqspinlock if only 1 vCPU Waiman Long
2018-07-19 21:54 ` Davidlohr Bueso
2018-07-19 22:02   ` Waiman Long
2018-07-23  3:31     ` Wanpeng Li
2018-07-23  4:42       ` Davidlohr Bueso
2018-07-23  4:50         ` Davidlohr Bueso
2018-07-23 13:52         ` Waiman Long
2018-07-23 13:33       ` Waiman Long
2018-07-19 22:35 ` Boris Ostrovsky
2018-07-31 17:05 ` Boris Ostrovsky