* tick/sched: Make jiffies update quick check more robust
@ 2020-12-04 10:55 Thomas Gleixner
  2020-12-07  9:59 ` Peter Zijlstra
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Thomas Gleixner @ 2020-12-04 10:55 UTC (permalink / raw)
  To: LKML; +Cc: Frederic Weisbecker, Paul E. McKenney, Peter Zijlstra, Will Deacon

The quick check in tick_do_update_jiffies64() whether jiffies need to be
updated is not really correct under all circumstances and on all
architectures, especially not on 32bit systems.

The quick check does:

    if (now < READ_ONCE(tick_next_period))
    	return;

and the counterpart in the update is:

    WRITE_ONCE(tick_next_period, next_update_time);

This has two problems:

  1) On weakly ordered architectures there is no guarantee that the stores
     before the WRITE_ONCE() are visible which means that other CPUs can
     operate on a stale jiffies value.

  2) On 32bit the store of tick_next_period, which is a u64, is split into
     two 32bit stores. If the first 32bit store advances tick_next_period
     far out and the second 32bit store is delayed (virt, NMI ...) then
     jiffies will become stale until the second 32bit store happens.

Address this by separating the handling for 32bit and 64bit.

On 64bit problem #1 is addressed by replacing READ_ONCE() / WRITE_ONCE()
with smp_load_acquire() / smp_store_release().

On 32bit problem #2 is addressed by protecting the quick check with the
jiffies sequence counter. The load and store can be plain because the
sequence count mechanics already provide the required barriers.
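
For illustration only (not part of the patch, names simplified), problem #2
comes down to the compiler lowering the 64bit assignment into two word-sized
stores on a 32bit target, roughly:

    /*
     * Hypothetical lowering of "tick_next_period = nextp;" on a 32bit
     * little-endian target. The store order is picked for illustration,
     * the compiler is free to emit either half first.
     */
    u64 tick_next_period;		/* simplified stand-in for the ktime_t */

    static void tick_next_period_store_sketch(u64 nextp)
    {
    	u32 *w = (u32 *)&tick_next_period;

    	w[1] = (u32)(nextp >> 32);	/* first 32bit store: high word */
    	/*
    	 * If the CPU is delayed here (vCPU scheduled out, NMI, ...), other
    	 * CPUs can observe a torn value which is up to ~2^32 ns ahead, so
    	 * their quick check returns and jiffies stays stale until the
    	 * second store below lands.
    	 */
    	w[0] = (u32)nextp;		/* second 32bit store: low word */
    }

The jiffies sequence counter makes the 32bit quick check retry when it races
with such an update, so a torn value is never acted upon.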

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 Applies on tip timers/core
---
 kernel/time/tick-sched.c |   74 +++++++++++++++++++++++++++++------------------
 1 file changed, 47 insertions(+), 27 deletions(-)

--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -57,36 +57,42 @@ static ktime_t last_jiffies_update;
 static void tick_do_update_jiffies64(ktime_t now)
 {
 	unsigned long ticks = 1;
-	ktime_t delta;
+	ktime_t delta, nextp;
 
 	/*
-	 * Do a quick check without holding jiffies_lock. The READ_ONCE()
+	 * 64bit can do a quick check without holding jiffies lock and
+	 * without looking at the sequence count. The smp_load_acquire()
 	 * pairs with the update done later in this function.
 	 *
-	 * This is also an intentional data race which is even safe on
-	 * 32bit in theory. If there is a concurrent update then the check
-	 * might give a random answer. It does not matter because if it
-	 * returns then the concurrent update is already taking care, if it
-	 * falls through then it will pointlessly contend on jiffies_lock.
-	 *
-	 * Though there is one nasty case on 32bit due to store tearing of
-	 * the 64bit value. If the first 32bit store makes the quick check
-	 * return on all other CPUs and the writing CPU context gets
-	 * delayed to complete the second store (scheduled out on virt)
-	 * then jiffies can become stale for up to ~2^32 nanoseconds
-	 * without noticing. After that point all CPUs will wait for
-	 * jiffies lock.
-	 *
-	 * OTOH, this is not any different than the situation with NOHZ=off
-	 * where one CPU is responsible for updating jiffies and
-	 * timekeeping. If that CPU goes out for lunch then all other CPUs
-	 * will operate on stale jiffies until it decides to come back.
+	 * 32bit cannot do that because the store of tick_next_period
+	 * consists of two 32bit stores and the first store could move it
+	 * to a random point in the future.
 	 */
-	if (ktime_before(now, READ_ONCE(tick_next_period)))
-		return;
+	if (IS_ENABLED(CONFIG_64BIT)) {
+		if (ktime_before(now, smp_load_acquire(&tick_next_period)))
+			return;
+	} else {
+		unsigned int seq;
+
+		/*
+		 * Avoid contention on jiffies_lock and protect the quick
+		 * check with the sequence count.
+		 */
+		do {
+			seq = read_seqcount_begin(&jiffies_seq);
+			nextp = tick_next_period;
+		} while (read_seqcount_retry(&jiffies_seq, seq));
+
+		if (ktime_before(now, nextp))
+			return;
+	}
 
-	/* Reevaluate with jiffies_lock held */
+	/* Quick check failed, i.e. update is required. */
 	raw_spin_lock(&jiffies_lock);
+	/*
+	 * Reevaluate with the lock held. Another CPU might have done the
+	 * update already.
+	 */
 	if (ktime_before(now, tick_next_period)) {
 		raw_spin_unlock(&jiffies_lock);
 		return;
@@ -112,11 +118,25 @@ static void tick_do_update_jiffies64(kti
 	jiffies_64 += ticks;
 
 	/*
-	 * Keep the tick_next_period variable up to date.  WRITE_ONCE()
-	 * pairs with the READ_ONCE() in the lockless quick check above.
+	 * Keep the tick_next_period variable up to date.
 	 */
-	WRITE_ONCE(tick_next_period,
-		   ktime_add_ns(last_jiffies_update, TICK_NSEC));
+	nextp = ktime_add_ns(last_jiffies_update, TICK_NSEC);
+
+	if (IS_ENABLED(CONFIG_64BIT)) {
+		/*
+		 * Pairs with smp_load_acquire() in the lockless quick
+		 * check above and ensures that the update to jiffies_64 is
+		 * not reordered vs. the store to tick_next_period, neither
+		 * by the compiler nor by the CPU.
+		 */
+		smp_store_release(&tick_next_period, nextp);
+	} else {
+		/*
+		 * A plain store is good enough on 32bit as the quick check
+		 * above is protected by the sequence count.
+		 */
+		tick_next_period = nextp;
+	}
 
 	/*
 	 * Release the sequence count. calc_global_load() below is not


* Re: tick/sched: Make jiffies update quick check more robust
  2020-12-04 10:55 tick/sched: Make jiffies update quick check more robust Thomas Gleixner
@ 2020-12-07  9:59 ` Peter Zijlstra
  2020-12-07 14:41   ` Thomas Gleixner
  2020-12-11 21:28 ` Frederic Weisbecker
  2020-12-11 22:22 ` [tip: timers/core] " tip-bot2 for Thomas Gleixner
  2 siblings, 1 reply; 6+ messages in thread
From: Peter Zijlstra @ 2020-12-07  9:59 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: LKML, Frederic Weisbecker, Paul E. McKenney, Will Deacon

On Fri, Dec 04, 2020 at 11:55:19AM +0100, Thomas Gleixner wrote:
>  	/*
> +	 * 64bit can do a quick check without holding jiffies lock and
> +	 * without looking at the sequence count. The smp_load_acquire()
>  	 * pairs with the update done later in this function.
>  	 *
> +	 * 32bit cannot do that because the store of tick_next_period
> +	 * consists of two 32bit stores and the first store could move it
> +	 * to a random point in the future.
>  	 */
> +	if (IS_ENABLED(CONFIG_64BIT)) {
> +		if (ktime_before(now, smp_load_acquire(&tick_next_period)))
> +			return;

Explicit ACQUIRE

> +	} else {
> +		unsigned int seq;
> +
> +		/*
> +		 * Avoid contention on jiffies_lock and protect the quick
> +		 * check with the sequence count.
> +		 */
> +		do {
> +			seq = read_seqcount_begin(&jiffies_seq);
> +			nextp = tick_next_period;
> +		} while (read_seqcount_retry(&jiffies_seq, seq));
> +
> +		if (ktime_before(now, nextp))
> +			return;

Actually has an implicit ACQUIRE:

	read_seqcount_retry() implies smp_rmb(), which ensures
	LOAD->LOAD order, IOW any later load must happen after our
	@tick_next_period load.

	Then it has a control dependency on ktime_before(,nextp), which
	ensures LOAD->STORE order.

	Combined we have a LOAD->{LOAD,STORE} order on the
	@tick_next_period load, IOW ACQUIRE.
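
Annotated against the 32bit path, with nothing added beyond what the
seqcount primitives already provide (sketch):

	do {
		seq = read_seqcount_begin(&jiffies_seq);
		nextp = tick_next_period;	/* A: the protected load */
	} while (read_seqcount_retry(&jiffies_seq, seq));
					/* smp_rmb(): A ordered before later LOADs */
	if (ktime_before(now, nextp))	/* ctrl dep on A: A ordered before later STOREs */
		return;

	/* combined: LOAD->{LOAD,STORE} order on A, IOW ACQUIRE */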

> +	}
>  
> +	/* Quick check failed, i.e. update is required. */
>  	raw_spin_lock(&jiffies_lock);

Another ACQUIRE, which means the above ACQUIRE only ensures we load the
lock value after?

Or are we trying to guarantee the caller is sure to observe the new
jiffies value if we return?

> +	/*
> +	 * Reevaluate with the lock held. Another CPU might have done the
> +	 * update already.
> +	 */
>  	if (ktime_before(now, tick_next_period)) {
>  		raw_spin_unlock(&jiffies_lock);
>  		return;
> @@ -112,11 +118,25 @@ static void tick_do_update_jiffies64(kti
>  	jiffies_64 += ticks;
>  
>  	/*
> +	 * Keep the tick_next_period variable up to date.
>  	 */
> +	nextp = ktime_add_ns(last_jiffies_update, TICK_NSEC);
> +
> +	if (IS_ENABLED(CONFIG_64BIT)) {
> +		/*
> +		 * Pairs with smp_load_acquire() in the lockless quick
> +		 * check above and ensures that the update to jiffies_64 is
> +		 * not reordered vs. the store to tick_next_period, neither
> +		 * by the compiler nor by the CPU.
> +		 */
> +		smp_store_release(&tick_next_period, nextp);
> +	} else {
> +		/*
> +		 * A plain store is good enough on 32bit as the quick check
> +		 * above is protected by the sequence count.
> +		 */
> +		tick_next_period = nextp;
> +	}
>  
>  	/*
>  	 * Release the sequence count. calc_global_load() below is not


* Re: tick/sched: Make jiffies update quick check more robust
  2020-12-07  9:59 ` Peter Zijlstra
@ 2020-12-07 14:41   ` Thomas Gleixner
  2020-12-08 11:16     ` Peter Zijlstra
  0 siblings, 1 reply; 6+ messages in thread
From: Thomas Gleixner @ 2020-12-07 14:41 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: LKML, Frederic Weisbecker, Paul E. McKenney, Will Deacon

On Mon, Dec 07 2020 at 10:59, Peter Zijlstra wrote:
>> +	if (IS_ENABLED(CONFIG_64BIT)) {
>> +		if (ktime_before(now, smp_load_acquire(&tick_next_period)))
>> +			return;
>
> Explicit ACQUIRE
>
>> +	} else {
>> +		unsigned int seq;
>> +
>> +		/*
>> +		 * Avoid contention on jiffies_lock and protect the quick
>> +		 * check with the sequence count.
>> +		 */
>> +		do {
>> +			seq = read_seqcount_begin(&jiffies_seq);
>> +			nextp = tick_next_period;
>> +		} while (read_seqcount_retry(&jiffies_seq, seq));
>> +
>> +		if (ktime_before(now, nextp))
>> +			return;
>
> Actually has an implicit ACQUIRE:
>
> 	read_seqcount_retry() implies smp_rmb(), which ensures
> 	LOAD->LOAD order, IOW any later load must happen after our
> 	@tick_next_period load.
>
> 	Then it has a control dependency on ktime_before(,nextp), which
> 	ensures LOAD->STORE order.
>
> 	Combined we have a LOAD->{LOAD,STORE} order on the
> 	@tick_next_period load, IOW ACQUIRE.
>
>> +	}
>>  
>> +	/* Quick check failed, i.e. update is required. */
>>  	raw_spin_lock(&jiffies_lock);
>
> Another ACQUIRE, which means the above ACQUIRE only ensures we load the
> lock value after?
>
> Or are we trying to guarantee the caller is sure to observe the new
> jiffies value if we return?

The guarantee we need on 64bit for the check w/o seqcount is:

CPU0                                         CPU1

 if (ktime_before(now, tick_next_period))
 	return;

 raw_spin_lock(&jiffies_lock);
 ....
 jiffies_64 += ticks;                           
 
 tick_next_period = next;                   if (ktime_before(now, tick_next_period))
  	                                           return;

When CPU1 returns because it observes the new value in tick_next_period
then it has to be guaranteed that jiffies_64 is observable as well.
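
With the release/acquire pair the patch actually uses spelled out (sketch,
simplified):

CPU0 (updater, under jiffies_lock)

	jiffies_64 += ticks;
	smp_store_release(&tick_next_period, next);	/* orders the jiffies_64
							 * store before this one */

CPU1 (lockless quick check)

	if (ktime_before(now, smp_load_acquire(&tick_next_period)))
		return;		/* new tick_next_period observed, therefore the
				 * new jiffies_64 must be visible as well */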

I might have gotten it completely wrong again.

Thanks,

        tglx


  


* Re: tick/sched: Make jiffies update quick check more robust
  2020-12-07 14:41   ` Thomas Gleixner
@ 2020-12-08 11:16     ` Peter Zijlstra
  0 siblings, 0 replies; 6+ messages in thread
From: Peter Zijlstra @ 2020-12-08 11:16 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: LKML, Frederic Weisbecker, Paul E. McKenney, Will Deacon

On Mon, Dec 07, 2020 at 03:41:47PM +0100, Thomas Gleixner wrote:
> On Mon, Dec 07 2020 at 10:59, Peter Zijlstra wrote:
> >> +	if (IS_ENABLED(CONFIG_64BIT)) {
> >> +		if (ktime_before(now, smp_load_acquire(&tick_next_period)))
> >> +			return;
> >
> > Explicit ACQUIRE
> >
> >> +	} else {
> >> +		unsigned int seq;
> >> +
> >> +		/*
> >> +		 * Avoid contention on jiffies_lock and protect the quick
> >> +		 * check with the sequence count.
> >> +		 */
> >> +		do {
> >> +			seq = read_seqcount_begin(&jiffies_seq);
> >> +			nextp = tick_next_period;
> >> +		} while (read_seqcount_retry(&jiffies_seq, seq));
> >> +
> >> +		if (ktime_before(now, nextp))
> >> +			return;
> >
> > Actually has an implicit ACQUIRE:
> >
> > 	read_seqcount_retry() implies smp_rmb(), which ensures
> > 	LOAD->LOAD order, IOW any later load must happen after our
> > 	@tick_next_period load.
> >
> > 	Then it has a control dependency on ktime_before(,nextp), which
> > 	ensures LOAD->STORE order.
> >
> > 	Combined we have a LOAD->{LOAD,STORE} order on the
> > 	@tick_next_period load, IOW ACQUIRE.

It's actually the whole of:

+               } while (read_seqcount_retry(&jiffies_seq, seq));

That implies the ACQUIRE, don't need the rest.

> >> +	}
> >>  
> >> +	/* Quick check failed, i.e. update is required. */
> >>  	raw_spin_lock(&jiffies_lock);
> >
> > Another ACQUIRE, which means the above ACQUIRE only ensures we load the
> > lock value after?
> >
> > Or are we trying to guarantee the caller is sure to observe the new
> > jiffies value if we return?
> 
> The guarantee we need on 64bit for the check w/o seqcount is:
> 
> CPU0                                         CPU1
> 
>  if (ktime_before(now, tick_next_period))
>  	return;
> 
>  raw_spin_lock(&jiffies_lock);
>  ....
>  jiffies_64 += ticks;                           
>  
>  tick_next_period = next;                   if (ktime_before(now, tick_next_period))
>   	                                           return;
> 
> When CPU1 returns because it observes the new value in tick_next_period
> then it has to be guaranteed that jiffies_64 is observable as well.

Right, it does that. Good.



* Re: tick/sched: Make jiffies update quick check more robust
  2020-12-04 10:55 tick/sched: Make jiffies update quick check more robust Thomas Gleixner
  2020-12-07  9:59 ` Peter Zijlstra
@ 2020-12-11 21:28 ` Frederic Weisbecker
  2020-12-11 22:22 ` [tip: timers/core] " tip-bot2 for Thomas Gleixner
  2 siblings, 0 replies; 6+ messages in thread
From: Frederic Weisbecker @ 2020-12-11 21:28 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: LKML, Paul E. McKenney, Peter Zijlstra, Will Deacon

On Fri, Dec 04, 2020 at 11:55:19AM +0100, Thomas Gleixner wrote:
> The quick check in tick_do_update_jiffies64() whether jiffies need to be
> updated is not really correct under all circumstances and on all
> architectures, especially not on 32bit systems.
> 
> The quick check does:
> 
>     if (now < READ_ONCE(tick_next_period))
>     	return;
> 
> and the counterpart in the update is:
> 
>     WRITE_ONCE(tick_next_period, next_update_time);
> 
> This has two problems:
> 
>   1) On weakly ordered architectures there is no guarantee that the stores
>      before the WRITE_ONCE() are visible which means that other CPUs can
>      operate on a stale jiffies value.
> 
>   2) On 32bit the store of tick_next_period, which is a u64, is split into
>      two 32bit stores. If the first 32bit store advances tick_next_period
>      far out and the second 32bit store is delayed (virt, NMI ...) then
>      jiffies will become stale until the second 32bit store happens.
> 
> Address this by separating the handling for 32bit and 64bit.
> 
> On 64bit problem #1 is addressed by replacing READ_ONCE() / WRITE_ONCE()
> with smp_load_acquire() / smp_store_release().
> 
> On 32bit problem #2 is addressed by protecting the quick check with the
> jiffies sequence counter. The load and store can be plain because the
> sequence count mechanics already provide the required barriers.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

Looks very good! Thanks!

Reviewed-by: Frederic Weisbecker <frederic@kernel.org>


* [tip: timers/core] tick/sched: Make jiffies update quick check more robust
  2020-12-04 10:55 tick/sched: Make jiffies update quick check more robust Thomas Gleixner
  2020-12-07  9:59 ` Peter Zijlstra
  2020-12-11 21:28 ` Frederic Weisbecker
@ 2020-12-11 22:22 ` tip-bot2 for Thomas Gleixner
  2 siblings, 0 replies; 6+ messages in thread
From: tip-bot2 for Thomas Gleixner @ 2020-12-11 22:22 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Thomas Gleixner, Frederic Weisbecker, x86, linux-kernel

The following commit has been merged into the timers/core branch of tip:

Commit-ID:     aa3b66f401b372598b29421bab4d17b631b92407
Gitweb:        https://git.kernel.org/tip/aa3b66f401b372598b29421bab4d17b631b92407
Author:        Thomas Gleixner <tglx@linutronix.de>
AuthorDate:    Fri, 04 Dec 2020 11:55:19 +01:00
Committer:     Thomas Gleixner <tglx@linutronix.de>
CommitterDate: Fri, 11 Dec 2020 23:19:10 +01:00

tick/sched: Make jiffies update quick check more robust

The quick check in tick_do_update_jiffies64() whether jiffies need to be
updated is not really correct under all circumstances and on all
architectures, especially not on 32bit systems.

The quick check does:

    if (now < READ_ONCE(tick_next_period))
    	return;

and the counterpart in the update is:

    WRITE_ONCE(tick_next_period, next_update_time);

This has two problems:

  1) On weakly ordered architectures there is no guarantee that the stores
     before the WRITE_ONCE() are visible which means that other CPUs can
     operate on a stale jiffies value.

  2) On 32bit the store of tick_next_period, which is a u64, is split into
     two 32bit stores. If the first 32bit store advances tick_next_period
     far out and the second 32bit store is delayed (virt, NMI ...) then
     jiffies will become stale until the second 32bit store happens.

Address this by separating the handling for 32bit and 64bit.

On 64bit problem #1 is addressed by replacing READ_ONCE() / WRITE_ONCE()
with smp_load_acquire() / smp_store_release().

On 32bit problem #2 is addressed by protecting the quick check with the
jiffies sequence counter. The load and store can be plain because the
sequence count mechanics already provide the required barriers.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/87czzpc02w.fsf@nanos.tec.linutronix.de

---
 kernel/time/tick-sched.c | 74 ++++++++++++++++++++++++---------------
 1 file changed, 47 insertions(+), 27 deletions(-)

diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index cc7cba2..a9e6893 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -57,36 +57,42 @@ static ktime_t last_jiffies_update;
 static void tick_do_update_jiffies64(ktime_t now)
 {
 	unsigned long ticks = 1;
-	ktime_t delta;
+	ktime_t delta, nextp;
 
 	/*
-	 * Do a quick check without holding jiffies_lock. The READ_ONCE()
+	 * 64bit can do a quick check without holding jiffies lock and
+	 * without looking at the sequence count. The smp_load_acquire()
 	 * pairs with the update done later in this function.
 	 *
-	 * This is also an intentional data race which is even safe on
-	 * 32bit in theory. If there is a concurrent update then the check
-	 * might give a random answer. It does not matter because if it
-	 * returns then the concurrent update is already taking care, if it
-	 * falls through then it will pointlessly contend on jiffies_lock.
-	 *
-	 * Though there is one nasty case on 32bit due to store tearing of
-	 * the 64bit value. If the first 32bit store makes the quick check
-	 * return on all other CPUs and the writing CPU context gets
-	 * delayed to complete the second store (scheduled out on virt)
-	 * then jiffies can become stale for up to ~2^32 nanoseconds
-	 * without noticing. After that point all CPUs will wait for
-	 * jiffies lock.
-	 *
-	 * OTOH, this is not any different than the situation with NOHZ=off
-	 * where one CPU is responsible for updating jiffies and
-	 * timekeeping. If that CPU goes out for lunch then all other CPUs
-	 * will operate on stale jiffies until it decides to come back.
+	 * 32bit cannot do that because the store of tick_next_period
+	 * consists of two 32bit stores and the first store could move it
+	 * to a random point in the future.
 	 */
-	if (ktime_before(now, READ_ONCE(tick_next_period)))
-		return;
+	if (IS_ENABLED(CONFIG_64BIT)) {
+		if (ktime_before(now, smp_load_acquire(&tick_next_period)))
+			return;
+	} else {
+		unsigned int seq;
 
-	/* Reevaluate with jiffies_lock held */
+		/*
+		 * Avoid contention on jiffies_lock and protect the quick
+		 * check with the sequence count.
+		 */
+		do {
+			seq = read_seqcount_begin(&jiffies_seq);
+			nextp = tick_next_period;
+		} while (read_seqcount_retry(&jiffies_seq, seq));
+
+		if (ktime_before(now, nextp))
+			return;
+	}
+
+	/* Quick check failed, i.e. update is required. */
 	raw_spin_lock(&jiffies_lock);
+	/*
+	 * Reevaluate with the lock held. Another CPU might have done the
+	 * update already.
+	 */
 	if (ktime_before(now, tick_next_period)) {
 		raw_spin_unlock(&jiffies_lock);
 		return;
@@ -112,11 +118,25 @@ static void tick_do_update_jiffies64(ktime_t now)
 	jiffies_64 += ticks;
 
 	/*
-	 * Keep the tick_next_period variable up to date.  WRITE_ONCE()
-	 * pairs with the READ_ONCE() in the lockless quick check above.
+	 * Keep the tick_next_period variable up to date.
 	 */
-	WRITE_ONCE(tick_next_period,
-		   ktime_add_ns(last_jiffies_update, TICK_NSEC));
+	nextp = ktime_add_ns(last_jiffies_update, TICK_NSEC);
+
+	if (IS_ENABLED(CONFIG_64BIT)) {
+		/*
+		 * Pairs with smp_load_acquire() in the lockless quick
+		 * check above and ensures that the update to jiffies_64 is
+		 * not reordered vs. the store to tick_next_period, neither
+		 * by the compiler nor by the CPU.
+		 */
+		smp_store_release(&tick_next_period, nextp);
+	} else {
+		/*
+		 * A plain store is good enough on 32bit as the quick check
+		 * above is protected by the sequence count.
+		 */
+		tick_next_period = nextp;
+	}
 
 	/*
 	 * Release the sequence count. calc_global_load() below is not

