From: Simon Jeons <simon.jeons@gmail.com>
To: Mel Gorman <mgorman@suse.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>,
Andrea Arcangeli <aarcange@redhat.com>,
Ingo Molnar <mingo@kernel.org>, Rik van Riel <riel@redhat.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Hugh Dickins <hughd@google.com>,
Thomas Gleixner <tglx@linutronix.de>,
Paul Turner <pjt@google.com>, Hillf Danton <dhillf@gmail.com>,
David Rientjes <rientjes@google.com>,
Lee Schermerhorn <Lee.Schermerhorn@hp.com>,
Alex Shi <lkml.alex@gmail.com>,
Srikar Dronamraju <srikar@linux.vnet.ibm.com>,
Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>,
Linus Torvalds <torvalds@linux-foundation.org>,
Andrew Morton <akpm@linux-foundation.org>,
Linux-MM <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 22/49] mm: mempolicy: Add MPOL_MF_LAZY
Date: Fri, 04 Jan 2013 23:18:17 -0600
Message-ID: <1357363097.5273.12.camel@kernel.cn.ibm.com>
In-Reply-To: <1354875832-9700-23-git-send-email-mgorman@suse.de>
On Fri, 2012-12-07 at 10:23 +0000, Mel Gorman wrote:
> From: Lee Schermerhorn <lee.schermerhorn@hp.com>
>
> NOTE: Once again there is a lot of patch stealing and the end result
> is sufficiently different that I had to drop the signed-offs.
> Will re-add if the original authors are ok with that.
>
> This patch adds another mbind() flag to request "lazy migration". The
> flag, MPOL_MF_LAZY, modifies MPOL_MF_MOVE* such that the selected
> pages are marked PROT_NONE. The pages will be migrated in the fault
> path on "first touch", if the policy dictates at that time.
>
> "Lazy migration" will allow testing of migrate-on-fault via mbind().
> It also allows applications to specify that only subsequently touched
> pages be migrated to obey the new policy, instead of all pages in the
> range. This can be useful for multi-threaded applications working on
> a large shared data area that is initialized by a single thread,
> leaving all pages on one node [or a few, if it overflowed]. After the
> PROT_NONE marking, the pages in regions assigned to the worker
> threads will be automatically migrated local to those threads on
> first touch.
>
> Signed-off-by: Mel Gorman <mgorman@suse.de>
> Reviewed-by: Rik van Riel <riel@redhat.com>
> ---
> include/linux/mm.h | 5 ++
> include/uapi/linux/mempolicy.h | 13 ++-
> mm/mempolicy.c | 185 ++++++++++++++++++++++++++++++++++++----
> 3 files changed, 185 insertions(+), 18 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index fa16152..471185e 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1551,6 +1551,11 @@ static inline pgprot_t vm_get_page_prot(unsigned long vm_flags)
> }
> #endif
>
> +#ifdef CONFIG_ARCH_USES_NUMA_PROT_NONE
> +void change_prot_numa(struct vm_area_struct *vma,
> + unsigned long start, unsigned long end);
> +#endif
> +
> struct vm_area_struct *find_extend_vma(struct mm_struct *, unsigned long addr);
> int remap_pfn_range(struct vm_area_struct *, unsigned long addr,
> unsigned long pfn, unsigned long size, pgprot_t);
> diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h
> index 472de8a..6a1baae 100644
> --- a/include/uapi/linux/mempolicy.h
> +++ b/include/uapi/linux/mempolicy.h
> @@ -49,9 +49,16 @@ enum mpol_rebind_step {
>
> /* Flags for mbind */
> #define MPOL_MF_STRICT (1<<0) /* Verify existing pages in the mapping */
> -#define MPOL_MF_MOVE (1<<1) /* Move pages owned by this process to conform to mapping */
> -#define MPOL_MF_MOVE_ALL (1<<2) /* Move every page to conform to mapping */
> -#define MPOL_MF_INTERNAL (1<<3) /* Internal flags start here */
> +#define MPOL_MF_MOVE (1<<1) /* Move pages owned by this process to conform
> + to policy */
> +#define MPOL_MF_MOVE_ALL (1<<2) /* Move every page to conform to policy */
> +#define MPOL_MF_LAZY (1<<3) /* Modifies '_MOVE: lazy migrate on fault */
> +#define MPOL_MF_INTERNAL (1<<4) /* Internal flags start here */
> +
> +#define MPOL_MF_VALID (MPOL_MF_STRICT | \
> + MPOL_MF_MOVE | \
> + MPOL_MF_MOVE_ALL | \
> + MPOL_MF_LAZY)
>
> /*
> * Internal flags that share the struct mempolicy flags word with
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index df1466d..51d3ebd 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -90,6 +90,7 @@
> #include <linux/syscalls.h>
> #include <linux/ctype.h>
> #include <linux/mm_inline.h>
> +#include <linux/mmu_notifier.h>
>
> #include <asm/tlbflush.h>
> #include <asm/uaccess.h>
> @@ -565,6 +566,145 @@ static inline int check_pgd_range(struct vm_area_struct *vma,
> return 0;
> }
>
> +#ifdef CONFIG_ARCH_USES_NUMA_PROT_NONE
> +/*
> + * Here we search for not shared page mappings (mapcount == 1) and we
> + * set up the pmd/pte_numa on those mappings so the very next access
> + * will fire a NUMA hinting page fault.
> + */
> +static int
> +change_prot_numa_range(struct mm_struct *mm, struct vm_area_struct *vma,
> + unsigned long address)
> +{
> + pgd_t *pgd;
> + pud_t *pud;
> + pmd_t *pmd;
> + pte_t *pte, *_pte;
> + struct page *page;
> + unsigned long _address, end;
> + spinlock_t *ptl;
> + int ret = 0;
> +
> + VM_BUG_ON(address & ~PAGE_MASK);
> +
> + pgd = pgd_offset(mm, address);
> + if (!pgd_present(*pgd))
> + goto out;
> +
> + pud = pud_offset(pgd, address);
> + if (!pud_present(*pud))
> + goto out;
> +
> + pmd = pmd_offset(pud, address);
> + if (pmd_none(*pmd))
> + goto out;
> +
> + if (pmd_trans_huge_lock(pmd, vma) == 1) {
> + int page_nid;
> + ret = HPAGE_PMD_NR;
> +
> + VM_BUG_ON(address & ~HPAGE_PMD_MASK);
> +
> + if (pmd_numa(*pmd)) {
> + spin_unlock(&mm->page_table_lock);
> + goto out;
> + }
> +
> + page = pmd_page(*pmd);
> +
> + /* only check non-shared pages */
> + if (page_mapcount(page) != 1) {
> + spin_unlock(&mm->page_table_lock);
> + goto out;
> + }
> +
> + page_nid = page_to_nid(page);
> +
> + if (pmd_numa(*pmd)) {
> + spin_unlock(&mm->page_table_lock);
> + goto out;
> + }
> +
Hi Mel,

Since pmd_trans_huge_lock() has already taken &mm->page_table_lock at
this point, and pmd_numa(*pmd) was already checked just above (with only
the mapcount test in between), why check pmd_numa(*pmd) a second time
here?
> + set_pmd_at(mm, address, pmd, pmd_mknuma(*pmd));
> + ret += HPAGE_PMD_NR;
> + /* defer TLB flush to lower the overhead */
> + spin_unlock(&mm->page_table_lock);
> + goto out;
> + }
> +
> + if (pmd_trans_unstable(pmd))
> + goto out;
> + VM_BUG_ON(!pmd_present(*pmd));
> +
> + end = min(vma->vm_end, (address + PMD_SIZE) & PMD_MASK);
> + pte = pte_offset_map_lock(mm, pmd, address, &ptl);
> + for (_address = address, _pte = pte; _address < end;
> + _pte++, _address += PAGE_SIZE) {
> + pte_t pteval = *_pte;
> + if (!pte_present(pteval))
> + continue;
> + if (pte_numa(pteval))
> + continue;
> + page = vm_normal_page(vma, _address, pteval);
> + if (unlikely(!page))
> + continue;
> + /* only check non-shared pages */
> + if (page_mapcount(page) != 1)
> + continue;
> +
> + set_pte_at(mm, _address, _pte, pte_mknuma(pteval));
> +
> + /* defer TLB flush to lower the overhead */
> + ret++;
> + }
> + pte_unmap_unlock(pte, ptl);
> +
> + if (ret && !pmd_numa(*pmd)) {
> + spin_lock(&mm->page_table_lock);
> + set_pmd_at(mm, address, pmd, pmd_mknuma(*pmd));
> + spin_unlock(&mm->page_table_lock);
> + /* defer TLB flush to lower the overhead */
> + }
> +
> +out:
> + return ret;
> +}
> +
> +/* Assumes mmap_sem is held */
> +void
> +change_prot_numa(struct vm_area_struct *vma,
> + unsigned long address, unsigned long end)
> +{
> + struct mm_struct *mm = vma->vm_mm;
> + int progress = 0;
> +
> + while (address < end) {
> + VM_BUG_ON(address < vma->vm_start ||
> + address + PAGE_SIZE > vma->vm_end);
> +
> + progress += change_prot_numa_range(mm, vma, address);
> + address = (address + PMD_SIZE) & PMD_MASK;
> + }
> +
> + /*
> + * Flush the TLB for the mm to start the NUMA hinting
> + * page faults after we finish scanning this vma part
> + * if there were any PTE updates
> + */
> + if (progress) {
> + mmu_notifier_invalidate_range_start(vma->vm_mm, address, end);
> + flush_tlb_range(vma, address, end);
> + mmu_notifier_invalidate_range_end(vma->vm_mm, address, end);
> + }
> +}
> +#else
> +static unsigned long change_prot_numa(struct vm_area_struct *vma,
> + unsigned long addr, unsigned long end)
> +{
> + return 0;
> +}
> +#endif /* CONFIG_ARCH_USES_NUMA_PROT_NONE */
> +
> /*
> * Check if all pages in a range are on a set of nodes.
> * If pagelist != NULL then isolate pages from the LRU and
> @@ -583,22 +723,32 @@ check_range(struct mm_struct *mm, unsigned long start, unsigned long end,
> return ERR_PTR(-EFAULT);
> prev = NULL;
> for (vma = first; vma && vma->vm_start < end; vma = vma->vm_next) {
> + unsigned long endvma = vma->vm_end;
> +
> + if (endvma > end)
> + endvma = end;
> + if (vma->vm_start > start)
> + start = vma->vm_start;
> +
> if (!(flags & MPOL_MF_DISCONTIG_OK)) {
> if (!vma->vm_next && vma->vm_end < end)
> return ERR_PTR(-EFAULT);
> if (prev && prev->vm_end < vma->vm_start)
> return ERR_PTR(-EFAULT);
> }
> - if (!is_vm_hugetlb_page(vma) &&
> - ((flags & MPOL_MF_STRICT) ||
> +
> + if (is_vm_hugetlb_page(vma))
> + goto next;
> +
> + if (flags & MPOL_MF_LAZY) {
> + change_prot_numa(vma, start, endvma);
> + goto next;
> + }
> +
> + if ((flags & MPOL_MF_STRICT) ||
> ((flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) &&
> - vma_migratable(vma)))) {
> - unsigned long endvma = vma->vm_end;
> + vma_migratable(vma))) {
>
> - if (endvma > end)
> - endvma = end;
> - if (vma->vm_start > start)
> - start = vma->vm_start;
> err = check_pgd_range(vma, start, endvma, nodes,
> flags, private);
> if (err) {
> @@ -606,6 +756,7 @@ check_range(struct mm_struct *mm, unsigned long start, unsigned long end,
> break;
> }
> }
> +next:
> prev = vma;
> }
> return first;
> @@ -1138,8 +1289,7 @@ static long do_mbind(unsigned long start, unsigned long len,
> int err;
> LIST_HEAD(pagelist);
>
> - if (flags & ~(unsigned long)(MPOL_MF_STRICT |
> - MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
> + if (flags & ~(unsigned long)MPOL_MF_VALID)
> return -EINVAL;
> if ((flags & MPOL_MF_MOVE_ALL) && !capable(CAP_SYS_NICE))
> return -EPERM;
> @@ -1162,6 +1312,9 @@ static long do_mbind(unsigned long start, unsigned long len,
> if (IS_ERR(new))
> return PTR_ERR(new);
>
> + if (flags & MPOL_MF_LAZY)
> + new->flags |= MPOL_F_MOF;
> +
> /*
> * If we are using the default policy then operation
> * on discontinuous address spaces is okay after all
> @@ -1198,13 +1351,15 @@ static long do_mbind(unsigned long start, unsigned long len,
> vma = check_range(mm, start, end, nmask,
> flags | MPOL_MF_INVERT, &pagelist);
>
> - err = PTR_ERR(vma);
> - if (!IS_ERR(vma)) {
> - int nr_failed = 0;
> -
> + err = PTR_ERR(vma); /* maybe ... */
> + if (!IS_ERR(vma) && mode != MPOL_NOOP)
> err = mbind_range(mm, start, end, new);
>
> + if (!err) {
> + int nr_failed = 0;
> +
> if (!list_empty(&pagelist)) {
> + WARN_ON_ONCE(flags & MPOL_MF_LAZY);
> nr_failed = migrate_pages(&pagelist, new_vma_page,
> (unsigned long)vma,
> false, MIGRATE_SYNC,
> @@ -1213,7 +1368,7 @@ static long do_mbind(unsigned long start, unsigned long len,
> putback_lru_pages(&pagelist);
> }
>
> - if (!err && nr_failed && (flags & MPOL_MF_STRICT))
> + if (nr_failed && (flags & MPOL_MF_STRICT))
> err = -EIO;
> } else
> putback_lru_pages(&pagelist);
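The pattern the hunks above repeat in their comments ("defer TLB flush
to lower the overhead") reduces to a simple idea: mark every eligible
entry during the scan, count progress, and pay for a single flush at
the end instead of one flush per entry. A purely illustrative toy
model, not kernel code (all names here are made up):

```c
#include <stddef.h>

#define NPTES 8

struct toy_mm {
	int pte_numa[NPTES];  /* 1 = marked for a NUMA hinting fault */
	int tlb_flushes;      /* count of (expensive) flush operations */
};

/* Mimics the shape of change_prot_numa(): mark in a loop, flush once */
static int toy_change_prot_numa(struct toy_mm *mm, const int shared[NPTES])
{
	int progress = 0;

	for (size_t i = 0; i < NPTES; i++) {
		if (shared[i])        /* only touch non-shared pages */
			continue;
		if (mm->pte_numa[i])  /* already marked, nothing to do */
			continue;
		mm->pte_numa[i] = 1;
		progress++;           /* defer the flush, just count */
	}
	if (progress)
		mm->tlb_flushes++;    /* one flush for the whole range */
	return progress;
}
```

However many entries get marked, tlb_flushes only ever advances by one
per scan, and a scan that marks nothing flushes nothing, which is
exactly why the patch tracks `progress` before calling
flush_tlb_range().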