* [PATCH] mm: memory: Introduce new vmf_insert_mixed_mkwrite
@ 2018-04-21 17:05 Souptick Joarder
2018-04-21 19:34 ` Matthew Wilcox
2018-04-23 9:21 ` kbuild test robot
0 siblings, 2 replies; 4+ messages in thread
From: Souptick Joarder @ 2018-04-21 17:05 UTC (permalink / raw)
To: hughd, minchan, ying.huang, ross.zwisler, willy
Cc: linux-mm, linux-kernel, viro, linux-fsdevel
vm_insert_mixed_mkwrite() is inefficient in that, when it
returns an error, the driver has to convert that error into
a vm_fault_t itself. The new vmf_insert_mixed_mkwrite()
removes this limitation by returning vm_fault_t directly.
As of now vm_insert_mixed_mkwrite() is only invoked from
fs/dax.c, so this change would have to land in Linus' tree
before the corresponding dax changes.
Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
---
include/linux/mm.h | 4 ++--
mm/memory.c | 15 +++++++++++----
2 files changed, 13 insertions(+), 6 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1ac1f06..9fe441c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2423,8 +2423,8 @@ int vm_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
unsigned long pfn, pgprot_t pgprot);
int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
pfn_t pfn);
-int vm_insert_mixed_mkwrite(struct vm_area_struct *vma, unsigned long addr,
- pfn_t pfn);
+vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma,
+ unsigned long addr, pfn_t pfn);
int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len);
static inline vm_fault_t vmf_insert_page(struct vm_area_struct *vma,
diff --git a/mm/memory.c b/mm/memory.c
index 01f5464..721cfd5 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1955,12 +1955,19 @@ int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
}
EXPORT_SYMBOL(vm_insert_mixed);
-int vm_insert_mixed_mkwrite(struct vm_area_struct *vma, unsigned long addr,
- pfn_t pfn)
+vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma,
+ unsigned long addr, pfn_t pfn)
{
- return __vm_insert_mixed(vma, addr, pfn, true);
+ int err;
+
+ err = __vm_insert_mixed(vma, addr, pfn, true);
+ if (err == -ENOMEM)
+ return VM_FAULT_OOM;
+ if (err < 0 && err != -EBUSY)
+ return VM_FAULT_SIGBUS;
+ return VM_FAULT_NOPAGE;
}
-EXPORT_SYMBOL(vm_insert_mixed_mkwrite);
+EXPORT_SYMBOL(vmf_insert_mixed_mkwrite);
/*
* maps a range of physical memory into the requested pages. the old
--
1.9.1
* Re: [PATCH] mm: memory: Introduce new vmf_insert_mixed_mkwrite
2018-04-21 17:05 [PATCH] mm: memory: Introduce new vmf_insert_mixed_mkwrite Souptick Joarder
@ 2018-04-21 19:34 ` Matthew Wilcox
2018-04-21 19:41 ` Souptick Joarder
2018-04-23 9:21 ` kbuild test robot
1 sibling, 1 reply; 4+ messages in thread
From: Matthew Wilcox @ 2018-04-21 19:34 UTC (permalink / raw)
To: Souptick Joarder
Cc: hughd, minchan, ying.huang, ross.zwisler, linux-mm, linux-kernel,
viro, linux-fsdevel
On Sat, Apr 21, 2018 at 10:35:40PM +0530, Souptick Joarder wrote:
> As of now vm_insert_mixed_mkwrite() is only invoked from
> fs/dax.c, so this change would have to land in Linus' tree
> before the corresponding dax changes.
No. One patch which changes both at the same time. The history should
be bisectable so that it compiles and works at every point.
The rest of the patch looks good.
* Re: [PATCH] mm: memory: Introduce new vmf_insert_mixed_mkwrite
2018-04-21 19:34 ` Matthew Wilcox
@ 2018-04-21 19:41 ` Souptick Joarder
0 siblings, 0 replies; 4+ messages in thread
From: Souptick Joarder @ 2018-04-21 19:41 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Hugh Dickins, Minchan Kim, ying.huang, Ross Zwisler, Linux-MM,
linux-kernel, Al Viro, linux-fsdevel
On Sun, Apr 22, 2018 at 1:04 AM, Matthew Wilcox <willy@infradead.org> wrote:
> On Sat, Apr 21, 2018 at 10:35:40PM +0530, Souptick Joarder wrote:
>> As of now vm_insert_mixed_mkwrite() is only invoked from
>> fs/dax.c, so this change would have to land in Linus' tree
>> before the corresponding dax changes.
>
> No. One patch which changes both at the same time. The history should
> be bisectable so that it compiles and works at every point.
>
> The rest of the patch looks good.
Sure, I will send it as a single patch.
* Re: [PATCH] mm: memory: Introduce new vmf_insert_mixed_mkwrite
2018-04-21 17:05 [PATCH] mm: memory: Introduce new vmf_insert_mixed_mkwrite Souptick Joarder
2018-04-21 19:34 ` Matthew Wilcox
@ 2018-04-23 9:21 ` kbuild test robot
1 sibling, 0 replies; 4+ messages in thread
From: kbuild test robot @ 2018-04-23 9:21 UTC (permalink / raw)
To: Souptick Joarder
Cc: kbuild-all, hughd, minchan, ying.huang, ross.zwisler, willy,
linux-mm, linux-kernel, viro, linux-fsdevel
[-- Attachment #1: Type: text/plain, Size: 14023 bytes --]
Hi Souptick,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on mmotm/master]
[also build test ERROR on v4.17-rc2 next-20180423]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
url: https://github.com/0day-ci/linux/commits/Souptick-Joarder/mm-memory-Introduce-new-vmf_insert_mixed_mkwrite/20180423-095015
base: git://git.cmpxchg.org/linux-mmotm.git master
config: x86_64-fedora-25 (attached as .config)
compiler: gcc-7 (Debian 7.3.0-16) 7.3.0
reproduce:
# save the attached .config to linux build tree
make ARCH=x86_64
All errors (new ones prefixed by >>):
fs/dax.c: In function 'dax_iomap_pte_fault':
>> fs/dax.c:1265:12: error: implicit declaration of function 'vm_insert_mixed_mkwrite'; did you mean 'vmf_insert_mixed_mkwrite'? [-Werror=implicit-function-declaration]
error = vm_insert_mixed_mkwrite(vma, vaddr, pfn);
^~~~~~~~~~~~~~~~~~~~~~~
vmf_insert_mixed_mkwrite
cc1: some warnings being treated as errors
vim +1265 fs/dax.c
aaa422c4c Dan Williams 2017-11-13 1134
9a0dd4225 Jan Kara 2017-11-01 1135 static int dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
c0b246259 Jan Kara 2018-01-07 1136 int *iomap_errp, const struct iomap_ops *ops)
a7d73fe6c Christoph Hellwig 2016-09-19 1137 {
a0987ad5c Jan Kara 2017-11-01 1138 struct vm_area_struct *vma = vmf->vma;
a0987ad5c Jan Kara 2017-11-01 1139 struct address_space *mapping = vma->vm_file->f_mapping;
a7d73fe6c Christoph Hellwig 2016-09-19 1140 struct inode *inode = mapping->host;
1a29d85eb Jan Kara 2016-12-14 1141 unsigned long vaddr = vmf->address;
a7d73fe6c Christoph Hellwig 2016-09-19 1142 loff_t pos = (loff_t)vmf->pgoff << PAGE_SHIFT;
a7d73fe6c Christoph Hellwig 2016-09-19 1143 struct iomap iomap = { 0 };
9484ab1bf Jan Kara 2016-11-10 1144 unsigned flags = IOMAP_FAULT;
a7d73fe6c Christoph Hellwig 2016-09-19 1145 int error, major = 0;
d2c43ef13 Jan Kara 2017-11-01 1146 bool write = vmf->flags & FAULT_FLAG_WRITE;
caa51d26f Jan Kara 2017-11-01 1147 bool sync;
b1aa812b2 Jan Kara 2016-12-14 1148 int vmf_ret = 0;
a7d73fe6c Christoph Hellwig 2016-09-19 1149 void *entry;
1b5a1cb21 Jan Kara 2017-11-01 1150 pfn_t pfn;
a7d73fe6c Christoph Hellwig 2016-09-19 1151
a9c42b33e Ross Zwisler 2017-05-08 1152 trace_dax_pte_fault(inode, vmf, vmf_ret);
a7d73fe6c Christoph Hellwig 2016-09-19 1153 /*
a7d73fe6c Christoph Hellwig 2016-09-19 1154 * Check whether offset isn't beyond end of file now. Caller is supposed
a7d73fe6c Christoph Hellwig 2016-09-19 1155 * to hold locks serializing us with truncate / punch hole so this is
a7d73fe6c Christoph Hellwig 2016-09-19 1156 * a reliable test.
a7d73fe6c Christoph Hellwig 2016-09-19 1157 */
a9c42b33e Ross Zwisler 2017-05-08 1158 if (pos >= i_size_read(inode)) {
a9c42b33e Ross Zwisler 2017-05-08 1159 vmf_ret = VM_FAULT_SIGBUS;
a9c42b33e Ross Zwisler 2017-05-08 1160 goto out;
a9c42b33e Ross Zwisler 2017-05-08 1161 }
a7d73fe6c Christoph Hellwig 2016-09-19 1162
d2c43ef13 Jan Kara 2017-11-01 1163 if (write && !vmf->cow_page)
a7d73fe6c Christoph Hellwig 2016-09-19 1164 flags |= IOMAP_WRITE;
a7d73fe6c Christoph Hellwig 2016-09-19 1165
13e451fdc Jan Kara 2017-05-12 1166 entry = grab_mapping_entry(mapping, vmf->pgoff, 0);
13e451fdc Jan Kara 2017-05-12 1167 if (IS_ERR(entry)) {
13e451fdc Jan Kara 2017-05-12 1168 vmf_ret = dax_fault_return(PTR_ERR(entry));
13e451fdc Jan Kara 2017-05-12 1169 goto out;
13e451fdc Jan Kara 2017-05-12 1170 }
13e451fdc Jan Kara 2017-05-12 1171
a7d73fe6c Christoph Hellwig 2016-09-19 1172 /*
e2093926a Ross Zwisler 2017-06-02 1173 * It is possible, particularly with mixed reads & writes to private
e2093926a Ross Zwisler 2017-06-02 1174 * mappings, that we have raced with a PMD fault that overlaps with
e2093926a Ross Zwisler 2017-06-02 1175 * the PTE we need to set up. If so just return and the fault will be
e2093926a Ross Zwisler 2017-06-02 1176 * retried.
e2093926a Ross Zwisler 2017-06-02 1177 */
e2093926a Ross Zwisler 2017-06-02 1178 if (pmd_trans_huge(*vmf->pmd) || pmd_devmap(*vmf->pmd)) {
e2093926a Ross Zwisler 2017-06-02 1179 vmf_ret = VM_FAULT_NOPAGE;
e2093926a Ross Zwisler 2017-06-02 1180 goto unlock_entry;
e2093926a Ross Zwisler 2017-06-02 1181 }
e2093926a Ross Zwisler 2017-06-02 1182
e2093926a Ross Zwisler 2017-06-02 1183 /*
a7d73fe6c Christoph Hellwig 2016-09-19 1184 * Note that we don't bother to use iomap_apply here: DAX required
a7d73fe6c Christoph Hellwig 2016-09-19 1185 * the file system block size to be equal the page size, which means
a7d73fe6c Christoph Hellwig 2016-09-19 1186 * that we never have to deal with more than a single extent here.
a7d73fe6c Christoph Hellwig 2016-09-19 1187 */
a7d73fe6c Christoph Hellwig 2016-09-19 1188 error = ops->iomap_begin(inode, pos, PAGE_SIZE, flags, &iomap);
c0b246259 Jan Kara 2018-01-07 1189 if (iomap_errp)
c0b246259 Jan Kara 2018-01-07 1190 *iomap_errp = error;
a9c42b33e Ross Zwisler 2017-05-08 1191 if (error) {
a9c42b33e Ross Zwisler 2017-05-08 1192 vmf_ret = dax_fault_return(error);
13e451fdc Jan Kara 2017-05-12 1193 goto unlock_entry;
a9c42b33e Ross Zwisler 2017-05-08 1194 }
a7d73fe6c Christoph Hellwig 2016-09-19 1195 if (WARN_ON_ONCE(iomap.offset + iomap.length < pos + PAGE_SIZE)) {
13e451fdc Jan Kara 2017-05-12 1196 error = -EIO; /* fs corruption? */
13e451fdc Jan Kara 2017-05-12 1197 goto error_finish_iomap;
a7d73fe6c Christoph Hellwig 2016-09-19 1198 }
a7d73fe6c Christoph Hellwig 2016-09-19 1199
a7d73fe6c Christoph Hellwig 2016-09-19 1200 if (vmf->cow_page) {
31a6f1a6e Jan Kara 2017-11-01 1201 sector_t sector = dax_iomap_sector(&iomap, pos);
31a6f1a6e Jan Kara 2017-11-01 1202
a7d73fe6c Christoph Hellwig 2016-09-19 1203 switch (iomap.type) {
a7d73fe6c Christoph Hellwig 2016-09-19 1204 case IOMAP_HOLE:
a7d73fe6c Christoph Hellwig 2016-09-19 1205 case IOMAP_UNWRITTEN:
a7d73fe6c Christoph Hellwig 2016-09-19 1206 clear_user_highpage(vmf->cow_page, vaddr);
a7d73fe6c Christoph Hellwig 2016-09-19 1207 break;
a7d73fe6c Christoph Hellwig 2016-09-19 1208 case IOMAP_MAPPED:
cccbce671 Dan Williams 2017-01-27 1209 error = copy_user_dax(iomap.bdev, iomap.dax_dev,
cccbce671 Dan Williams 2017-01-27 1210 sector, PAGE_SIZE, vmf->cow_page, vaddr);
a7d73fe6c Christoph Hellwig 2016-09-19 1211 break;
a7d73fe6c Christoph Hellwig 2016-09-19 1212 default:
a7d73fe6c Christoph Hellwig 2016-09-19 1213 WARN_ON_ONCE(1);
a7d73fe6c Christoph Hellwig 2016-09-19 1214 error = -EIO;
a7d73fe6c Christoph Hellwig 2016-09-19 1215 break;
a7d73fe6c Christoph Hellwig 2016-09-19 1216 }
a7d73fe6c Christoph Hellwig 2016-09-19 1217
a7d73fe6c Christoph Hellwig 2016-09-19 1218 if (error)
13e451fdc Jan Kara 2017-05-12 1219 goto error_finish_iomap;
b1aa812b2 Jan Kara 2016-12-14 1220
b1aa812b2 Jan Kara 2016-12-14 1221 __SetPageUptodate(vmf->cow_page);
b1aa812b2 Jan Kara 2016-12-14 1222 vmf_ret = finish_fault(vmf);
b1aa812b2 Jan Kara 2016-12-14 1223 if (!vmf_ret)
b1aa812b2 Jan Kara 2016-12-14 1224 vmf_ret = VM_FAULT_DONE_COW;
13e451fdc Jan Kara 2017-05-12 1225 goto finish_iomap;
a7d73fe6c Christoph Hellwig 2016-09-19 1226 }
a7d73fe6c Christoph Hellwig 2016-09-19 1227
aaa422c4c Dan Williams 2017-11-13 1228 sync = dax_fault_is_synchronous(flags, vma, &iomap);
caa51d26f Jan Kara 2017-11-01 1229
a7d73fe6c Christoph Hellwig 2016-09-19 1230 switch (iomap.type) {
a7d73fe6c Christoph Hellwig 2016-09-19 1231 case IOMAP_MAPPED:
a7d73fe6c Christoph Hellwig 2016-09-19 1232 if (iomap.flags & IOMAP_F_NEW) {
a7d73fe6c Christoph Hellwig 2016-09-19 1233 count_vm_event(PGMAJFAULT);
a0987ad5c Jan Kara 2017-11-01 1234 count_memcg_event_mm(vma->vm_mm, PGMAJFAULT);
a7d73fe6c Christoph Hellwig 2016-09-19 1235 major = VM_FAULT_MAJOR;
a7d73fe6c Christoph Hellwig 2016-09-19 1236 }
1b5a1cb21 Jan Kara 2017-11-01 1237 error = dax_iomap_pfn(&iomap, pos, PAGE_SIZE, &pfn);
1b5a1cb21 Jan Kara 2017-11-01 1238 if (error < 0)
1b5a1cb21 Jan Kara 2017-11-01 1239 goto error_finish_iomap;
1b5a1cb21 Jan Kara 2017-11-01 1240
56addbc73 Andrew Morton 2018-04-14 1241 entry = dax_insert_mapping_entry(mapping, vmf, entry, pfn,
caa51d26f Jan Kara 2017-11-01 1242 0, write && !sync);
1b5a1cb21 Jan Kara 2017-11-01 1243 if (IS_ERR(entry)) {
1b5a1cb21 Jan Kara 2017-11-01 1244 error = PTR_ERR(entry);
1b5a1cb21 Jan Kara 2017-11-01 1245 goto error_finish_iomap;
1b5a1cb21 Jan Kara 2017-11-01 1246 }
1b5a1cb21 Jan Kara 2017-11-01 1247
caa51d26f Jan Kara 2017-11-01 1248 /*
caa51d26f Jan Kara 2017-11-01 1249 * If we are doing synchronous page fault and inode needs fsync,
caa51d26f Jan Kara 2017-11-01 1250 * we can insert PTE into page tables only after that happens.
caa51d26f Jan Kara 2017-11-01 1251 * Skip insertion for now and return the pfn so that caller can
caa51d26f Jan Kara 2017-11-01 1252 * insert it after fsync is done.
caa51d26f Jan Kara 2017-11-01 1253 */
caa51d26f Jan Kara 2017-11-01 1254 if (sync) {
caa51d26f Jan Kara 2017-11-01 1255 if (WARN_ON_ONCE(!pfnp)) {
caa51d26f Jan Kara 2017-11-01 1256 error = -EIO;
caa51d26f Jan Kara 2017-11-01 1257 goto error_finish_iomap;
caa51d26f Jan Kara 2017-11-01 1258 }
caa51d26f Jan Kara 2017-11-01 1259 *pfnp = pfn;
caa51d26f Jan Kara 2017-11-01 1260 vmf_ret = VM_FAULT_NEEDDSYNC | major;
caa51d26f Jan Kara 2017-11-01 1261 goto finish_iomap;
caa51d26f Jan Kara 2017-11-01 1262 }
1b5a1cb21 Jan Kara 2017-11-01 1263 trace_dax_insert_mapping(inode, vmf, entry);
1b5a1cb21 Jan Kara 2017-11-01 1264 if (write)
1b5a1cb21 Jan Kara 2017-11-01 @1265 error = vm_insert_mixed_mkwrite(vma, vaddr, pfn);
1b5a1cb21 Jan Kara 2017-11-01 1266 else
1b5a1cb21 Jan Kara 2017-11-01 1267 error = vm_insert_mixed(vma, vaddr, pfn);
1b5a1cb21 Jan Kara 2017-11-01 1268
9f141d6ef Jan Kara 2016-10-19 1269 /* -EBUSY is fine, somebody else faulted on the same PTE */
9f141d6ef Jan Kara 2016-10-19 1270 if (error == -EBUSY)
9f141d6ef Jan Kara 2016-10-19 1271 error = 0;
a7d73fe6c Christoph Hellwig 2016-09-19 1272 break;
a7d73fe6c Christoph Hellwig 2016-09-19 1273 case IOMAP_UNWRITTEN:
a7d73fe6c Christoph Hellwig 2016-09-19 1274 case IOMAP_HOLE:
d2c43ef13 Jan Kara 2017-11-01 1275 if (!write) {
91d25ba8a Ross Zwisler 2017-09-06 1276 vmf_ret = dax_load_hole(mapping, entry, vmf);
13e451fdc Jan Kara 2017-05-12 1277 goto finish_iomap;
1550290b0 Ross Zwisler 2016-11-08 1278 }
a7d73fe6c Christoph Hellwig 2016-09-19 1279 /*FALLTHRU*/
a7d73fe6c Christoph Hellwig 2016-09-19 1280 default:
a7d73fe6c Christoph Hellwig 2016-09-19 1281 WARN_ON_ONCE(1);
a7d73fe6c Christoph Hellwig 2016-09-19 1282 error = -EIO;
a7d73fe6c Christoph Hellwig 2016-09-19 1283 break;
a7d73fe6c Christoph Hellwig 2016-09-19 1284 }
a7d73fe6c Christoph Hellwig 2016-09-19 1285
13e451fdc Jan Kara 2017-05-12 1286 error_finish_iomap:
9f141d6ef Jan Kara 2016-10-19 1287 vmf_ret = dax_fault_return(error) | major;
1550290b0 Ross Zwisler 2016-11-08 1288 finish_iomap:
1550290b0 Ross Zwisler 2016-11-08 1289 if (ops->iomap_end) {
9f141d6ef Jan Kara 2016-10-19 1290 int copied = PAGE_SIZE;
9f141d6ef Jan Kara 2016-10-19 1291
9f141d6ef Jan Kara 2016-10-19 1292 if (vmf_ret & VM_FAULT_ERROR)
9f141d6ef Jan Kara 2016-10-19 1293 copied = 0;
9f141d6ef Jan Kara 2016-10-19 1294 /*
9f141d6ef Jan Kara 2016-10-19 1295 * The fault is done by now and there's no way back (other
9f141d6ef Jan Kara 2016-10-19 1296 * thread may be already happily using PTE we have installed).
9f141d6ef Jan Kara 2016-10-19 1297 * Just ignore error from ->iomap_end since we cannot do much
9f141d6ef Jan Kara 2016-10-19 1298 * with it.
9f141d6ef Jan Kara 2016-10-19 1299 */
9f141d6ef Jan Kara 2016-10-19 1300 ops->iomap_end(inode, pos, PAGE_SIZE, copied, flags, &iomap);
1550290b0 Ross Zwisler 2016-11-08 1301 }
13e451fdc Jan Kara 2017-05-12 1302 unlock_entry:
91d25ba8a Ross Zwisler 2017-09-06 1303 put_locked_mapping_entry(mapping, vmf->pgoff);
a9c42b33e Ross Zwisler 2017-05-08 1304 out:
a9c42b33e Ross Zwisler 2017-05-08 1305 trace_dax_pte_fault_done(inode, vmf, vmf_ret);
b1aa812b2 Jan Kara 2016-12-14 1306 return vmf_ret;
1550290b0 Ross Zwisler 2016-11-08 1307 }
642261ac9 Ross Zwisler 2016-11-08 1308
:::::: The code at line 1265 was first introduced by commit
:::::: 1b5a1cb21e0cdfb001050c76dc31039cdece1a63 dax: Inline dax_insert_mapping() into the callsite
:::::: TO: Jan Kara <jack@suse.cz>
:::::: CC: Dan Williams <dan.j.williams@intel.com>
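[Editorial note: the build failure confirms the review feedback that the dax caller must be converted in the same patch. A rough sketch of the corresponding fs/dax.c hunk follows -- illustrative only, since the write path would also need to skip the int-based error handling (the -EBUSY check at line 1270) now that the helper returns a vm_fault_t directly.]

```diff
 	trace_dax_insert_mapping(inode, vmf, entry);
 	if (write)
-		error = vm_insert_mixed_mkwrite(vma, vaddr, pfn);
-	else
-		error = vm_insert_mixed(vma, vaddr, pfn);
+		vmf_ret = vmf_insert_mixed_mkwrite(vma, vaddr, pfn) | major;
+	else
+		error = vm_insert_mixed(vma, vaddr, pfn);
```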
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation
[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 47804 bytes --]