* Re: NFS client write performance issue ... thoughts?
@ 2004-01-09 21:30 trond.myklebust
  2004-01-12 21:37 ` Paul Smith
  0 siblings, 1 reply; 14+ messages in thread
From: trond.myklebust @ 2004-01-09 21:30 UTC (permalink / raw)
  To: pausmith; +Cc: nfs

[-- Attachment #1: Type: text/plain, Size: 1338 bytes --]

On Thu, 08/01/2004 at 12:47, Paul Smith wrote:
> Do you know when those accounting errors were fixed?

In the official kernels? 2.4.22, I believe... However, there are patches
going back to 2.4.19...

See http://www.fys.uio.no/~trondmy/src/Linux-2.4.x/2.4.19/

The main patch you want will be linux-2.4.19-14-call_start.dif

>
> ClearCase implements its own virtual filesystem type, and so is
> heavily tied to specific kernels (the kernel module is not open source
> of course :( ).  We basically can move to any kernel that has been
> released as part of an official Red Hat release (say, 2.4.20-8 from
> RH9 would work), but no other kernels can be used (the ClearCase
> kernel module has checks on the sizes of various kernel structures and
> won't load if they're not what it thinks they should be--and since
> it's a filesystem it cares deeply about structures that have tended to
> change a lot.  It won't even work with vanilla kernel.org kernels of
> the same version.)

Blech...

Note: if you want to try implementing a scheme like the one you propose,
the simplest way to do it would be to apply something like the following
patch. It disables nfs_strategy() and causes nfs_updatepage() to extend
the request size when it sees that we are not using byte-range locking
and the complete page is in the cache.

Cheers,
  Trond




[-- Attachment #3: gnurr.dif --]
[-- Type: text/plain, Size: 1214 bytes --]

--- linux-2.4.23-rc1/fs/nfs/write.c.orig	2003-11-16 19:24:23.000000000 -0500
+++ linux-2.4.23-rc1/fs/nfs/write.c	2004-01-09 16:25:18.000000000 -0500
@@ -746,6 +746,7 @@ nfs_update_request(struct file* file, st
 static void
 nfs_strategy(struct inode *inode)
 {
+#if 0
 	unsigned int	dirty, wpages;
 
 	dirty  = inode->u.nfs_i.ndirty;
@@ -760,6 +761,7 @@ nfs_strategy(struct inode *inode)
 	if (dirty >= NFS_STRATEGY_PAGES * wpages)
 		nfs_flush_file(inode, NULL, 0, 0, 0);
 #endif
+#endif
 }
 
 int
@@ -849,8 +851,19 @@ nfs_updatepage(struct file *file, struct
 		SetPageUptodate(page);
 		nfs_unlock_request(req);
 		nfs_strategy(inode);
-	} else
+	} else {
+		/* If we are not locking, and the page is up to date,
+		 * we may write out the entire page for efficiency.
+		 */
+		if (Page_Uptodate(page) && inode->i_flock == 0) {
+			req->wb_offset = 0;
+			if (page->index < (inode->i_size >> PAGE_CACHE_SHIFT))
+				req->wb_bytes = PAGE_CACHE_SIZE;
+			else
+				req->wb_bytes = inode->i_size & PAGE_CACHE_MASK;
+		}
 		nfs_unlock_request(req);
+	}
 done:
         dprintk("NFS:      nfs_updatepage returns %d (isize %Ld)\n",
                                                 status, (long long)inode->i_size);


^ permalink raw reply	[flat|nested] 14+ messages in thread
* RE: NFS client write performance issue ... thoughts?
@ 2004-01-12 12:45 Mikkelborg, Kjetil
  2004-01-12 17:30 ` Paul Smith
  0 siblings, 1 reply; 14+ messages in thread
From: Mikkelborg, Kjetil @ 2004-01-12 12:45 UTC (permalink / raw)
  To: Paul Smith; +Cc: nfs



-----Original Message-----
From: Paul Smith [mailto:pausmith@nortelnetworks.com]
Sent: 8. januar 2004 18:47
To: nfs@lists.sourceforge.net
Subject: Re: [NFS] NFS client write performance issue ... thoughts?


%% <trond.myklebust@fys.uio.no> writes:

  tm> All you are basically showing here is that our write caching sucks
  tm> badly. There's nothing there to pinpoint merging vs not merging
  tm> requests as the culprit.

Good point.  I think that was "intuited" from other info, but I'll have
to check.

  tm> 3 things that will affect those numbers, and cloud the issue:

  tm>   1) Linux 2.4.x has a hard limit of 256 outstanding read+write
  tm> nfs_page struct per mountpoint in order to deal with the fact that
  tm> the VM does not have the necessary support to notify us when we
  tm> are low on memory (This limit has been removed in 2.6.x...).

OK.

  tm>   2) Linux immediately puts the write on the wire once there are
  tm> more than wsize bytes to write out. This explains why bumping
  tm> wsize results in fewer writes.

OK.

  tm>   3) There are accounting errors in Linux 2.4.18 that cause
  tm> retransmitted requests to be added to the total number of
  tm> transmitted ones. That explains why switching to TCP improves
  tm> matters.

Do you know when those accounting errors were fixed?

ClearCase implements its own virtual filesystem type, and so is heavily
tied to specific kernels (the kernel module is not open source of course
:( ).  We basically can move to any kernel that has been released as
part of an official Red Hat release (say, 2.4.20-8 from RH9 would work),
but no other kernels can be used (the ClearCase kernel module has checks
on the sizes of various kernel structures and won't load if they're not
what it thinks they should be--and since it's a filesystem it cares
deeply about structures that have tended to change a lot.  It won't even
work with vanilla kernel.org kernels of the same version.)

Actually, it does not look like ClearCase checks for an exact kernel
version; it just depends on Red Hat hacks in the kernel (I have no clue
which ones). Taking a 2.4.20-XX Red Hat kernel and building it from the
SRPM actually works. Furthermore, since building from the SRPM gives you
the kernel source, you can add as many patches as you want, as long as
those patches do not touch the same structures the ClearCase MVFS module
relies on. I managed to do some heavy modification of an RH9 kernel
SRPM, patch it up to the level I needed, and add support for diskless
boot, then use that on Fedora and still get ClearCase to work (I had to
tweak /etc/issue, since ClearCase actually checks for the
redhat(version) string).

  tm> Note: Try doing this with mmap(), and you will get very different
  tm> numbers, since mmap() can cache the entire database in memory, and
  tm> only flush it out when you msync() (or when memory pressure forces
  tm> it to do so).

OK... except that since we don't have the source, we can't switch to
mmap() without doing something very hacky, like introducing some kind of
shim shared library to remap some read/write calls to mmap().  Ouch.
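
For what it's worth, here is a minimal sketch of the kind of shim being
described, assuming an LD_PRELOAD interposer; the file names, build
command and fallback behaviour are illustrative only, not a tested
implementation:

/* shim.c -- hypothetical LD_PRELOAD sketch: turn small in-place write()s
 * into mmap()-backed stores so they stay in the page cache.
 * Build (assumed): gcc -shared -fPIC -o shim.so shim.c -ldl
 * Run   (assumed): LD_PRELOAD=./shim.so some_db_program
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static ssize_t (*real_write)(int, const void *, size_t);

ssize_t write(int fd, const void *buf, size_t count)
{
        struct stat st;
        off_t pos;
        long psz;

        if (!real_write)
                real_write = (ssize_t (*)(int, const void *, size_t))
                        dlsym(RTLD_NEXT, "write");

        pos = lseek(fd, 0, SEEK_CUR);
        /* Only intercept in-place overwrites of regular files; appends,
         * pipes, ttys etc. fall through to the real write(). */
        if (pos < 0 || fstat(fd, &st) < 0 || !S_ISREG(st.st_mode) ||
            pos + (off_t)count > st.st_size)
                return real_write(fd, buf, count);

        psz = sysconf(_SC_PAGESIZE);
        {
                off_t  map_off = pos & ~((off_t)psz - 1);
                size_t map_len = (size_t)(pos - map_off) + count;
                char  *p = mmap(NULL, map_len, PROT_READ | PROT_WRITE,
                                MAP_SHARED, fd, map_off);
                if (p == MAP_FAILED)
                        return real_write(fd, buf, count);
                /* The store just dirties the page in the cache; it gets
                 * written back later, not as one RPC per call. */
                memcpy(p + (pos - map_off), buf, count);
                munmap(p, map_len);
        }
        lseek(fd, pos + count, SEEK_SET);  /* keep the fd offset honest */
        return (ssize_t)count;
}

Anything that cannot be mapped simply falls back to the real write(),
which keeps the sketch safe but obviously far from production quality.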

Also I think that ClearCase _does_ force sync fairly regularly to be
sure the database is consistent.

  tm> One further criticism: there are no READ requests on the Sun
  tm> machine.  That suggests that it had the database entirely in cache
  tm> when you started your test.

Good point.


Thanks Trond!

--
-------------------------------------------------------------------------------
 Paul D. Smith <psmith@nortelnetworks.com>   HASMAT--HA Software Mthds & Tools
 "Please remain calm...I may be mad, but I am a professional." --Mad Scientist
-------------------------------------------------------------------------------
   These are my opinions---Nortel Networks takes no responsibility for them.



^ permalink raw reply	[flat|nested] 14+ messages in thread
* Re: NFS client write performance issue ... thoughts?
@ 2004-01-08 17:32 trond.myklebust
  2004-01-08 17:47 ` Paul Smith
  2004-01-08 17:48 ` trond.myklebust
  0 siblings, 2 replies; 14+ messages in thread
From: trond.myklebust @ 2004-01-08 17:32 UTC (permalink / raw)
  To: pausmith; +Cc: nfs

[-- Attachment #1: Type: text/plain, Size: 1999 bytes --]

On Thu, 08/01/2004 at 10:26, Paul Smith wrote:

> > View server on Linux 2.4.18-27 (zcard0pf):
> >
> >    Build time: 35.75s user 31.68s system 33% cpu 3:21.02 total
> >    RPC calls:     94922
> >    RPC retrans:       0
> >    NFS V3 WRITE:  63317
> >    NFS V3 COMMIT: 28916
> >    NFS V3 LOOKUP:  1067
> >    NFS V3 READ:     458
> >    NFS V3 GETATTR:  406
> >    NFS V3 ACCESS:     0
> >    NFS V3 REMOVE:     5
> >
> > View server on Solaris 5.8 (zcars0z4):
> >
> >    Build time:  35.50s user 32.09s system 46% cpu 2:26.36 total
> >    NFS calls:      3785
> >    RPC retrans:       0
> >    NFS V3 WRITE:    612
> >    NFS V3 COMMIT:     7
> >    NFS V3 LOOKUP:  1986
> >    NFS V3 READ:       0
> >    NFS V3 GETATTR:  532
> >    NFS V3 ACCESS:   291
> >    NFS V3 REMOVE:   291

All you are basically showing here is that our write caching sucks
badly. There's nothing there to pinpoint merging vs not merging requests
as the culprit.

3 things that will affect those numbers, and cloud the issue:

  1) Linux 2.4.x has a hard limit of 256 outstanding read+write nfs_page
struct per mountpoint in order to deal with the fact that the VM does
not have the necessary support to notify us when we are low on memory
(This limit has been removed in 2.6.x...).

  2) Linux immediately puts the write on the wire once there are more
than wsize bytes to write out. This explains why bumping wsize results
in fewer writes.

  3) There are accounting errors in Linux 2.4.18 that cause
retransmitted requests to be added to the total number of transmitted
ones. That explains why switching to TCP improves matters.


Note: Try doing this with mmap(), and you will get very different
numbers, since mmap() can cache the entire database in memory, and only
flush it out when you msync() (or when memory pressure forces it to do
so).
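
For reference, a minimal user-space sketch of that pattern (the file
name is made up and error handling is trimmed): the small stores only
dirty the page cache, and the WRITE RPCs happen at msync()/munmap()
time.

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
        struct stat st;
        char *db;
        int fd = open("view.db", O_RDWR);      /* hypothetical database file */

        if (fd < 0 || fstat(fd, &st) < 0)
                return 1;

        db = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                  MAP_SHARED, fd, 0);
        if (db == MAP_FAILED)
                return 1;

        db[100] = 1;            /* many small, non-contiguous updates...   */
        db[8192 + 7] = 2;       /* ...stay cached; nothing on the wire yet */

        msync(db, st.st_size, MS_SYNC); /* whole dirty pages flushed here */
        munmap(db, st.st_size);
        close(fd);
        return 0;
}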

One further criticism: there are no READ requests on the Sun machine.
That suggests that it had the database entirely in cache when you
started your test.

Cheers,
  Trond




^ permalink raw reply	[flat|nested] 14+ messages in thread
* RE: NFS client write performance issue ... thoughts?
@ 2004-01-07 20:50 Lever, Charles
  0 siblings, 0 replies; 14+ messages in thread
From: Lever, Charles @ 2004-01-07 20:50 UTC (permalink / raw)
  To: Paul Smith; +Cc: nfs

ClearCase is a unique situation.

i would love an opportunity to work directly with the
Rational folks to make their products work well on
Linux NFS.  my (limited) experience with ClearCase
is that it is not terribly NFS friendly.


> -----Original Message-----
> From: Paul Smith [mailto:pausmith@nortelnetworks.com]
> Sent: Tuesday, January 06, 2004 1:10 PM
> To: nfs@lists.sourceforge.net
> Subject: Re: [NFS] NFS client write performance issue ... thoughts?
>
> %% "Lever, Charles" <Charles.Lever@netapp.com> writes:
>
>   lc> large commercial databases write whole pages, and never
>   lc> parts of pages, at once, to their data files.  and they
>   lc> write log files by extending them in a single write
>   lc> request.
>
>   lc> thus the single-write-request per-page limit is not a
>   lc> problem for them.
>
> I'm sure you're correct, but in our environment (ClearCase) the usage
> characteristics are very different.
>
> I'm working on getting you some hard numbers but I think we do all
> agree that for this particular use case as I've described it, the
> Linux method would result in less performance than the Sun method.
> I'm not saying the Sun method is better in all cases, or even in most
> cases, I'm just saying that for this particular usage we are seeing a
> performance penalty on Linux.
>
> The question is, is there anything to be done about this?  Or is this
> too much of a niche situation for the folks on this list to worry much
> about?
>
> I took Trond's comments on using mmap() to heart: in retrospect it
> surprises me that they don't already use mmap() because I would think
> that would give better performance.  But in any case all we can do is
> suggest this to IBM/Rational and a major change like that will be a
> long time coming, even if they do accept it is a good idea.
>
> --
> -------------------------------------------------------------------------------
>  Paul D. Smith <psmith@nortelnetworks.com>   HASMAT--HA Software Mthds & Tools
>  "Please remain calm...I may be mad, but I am a professional." --Mad Scientist
> -------------------------------------------------------------------------------
>    These are my opinions---Nortel Networks takes no responsibility for them.
>



^ permalink raw reply	[flat|nested] 14+ messages in thread
* RE: NFS client write performance issue ... thoughts?
@ 2004-01-06 16:17 Lever, Charles
  2004-01-06 18:10 ` Paul Smith
  0 siblings, 1 reply; 14+ messages in thread
From: Lever, Charles @ 2004-01-06 16:17 UTC (permalink / raw)
  To: Paul Smith, nfs

paul-

large commercial databases write whole pages, and never
parts of pages, at once, to their data files.  and they
write log files by extending them in a single write
request.

thus the single-write-request per-page limit is not a
problem for them.
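
As a rough illustration of that pattern (the file names and the 4KB page
size are assumptions of mine, not from any real database), the data file
only ever sees full, aligned page writes and the log only grows by one
contiguous write per record, so no page ever carries more than one
outstanding request:

#define _XOPEN_SOURCE 500               /* for pwrite() */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define DB_PAGE 4096                    /* assumed page/block size */

int main(void)
{
        char page[DB_PAGE], rec[128];
        int data = open("table.dat", O_RDWR | O_CREAT, 0644);           /* made up */
        int log  = open("redo.log",  O_WRONLY | O_CREAT | O_APPEND, 0644);

        if (data < 0 || log < 0)
                return 1;

        memset(page, 0, sizeof(page));
        memset(rec, 'r', sizeof(rec));

        /* Data file: always a full, aligned page per write. */
        pwrite(data, page, DB_PAGE, 3 * DB_PAGE);

        /* Log file: extend with one contiguous write per record. */
        write(log, rec, sizeof(rec));

        fsync(data);
        fsync(log);
        close(data);
        close(log);
        return 0;
}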

> -----Original Message-----
> From: Paul Smith [mailto:pausmith@nortelnetworks.com]
> Sent: Monday, January 05, 2004 5:11 PM
> To: nfs@lists.sourceforge.net
> Subject: [NFS] NFS client write performance issue ... thoughts?
>
> Hi all; we've been doing some examination of NFS client performance,
> and have seen this apparently sub-optimal behavior: anyone here have
> any comments on this observation or thoughts about it?
>
> Thanks!
>
> > I was looking at the code (which doesn't appear to be fixed in 2.6),
> > and here's what I think it's doing.
> >
> > The Linux NFS client code is capable of only remembering a write to
> > a single contiguous chunk in each 4KB page of a file.  If a second
> > non-contiguous write occurs to the page, the first write has to be
> > flushed to the server.  So if the view server is seeking around in a
> > file writing a few bytes here and a few there, whenever it does a
> > write to a page that has already been written, the first write has
> > to be flushed to the server.  The code that does most of this is in
> > fs/nfs/write.c, function nfs_update_request().  That's the routine
> > that, given a new write request, searches for an existing one that
> > is contiguous with the new one.  If it finds a contiguous request,
> > the new one is coalesced with it and no NFS activity is required.
> > If instead it finds a pending write to the same page, it returns
> > EBUSY to its caller, which tells the caller (nfs_updatepage()) to
> > synchronously write the existing request to the server.
> >
> > The Solaris NFS client (and probably most other NFS client
> > implementations) doesn't work this way.  Whenever a small write is
> > made to a block of a file, the block is read from the server, and
> > then the write is applied to the cached block.  If the block is
> > already in cache, the write is applied to the block without any NFS
> > transactions being required.  When the file is closed or fsync()ed,
> > the entire block is written to the server.  So the client code
> > doesn't need to record each individual small write to a block, it
> > just modifies the cached block as necessary.
> >
> > I don't know why the Linux NFS client doesn't work this way, but I
> > think it wouldn't be difficult to make it work that way.  In some
> > scenarios, this might be a performance hit (because an entire block
> > of the file has to be read from the server just to write a few bytes
> > to it), but I think that in most cases, doing it the Solaris way
> > would be a performance win.  I'd guess that if any sort of database,
> > such as a GDBM database, is accessed over NFS, the Linux method
> > would result in much poorer performance than the Solaris method.
>
> --
> -------------------------------------------------------------------------------
>  Paul D. Smith <psmith@nortelnetworks.com>   HASMAT: HA Software Mthds & Tools
>  "Please remain calm...I may be mad, but I am a professional." --Mad Scientist
> -------------------------------------------------------------------------------
>    These are my opinions---Nortel Networks takes no responsibility for them.



^ permalink raw reply	[flat|nested] 14+ messages in thread
* Re: NFS client write performance issue ... thoughts?
@ 2004-01-06  4:34 Trond Myklebust
  2004-01-06  6:33 ` Paul Smith
  0 siblings, 1 reply; 14+ messages in thread
From: Trond Myklebust @ 2004-01-06  4:34 UTC (permalink / raw)
  To: nfs


On Mon, 05/01/2004 at 17:11, Paul Smith wrote:
> Hi all; we've been doing some examination of NFS client performance, and
> have seen this apparently sub-optimal behavior: anyone here have any
> comments on this observation or thoughts about it?

Does your observation include numbers, or is it just conjecture?

If we actually are doing contiguous sequential writes, the Linux
implementation has the obvious advantage that it doesn't issue any
read requests to the server at all.

The Solaris approach is only a win in the particular case where you
are doing several non-contiguous writes into the same page (and
without any byte-range locking).
Note that, as on Linux, two processes that do not share the same RPC
credentials still cannot merge their writes, since you cannot rely on
them having the same file write permissions.

Looking at your particular example of GDBM, you should recall that
Solaris is forced to revert to uncached synchronous reads and writes
when doing byte range locking precisely because their page cache
writes back entire pages (the ugly alternative would be to demand that
byte range locks must be page-aligned).
OTOH Linux can continue to do cached asynchronous reads and writes
right up until the user forces an fsync()+cache invalidation by
changing the locking range because our page cache writes are not
required to be page aligned.

However, GDBM is a pretty poor example of a database. Large
professional databases will tend to want to use their own custom
locking protocols, and manage their caching entirely on their own
(using O_DIRECT in order to circumvent the kernel page cache). The
asynchronous read/write code doesn't apply at all in this case.
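
As a rough sketch of that O_DIRECT pattern (the 4096-byte alignment and
the file name are assumptions; the real alignment requirement depends on
the filesystem and device):

#define _GNU_SOURCE                     /* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        void *buf;
        int fd = open("tablespace.dat", O_RDWR | O_CREAT | O_DIRECT, 0644);

        if (fd < 0)
                return 1;
        if (posix_memalign(&buf, 4096, 4096))   /* buffer must be aligned */
                return 1;
        memset(buf, 0, 4096);

        /* With O_DIRECT, offset, length and buffer are all block-aligned;
         * the transfer bypasses the kernel page cache entirely. */
        if (pwrite(fd, buf, 4096, 0) != 4096)
                return 1;
        fsync(fd);      /* O_DIRECT does not by itself guarantee durability */

        free(buf);
        close(fd);
        return 0;
}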


Finally, please note that you can actually obtain the Solaris
behaviour on the existing Linux NFS client, if you so desire, by
replacing write() with mmap().

Cheers,
  Trond



^ permalink raw reply	[flat|nested] 14+ messages in thread
* NFS client write performance issue ... thoughts?
@ 2004-01-05 22:11 Paul Smith
  2004-01-08 15:26 ` Paul Smith
  0 siblings, 1 reply; 14+ messages in thread
From: Paul Smith @ 2004-01-05 22:11 UTC (permalink / raw)
  To: nfs

Hi all; we've been doing some examination of NFS client performance, and
have seen this apparently sub-optimal behavior: anyone here have any
comments on this observation or thoughts about it?

Thanks!

> I was looking at the code (which doesn't appear to be fixed in 2.6),
> and here's what I think it's doing.
> 
> The Linux NFS client code is capable of only remembering a write to a
> single contiguous chunk in each 4KB page of a file.  If a second
> non-contiguous write occurs to the page, the first write has to be
> flushed to the server.  So if the view server is seeking around in a
> file writing a few bytes here and a few there, whenever it does a
> write to a page that has already been written, the first write has to
> be flushed to the server.  The code that does most of this is in
> fs/nfs/write.c, function nfs_update_request().  That's the routine
> that, given a new write request, searches for an existing one that is
> contiguous with the new one.  If it finds a contiguous request, the
> new one is coalesced with it and no NFS activity is required.  If
> instead it finds a pending write to the same page, it returns EBUSY to
> its caller, which tells the caller (nfs_updatepage()) to synchronously
> write the existing request to the server.
> 
> The Solaris NFS client (and probably most other NFS client
> implementations) doesn't work this way.  Whenever a small write is
> made to a block of a file, the block is read from the server, and then
> the write is applied to the cached block.  If the block is already in
> cache, the write is applied to the block without any NFS transactions
> being required.  When the file is closed or fsync()ed, the entire
> block is written to the server.  So the client code doesn't need to
> record each individual small write to a block, it just modifies the
> cached block as necessary.
> 
> I don't know why the Linux NFS client doesn't work this way, but I
> think it wouldn't be difficult to make it work that way.  In some
> scenarios, this might be a performance hit (because an entire block of
> the file has to be read from the server just to write a few bytes to
> it), but I think that in most cases, doing it the Solaris way would be
> a performance win.  I'd guess that if any sort of database, such as a
> GDBM database, is accessed over NFS, the Linux method would result in
> much poorer performance than the Solaris method.
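
For anyone who wants to see the pattern in isolation, here is a tiny
sketch (file name made up, assumed to live on an NFS mount) of exactly
the case described above: two non-contiguous writes landing in the same
4KB page, which forces the first request to be flushed before the second
can be set up.

#define _XOPEN_SOURCE 500               /* for pwrite() */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
        char a = 'a', b = 'b';
        int fd = open("scratch.db", O_RDWR | O_CREAT, 0644);    /* on NFS */

        if (fd < 0)
                return 1;

        pwrite(fd, &a, 1, 0);           /* first request covering page 0    */
        pwrite(fd, &b, 1, 2048);        /* same page, not contiguous: the   */
                                        /* first request is written out now */
        close(fd);
        return 0;
}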


-- 
-------------------------------------------------------------------------------
 Paul D. Smith <psmith@nortelnetworks.com>   HASMAT: HA Software Mthds & Tools
 "Please remain calm...I may be mad, but I am a professional." --Mad Scientist
-------------------------------------------------------------------------------
   These are my opinions---Nortel Networks takes no responsibility for them.



^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2004-01-12 21:37 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2004-01-09 21:30 NFS client write performance issue ... thoughts? trond.myklebust
2004-01-12 21:37 ` Paul Smith
  -- strict thread matches above, loose matches on Subject: below --
2004-01-12 12:45 Mikkelborg, Kjetil
2004-01-12 17:30 ` Paul Smith
2004-01-08 17:32 trond.myklebust
2004-01-08 17:47 ` Paul Smith
2004-01-08 17:48 ` trond.myklebust
2004-01-07 20:50 Lever, Charles
2004-01-06 16:17 Lever, Charles
2004-01-06 18:10 ` Paul Smith
2004-01-06  4:34 Trond Myklebust
2004-01-06  6:33 ` Paul Smith
2004-01-05 22:11 Paul Smith
2004-01-08 15:26 ` Paul Smith
