* RE: last_block() vs. min_bs when > blockalign
@ 2015-01-27 22:42 Justin Eno (jeno)
  2015-01-28 16:11 ` Jens Axboe
  0 siblings, 1 reply; 4+ messages in thread
From: Justin Eno (jeno) @ 2015-01-27 22:42 UTC (permalink / raw)
  To: fio

[-- Attachment #1: Type: text/plain, Size: 4030 bytes --]

The attached updated patch produces the desired behavior.  If min_bs is larger than blockalign, last_block() now returns a suitable value.

Thanks,
Justin



[-- Attachment #2: last_block_v2.patch --]
[-- Type: application/octet-stream, Size: 957 bytes --]

From f4f9dd6eb3b81f2cb79aa3c41fd4d131cca91b6c Mon Sep 17 00:00:00 2001
From: Justin Eno <jeno@micron.com>
Date: Tue, 27 Jan 2015 14:23:20 -0800
Subject: [PATCH] Better accommodate random writes larger than blockalign

fill_io_u() fails prematurely if the randomly-chosen offset satisfies
blockalign but not min_bs, i.e., the offset lies too near the end of
the target region.  This change honors both parameters.

Signed-off-by: Justin Eno <jeno@micron.com>
---
 io_u.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/io_u.c b/io_u.c
index 5971d78..77a3abf 100644
--- a/io_u.c
+++ b/io_u.c
@@ -68,6 +68,9 @@ static uint64_t last_block(struct thread_data *td, struct fio_file *f,
 	if (td->o.zone_range)
 		max_size = td->o.zone_range;
 
+	if (td->o.min_bs[ddir] > td->o.ba[ddir])
+		max_size -= td->o.min_bs[ddir] - td->o.ba[ddir];
+
 	max_blocks = max_size / (uint64_t) td->o.ba[ddir];
 	if (!max_blocks)
 		return 0;
-- 
1.7.1



* Re: last_block() vs. min_bs when > blockalign
  2015-01-27 22:42 last_block() vs. min_bs when > blockalign Justin Eno (jeno)
@ 2015-01-28 16:11 ` Jens Axboe
  0 siblings, 0 replies; 4+ messages in thread
From: Jens Axboe @ 2015-01-28 16:11 UTC (permalink / raw)
  To: Justin Eno (jeno), fio

Thanks, applied.


On 01/27/2015 03:42 PM, Justin Eno (jeno) wrote:
> The attached updated patch produces the desired behavior.  If min_bs is larger than blockalign, last_block() now returns a suitable value.

-- 
Jens Axboe




* RE: last_block() vs. min_bs when > blockalign
@ 2015-01-27 21:34 Justin Eno (jeno)
  0 siblings, 0 replies; 4+ messages in thread
From: Justin Eno (jeno) @ 2015-01-27 21:34 UTC (permalink / raw)
  To: fio

Please disregard the patch - it incorrectly limits the target write range.  I will work on a proper fix.  Sorry for the noise.

-Justin




* last_block() vs. min_bs when > blockalign
@ 2015-01-27 20:00 Justin Eno (jeno)
  0 siblings, 0 replies; 4+ messages in thread
From: Justin Eno (jeno) @ 2015-01-27 20:00 UTC (permalink / raw)
  To: fio

[-- Attachment #1: Type: text/plain, Size: 3376 bytes --]

Hi Jens and all,

I have found that when performing mixed-size random writes, fio stops writing early because last_block() does not respect min_bs.
The attached workload demonstrates the issue.  When run with io debugging enabled, "failed getting buflen" is reported, and the
written data falls short of the total requested (in this case, 10M), e.g.,
###############################################
[jeno@rhel fio-clean]$ ./fio --debug=io examples/random-fill.fio
fio: Any use of blockalign= turns off randommap
fio: set debug option io
io       25214 load ioengine sync
write-phase: (g=0): rw=randwrite, bs=1K-5K/1K-5K/1K-5K, ioengine=sync, iodepth=1
fio-2.2.5-3-g209e
Starting 1 thread
write-phase: Laying out IO file(s) (1 file(s) / 5MB)
io       25214 invalidate cache datafile.tmp: 0/5242880
io       25214 invalidate cache datafile.tmp: 0/5242880
io       25214 fill_io_u: io_u 0x7fe49c003d40: off=315904/len=1024/ddir=1/datafile.tmp
io       25214 prep: io_u 0x7fe49c003d40: off=315904/len=1024/ddir=1/datafile.tmp
io       25214 ->prep(0x7fe49c003d40)=0
io       25214 queue: io_u 0x7fe49c003d40: off=315904/len=1024/ddir=1/datafile.tmp
io       25214 io complete: io_u 0x7fe49c003d40: off=315904/len=1024/ddir=1/datafile.tmp
...
io       25214 fill_io_u: io_u 0x7fe49c003d40: off=4930560/len=5120/ddir=1/datafile.tmp
io       25214 prep: io_u 0x7fe49c003d40: off=4930560/len=5120/ddir=1/datafile.tmp
io       25214 ->prep(0x7fe49c003d40)=0
io       25214 queue: io_u 0x7fe49c003d40: off=4930560/len=5120/ddir=1/datafile.tmp
io       25214 io complete: io_u 0x7fe49c003d40: off=4930560/len=5120/ddir=1/datafile.tmp
io       25214 io_u 0x7fe49c003d40, failed getting buflen
io       25214 io_u 0x7fe49c003d40, setting file failed
io       25214 get_io_u failed
io       25214 close ioengine sync
io       25214 free ioengine sync

write-phase: (groupid=0, jobs=1): err= 0: pid=25281: Tue Jan 27 10:33:50 2015
  write: io=4335.0KB, bw=585007B/s, iops=195, runt=  7588msec
    clat (usec): min=762, max=17788, avg=5100.38, stdev=2509.05
     lat (usec): min=764, max=17790, avg=5101.80, stdev=2509.06
    clat percentiles (usec):
     |  1.00th=[  868],  5.00th=[ 1240], 10.00th=[ 1704], 20.00th=[ 2544],
     | 30.00th=[ 3440], 40.00th=[ 4192], 50.00th=[ 5024], 60.00th=[ 5984],
     | 70.00th=[ 6752], 80.00th=[ 7648], 90.00th=[ 8384], 95.00th=[ 8896],
     | 99.00th=[ 9280], 99.50th=[10304], 99.90th=[17792], 99.95th=[17792],
     | 99.99th=[17792]
    bw (KB  /s): min=  520, max=  640, per=99.98%, avg=570.87, stdev=35.30
    lat (usec) : 1000=2.76%
    lat (msec) : 2=11.25%, 4=22.76%, 10=62.63%, 20=0.61%
  cpu          : usr=0.12%, sys=0.28%, ctx=1486, majf=0, minf=1
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=1485/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: io=4335KB, aggrb=571KB/s, minb=571KB/s, maxb=571KB/s, mint=7588msec, maxt=7588msec
...
###############################################

The attached patch changes last_block() to respect min_bs.

Thanks,
Justin


[-- Attachment #2: last_block.patch --]
[-- Type: application/octet-stream, Size: 1173 bytes --]

From 5a73db26d5ebd8137b1446f6bb14e2b4e4f48ea9 Mon Sep 17 00:00:00 2001
From: Justin Eno <jeno@micron.com>
Date: Tue, 27 Jan 2015 11:12:50 -0800
Subject: [PATCH] Better accommodate random writes larger than blockalign

fill_io_u() fails prematurely if the randomly-chosen offset satisfies
blockalign but not min_bs, i.e., the offset lies too near the end of
the target region.  This change honors both parameters.

Signed-off-by: Justin Eno <jeno@micron.com>
---
 io_u.c |    7 +++++++
 1 files changed, 7 insertions(+), 0 deletions(-)

diff --git a/io_u.c b/io_u.c
index 5971d78..b1aa8a8 100644
--- a/io_u.c
+++ b/io_u.c
@@ -55,6 +55,7 @@ static uint64_t last_block(struct thread_data *td, struct fio_file *f,
 {
 	uint64_t max_blocks;
 	uint64_t max_size;
+	uint64_t min_size;
 
 	assert(ddir_rw(ddir));
 
@@ -69,6 +70,12 @@ static uint64_t last_block(struct thread_data *td, struct fio_file *f,
 		max_size = td->o.zone_range;
 
 	max_blocks = max_size / (uint64_t) td->o.ba[ddir];
+	min_size = td->o.ba[ddir];
+	while (min_size < td->o.min_bs[ddir]) {
+		min_size += td->o.ba[ddir];
+	}
+
+	max_blocks = max_size / min_size;
 	if (!max_blocks)
 		return 0;
 
-- 
1.7.1


[-- Attachment #3: random-fill.fio --]
[-- Type: application/octet-stream, Size: 220 bytes --]

; writes mixed-size blocks
[global]
thread=1
direct=1
ioengine=sync
bsrange=1k:5k
blockalign=512
offset=0
size=5M
io_limit=10M

[write-phase]
filename=datafile.tmp	; or use a full disk, for example /dev/sda
rw=randwrite

