* Re: power loss results in mounting failure on SLC NAND
       [not found] <CWLP265MB0721344DBE1B7785E7D2C6A5D2DE0@CWLP265MB0721.GBRP265.PROD.OUTLOOK.COM>
@ 2018-03-09 20:12 ` Richard Weinberger
  2018-03-14 12:53   ` Richard Weinberger
  0 siblings, 1 reply; 6+ messages in thread
From: Richard Weinberger @ 2018-03-09 20:12 UTC (permalink / raw)
  To: martin bayern; +Cc: linux-mtd

Martin,

On Friday, March 9, 2018, 20:39:27 CET, martin bayern wrote:
> Hi,
> 
> I encountered an issue on my board during power-loss immunity testing. It
> has SLC NAND and runs Linux 4.2. From my initial analysis, the issue seems
> to be related to fastmap.
> 
> With fastmap disabled, the corrupted PEB, which was damaged by a power cut
> while it was being erased, is re-erased during UBI attach, so in the end I
> don't see a mounting failure.
> 
> But with fastmap enabled, I do see a mounting failure.
> 
> I traced the UBI source code: during attach, UBI scans every PEB and reads
> the EC and VID pages; if both pages are damaged, the PEB is erased later
> because the earlier erase was interrupted. In the fastmap case UBI does not
> scan each PEB, so maybe it misses re-erasing the corrupted PEB?

So, in this case the upper layer, namely UBIFS, will be unhappy because it 
cannot deal with the ECC error, right?
I fear fastmap changed the semantics of UBI a bit and UBIFS trips over that.
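
For reference, the non-fastmap attach behaviour Martin describes boils down
to roughly this per-PEB decision (a simplified sketch, not the literal
drivers/mtd/ubi/attach.c code; it assumes the UBI-internal error codes from
drivers/mtd/ubi/ubi.h and ignores the empty-PEB and bitflip cases):

static bool peb_needs_reerase(int ec_hdr_err, int vid_hdr_err)
{
	/*
	 * If both the EC and the VID header are unreadable, the erase was
	 * interrupted by the power cut; a full scan queues such a PEB for
	 * re-erase instead of handing it to the upper layers.
	 */
	bool ec_bad = ec_hdr_err == UBI_IO_BAD_HDR ||
		      ec_hdr_err == UBI_IO_BAD_HDR_EBADMSG;
	bool vid_bad = vid_hdr_err == UBI_IO_BAD_HDR ||
		       vid_hdr_err == UBI_IO_BAD_HDR_EBADMSG ||
		       vid_hdr_err == UBI_IO_FF_BITFLIPS;

	return ec_bad && vid_bad;
}

With a fastmap attach this full scan is skipped for most PEBs, which is why
the corrupted block survives until UBIFS stumbles over it.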

Does the following patch make UBIFS happy again?
If so, we need to improve it further so it is smarter and checks every
mapping only once.

diff --git a/drivers/mtd/ubi/eba.c b/drivers/mtd/ubi/eba.c
index 250e30fac61b..617742948442 100644
--- a/drivers/mtd/ubi/eba.c
+++ b/drivers/mtd/ubi/eba.c
@@ -490,6 +490,37 @@ int ubi_eba_unmap_leb(struct ubi_device *ubi, struct ubi_volume *vol,
 	return err;
 }
 
+int fixup_mapping(struct ubi_device *ubi, struct ubi_volume *vol, int lnum, int *pnum)
+{
+	int err;
+	struct ubi_vid_io_buf *vidb;
+
+	if (!ubi->fast_attach)
+		return 0;
+
+	vidb = ubi_alloc_vid_buf(ubi, GFP_NOFS);
+	if (!vidb)
+		return -ENOMEM;
+
+	err = ubi_io_read_vid_hdr(ubi, *pnum, vidb, 0);
+	if (err > 0 && err != UBI_IO_BITFLIPS) {
+		int torture = 0;
+
+		if (err == UBI_IO_BAD_HDR_EBADMSG || err == UBI_IO_FF_BITFLIPS)
+			torture = 1;
+
+		down_read(&ubi->fm_eba_sem);
+		vol->eba_tbl->entries[lnum].pnum = UBI_LEB_UNMAPPED;
+		up_read(&ubi->fm_eba_sem);
+		ubi_wl_put_peb(ubi, vol->vol_id, lnum, *pnum, torture);
+
+		*pnum = -1;
+	}
+
+	ubi_free_vid_buf(vidb);
+	return 0;
+}
+
 /**
  * ubi_eba_read_leb - read data.
  * @ubi: UBI device description object
@@ -522,6 +553,13 @@ int ubi_eba_read_leb(struct ubi_device *ubi, struct ubi_volume *vol, int lnum,
 		return err;
 
 	pnum = vol->eba_tbl->entries[lnum].pnum;
+
+	if (pnum >= 0) {
+		err = fixup_mapping(ubi, vol, lnum, &pnum);
+		if (err < 0)
+			goto out_unlock;
+	}
+
 	if (pnum < 0) {
 		/*
 		 * The logical eraseblock is not mapped, fill the whole buffer
@@ -931,6 +969,12 @@ int ubi_eba_write_leb(struct ubi_device *ubi, struct ubi_volume *vol, int lnum,
 
 	pnum = vol->eba_tbl->entries[lnum].pnum;
 	if (pnum >= 0) {
+		err = fixup_mapping(ubi, vol, lnum, &pnum);
+		if (err < 0)
+			goto out;
+	}
+
+	if (pnum >= 0) {
 		dbg_eba("write %d bytes at offset %d of LEB %d:%d, PEB %d",
 			len, offset, vol_id, lnum, pnum);

Thanks,
//richard


* Re: power loss results in mounting failure on SLC NAND
  2018-03-09 20:12 ` power loss results in mounting failure on SLC NAND Richard Weinberger
@ 2018-03-14 12:53   ` Richard Weinberger
  2018-03-21 16:05     ` Richard Weinberger
  0 siblings, 1 reply; 6+ messages in thread
From: Richard Weinberger @ 2018-03-14 12:53 UTC (permalink / raw)
  To: Richard Weinberger; +Cc: martin bayern, linux-mtd

On Fri, Mar 9, 2018 at 9:12 PM, Richard Weinberger <richard@nod.at> wrote:
> [...]

Did this help?

-- 
Thanks,
//richard


* Re: power loss results in mounting failure on SLC NAND
  2018-03-14 12:53   ` Richard Weinberger
@ 2018-03-21 16:05     ` Richard Weinberger
  2018-03-22 15:05       ` martin bayern
  0 siblings, 1 reply; 6+ messages in thread
From: Richard Weinberger @ 2018-03-21 16:05 UTC (permalink / raw)
  To: martin bayern; +Cc: linux-mtd

On Wed, Mar 14, 2018 at 1:53 PM, Richard Weinberger
<richard.weinberger@gmail.com> wrote:
> On Fri, Mar 9, 2018 at 9:12 PM, Richard Weinberger <richard@nod.at> wrote:
> Did this help?

*kind ping*

I'd really love to hear whether the patch helps. I think we need to
address this case, but first I'd like to know whether the patch works for you.

-- 
Thanks,
//richard


* Re: power loss results in mounting failure on SLC NAND
  2018-03-21 16:05     ` Richard Weinberger
@ 2018-03-22 15:05       ` martin bayern
  2018-04-12  8:55         ` martin bayern
  0 siblings, 1 reply; 6+ messages in thread
From: martin bayern @ 2018-03-22 15:05 UTC (permalink / raw)
  To: Richard Weinberger; +Cc: linux-mtd

Dear Richard,
Sorry, I didn't forget you or this patch.
Last week I was on vacation; this week I am working on it.
I will get back to you as soon as I have any new status.
I have reflashed my NAND, so let me test again.

>*kind ping*

>I'd really love to hear whether the patch helps. I think we need to
>address this case, but first I'd like to hear whether it helped.

>--
>Thanks,
>//richard


* Re: power loss results in mounting failure on SLC NAND
  2018-03-22 15:05       ` martin bayern
@ 2018-04-12  8:55         ` martin bayern
  2018-05-24 20:12           ` martin bayern
  0 siblings, 1 reply; 6+ messages in thread
From: martin bayern @ 2018-04-12  8:55 UTC (permalink / raw)
  To: Richard Weinberger; +Cc: linux-mtd

Hi Richard,
It seems to work, but I cannot be sure yet; I need more tests.
Normally, all blocks corrupted by a power loss during erasing are re-erased during the attach stage.
As you know, it is not easy to reproduce: the corrupted block has to lie beyond the first 64 PEBs, and
it has to be corrupted by a power loss during erasing. I only reproduced it once during the last two
weeks of testing, maybe because I reflashed the NAND. Anyway, as soon as I have any new updates, I will
get back to you.


* Re: power loss results in mounting failure on SLC NAND
  2018-04-12  8:55         ` martin bayern
@ 2018-05-24 20:12           ` martin bayern
  0 siblings, 0 replies; 6+ messages in thread
From: martin bayern @ 2018-05-24 20:12 UTC (permalink / raw)
  To: Richard Weinberger; +Cc: linux-mtd

Hi Richard,
We finally reproduced the issue again, the same as before. With the patch applied and fastmap still
enabled, we got the following log after power-on, so we can now confirm that this patch fixes the issue.
Please check.
Is this a preliminary patch for the issue, or is there a better solution?
I notice the patch introduces a small, unavoidable overhead, since fixup_mapping() is now called in both
ubi_eba_read_leb() and ubi_eba_write_leb().
Thanks.

......
[    2.501501] nand: 256 MiB, SLC, erase size: 128 KiB, page size: 2048, OOB size: 64
[    2.501657] nand: using OMAP_ECC_BCH8_CODE_HW ECC scheme
[    2.501760] 2 ofpart partitions found on MTD device 8000000.nand
[    2.501778] Creating 2 MTD partitions on "8000000.nand":
[    2.501830] 0x000000000000-0x000000080000 : "MTD_0"
[    2.501849] mtd: skip badblock stats collection for "MTD_0"
[    2.504808] 0x000000080000-0x000010000000 : "MTD_1"
[    2.504841] mtd: skip badblock stats collection for "MTD_1"

[    4.386929] ubi0: default fastmap pool size: 100
[    4.386976] ubi0: default fastmap WL pool size: 50
[    4.387172] ubi0: attaching mtd1

[    4.651907] ubi0: attached by fastmap
[    4.651956] ubi0: fastmap pool size: 100
[    4.651972] ubi0: fastmap WL pool size: 50
[    4.703131] ubi0: attached mtd1 (name "MTD_1", size 255 MiB)
[    4.703180] ubi0: PEB size: 131072 bytes (128 KiB), LEB size: 129024 bytes
[    4.703198] ubi0: min./max. I/O unit sizes: 2048/2048, sub-page size 512
[    4.703215] ubi0: VID header offset: 512 (aligned 512), data offset: 2048
[    4.703232] ubi0: good PEBs: 2044, bad PEBs: 0, corrupted PEBs: 0
[    4.703249] ubi0: user volume: 8, internal volumes: 1, max. volumes count: 128
[    4.703270] ubi0: max/mean erase counter: 71/30, WL threshold: 4096, image sequence number: 3772622085
[    4.703288] ubi0: available PEBs: 433, total reserved PEBs: 1611, PEBs reserved for bad PEB handling: 40
[    4.712462] ubi0: background thread "ubi_bgt0d" started, PID 838
[    4.718508] ubi0 error: ubi_attach_mtd_dev: mtd1 is already attached to ubi0
[    4.728041] UPDATE.SH: Mounting swdl partition
[    4.824993] UBIFS (ubi0:10): background thread "ubifs_bgt0_10" started, PID 844
[    4.889128] UBIFS (ubi0:10): recovery needed
[    4.917378] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    4.923654] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    4.929970] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    4.936196] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    4.942486] ubi0 warning: ubi_io_read: error -74 (ECC error) while reading 512 bytes from PEB 2038:512, read only 512 bytes, retry
[    4.952264] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    4.958649] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    4.964878] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    4.971160] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    4.977447] ubi0 warning: ubi_io_read: error -74 (ECC error) while reading 512 bytes from PEB 2038:512, read only 512 bytes, retry
[    4.979118] cpsw 4a100000.ethernet eth0: Link is Up - 100Mbps/Full - flow control off
[    4.979195] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[    4.993113] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    5.000646] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    5.007786] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    5.014016] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    5.020305] ubi0 warning: ubi_io_read: error -74 (ECC error) while reading 512 bytes from PEB 2038:512, read only 512 bytes, retry
[    5.030132] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    5.036403] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    5.042712] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    5.048979] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    5.055223] ubi0 error: ubi_io_read: error -74 (ECC error) while reading 512 bytes from PEB 2038:512, read 512 bytes
[    5.065856] CPU: 0 PID: 841 Comm: mount Not tainted 4.4.14 #Rel_Elina_J5Entry_TUNERGEN2_18013A
[    5.065876] Hardware name: Generic AM33XX (Flattened Device Tree)
[    5.065893] Backtrace:
[    5.065973] [<c0014a88>] (dump_backtrace) from [<c0014cd8>] (show_stack+0x20/0x24)
[    5.065988]  r7:00000200 r6:000007f6 r5:c71c6000 r4:ffffffb6
[    5.066042] [<c0014cb8>] (show_stack) from [<c02b0784>] (dump_stack+0x24/0x28)
[    5.066086] [<c02b0760>] (dump_stack) from [<c038e6f8>] (ubi_io_read+0x16c/0x3bc)
[    5.066117] [<c038e58c>] (ubi_io_read) from [<c038ef00>] (ubi_io_read_vid_hdr+0x70/0x338)
[    5.066131]  r10:c8322000 r9:00000000 r8:c4a07e00 r7:000007f6 r6:c13414d8 r5:c71c6000
[    5.066171]  r4:c71c6000
[    5.066204] [<c038ee90>] (ubi_io_read_vid_hdr) from [<c038b770>] (fixup_mapping.part.1+0x60/0xf0)
[    5.066219]  r10:c8322000 r9:c4a8c800 r8:00000003 r7:c4a85be4 r6:c4a8c800 r5:c4a07e00
[    5.066259]  r4:c71c6000
[    5.066290] [<c038b710>] (fixup_mapping.part.1) from [<c038bba4>] (ubi_eba_read_leb+0x234/0x4d4)
[    5.066304]  r10:c8322000 r9:c8322000 r8:00000003 r7:0000000a r6:c4a8c800 r5:c71c6000
[    5.066343]  r4:c4a8c800
[    5.066374] [<c038b970>] (ubi_eba_read_leb) from [<c038a66c>] (ubi_leb_read+0xa4/0x138)
[    5.066388]  r10:c8322000 r9:c71c6000 r8:00000003 r7:00000000 r6:c494a5c0 r5:0001f800
[    5.066427]  r4:c4a8c800
[    5.066463] [<c038a5c8>] (ubi_leb_read) from [<c022a5e4>] (ubifs_leb_read+0x3c/0xa8)
[    5.066477]  r10:00000001 r9:00000003 r8:00000000 r7:00000003 r6:0001f800 r5:c71cf000
[    5.066515]  r4:c494ad40
[    5.066549] [<c022a5a8>] (ubifs_leb_read) from [<c0233a14>] (ubifs_start_scan+0xa0/0x148)
[    5.066563]  r8:c8322000 r7:c71cf000 r6:00000003 r5:00000000 r4:c494ad40
[    5.066616] [<c0233974>] (ubifs_start_scan) from [<c0233de8>] (ubifs_scan+0x38/0x3f8)
[    5.066630]  r8:00000109 r7:c71cf000 r6:00000000 r5:c8322000 r4:0001f800
[    5.066682] [<c0233db0>] (ubifs_scan) from [<c0234620>] (ubifs_replay_journal+0x140/0x185c)
[    5.066696]  r10:c71cf000 r9:00002000 r8:00000109 r7:c494a8c0 r6:00000000 r5:00000003
[    5.066735]  r4:c8322000
[    5.066764] [<c02344e0>] (ubifs_replay_journal) from [<c0227dd8>] (ubifs_mount+0x1340/0x21f0)
[    5.066779]  r10:00000000 r9:00000000 r8:c71cf000 r7:c4a8dc00 r6:c133d2a8 r5:c494a280
[    5.066818]  r4:00000000
[    5.066855] [<c0226a98>] (ubifs_mount) from [<c0122e40>] (mount_fs+0x54/0x170)
[    5.066869]  r10:00000000 r9:c494a080 r8:00000000 r7:00000000 r6:c130a1f4 r5:00000000
[    5.066909]  r4:c0226a98
[    5.066947] [<c0122dec>] (mount_fs) from [<c013c048>] (vfs_kern_mount+0x58/0x104)
[    5.066961]  r10:00000000 r9:c130a1f4 r8:00000000 r7:c130a1f4 r6:00000000 r5:c494a080
[    5.067052]  r4:c49f4240
[    5.067089] [<c013bff0>] (vfs_kern_mount) from [<c013ed84>] (do_mount+0x1f4/0xbf4)
[    5.067105]  r9:c130a1f4 r8:c12d237c r7:c494a0c0 r6:c494a080 r5:00000020 r4:00000000
[    5.067161] [<c013eb90>] (do_mount) from [<c013fb14>] (SyS_mount+0xa4/0xd0)
[    5.067175]  r10:00000000 r9:c4a84000 r8:c0ed0000 r7:00028188 r6:00000000 r5:c494a080
[    5.067214]  r4:c494a0c0
[    5.067249] [<c013fa70>] (SyS_mount) from [<c0010620>] (ret_fast_syscall+0x0/0x1c)
[    5.067263]  r8:c00107c4 r7:00000015 r6:c0ed0000 r5:491667d8 r4:00000000
[    5.090761] ubi0: run torture test for PEB 2038
[    5.092590] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    5.098969] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    5.105202] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    5.111486] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    5.127448] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    5.133721] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    5.140028] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    5.146256] omap2-nand 8000000.nand: uncorrectable bit-flips found
[    5.455349] ubi0: PEB 2038 passed torture test, do not mark it as bad
[    5.641511] UBIFS (ubi0:10): recovery completed
[    5.642180] UBIFS (ubi0:10): UBIFS: mounted UBI device 0, volume 10, name "SWDL"
[    5.642213] UBIFS (ubi0:10): LEB size: 129024 bytes (126 KiB), min./max. I/O unit sizes: 2048 bytes/2048 bytes
[    5.642240] UBIFS (ubi0:10): FS size: 62705664 bytes (59 MiB, 486 LEBs), journal size 8386560 bytes (7 MiB, 65 LEBs)
[    5.642257] UBIFS (ubi0:10): reserved for root: 0 bytes (0 KiB)
[    5.642288] UBIFS (ubi0:10): media format: w4/r0 (latest is w4/r0), UUID 0BD13516-3CBA-494F-8511-57064C646591, small LPT model
[    6.072556] UBIFS (ubi0:5): UBIFS: mounted UBI device 0, volume 5, name "APERS", R/O mode
[    6.072612] UBIFS (ubi0:5): LEB size: 129024 bytes (126 KiB), min./max. I/O unit sizes: 2048 bytes/2048 bytes
[    6.072639] UBIFS (ubi0:5): FS size: 2967552 bytes (2 MiB, 23 LEBs), journal size 1161217 bytes (1 MiB, 8 LEBs)
[    6.072657] UBIFS (ubi0:5): reserved for root: 0 bytes (0 KiB)
[    6.072686] UBIFS (ubi0:5): media format: w4/r0 (latest is w4/r0), UUID 265A1070-E271-4122-B8AF-AA3A210A1D74, small LPT model


