* [QUESTION] lvmcache_label_scan: checksum error at offset xxx
@ 2021-06-09  9:32 Wu Guanghao
  2021-06-09 11:09 ` Zdenek Kabelac
  0 siblings, 1 reply; 7+ messages in thread
From: Wu Guanghao @ 2021-06-09  9:32 UTC (permalink / raw)
  To: lvm-devel

Hi,

lvcreate and lvremove execute lvmcache_label_scan() to read the metadata on the lvm devices
without taking any lock for protection. If another process is executing vg_commit() to write
metadata at the same time, the scan can go wrong.


lvremove (process1)					
   |--> lvmcache_label_scan()(no lock protection)				
		|-> read metadata data
			 |
			 |
			 | 			
			 |--------->lvm device metadata  				


lvremove (process2)
  ...
  |--> lockd_vg(process_vg)
  |    (Lock the vg that needs to be processed)
  ...
  |--> vg_commit() 								
  	   |--> write metadata data
		     |-----------------> lvm device metadata
  ...
  |--> unlock_vg(process_vg)

Wu Guanghao




* [QUESTION] lvmcache_label_scan: checksum error at offset xxx
  2021-06-09  9:32 [QUESTION] lvmcache_label_scan: checksum error at offset xxx Wu Guanghao
@ 2021-06-09 11:09 ` Zdenek Kabelac
  2021-06-10  9:19   ` Wu Guanghao
  0 siblings, 1 reply; 7+ messages in thread
From: Zdenek Kabelac @ 2021-06-09 11:09 UTC (permalink / raw)
  To: lvm-devel

On 09. 06. 21 at 11:32, Wu Guanghao wrote:
> Hi,
>
> lvcreate and lvremove execute lvmcache_label_scan() to read the metadata on the lvm devices
> without taking any lock for protection. If another process is executing vg_commit() to write
> metadata at the same time, the scan can go wrong.



Hi

The initial scan is going 'lockless' to avoid holding VG locks for too long
when it's not strictly necessary.

The purpose of this scan is to quickly collect & cache all the info that needs
to be known. Then we take the lock and essentially do another 'rescan', however
in this case we only need to 'check' whether the cached data corresponds to
the current disk state by checking only the small PV header information.

Another 'side effect' (probably not used very often) is that the user may
specify a VG via its VG UUID, but for the VG lock we need to use the VG name -
so we need to be able to 'translate' the UUID to a vgname - and for this we
need the lvm2 metadata carrying this info.

Now when a parallel process manages to change some data (while holding its
lock), the internal disk cache is thrown away (the PV header will mismatch)
and this particular PV is re-read while the command now holds the correct
lock. But this is really a very occasional situation and the benefit of
lockless asynchronous scanning outweighs it.
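
To make the ordering concrete, the flow is roughly the following (a minimal
pseudo-C sketch only - the function names below are illustrative stand-ins,
not the real lvm2 APIs):

/* Sketch of the two-phase scan described above; all names are made up. */
#include <stdbool.h>
#include <stdio.h>

static void lockless_label_scan(void)    { puts("scan all labels, cache metadata summaries"); }
static void lock_vg(const char *vg)      { printf("take VG lock for %s\n", vg); }
static void unlock_vg(const char *vg)    { printf("release VG lock for %s\n", vg); }
static bool pv_headers_match_cache(void) { return false; /* pretend a writer raced us */ }
static void rescan_vg_devices(void)      { puts("re-read this VG's PVs under the lock"); }
static void process_vg(const char *vg)   { printf("operate on %s with validated metadata\n", vg); }

int main(void)
{
	const char *vgname = "brd_vg";

	lockless_label_scan();           /* phase 1: no VG lock held, cache may become stale */

	lock_vg(vgname);                 /* phase 2: real VG lock */
	if (!pv_headers_match_cache())   /* cheap check of the small PV header only */
		rescan_vg_devices();     /* cache was stale: drop it, re-read under the lock */
	process_vg(vgname);
	unlock_vg(vgname);
	return 0;
}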


Regards


Zdenek





* [QUESTION] lvmcache_label_scan: checksum error at offset xxx
  2021-06-09 11:09 ` Zdenek Kabelac
@ 2021-06-10  9:19   ` Wu Guanghao
  2021-06-10  9:53     ` Zdenek Kabelac
  2021-06-10 18:44     ` David Teigland
  0 siblings, 2 replies; 7+ messages in thread
From: Wu Guanghao @ 2021-06-10  9:19 UTC (permalink / raw)
  To: lvm-devel

Hi,

If the execution of lvmcache_label_scan() fails but the scan is performed again, there is no problem.
However, when lvmcache_label_scan() fails, the vgid cannot be obtained normally, so the re-read is not
triggered in vg_read() and the command fails.
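
The failure path being described looks roughly like this (again only an
illustrative pseudo-C sketch; cache_vgid_from_name() is a made-up name, not
a real lvm2 function):

#include <stdio.h>

/* The lockless scan hit a checksum error, so no summary was cached. */
static const char *cache_vgid_from_name(const char *vgname)
{
	(void)vgname;
	return NULL;	/* name -> vgid lookup finds nothing */
}

static int vg_read_sketch(const char *vgname)
{
	const char *vgid = cache_vgid_from_name(vgname);

	if (!vgid) {
		/* without a vgid the locked re-read is never attempted,
		 * so the command fails even if the on-disk VG is intact */
		printf("Volume group \"%s\" not found\n", vgname);
		return 0;
	}
	return 1;	/* normal path: rescan this VG's PVs under the lock */
}

int main(void)
{
	return !vg_read_sketch("brd_vg");
}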


2.03.12 Error log:
143682:Jun 10 16:40:38 lvm[708341]: /dev/ram0: Checksum error at offset 203776
143683:Jun 10 16:40:38 lvm[708341]: WARNING: invalid metadata text from /dev/ram0 at 203776.
143684:Jun 10 16:40:38 lvm[708341]: WARNING: metadata on /dev/ram0 at 203776 has invalid summary for VG.
143685:Jun 10 16:40:38 lvm[708341]: WARNING: bad metadata text on /dev/ram0 in mda1
143686:Jun 10 16:40:38 lvm[708341]: WARNING: scanning /dev/ram0 mda1 failed to read metadata summary.
143687:Jun 10 16:40:38 lvm[708341]: WARNING: repair VG metadata on /dev/ram0 with vgck --updatemetadata.
143688:Jun 10 16:40:38 lvm[708341]: WARNING: scan failed to get metadata summary from /dev/ram0 PVID elQQtAsePg5vfZA2DuMV4exqYhvvRdrq
143689:Jun 10 16:40:38 lvm[708341]: /dev/ram1: Checksum error at offset 508416
143690:Jun 10 16:40:38 lvm[708341]: WARNING: invalid metadata text from /dev/ram1 at 508416.
143691:Jun 10 16:40:38 lvm[708341]: WARNING: metadata on /dev/ram1 at 508416 has invalid summary for VG.
143692:Jun 10 16:40:38 lvm[708341]: WARNING: bad metadata text on /dev/ram1 in mda1
143693:Jun 10 16:40:38 lvm[708341]: WARNING: scanning /dev/ram1 mda1 failed to read metadata summary.
143694:Jun 10 16:40:38 lvm[708341]: WARNING: repair VG metadata on /dev/ram1 with vgck --updatemetadata.
143695:Jun 10 16:40:38 lvm[708341]: WARNING: scan failed to get metadata summary from /dev/ram1 PVID mLoVSnWPN1IqsHh3eXMh5QCmWyHsBwDi
143901:Jun 10 16:41:15 lvm[708341]: Volume group "brd_vg" not found
143902:Jun 10 16:41:15 lvm[708341]: Cannot process volume group brd_vg


2.03.09 Error log: (The version we use)
106626:Jun 10 15:09:02 localhost.localdomain lvm[639523]: ppid=593053, cmdline:lvremove -f brd_vg/lv47
106627:Jun 10 15:09:02 localhost.localdomain lvm[639523]: Call udev_device_get_devnode use 1417334 usec, device:/sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/0000:02:03.0/virtio1/host0/target0:0:0/0:0:0:0/block/sda/sda3
107044:Jun 10 15:09:05 localhost.localdomain lvm[639523]: Call udev_device_get_devnode use 1388769 usec, device:/sys/devices/virtual/block/dm-202
107256:Jun 10 15:09:07 localhost.localdomain lvm[639523]: Call udev_device_get_devlinks_list_entry use 1472467 usec, dev:/sys/devices/virtual/block/dm-213
107879:Jun 10 15:09:11 localhost.localdomain lvm[639523]: Call udev_device_get_devlinks_list_entry use 1386394 usec, dev:/sys/devices/virtual/block/dm-389
109417:Jun 10 15:09:44 localhost.localdomain lvm[639523]: WARNING: Metadata location on /dev/ram0 at 464384 begins with invalid VG name.
109418:Jun 10 15:09:44 localhost.localdomain lvm[639523]: WARNING: bad metadata text on /dev/ram0 in mda1
109419:Jun 10 15:09:44 localhost.localdomain lvm[639523]: WARNING: scanning /dev/ram0 mda1 failed to read metadata summary.
109420:Jun 10 15:09:44 localhost.localdomain lvm[639523]: WARNING: repair VG metadata on /dev/ram0 with vgck --updatemetadata.
109421:Jun 10 15:09:44 localhost.localdomain lvm[639523]: WARNING: scan failed to get metadata summary from /dev/ram0 PVID elQQtAsePg5vfZA2DuMV4exqYhvvRdrq
109423:Jun 10 15:09:44 localhost.localdomain lvm[639523]: WARNING: Metadata location on /dev/ram1 at 959488 begins with invalid VG name.
109424:Jun 10 15:09:44 localhost.localdomain lvm[639523]: WARNING: bad metadata text on /dev/ram1 in mda1
109425:Jun 10 15:09:44 localhost.localdomain lvm[639523]: WARNING: scanning /dev/ram1 mda1 failed to read metadata summary.
109426:Jun 10 15:09:44 localhost.localdomain lvm[639523]: WARNING: repair VG metadata on /dev/ram1 with vgck --updatemetadata.
109427:Jun 10 15:09:44 localhost.localdomain lvm[639523]: WARNING: scan failed to get metadata summary from /dev/ram1 PVID mLoVSnWPN1IqsHh3eXMh5QCmWyHsBwDi
111273:Jun 10 15:10:46 localhost.localdomain lvm[639523]: Reading VG brd_vg <no vgid>  	               (Output after adjusting the printing level)
111274:Jun 10 15:10:46 localhost.localdomain lvm[639523]: Rescanning devices for brd_vg rw             (Output after adjusting the printing level)
111275:Jun 10 15:10:46 localhost.localdomain lvm[639523]: Cache did not find VG vgid from name brd_vg  (Output after adjusting the printing level)
111276:Jun 10 15:10:46 localhost.localdomain lvm[639523]: Volume group "brd_vg" not found.
111277:Jun 10 15:10:46 localhost.localdomain lvm[639523]: Cannot process volume group brd_vg

Wu Guanghao

On 2021/6/9 19:09, Zdenek Kabelac wrote:
> On 09. 06. 21 at 11:32, Wu Guanghao wrote:
>> Hi,
>>
>> lvcreate and lvremove execute lvmcache_label_scan() to read the metadata on the lvm devices
>> without taking any lock for protection. If another process is executing vg_commit() to write
>> metadata at the same time, the scan can go wrong.
> 
> 
> 
> Hi
> 
> The initial scan is going 'lockless' to avoid holding VG locks for too long when it's not strictly necessary.
> 
> The purpose of this scan is to quickly collect & cache all the info that needs to be known. Then we take the lock and essentially do another 'rescan', however in this case we only need to 'check' whether the cached data corresponds to the current disk state by checking only the small PV header information.
> 
> Another 'side effect' (probably not used very often) is that the user may specify a VG via its VG UUID, but for the VG lock we need to use the VG name - so we need to be able to 'translate' the UUID to a vgname - and for this we need the lvm2 metadata carrying this info.
> 
> Now when a parallel process manages to change some data (while holding its lock), the internal disk cache is thrown away (the PV header will mismatch) and this particular PV is re-read while the command now holds the correct lock. But this is really a very occasional situation and the benefit of lockless asynchronous scanning outweighs it.
> 
> 
> Regards
> 
> 
> Zdenek
> 
> 
> .





* [QUESTION] lvmcache_label_scan: checksum error at offset xxx
  2021-06-10  9:19   ` Wu Guanghao
@ 2021-06-10  9:53     ` Zdenek Kabelac
  2021-06-10 18:44     ` David Teigland
  1 sibling, 0 replies; 7+ messages in thread
From: Zdenek Kabelac @ 2021-06-10  9:53 UTC (permalink / raw)
  To: lvm-devel

On 10. 06. 21 at 11:19, Wu Guanghao wrote:
> Hi,
>
> If the execution of lvmcache_label_scan() fails but the scan is performed again, there is no problem.
> However, when lvmcache_label_scan() fails, the vgid cannot be obtained normally, so the re-read is not
> triggered in vg_read() and the command fails.
>
>
> 2.03.12 Error log:
> 143682:Jun 10 16:40:38 lvm[708341]: /dev/ram0: Checksum error at offset 203776
> 143683:Jun 10 16:40:38 lvm[708341]: WARNING: invalid metadata text from /dev/ram0 at 203776.
> 143684:Jun 10 16:40:38 lvm[708341]: WARNING: metadata on /dev/ram0 at 203776 has invalid summary for VG.
> 143685:Jun 10 16:40:38 lvm[708341]: WARNING: bad metadata text on /dev/ram0 in mda1
> 143686:Jun 10 16:40:38 lvm[708341]: WARNING: scanning /dev/ram0 mda1 failed to read metadata summary.
> 143687:Jun 10 16:40:38 lvm[708341]: WARNING: repair VG metadata on /dev/ram0 with vgck --updatemetadata.
> 143688:Jun 10 16:40:38 lvm[708341]: WARNING: scan failed to get metadata summary from /dev/ram0 PVID elQQtAsePg5vfZA2DuMV4exqYhvvRdrq
> 143689:Jun 10 16:40:38 lvm[708341]: /dev/ram1: Checksum error at offset 508416
> 143690:Jun 10 16:40:38 lvm[708341]: WARNING: invalid metadata text from /dev/ram1 at 508416.
> 143691:Jun 10 16:40:38 lvm[708341]: WARNING: metadata on /dev/ram1 at 508416 has invalid summary for VG.
> 143692:Jun 10 16:40:38 lvm[708341]: WARNING: bad metadata text on /dev/ram1 in mda1
> 143693:Jun 10 16:40:38 lvm[708341]: WARNING: scanning /dev/ram1 mda1 failed to read metadata summary.
> 143694:Jun 10 16:40:38 lvm[708341]: WARNING: repair VG metadata on /dev/ram1 with vgck --updatemetadata.
> 143695:Jun 10 16:40:38 lvm[708341]: WARNING: scan failed to get metadata summary from /dev/ram1 PVID mLoVSnWPN1IqsHh3eXMh5QCmWyHsBwDi
> 143901:Jun 10 16:41:15 lvm[708341]: Volume group "brd_vg" not found
> 143902:Jun 10 16:41:15 lvm[708341]: Cannot process volume group brd_vg


Hi


Yep, this case looks like a bug - could you please open a bugzilla for this
case, and don't forget to attach the full -vvvv log plus a possible reproducer
for how you reach it.

It seems the invalid device during the unlocked scan is not handled correctly
and needs some examination.

Regards

Zdenek





* [QUESTION] lvmcache_label_scan: checksum error at offset xxx
  2021-06-10  9:19   ` Wu Guanghao
  2021-06-10  9:53     ` Zdenek Kabelac
@ 2021-06-10 18:44     ` David Teigland
  2021-06-11  1:53       ` Wu Guanghao
  1 sibling, 1 reply; 7+ messages in thread
From: David Teigland @ 2021-06-10 18:44 UTC (permalink / raw)
  To: lvm-devel

On Thu, Jun 10, 2021 at 05:19:17PM +0800, Wu Guanghao wrote:

> lvm[708341]: /dev/ram0: Checksum error at offset 203776
> lvm[708341]: WARNING: invalid metadata text from /dev/ram0 at 203776.
> lvm[708341]: WARNING: metadata on /dev/ram0 at 203776 has invalid summary for VG.
> lvm[708341]: WARNING: bad metadata text on /dev/ram0 in mda1
> lvm[708341]: WARNING: scanning /dev/ram0 mda1 failed to read metadata summary.

> lvm[639523]: WARNING: Metadata location on /dev/ram0 at 464384 begins with invalid VG name.
> lvm[639523]: WARNING: bad metadata text on /dev/ram0 in mda1
> lvm[639523]: WARNING: scanning /dev/ram0 mda1 failed to read metadata summary.

Hi, I'm not sure if these specific errors would be caused by the lock-less
label scan.  I would expect lvm to read valid but old metadata (which it
expects to find on occasion because of the lock-less scan.)  The full
command debugging along with a dd of the ondisk metadata areas would be
most helpful.

Thanks,
Dave




* [QUESTION] lvmcache_label_scan: checksum error at offset xxx
  2021-06-10 18:44     ` David Teigland
@ 2021-06-11  1:53       ` Wu Guanghao
  2021-06-11 21:50         ` David Teigland
  0 siblings, 1 reply; 7+ messages in thread
From: Wu Guanghao @ 2021-06-11  1:53 UTC (permalink / raw)
  To: lvm-devel

Hi

This is most likely caused by the lock-free scanning, because after the command fails,
running it again manually works normally. The specific test script is already in
https://bugzilla.redhat.com/show_bug.cgi?id=1970719

Wu Guanghao

On 2021/6/11 2:44, David Teigland wrote:
> Hi, I'm not sure if these specific errors would be caused by the lock-less
> label scan.  I would expect lvm to read valid but old metadata (which it
> expects to find on occasion because of the lock-less scan.)  The full
> command debugging along with a dd of the ondisk metadata areas would be
> most helpful.





* [QUESTION] lvmcache_label_scan: checksum error at offset xxx
  2021-06-11  1:53       ` Wu Guanghao
@ 2021-06-11 21:50         ` David Teigland
  0 siblings, 0 replies; 7+ messages in thread
From: David Teigland @ 2021-06-11 21:50 UTC (permalink / raw)
  To: lvm-devel

On Fri, Jun 11, 2021 at 09:53:27AM +0800, Wu Guanghao wrote:
> This is most likely caused by the lock-free scanning, because after the command fails,
> running it again manually works normally. The specific test script is already in
> https://bugzilla.redhat.com/show_bug.cgi?id=1970719

Thanks, I haven't yet seen the script produce the errors, but I'll keep
trying.

I've attached an experimental patch that might help your test avoid the
errors.  The patch is an optimization that I had planned to try, but it
may also help in this case, since it acquires the vg lock prior to the
label scan.  If our understanding of the problem is correct, then this
patch would not solve all cases, but it should avoid many, like your test.
Other changes would be needed to handle remaining cases.  I'd be
interested to know if it helps.

For now, the optimization depends on the VG name being found in the first
positional command arg.  So, you'll need to use traditional command form
in which the VG name is placed at the end, e.g.

  lvcreate --type thin -V1M --thinpool poolname -n lvname vgname

Dave
-------------- next part --------------
From 24e72200b5abc72df51e864a2adaf21a855b4b38 Mon Sep 17 00:00:00 2001
From: David Teigland <teigland@redhat.com>
Date: Fri, 11 Jun 2021 16:30:05 -0500
Subject: [PATCH] locking: hint-based vg locking optimization

This adds an optimization for some common cases in which
the VG lock can be acquired early, prior to label scan.
This reduces the chance that devices may be changed
between label scan and the normal vg lock in vg_read.

This is a proof-of-concept / experimental patch for testing.
---
 lib/commands/toolcontext.h |  1 +
 lib/label/hints.c          |  6 ++++--
 lib/label/hints.h          |  2 +-
 lib/label/label.c          | 48 ++++++++++++++++++++++++++++++++++++++--------
 lib/locking/locking.c      |  5 +++++
 5 files changed, 51 insertions(+), 11 deletions(-)

diff --git a/lib/commands/toolcontext.h b/lib/commands/toolcontext.h
index a47b7d760317..8389553e7bdb 100644
--- a/lib/commands/toolcontext.h
+++ b/lib/commands/toolcontext.h
@@ -256,6 +256,7 @@ struct cmd_context {
 	unsigned rand_seed;
 	struct dm_list pending_delete;		/* list of LVs for removal */
 	struct dm_pool *pending_delete_mem;	/* memory pool for pending deletes */
+	int early_lock_vg_mode;
 };
 
 /*
diff --git a/lib/label/hints.c b/lib/label/hints.c
index 47236a15a63d..5546c168cf06 100644
--- a/lib/label/hints.c
+++ b/lib/label/hints.c
@@ -1288,12 +1288,14 @@ check:
  */
 
 int get_hints(struct cmd_context *cmd, struct dm_list *hints_out, int *newhints,
-	      struct dm_list *devs_in, struct dm_list *devs_out)
+	      struct dm_list *devs_in, struct dm_list *devs_out, char **vgname_out)
 {
 	struct dm_list hints_list;
 	int needs_refresh = 0;
 	char *vgname = NULL;
 
+	*vgname_out = NULL;
+
 	dm_list_init(&hints_list);
 
 	/* Decide below if the caller should create new hints. */
@@ -1433,7 +1435,7 @@ int get_hints(struct cmd_context *cmd, struct dm_list *hints_out, int *newhints,
 
 	dm_list_splice(hints_out, &hints_list);
 
-	free(vgname);
+	*vgname_out = vgname;
 
 	return 1;
 }
diff --git a/lib/label/hints.h b/lib/label/hints.h
index e8cfd6a7e935..b8be4fd85683 100644
--- a/lib/label/hints.h
+++ b/lib/label/hints.h
@@ -33,7 +33,7 @@ void clear_hint_file(struct cmd_context *cmd);
 void invalidate_hints(struct cmd_context *cmd);
 
 int get_hints(struct cmd_context *cmd, struct dm_list *hints, int *newhints,
-              struct dm_list *devs_in, struct dm_list *devs_out);
+              struct dm_list *devs_in, struct dm_list *devs_out, char **vgname_out);
 
 int validate_hints(struct cmd_context *cmd, struct dm_list *hints);
 
diff --git a/lib/label/label.c b/lib/label/label.c
index cfb9ebc80b35..3ea4bfc5241e 100644
--- a/lib/label/label.c
+++ b/lib/label/label.c
@@ -1032,6 +1032,7 @@ int label_scan(struct cmd_context *cmd)
 	struct dev_iter *iter;
 	struct device_list *devl, *devl2;
 	struct device *dev;
+	char *vgname_hint = NULL;
 	uint64_t max_metadata_size_bytes;
 	int device_ids_invalid = 0;
 	int using_hints;
@@ -1137,21 +1138,52 @@ int label_scan(struct cmd_context *cmd)
 	 * by using hints which tell us which devices are PVs, which
 	 * are the only devices we actually need to scan.  Without
 	 * hints we need to scan all devs to find which are PVs.
-	 *
-	 * TODO: if the command is using hints and a single vgname
+	 */
+	if (!get_hints(cmd, &hints_list, &create_hints, &all_devs, &scan_devs, &vgname_hint)) {
+		dm_list_splice(&scan_devs, &all_devs);
+		dm_list_init(&hints_list);
+		using_hints = 0;
+	} else
+		using_hints = 1;
+
+	/*
+	 * If the command is using hints and a single vgname
 	 * arg, we can also take the vg lock here, prior to scanning.
 	 * This means we would not need to rescan the PVs in the VG
 	 * in vg_read (skip lvmcache_label_rescan_vg) after the
 	 * vg lock is usually taken.  (Some commands are already
 	 * able to avoid rescan in vg_read, but locking early would
 	 * apply to more cases.)
+	 *
+	 * TODO: we don't know exactly which vg lock mode (read or write)
+	 * the command will use in vg_read() for the normal lock_vol(),
+	 * but we could make a fairly accurate guess of READ/WRITE based
+	 * on looking at the command name.  If we guess wrong we can
+	 * just unlock_vg and lock_vol again with the correct mode in
+	 * vg_read().
 	 */
-	if (!get_hints(cmd, &hints_list, &create_hints, &all_devs, &scan_devs)) {
-		dm_list_splice(&scan_devs, &all_devs);
-		dm_list_init(&hints_list);
-		using_hints = 0;
-	} else
-		using_hints = 1;
+	if (vgname_hint) {
+		uint32_t lck_type = LCK_VG_WRITE;
+
+		log_debug("Early lock vg");
+
+		/* FIXME: borrowing this lockd flag which should be
+		   quite close to what we want, based on the command name.
+		   Need to do proper mode selection here, and then check
+		   in case the later lock_vol in vg_read wants different. */
+		if (cmd->lockd_vg_default_sh)
+			lck_type = LCK_VG_READ;
+
+		if (!lock_vol(cmd, vgname_hint, lck_type, NULL)) {
+			log_warn("Could not pre-lock VG %s.", vgname_hint);
+			/* not an error since this is just an optimization */
+		} else {
+			/* Save some state indicating that the vg lock
+			   is already held so that the normal lock_vol()
+			   will know. */
+			cmd->early_lock_vg_mode = lck_type;
+		}
+	}
 
 	/*
 	 * If the total number of devices exceeds the soft open file
diff --git a/lib/locking/locking.c b/lib/locking/locking.c
index c69f08c09271..0aceb194a884 100644
--- a/lib/locking/locking.c
+++ b/lib/locking/locking.c
@@ -203,6 +203,11 @@ int lock_vol(struct cmd_context *cmd, const char *vol, uint32_t flags, const str
 	if (is_orphan_vg(vol))
 		return 1;
 
+	if (!is_global && cmd->early_lock_vg_mode && (lck_type != LCK_UNLOCK)) {
+		log_debug("VG was locked early.");
+		return 1;
+	}
+
 	if (!_blocking_supported)
 		flags |= LCK_NONBLOCK;
 
-- 
2.10.1



Thread overview: 7 messages (newest: 2021-06-11 21:50 UTC)
2021-06-09  9:32 [QUESTION] lvmcache_label_scan: checksum error at offset xxx Wu Guanghao
2021-06-09 11:09 ` Zdenek Kabelac
2021-06-10  9:19   ` Wu Guanghao
2021-06-10  9:53     ` Zdenek Kabelac
2021-06-10 18:44     ` David Teigland
2021-06-11  1:53       ` Wu Guanghao
2021-06-11 21:50         ` David Teigland
