linux-btrfs.vger.kernel.org archive mirror
* adding new devices to degraded raid1
@ 2020-08-27 12:41 Eric Wong
  2020-08-27 17:14 ` Goffredo Baroncelli
  0 siblings, 1 reply; 9+ messages in thread
From: Eric Wong @ 2020-08-27 12:41 UTC (permalink / raw)
  To: linux-btrfs

I don't need to do it right away, but is it possible to add new
devices to a degraded raid1?

One thing I might do in the future is replace a broken big drive
with two small drives.  It may even be used to migrate to SSDs.

Since btrfs-replace only seems to do 1:1 replacements, and I
needed to physically remove an existing broken device to make
room for the replacements, could I do something like:

	mount -o degraded /mnt/foo
	btrfs device add small1 small2 /mnt/foo
	btrfs device remove broken /mnt/foo

?

Anyways, so far raid1 has been working great for me, but I have
some devices nearing 70K Power_On_Hours according to SMART

Thanks.

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: adding new devices to degraded raid1
  2020-08-27 12:41 adding new devices to degraded raid1 Eric Wong
@ 2020-08-27 17:14 ` Goffredo Baroncelli
  2020-08-28  0:30   ` Zygo Blaxell
  0 siblings, 1 reply; 9+ messages in thread
From: Goffredo Baroncelli @ 2020-08-27 17:14 UTC (permalink / raw)
  To: Eric Wong, linux-btrfs

On 8/27/20 2:41 PM, Eric Wong wrote:
> I don't need to do it right away, but is it possible to add new
> devices to a degraded raid1?
> 
> One thing I might do in the future is replace a broken big drive
> with two small drives.  It may even be used to migrate to SSDs.
> 
> Since btrfs-replace only seems to do 1:1 replacements, and I
> needed to physically remove an existing broken device to make
> room for the replacements, could I do something like:
> 
> 	mount -o degraded /mnt/foo
> 	btrfs device add small1 small2 /mnt/foo
> 	btrfs device remove broken /mnt/foo
> 
> ?
> 

Instead of

  	btrfs device remove broken /mnt/foo

You should do

	btrfs device remove missing /mnt/foo

("missing" has to be write as is, it is a special term, see man page)

and

	btrfs balance start /mnt/foo


To redistribute the data to the disks.
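
Putting it together, the whole recovery would look roughly like this
(only a sketch -- the device names are placeholders):

	mount -o degraded /mnt/foo
	btrfs device add /dev/small1 /dev/small2 /mnt/foo
	btrfs device remove missing /mnt/foo
	btrfs balance start /mnt/foo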

Before trying it, please wait for other suggestions or confirmation from a more expert developer.

BR
G.Baroncelli

> Anyways, so far raid1 has been working great for me, but I have
> some devices nearing 70K Power_On_Hours according to SMART
> 
> Thanks.
> 


-- 
gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
Key fingerprint BBF5 1610 0B64 DAC6 5F7D  17B2 0EDA 9B37 8B82 E0B5

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: adding new devices to degraded raid1
  2020-08-27 17:14 ` Goffredo Baroncelli
@ 2020-08-28  0:30   ` Zygo Blaxell
  2020-08-28  2:34     ` Eric Wong
  0 siblings, 1 reply; 9+ messages in thread
From: Zygo Blaxell @ 2020-08-28  0:30 UTC (permalink / raw)
  To: kreijack; +Cc: Eric Wong, linux-btrfs

On Thu, Aug 27, 2020 at 07:14:18PM +0200, Goffredo Baroncelli wrote:
> On 8/27/20 2:41 PM, Eric Wong wrote:
> > I don't need to do it right away, but is it possible to add new
> > devices to a degraded raid1?
> > 
> > One thing I might do in the future is replace a broken big drive
> > with two small drives.  It may even be used to migrate to SSDs.
> > 
> > Since btrfs-replace only seems to do 1:1 replacements, and I
> > needed to physically remove an existing broken device to make
> > room for the replacements, could I do something like:
> > 
> > 	mount -o degraded /mnt/foo
> > 	btrfs device add small1 small2 /mnt/foo
> > 	btrfs device remove broken /mnt/foo

Note that add/remove is orders of magnitude slower than replace.
Replace might take hours or even a day or two on a huge spinning drive.
Add/remove might take _months_, though if you have 8-year-old disks
then it's probably a few days, weeks at most.

Add/remove does work for raid1* (i.e. raid1, raid10, raid1c3, raid1c4).
At the moment only 'replace' works reliably for raid5/raid6.
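
For comparison, a straight 1:1 replacement is just the following (sketch;
take the devid of the missing disk from 'btrfs filesystem show', and the
target device path is a placeholder):

	btrfs replace start <devid-of-missing-disk> /dev/newdisk /mnt/foo
	btrfs replace status /mnt/foo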

> > ?
> > 
> 
> Instead of
> 
>  	btrfs device remove broken /mnt/foo
> 
> You should do
> 
> 	btrfs device remove missing /mnt/foo
> 
> ("missing" has to be write as is, it is a special term, see man page)
> 
> and
> 
> 	btrfs balance start /mnt/foo

If the replacement disks are larger than half the size of the failed disk
then device remove may do sufficient data relocation and you won't need
balance.  Once all the disks have equal amounts of unallocated space in
'btrfs fi usage' you can cancel any balances that are running.

On the other hand, if the replacement disks are close to half the size
of the failed disk, then some careful balance filtering is required in
order to utilize all the available space.  This filtering is more than
what the stock tool offers.  You have to make sure that there are no block
groups with a mirror copy on both of the small disks, as any such block
group removes 1GB of available mirror space for data on the largest disk.
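
A very rough sketch of the kind of filtering meant here (devids are
placeholders, and this alone does not guarantee the layout you need --
it is just the building block, and you still have to check the result):

	btrfs filesystem usage /mnt/foo
	# rewrite only block groups that currently have a chunk on devid 2
	btrfs balance start -ddevid=2 -mdevid=2 /mnt/foo
	btrfs filesystem usage /mnt/foo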

> To redistribute the data to the disks.
> 
> Before trying it, please wait for other suggestions or confirmation from a more expert developer.
> 
> BR
> G.Baroncelli
> 
> > Anyways, so far raid1 has been working great for me, but I have
> > some devices nearing 70K Power_On_Hours according to SMART
> > 
> > Thanks.
> > 
> 
> 
> -- 
> gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
> Key fingerprint BBF5 1610 0B64 DAC6 5F7D  17B2 0EDA 9B37 8B82 E0B5

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: adding new devices to degraded raid1
  2020-08-28  0:30   ` Zygo Blaxell
@ 2020-08-28  2:34     ` Eric Wong
  2020-08-28  4:36       ` Zygo Blaxell
  0 siblings, 1 reply; 9+ messages in thread
From: Eric Wong @ 2020-08-28  2:34 UTC (permalink / raw)
  To: Zygo Blaxell; +Cc: kreijack, linux-btrfs

Zygo Blaxell <ce3g8jdj@umail.furryterror.org> wrote:
> Note that add/remove is orders of magnitude slower than replace.
> Replace might take hours or even a day or two on a huge spinning drive.
> Add/remove might take _months_, though if you have 8-year-old disks
> then it's probably a few days, weeks at most.

Btw, any explanation or profiling done on why remove is so much
slower than replace?  Especially since btrfs raid1 ought to be
fairly mature at this point (and I run recent stable kernels).

Converting a single drive to raid1 was not slow at all, either.
RAID 1 ought to be straightforward if there's plenty of free
space, one would think...

> Add/remove does work for raid1* (i.e. raid1, raid10, raid1c3, raid1c4).
> At the moment only 'replace' works reliably for raid5/raid6.

Noted, I'm staying far, far away from raid5/6 :)  Thanks for
your posts on that topic, by the way.

> On Thu, Aug 27, 2020 at 07:14:18PM +0200, Goffredo Baroncelli wrote:
> > Instead of
> > 
> >  	btrfs device remove broken /mnt/foo
> > 
> > You should do
> > 
> > 	btrfs device remove missing /mnt/foo
> > 
> > ("missing" has to be write as is, it is a special term, see man page)

Thanks Goffredo, noted.

> > and
> > 
> > 	btrfs balance start /mnt/foo
> 
> If the replacement disks are larger than half the size of the failed disk
> then device remove may do sufficient data relocation and you won't need
> balance.  Once all the disks have equal amounts of unallocated space in
> 'btrfs fi usage' you can cancel any balances that are running.
> 
> On the other hand, if the replacement disks are close to half the size
> of the failed disk, then some careful balance filtering is required in
> order to utilize all the available space.  This filtering is more than
> what the stock tool offers.  You have to make sure that there are no block
> groups with a mirror copy on both of the small disks, as any such block
> group removes 1GB of available mirror space for data on the largest disk.

Yikes, that balancing sounds like a pain.  I'm not super-limited
on space, and a fair bit gets overwritten or replaced as time
goes on, anyways.

I wonder how far I could get with some lossless rewrites which
might make sense, anyways.

1) full "git gc" (I have a fair amount of git repos)
   Maybe setting pack.compression=0 will even help dedupe
   similar repos (but they'll be no fun to serve over network)

2) replacing some manually-compressed files with uncompressed
   versions (let btrfs compression handle it).  I expect that'll
   let dedupe work better, too.

   I have a lot of FLAC that could live as uncompressed .sox
   files.  I expect FLAC to be more efficient on single files,
   but dedupe could save on cuts that are/were used for editing.
   I won't miss FLAC MD5 checksums when btrfs has checksums, either.

3) is this also something defrag can help with?

Thanks again.

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: adding new devices to degraded raid1
  2020-08-28  2:34     ` Eric Wong
@ 2020-08-28  4:36       ` Zygo Blaxell
  2020-08-28  5:09         ` Andrei Borzenkov
  2020-08-29  0:42         ` Eric Wong
  0 siblings, 2 replies; 9+ messages in thread
From: Zygo Blaxell @ 2020-08-28  4:36 UTC (permalink / raw)
  To: Eric Wong; +Cc: kreijack, linux-btrfs

On Fri, Aug 28, 2020 at 02:34:12AM +0000, Eric Wong wrote:
> Zygo Blaxell <ce3g8jdj@umail.furryterror.org> wrote:
> > Note that add/remove is orders of magnitude slower than replace.
> > Replace might take hours or even a day or two on a huge spinning drive.
> > Add/remove might take _months_, though if you have 8-year-old disks
> > then it's probably a few days, weeks at most.
> 
> Btw, any explanation or profiling done on why remove is so much
> slower than replace?  Especially since btrfs raid1 ought to be
> fairly mature at this point (and I run recent stable kernels).

They do different things.

Replace just computes the contents of the filesystem the same way scrub
does:  except for the occasional metadata seek, it runs at wire speeds
because it reads blocks in order from one disk and writes in order on
the other disk, 99.999% of the time.

Remove makes a copy of every extent, updates every reference to the
extent, then deletes the original extents.  Very seek-heavy--including
seeks between reads and writes on the same drive--and the work is roughly
proportional to the number of reflinks, so dedupe and snapshots push
the cost up.  About the only advantage of remove (and balance) is that
it consists of 95% existing btrfs read and write code, and it can handle
any relocation that does not require changing the size or content of an
extent (including all possible conversions).

Arguably this isn't necessary.  Remove could copy a complete block group,
the same way replace does but to a different offset on each drive, and
simply update the chunk tree with the new location of the block group
at the end.  Trouble is, nobody's implemented this approach in btrfs yet.
It would be a whole new code path with its very own new bugs to fix.

> Converting a single drive to raid1 was not slow at all, either.
> RAID 1 ought to be straightforward if there's plenty of free
> space, one would think...

Depends on the disk size, performance, and structure (how big the extents
are and how many references).  Also, "slow" is relative:  100x 2 minutes
is not such a long time.  100x 20 hours is.

> > Add/remove does work for raid1* (i.e. raid1, raid10, raid1c3, raid1c4).
> > At the moment only 'replace' works reliably for raid5/raid6.
> 
> Noted, I'm staying far, far away from raid5/6 :)  Thanks for
> your posts on that topic, by the way.
> 
> > On Thu, Aug 27, 2020 at 07:14:18PM +0200, Goffredo Baroncelli wrote:
> > > Instead of
> > > 
> > >  	btrfs device remove broken /mnt/foo
> > > 
> > > You should do
> > > 
> > > 	btrfs device remove missing /mnt/foo
> > > 
> > > ("missing" has to be write as is, it is a special term, see man page)
> 
> Thanks Goffredo, noted.
> 
> > > and
> > > 
> > > 	btrfs balance start /mnt/foo
> > 
> > If the replacement disks are larger than half the size of the failed disk
> > then device remove may do sufficient data relocation and you won't need
> > balance.  Once all the disks have equal amounts of unallocated space in
> > 'btrfs fi usage' you can cancel any balances that are running.
> > 
> > On the other hand, if the replacement disks are close to half the size
> > of the failed disk, then some careful balance filtering is required in
> > order to utilize all the available space.  This filtering is more than
> > what the stock tool offers.  You have to make sure that there are no block
> > groups with a mirror copy on both of the small disks, as any such block
> > group removes 1GB of available mirror space for data on the largest disk.
> 
> Yikes, that balancing sounds like a pain.  I'm not super-limited
> on space, and a fair bit gets overwritten or replaced as time
> goes on, anyways.
> 
> I wonder how far I could get with some lossless rewrites which
> might make sense, anyways.
> 
> 1) full "git gc" (I have a fair amount of git repos)
>    Maybe setting pack.compression=0 will even help dedupe
>    similar repos (but they'll be no fun to serve over network)

Git pack doesn't do 4K block alignment, which limits filesystem-level
dedupe opportunities.  Git repos are strange:  large ones are full of
duplicate blocks, but only 3 or 4 at a time.  By the time a big pack file
has been cut up into extents that can be deduped, we've burned a gigabyte
of IO, created 60 new extents out of 8, and might save 300K of space.

If you have a lot of related git repos, '.git/objects/info/alternates'
is much more efficient than dedupe.  Set up a repo that pulls refs/*
to different remotes from all the other repos on the filesystem, and
set all the other repos' alternates to point to the central repo.
You'll only have each git object once on the filesystem after git gc.
Aaaand you'll also have various issues with git auto-gc occasionally
eating your reflogs.  So maybe this is not for everyone.
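
A minimal sketch of that layout (paths and repo names are made up):

	# central object store that fetches refs/* from every local repo
	git init --bare ~/git/central.git
	git -C ~/git/central.git remote add projA ~/src/projA
	git -C ~/git/central.git config remote.projA.fetch '+refs/*:refs/remotes/projA/*'
	git -C ~/git/central.git fetch --all
	# point each repo at the central object store, then repack
	echo ~/git/central.git/objects > ~/src/projA/.git/objects/info/alternates
	git -C ~/src/projA gc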

> 2) replacing some manually-compressed files with uncompressed
>    versions (let btrfs compression handle it).  I expect that'll
>    let dedupe work better, too.
> 
>    I have a lot of FLAC that could live as uncompressed .sox
>    files.  I expect FLAC to be more efficient on single files,
>    but dedupe could save on cuts that are/were used for editing.
>    I won't miss FLAC MD5 checksums when btrfs has checksums, either.

If they're analog recordings (or have analog in any part of their mix)
they will have nearly zero duplication.  Dedupe only does bit-for-bit
matches, and two clips that are off by one sample, or anything but
an exact integer multiple of 1024 samples, will not be dedupeable.
For audio data, FLAC compresses much better than zstd.

VM image files compress and dedupe well.  Better than xz if you
have more than 2 or 3 big ones, but not as good as zpaq (which
has its own deduper built-in, and it's more flexible than btrfs).

> 3) is this also something defrag can help with?

Not really.  defrag can make the balance run faster, but defrag will
require almost the same amount of IO as the balance does.  If you've
already had to remove a disk, it's too late for defrag--it's something you
have to maintain over time so that it's already done before a disk fails.
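
For the record, routine maintenance defrag is just something like this
(sketch; -r recurses into the given path):

	btrfs filesystem defragment -r /mnt/foo

Keep in mind defrag rewrites extents, so it can un-share reflinked or
snapshotted data and cost extra space on heavily deduped filesystems.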

> Thanks again.
> 

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: adding new devices to degraded raid1
  2020-08-28  4:36       ` Zygo Blaxell
@ 2020-08-28  5:09         ` Andrei Borzenkov
  2020-08-28 20:56           ` Zygo Blaxell
  2020-08-29  0:42         ` Eric Wong
  1 sibling, 1 reply; 9+ messages in thread
From: Andrei Borzenkov @ 2020-08-28  5:09 UTC (permalink / raw)
  To: Zygo Blaxell, Eric Wong; +Cc: kreijack, linux-btrfs

On 28.08.2020 07:36, Zygo Blaxell wrote:
> 
> Replace just computes the contents of the filesystem the same way scrub
> does:  except for the occasional metadata seek, it runs at wire speeds
> because it reads blocks in order from one disk and writes in order on
> the other disk, 99.999% of the time.
> 

Does it write them to the same absolute disk locations? IOW - is it
possible to use smaller disk for replace or it must be at least as large
as original disk?

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: adding new devices to degraded raid1
  2020-08-28  5:09         ` Andrei Borzenkov
@ 2020-08-28 20:56           ` Zygo Blaxell
  0 siblings, 0 replies; 9+ messages in thread
From: Zygo Blaxell @ 2020-08-28 20:56 UTC (permalink / raw)
  To: Andrei Borzenkov; +Cc: Eric Wong, kreijack, linux-btrfs

On Fri, Aug 28, 2020 at 08:09:26AM +0300, Andrei Borzenkov wrote:
> On 28.08.2020 07:36, Zygo Blaxell wrote:
> > 
> > Replace just computes the contents of the filesystem the same way scrub
> > does:  except for the occasional metadata seek, it runs at wire speeds
> > because it reads blocks in order from one disk and writes in order on
> > the other disk, 99.999% of the time.
> > 
> 
> Does it write them to the same absolute disk locations? IOW - is it
> possible to use smaller disk for replace or it must be at least as large
> as original disk?

Replace writes data to the locations recorded in the chunk tree, i.e. the
original disk locations on the missing disk.

In theory, you can resize the offline disk to be smaller than the
replacement disk, then run btrfs replace.  In practice, only some of
the methods work (e.g. you must specify device ID and not device name
when replacing) and only on recent kernel versions.

btrfs dev remove is equivalent to 'btrfs fi resize <devid>:0' followed by
"remove empty device <devid>" so the performance will be very similar
for the portion of the data that is resized; however, a combination of
resize and replace is still much faster than device remove, which does
it the slow way for all of the data.
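
So when the new disk is smaller, the faster path looks roughly like this
(sketch; the devid and size are placeholders, and per the caveats above
it needs a recent kernel and the devid form of replace):

	btrfs filesystem show /mnt/foo           # find the devid of the old disk
	btrfs filesystem resize 2:1T /mnt/foo    # shrink devid 2 to fit the new disk
	btrfs replace start 2 /dev/newdisk /mnt/foo
	btrfs replace status /mnt/foo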

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: adding new devices to degraded raid1
  2020-08-28  4:36       ` Zygo Blaxell
  2020-08-28  5:09         ` Andrei Borzenkov
@ 2020-08-29  0:42         ` Eric Wong
  2020-08-29 18:46           ` Zygo Blaxell
  1 sibling, 1 reply; 9+ messages in thread
From: Eric Wong @ 2020-08-29  0:42 UTC (permalink / raw)
  To: Zygo Blaxell; +Cc: kreijack, linux-btrfs

Zygo Blaxell <ce3g8jdj@umail.furryterror.org> wrote:
> On Fri, Aug 28, 2020 at 02:34:12AM +0000, Eric Wong wrote:
> > Zygo Blaxell <ce3g8jdj@umail.furryterror.org> wrote:
> > > Note that add/remove is orders of magnitude slower than replace.
> > > Replace might take hours or even a day or two on a huge spinning drive.
> > > Add/remove might take _months_, though if you have 8-year-old disks
> > > then it's probably a few days, weeks at most.
> > 
> > Btw, any explanation or profiling done on why remove is so much
> > slower than replace?  Especially since btrfs raid1 ought to be
> > fairly mature at this point (and I run recent stable kernels).
> 
> They do different things.
> 
> Replace just computes the contents of the filesystem the same way scrub
> does:  except for the occasional metadata seek, it runs at wire speeds
> because it reads blocks in order from one disk and writes in order on
> the other disk, 99.999% of the time.

Thanks for the explanations.  I'll heed your note down thread
about doing a partial resize followed by a replace when
possible.

> Remove makes a copy of every extent, updates every reference to the
> extent, then deletes the original extents.  Very seek-heavy--including
> seeks between reads and writes on the same drive--and the work is roughly
> proportional to the number of reflinks, so dedupe and snapshots push
> the cost up.  About the only advantage of remove (and balance) is that
> it consists of 95% existing btrfs read and write code, and it can handle
> any relocation that does not require changing the size or content of an
> extent (including all possible conversions).

Does that mean remove speed would be closer to replace on good SSDs?

> Arguably this isn't necessary.  Remove could copy a complete block group,
> the same way replace does but to a different offset on each drive, and
> simply update the chunk tree with the new location of the block group
> at the end.  Trouble is, nobody's implemented this approach in btrfs yet.
> It would be a whole new code path with its very own new bugs to fix.

Ah, it seems like a ton of work for a use case that mainly
affects hobbyists.  I won't hold my breath for it.

> > Converting a single drive to raid1 was not slow at all, either.
> > RAID 1 ought to be straightforward if there's plenty of free
> > space, one would think...
> 
> Depends on the disk size, performance, and structure (how big the extents
> are and how many references).  Also, "slow" is relative:  100x 2 minutes
> is not such a long time.  100x 20 hours is.

It was a new, quickly filled FS; so probably unfragmented.
I remember it seemed reasonable given the HW it was on.

> > 1) full "git gc" (I have a fair amount of git repos)
> >    Maybe setting pack.compression=0 will even help dedupe
> >    similar repos (but they'll be no fun to serve over network)
> 
> Git pack doesn't do 4K block alignment, which limits filesystem-level
> dedupe opportunities.  Git repos are strange:  large ones are full of
> duplicate blocks, but only 3 or 4 at a time.  By the time a big pack file
> has been cut up into extents that can be deduped, we've burned a gigabyte
> of IO, created 60 new extents out of 8, and might save 300K of space.

Heh.  I'll just let git do its thing independently of btrfs.
btrfs checksumming is great for ref storage, at least :>

> If you have a lot of related git repos, '.git/objects/info/alternates'
> is much more efficient than dedupe.  Set up a repo that pulls refs/*
> to different remotes from all the other repos on the filesystem, and
> set all the other repos' alternates to point to the central repo.
> You'll only have each git object once on the filesystem after git gc.
> Aaaand you'll also have various issues with git auto-gc occasionally
> eating your reflogs.  So maybe this is not for everyone.

Yes, I've been using alternates with a mega repo for many years.
I actually have all the remote fetch+url lines duplicated in the
mega repo config for GC safety.  It's a little more network
traffic, but works with overwritten/throwaway branches in
satellite repos.
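
Roughly, that means the mega repo's config carries entries along these
lines (made-up example URLs):

	[remote "projA"]
		url = https://example.com/projA.git
		fetch = +refs/*:refs/remotes/projA/*
	[remote "projB"]
		url = https://example.com/projB.git
		fetch = +refs/*:refs/remotes/projB/*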

<snip> will be sticking to FLAC as-is.

> VM image files compress and dedupe well.  Better than xz if you
> have more than 2 or 3 big ones, but not as good as zpaq (which
> has its own deduper built-in, and it's more flexible than btrfs).

Ah, it's a shame I needed to disable CoW on VM images to get
acceptable performance, though.  I'm using `bup' for backing
up VMs and it's a nice savings.
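
For reference, the usual way to do that is to set +C on an empty
directory so new image files inherit nodatacow (a sketch with a made-up
path; +C only affects files created afterwards, and nodatacow data is
not checksummed):

	mkdir /mnt/foo/vm-images
	chattr +C /mnt/foo/vm-images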

> > 3) is this also something defrag can help with?
> 
> Not really.  defrag can make the balance run faster, but defrag will
> require almost the same amount of IO as the balance does.  If you've
> already had to remove a disk, it's too late for defrag--it's something you
> have to maintain over time so that it's already done before a disk fails.

Alright, I'll make a note to keep things defragmented and avoid
relying too much on reflinks.

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: adding new devices to degraded raid1
  2020-08-29  0:42         ` Eric Wong
@ 2020-08-29 18:46           ` Zygo Blaxell
  0 siblings, 0 replies; 9+ messages in thread
From: Zygo Blaxell @ 2020-08-29 18:46 UTC (permalink / raw)
  To: Eric Wong; +Cc: kreijack, linux-btrfs

On Sat, Aug 29, 2020 at 12:42:40AM +0000, Eric Wong wrote:
> Zygo Blaxell <ce3g8jdj@umail.furryterror.org> wrote:
> > Remove makes a copy of every extent, updates every reference to the
> > extent, then deletes the original extents.  Very seek-heavy--including
> > seeks between reads and writes on the same drive--and the work is roughly
> > proportional to the number of reflinks, so dedupe and snapshots push
> > the cost up.  About the only advantage of remove (and balance) is that
> > it consists of 95% existing btrfs read and write code, and it can handle
> > any relocation that does not require changing the size or content of an
> > extent (including all possible conversions).
> 
> Does that mean remove speed would be closer to replace on good SSDs?

It will be better, but there is still a cost for reading and writing
non-contiguously.  "Good SSD" depends on what the SSD is good at.
A SSD rated for NAS or caching use would be OK, but a high-performance
desktop SSD could hit big write-multiplication penalties.  A couple of
brand names starting with "S" have 5-second IO stalls when their internal
caches get full.  Proportionally, the ratio between the best and worst
IO latency in these SSD models is as bad as SMR drives.  Also there are
CPU and IO latency costs for 'remove' in the host that don't go away
no matter how good the disks are.

> > Arguably this isn't necessary.  Remove could copy a complete block group,
> > the same way replace does but to a different offset on each drive, and
> > simply update the chunk tree with the new location of the block group
> > at the end.  Trouble is, nobody's implemented this approach in btrfs yet.
> > It would be a whole new code path with its very own new bugs to fix.
> 
> Ah, it seems like a ton of work for a use case that mainly
> affects hobbyists.  I won't hold my breath for it.

Well, by that argument, mdadm and lvm shouldn't be able to do it either,
and yet they have supported this style of reshape for years.

^ permalink raw reply	[flat|nested] 9+ messages in thread

