linux-btrfs.vger.kernel.org archive mirror
* What exactly is BTRFS Raid 10?
@ 2022-08-19 16:49 George Shammas
  2022-08-19 18:10 ` Phillip Susi
  2022-08-20 11:28 ` Goffredo Baroncelli
  0 siblings, 2 replies; 10+ messages in thread
From: George Shammas @ 2022-08-19 16:49 UTC (permalink / raw)
  To: linux-btrfs

Hello,

I've been searching and reading docs for a few days now, and btrfs raid 10 is a mystery to me.

This is mostly a documentation question: many places reference it, but nothing actually describes it, and the real question is how it differs from btrfs raid1.

Both BTRFS Raid1 and Raid10:
 - Allow an arbitrary number of drives (>=2), including odd numbers.
 - Write duplicate blocks across disks.

Raid 10 is referenced in many places, including being the example on using btrfs on multiple devices

https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices

And while raid0,1,5,6 are described in several places, raid 10 is missing. Including in the layout examples here:

https://btrfs.readthedocs.io/en/latest/mkfs.btrfs.html?highlight=raid10#profile-layout

The Raid 1 example there also likely needs a bit of explanation or validation, as all the blocks are written to one device. In that raid1 example, three devices could be lost as long as the first device is not among them. It also cannot be accurate once the amount stored exceeds one full drive.

Since raid10 allows for two devices, is there ever a scenario in which choosing raid10 would be bad when you want raid1?

BTRFS defaults to raid1 for Data and Metadata; is there a reason it doesn't default to raid10?

Since BTRFS raid modes aren't like traditional block-level raids, it would be very useful to explain this somewhere, along with the pros and cons of each. Maybe even merge the two modes if they cover the same use cases.

--George

PS:

There is a lot of misinformation out there about btrfs raid as well. For example:

https://www.reddit.com/r/btrfs/comments/f5unv5/raid_1_vs_raid_10/

None of the comments seem accurate, as they are describing traditional raid setups. The second comment says btrfs raid10 is actively harmful, but it has no references to corroborate that.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: What exactly is BTRFS Raid 10?
  2022-08-19 16:49 What exactly is BTRFS Raid 10? George Shammas
@ 2022-08-19 18:10 ` Phillip Susi
  2022-08-19 22:01   ` George Shammas
  2022-08-20 11:28 ` Goffredo Baroncelli
  1 sibling, 1 reply; 10+ messages in thread
From: Phillip Susi @ 2022-08-19 18:10 UTC (permalink / raw)
  To: George Shammas; +Cc: linux-btrfs


"George Shammas" <btrfs@shamm.as> writes:

> Hello,
>
>  I've been searching and reading docs for a few days now, and btrfs raid 10 is a mystery to me. 
>
> This is mostly a documentation question: many places reference it,
> but nothing actually describes it, and the real question is how it
> differs from btrfs raid1.
>
> Both BTRFS Raid1 and Raid10 
>  - Allows arbitrary number of drives (>=2), including odd numbers. 
>  - Will write duplicate blocks across disks.

Btrfs raid10 requires an even number of drives with a minimum of 4.
It's pretty much raid 1+0.


> https://btrfs.readthedocs.io/en/latest/mkfs.btrfs.html?highlight=raid10#profile-layout
>
> The Raid 1 example there also likely needs a bit of explanation or
> validation, as all the blocks are written to one device. In that raid
> one example three devices could be lost as long as it is not one of
> them is the first device. It also cannot be accurate once the amount
> stored is above 1 full drive.

It is meant to show a *possible* layout, not every potential layout.
The data may be stored like that, and then yes, you could lose multiple
drives and still recover as long as the lost drives were 2, 3, and 4.



* Re: What exactly is BTRFS Raid 10?
  2022-08-19 18:10 ` Phillip Susi
@ 2022-08-19 22:01   ` George Shammas
  2022-08-19 22:18     ` Chris Murphy
                       ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: George Shammas @ 2022-08-19 22:01 UTC (permalink / raw)
  To: Phillip Susi; +Cc: linux-btrfs

On Fri, Aug 19, 2022, at 2:10 PM, Phillip Susi wrote:
>> The Raid 1 example there also likely needs a bit of explanation or
>> validation, as all the blocks are written to one device. In that raid1
>> example, three devices could be lost as long as the first device is
>> not among them. It also cannot be accurate once the amount stored
>> exceeds one full drive.
>
> It is meant to show a *possible* layout, not every potential layout.
> The data may be stored like that, and then yes, you could lose multiple
> drives and still recover as long as the lost drives were 2, 3, and 4.

I wouldn't expect all potential layouts, but maybe the _worst_ possible layout and some explanatory text. I.e., if block placement is random and only guarantees that each block will be on two disks, that would mean a raid1 setup of 4 disks is pretty much guaranteed to have data loss if _any_ two disks fail. This is important and should be made clear somewhere.
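To make that concrete, here is a quick simulation of the worst case. It assumes purely random pair placement per chunk, which is my assumption for illustration, not documented btrfs behavior:

```python
import random
from itertools import combinations

# Hypothetical sketch: IF raid1 chunk placement only guarantees that each
# chunk lands on SOME pair of the 4 disks, then after enough chunks every
# possible disk pair holds data, so losing ANY two disks loses some chunk.
random.seed(0)
disks = range(4)
pairs_used = {frozenset(random.sample(disks, 2)) for _ in range(500)}

all_pairs = {frozenset(p) for p in combinations(disks, 2)}
# In practice, every one of the 6 possible pairs shows up, i.e. no
# 2-disk failure is safe once enough chunks have been allocated.
print(pairs_used == all_pairs)
```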

> Btrfs raid10 requires an even number of drives with a minimum of 4.

Is this true? I just experimented with 3 drives and `btrfs device usage` ends up showing 'RAID10/2', but the data is equally spread across all three drives. Even though the stripe is 2, the data is still being placed evenly across three drives. See [1] for an actual example I just created.

> It's pretty much raid 1+0.

Again, is it? raid 1+0 would imply that two drives are mirrored, and hence identical, and the blocks are striped over the mirrored sets. This would also force the device count to be a multiple of two, which is not the case.

If I had to guess at the actual implementation, it would be that the block is striped and then the individual stripes are randomly placed on drives, making it possible to have a layout like the following where, similar to a btrfs raid1 setup, any two disk failures will lead to data loss.

| SDA | SDB | SDC | SDD |
|-----|-----|-----|-----|
| A1  | A2  | A1  | A2  |
| B1  | B1  | B2  | B2  |
| C1  | D1  | D1  | C1  |
| D2  | C2  | C2  | D2  |

If I am right, and I don't know that I am, that would make raid1 and raid10 have very similar data loss scenarios that are not completely obvious.

And the question remains: if that is the case, is there ever a reason to choose raid1 over raid10, and vice versa?

--George

[1] # btrfs device usage /
/dev/sdc4, ID: 1
   Device size:           917.87GiB
   Device slack:            3.50KiB
   Data,RAID10/2:         341.00GiB
   Metadata,RAID10/2:       7.00GiB
   System,RAID10/2:        32.00MiB
   Unallocated:           569.84GiB

/dev/sdb4, ID: 2
   Device size:           917.87GiB
   Device slack:            3.50KiB
   Data,RAID10/2:         345.00GiB
   Metadata,RAID10/2:       4.00GiB
   Unallocated:           568.87GiB

/dev/sda4, ID: 3
   Device size:           917.87GiB
   Device slack:            3.50KiB
   Data,RAID10/2:         342.00GiB
   Metadata,RAID10/2:       7.00GiB
   System,RAID10/2:        32.00MiB
   Unallocated:           568.84GiB
 


* Re: What exactly is BTRFS Raid 10?
  2022-08-19 22:01   ` George Shammas
@ 2022-08-19 22:18     ` Chris Murphy
  2022-08-19 22:37       ` George Shammas
  2022-08-19 22:29     ` waxhead
  2022-08-22 19:51     ` Phillip Susi
  2 siblings, 1 reply; 10+ messages in thread
From: Chris Murphy @ 2022-08-19 22:18 UTC (permalink / raw)
  To: George Shammas, Phillip Susi; +Cc: Btrfs BTRFS

man mkfs.btrfs explains some of this. Minimum devices 2.

And keep in mind all btrfs raid is at the chunk level, not the block device level. So there's no such thing as a mirrored device, but rather mirrored chunks (two copies of a block group on separate block devices).

And yes, you can only lose one device with btrfs raid10. 

--
Chris Murphy


* Re: What exactly is BTRFS Raid 10?
  2022-08-19 22:01   ` George Shammas
  2022-08-19 22:18     ` Chris Murphy
@ 2022-08-19 22:29     ` waxhead
  2022-08-22 19:51     ` Phillip Susi
  2 siblings, 0 replies; 10+ messages in thread
From: waxhead @ 2022-08-19 22:29 UTC (permalink / raw)
  To: George Shammas, Phillip Susi; +Cc: linux-btrfs



George Shammas wrote:
> On Fri, Aug 19, 2022, at 2:10 PM, Phillip Susi wrote:
>>> The Raid 1 example there also likely needs a bit of explanation or
>>> validation, as all the blocks are written to one device. In that raid1
>>> example, three devices could be lost as long as the first device is
>>> not among them. It also cannot be accurate once the amount stored
>>> exceeds one full drive.
>>
>> It is meant to show a *possible* layout, not every potential layout.
>> The data may be stored like that, and then yes, you could lose multiple
>> drives and still recover as long as the lost drives were 2, 3, and 4.
> 
> I wouldn't expect all potential layouts, but maybe the _worst_ possible layout and some explanatory text. I.e., if block placement is random and only guarantees that each block will be on two disks, that would mean a raid1 setup of 4 disks is pretty much guaranteed to have data loss if _any_ two disks fail. This is important and should be made clear somewhere.
>
I am just a regular user, but yes, BTRFS "RAID" is (IMHO) a stupid name 
and easily confuses people. RAID1 means two instances of the data. That 
is why RAID1c3 and RAID1c4 exist, which give you a bit more redundancy 
(3 instances and 4 instances, in case it was not obvious).

RAID10 is also just 2 instances of the data spread over as many disks as 
possible, i.e. losing any two disks means you might be in trouble.

Another interesting fact (last time I checked) is that BTRFS allows for 
interesting configurations like data in RAID6 mode and metadata in 
RAID10. This is a problem since small files can be stored directly in 
metadata, and since RAID10 can have data loss with two disks lost while 
RAID6 should not, you have to pick your configuration with care.

Now this sounds awfully critical, but BTRFS is a fantastic filesystem 
and has saved me from silent corruption more than once.

There is also a fantastic little tool on the web (that should have an 
ncurses version, to be honest) here: https://carfax.org.uk/btrfs-usage/

It gives you a pretty good idea of what is happening if you play 
around with the values a bit. Good luck :)




* Re: What exactly is BTRFS Raid 10?
  2022-08-19 22:18     ` Chris Murphy
@ 2022-08-19 22:37       ` George Shammas
  0 siblings, 0 replies; 10+ messages in thread
From: George Shammas @ 2022-08-19 22:37 UTC (permalink / raw)
  To: Chris Murphy, Phillip Susi; +Cc: Btrfs BTRFS



On Fri, Aug 19, 2022, at 6:18 PM, Chris Murphy wrote:
> man mkfs.btrfs explains some of this. Minimum devices 2.

My first mail included a link to the man page of mkfs.btrfs. It is devoid of information about raid10 other than that it is an option.

> And keep in mind all btrfs raid is at the chunk level. Not block device 
> level. So there's no such thing as a mirrored device, but rather 
> mirrored chunks (two copies of a block group on separate block devices).
>
> And yes, you can only lose one device with btrfs raid10. 

And this is exactly why I am asking this question. Given that 
- both raid1 and raid10 can only tolerate a single disk failure
- chunks are placed evenly across drives, effectively making files striped even if the chunks themselves are not striped

it seems that both "raid1" and "raid10" are functionally equivalent in btrfs. Or there is a nuance that I'm missing that is not documented.

These gotchas are not obvious to me, even after 12 years of working with traditional raid setups. 

Perhaps raid1 does not require that chunks are placed evenly, allowing for hotspots? In which case raid10 is almost always preferable unless you have disks of unequal size?

There needs to be some authoritative text on the differences and pros/cons of btrfs raid1 and btrfs raid10, especially since raid5/6 are not recommended.

--George


* Re: What exactly is BTRFS Raid 10?
  2022-08-19 16:49 What exactly is BTRFS Raid 10? George Shammas
  2022-08-19 18:10 ` Phillip Susi
@ 2022-08-20 11:28 ` Goffredo Baroncelli
  2022-08-20 18:11   ` Andrei Borzenkov
  1 sibling, 1 reply; 10+ messages in thread
From: Goffredo Baroncelli @ 2022-08-20 11:28 UTC (permalink / raw)
  To: George Shammas, linux-btrfs

On 19/08/2022 18.49, George Shammas wrote:
> Hello,
> 
>   I've been searching and reading docs for a few days now, and btrfs raid 10 is a mystery to me.
> 
> This is mostly a documentation question: many places reference it, but nothing actually describes it, and the real question is how it differs from btrfs raid1.
> 
> Both BTRFS Raid1 and Raid10
>   - Allows arbitrary number of drives (>=2), including odd numbers.
>   - Will write duplicate blocks across disks.
> 
> Raid 10 is referenced in many places, including being the example on using btrfs on multiple devices
> 
> https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices
> 
> And while raid0,1,5,6 are described in several places, raid 10 is missing. Including in the layout examples here:
> 
> https://btrfs.readthedocs.io/en/latest/mkfs.btrfs.html?highlight=raid10#profile-layout
> 
> The Raid 1 example there also likely needs a bit of explanation or validation, as all the blocks are written to one device. In that raid1 example, three devices could be lost as long as the first device is not among them. It also cannot be accurate once the amount stored exceeds one full drive.
> 
> Since raid10 allows for two devices, is there ever a scenario in which choosing raid10 would be bad when you want raid1?

The allocation is done by BTRFS not per disk but per "chunk". A chunk is the minimal unit of allocation.
Depending on the layout, a chunk may be spread over multiple disks. The chunk size is generally about 1G x "disk count".
If you have a pool composed of 7 disks and the chunk is spread over all of them, the chunk size is 1G x 7 = 7G.
Depending on the redundancy level, the space available may be less.
When a chunk is filled, a new one is allocated.

Below is a description of each profile, with an example of data allocation.

SINGLE:
A new chunk is allocated on only one disk, the one with the most free space. There is no redundancy.

DUP:
A new chunk is allocated on only one disk, the one with the most free space.
The data is written twice within the same chunk, so the space available is half the size of the chunk.

If you get corruption from a bad sector, you can recover the data from the other copy.

RAID0:
This is a stripe mode; there is no redundancy.
When a new unit of allocation is required, BTRFS allocates a new chunk over ALL the disks.
The chunk size is 1GB x disk count.
If you write data, the first 64k are written to the first disk, the 2nd 64k to the 2nd disk, and so on.


RAID1:
A new chunk is allocated on the two disks with the most free space. Each new chunk has a size of 1GB x 2 = 2GB, but only 1GB is available for data because the other GB contains a copy of the data.
A raid1 layout may have more than two disks. However, the data is copied only twice, which means you can tolerate the loss of only one device.
For example, the first chunk is allocated on the first two disks; the 2nd chunk is allocated on the 1st and 3rd disks; the 3rd chunk is allocated on the 2nd and 3rd disks...

RAID1C3, RAID1C4:
These profiles are variants of RAID1, where the data is copied 3 or 4 times.

RAID10:
This is a mix of RAID0 and RAID1: the data is copied twice (so you can tolerate the loss of one device), but it is spread over nearly all the disks.
If you have 7 disks, a new chunk is allocated over the 6 disks (the greatest even number <= the disk count) with the most space available.
If you write data, the first 64K are written on the 1st disk and the 2nd disk (as the 2nd copy). The 2nd 64K of data is written to the 3rd and 4th disks (as the 2nd copy). And so on until you fill the chunk.
When the chunk is filled, a new allocation occurs; likely the 7th disk is used and one of the first 6 isn't for the new chunk.

RAID5 (and 6):
A new chunk is allocated over all the disks. The space available in the new chunk is 1GB x (number of disks - 1) (or -2 in the case of raid6).
If you write data, the first 64k are written to the first disk, the 2nd 64k to the 2nd disk, and so on.
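The "disks with the most free space" rule above also explains how these profiles handle unequal disks. A toy model of that rule (my simplification for illustration; the real allocator lives in the kernel, not here):

```python
def raid1_usable_gib(sizes_gib, ncopies=2):
    """Toy model of the allocation rule described above: each chunk puts
    `ncopies` 1 GiB device extents on the `ncopies` disks with the most
    free space. Returns GiB of data that fits. Not the real kernel code."""
    free = sorted(sizes_gib, reverse=True)
    data = 0
    while len(free) >= ncopies and free[ncopies - 1] >= 1:
        for i in range(ncopies):
            free[i] -= 1          # one 1 GiB extent per chosen disk
        free.sort(reverse=True)   # re-pick the emptiest disks next round
        data += 1                 # each chunk stores 1 GiB of data
    return data

# Three unequal disks: raid1 can still use all of the space.
print(raid1_usable_gib([100, 50, 50]))      # → 100
# raid1c3 on the same disks needs 3 extents per chunk.
print(raid1_usable_gib([100, 50, 50], 3))   # → 50
```

This is essentially what the carfax.org.uk/btrfs-usage calculator mentioned elsewhere in the thread computes.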



> 
> BTRFS defaults to raid1 for Data and Metadata; is there a reason it doesn't default to raid10?
> 
> Since BTRFS raid modes aren't like traditional block-level raids, it would be very useful to explain this somewhere, along with the pros and cons of each. Maybe even merge the two modes if they cover the same use cases.
> 
> --George
> 
> PS:
> 
> There is a lot of misinformation out there about btrfs raid as well. For example:
> 
> https://www.reddit.com/r/btrfs/comments/f5unv5/raid_1_vs_raid_10/
> 
> None of the comments seem accurate, as they are describing traditional raid setups. The second comment says btrfs raid10 is actively harmful, but it has no references to corroborate that.

-- 
gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
Key fingerprint BBF5 1610 0B64 DAC6 5F7D  17B2 0EDA 9B37 8B82 E0B5



* Re: What exactly is BTRFS Raid 10?
  2022-08-20 11:28 ` Goffredo Baroncelli
@ 2022-08-20 18:11   ` Andrei Borzenkov
  2022-08-21  0:23     ` Qu Wenruo
  0 siblings, 1 reply; 10+ messages in thread
From: Andrei Borzenkov @ 2022-08-20 18:11 UTC (permalink / raw)
  To: kreijack, George Shammas, linux-btrfs

On 20.08.2022 14:28, Goffredo Baroncelli wrote:
> 
> RAID1:
> A new chunk is allocated on the two disks with the most free space. Each new chunk has a size of 1GB x 2 = 2GB, but only 1GB is available for data because the other GB contains a copy of the data.
> A raid1 layout may have more than two disks. However, the data is copied only twice, which means you can tolerate the loss of only one device.
> For example, the first chunk is allocated on the first two disks; the 2nd chunk is allocated on the 1st and 3rd disks; the 3rd chunk is allocated on the 2nd and 3rd disks...
> 
...
> 
> RAID10:
> This is a mix of RAID0 and RAID1: the data is copied twice (so you can tolerate the loss of one device), but it is spread over nearly all the disks.
> If you have 7 disks, a new chunk is allocated over the 6 disks (the greatest even number <= the disk count) with the most space available.
> If you write data, the first 64K are written on the 1st disk and the 2nd disk (as the 2nd copy). The 2nd 64K of data is written to the 3rd and 4th disks (as the 2nd copy). And so on until you fill the chunk.
> When the chunk is filled, a new allocation occurs; likely the 7th disk is used and one of the first 6 isn't for the new chunk.
> 

Is large IO processed in parallel? If I have 8 disks raid10 and issue
256K request - will btrfs submit 4 concurrent 64K requests to each disk?

And for raid1 - will there be single 256K physical disk request or 4 x
64K requests?

What about read requests - will all disks in raid1/raid10 be used
concurrently or btrfs always reads from the "primary" copy (and how it
is determined then)?


* Re: What exactly is BTRFS Raid 10?
  2022-08-20 18:11   ` Andrei Borzenkov
@ 2022-08-21  0:23     ` Qu Wenruo
  0 siblings, 0 replies; 10+ messages in thread
From: Qu Wenruo @ 2022-08-21  0:23 UTC (permalink / raw)
  To: Andrei Borzenkov, kreijack, George Shammas, linux-btrfs



On 2022/8/21 02:11, Andrei Borzenkov wrote:
> On 20.08.2022 14:28, Goffredo Baroncelli wrote:
>>
>> RAID1:
>> A new chunk is allocated on the two disks with the most free space. Each new chunk has a size of 1GB x 2 = 2GB, but only 1GB is available for data because the other GB contains a copy of the data.
>> A raid1 layout may have more than two disks. However, the data is copied only twice, which means you can tolerate the loss of only one device.
>> For example, the first chunk is allocated on the first two disks; the 2nd chunk is allocated on the 1st and 3rd disks; the 3rd chunk is allocated on the 2nd and 3rd disks...
>>
> ...
>>
>> RAID10:
>> This is a mix of RAID0 and RAID1: the data is copied twice (so you can tolerate the loss of one device), but it is spread over nearly all the disks.
>> If you have 7 disks, a new chunk is allocated over the 6 disks (the greatest even number <= the disk count) with the most space available.
>> If you write data, the first 64K are written on the 1st disk and the 2nd disk (as the 2nd copy). The 2nd 64K of data is written to the 3rd and 4th disks (as the 2nd copy). And so on until you fill the chunk.
>> When the chunk is filled, a new allocation occurs; likely the 7th disk is used and one of the first 6 isn't for the new chunk.
>>
>
> Is large IO processed in parallel? If I have 8 disks raid10 and issue
> 256K request - will btrfs submit 4 concurrent 64K requests to each disk?

That is related to the RAID10/0 stripe size.
For btrfs, it uses a fixed stripe size (64K).

So if you have an 8-disk raid10 and issue a 256K request, it will be
split into 4 stripes first.

Then the first stripe goes to the first 2-disk group (substripe).
The 2nd stripe goes to the 2nd substripe.
And so on, until the last stripe goes to the last substripe.

All the submissions are in parallel.


Although in full technical detail, we will never submit a full 256K
request. Btrfs will submit the first 64K as soon as the write size
reaches the stripe boundary.
(Which may very slightly reduce the parallelism, but also very slightly
reduces memory usage.)

We have some pending changes to submit larger bios in the logical layer,
then do the split.
But the change in performance should not even be observable.
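The offset-to-substripe mapping described above can be sketched like this (my illustrative index math, not taken from the kernel source):

```python
STRIPE_LEN = 64 * 1024  # btrfs uses a fixed 64K stripe length

def raid10_stripe_targets(chunk_offset, ndisks):
    """Sketch of the mapping described above for an even-disk raid10
    chunk: each 64K stripe goes to one 2-disk substripe (data + mirror),
    rotating round-robin across the substripes."""
    substripes = ndisks // 2
    sub = (chunk_offset // STRIPE_LEN) % substripes
    return (2 * sub, 2 * sub + 1)  # (disk holding data, disk holding copy)

# A 256K write on 8 disks splits into 4 stripes, one per substripe,
# printed on successive lines: (0, 1), (2, 3), (4, 5), (6, 7)
for stripe in range(4):
    print(raid10_stripe_targets(stripe * STRIPE_LEN, ndisks=8))
```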

>
> And for raid1 - will there be single 256K physical disk request or 4 x
> 64K requests?

Stripe length only applies to RAID0/RAID10/RAID5/RAID6.

DUP/SINGLE/RAID1* don't care about the stripe length, thus it's a single
256K bio submitted to each of the RAID1* disks.

>
> What about read requests - will all disks in raid1/raid10 be used
> concurrently or btrfs always reads from the "primary" copy (and how it
> is determined then)?

Currently we use the pid as the criterion to load-balance reads for
DUP/RAID1* profiles.

Anand Jain has some pending patches to allow different load-balance
policies to be applied for DUP/RAID1* profiles, though.
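The pid policy amounts to something like the following sketch (illustrative only, not the kernel code):

```python
import os

def pick_raid1_mirror(num_copies):
    """Sketch of the read policy described above: the reading process's
    pid selects which mirrored copy to read, so different processes
    naturally spread their reads across the copies."""
    return os.getpid() % num_copies

# e.g. for a 2-copy RAID1 chunk, a given process always reads the same
# copy, but roughly half of all processes land on each copy:
print(pick_raid1_mirror(2))  # 0 or 1, fixed per process
```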

Thanks,
Qu


* Re: What exactly is BTRFS Raid 10?
  2022-08-19 22:01   ` George Shammas
  2022-08-19 22:18     ` Chris Murphy
  2022-08-19 22:29     ` waxhead
@ 2022-08-22 19:51     ` Phillip Susi
  2 siblings, 0 replies; 10+ messages in thread
From: Phillip Susi @ 2022-08-22 19:51 UTC (permalink / raw)
  To: George Shammas; +Cc: linux-btrfs


"George Shammas" <btrfs@shamm.as> writes:

>> Btrfs raid10 requires an even number of drives with a minimum of 4.
>
> Is this true? I just experimented with 3 drives and `btrfs device
> usage` ends up showing 'RAID10/2', but the data is equally spread
> across all three drives. Even though the stripe is 2, the data is
> still being placed evenly across three drives. See [1] for an actual
> example I just created.

mkfs.btrfs won't let me create a raid10 with only 3 drives: it says you
need at least 4.  You can do that with mdraid, but btrfs doesn't seem to
let you, and I thought it was because of:

>> It's pretty much raid 1+0.

> Again, is it? raid 1+0 would imply that two drives are mirrored, and
> hence identical, and the blocks are striped over the mirrored sets.
> This would also force the device count to be a multiple of two, which
> is not the case.

Don't forget that it is done at the chunk level rather than the drive
level.  I just tried 5 drives, though, and that does seem to work;
however, btrfs filesystem usage shows that the chunk is only allocated
on disks 2,3,4,5 and disk 1 is unused.  Over time you are going to end
up with the average usage about the same, but no single chunk is striped
across all 5 drives.

> If I am right, and I don't know that I am, that would make raid1 and
> raid10 have very similar data loss scenarios that are not completely
> obvious.

Yes, in general raid1 and raid10 have similar redundancy: they can only
handle the loss of a single drive.  Any more than that and you are
relying on peculiarities of the layout and which particular drives
failed, and that's never a good idea.

> And the question remains: if that is the case, is there ever a reason to choose raid1 over raid10, and vice versa?

The reason to choose raid10 over raid1 is for the better sequential
performance you can get from the striping.  I ran an mdraid raid10 on 3
disks for years for the good throughput and redundancy qualities.



end of thread, other threads:[~2022-08-22 20:08 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-08-19 16:49 What exactly is BTRFS Raid 10? George Shammas
2022-08-19 18:10 ` Phillip Susi
2022-08-19 22:01   ` George Shammas
2022-08-19 22:18     ` Chris Murphy
2022-08-19 22:37       ` George Shammas
2022-08-19 22:29     ` waxhead
2022-08-22 19:51     ` Phillip Susi
2022-08-20 11:28 ` Goffredo Baroncelli
2022-08-20 18:11   ` Andrei Borzenkov
2022-08-21  0:23     ` Qu Wenruo
