linux-kernel.vger.kernel.org archive mirror
* Possible bio merging breakage in mp bio rework
@ 2019-04-05 16:04 Nikolay Borisov
  2019-04-06  0:16 ` Ming Lei
  0 siblings, 1 reply; 8+ messages in thread
From: Nikolay Borisov @ 2019-04-05 16:04 UTC (permalink / raw)
  To: Ming Lei; +Cc: Jens Axboe, Omar Sandoval, linux-block, LKML, linux-btrfs

Hello Ming, 

Following the mp biovec rework, what is the maximum amount of
data that a bio can contain? Should it be PAGE_SIZE * the number of
bio_vecs, or something else? Currently I can see bios as large as 127 megs
on sequential workloads. I was prompted to look into this because btrfs has
a memory allocation whose size depends on the amount of data in the bio,
and that particular allocation started failing with order-6 allocations.
Further debugging showed that with the following xfs_io command line: 


xfs_io -f -c "pwrite -S 0x61 -b 4m 0 10g" /media/scratch/file1

I can easily see very large bios: 

[  188.366540] kworker/-7       3.... 34847519us : btrfs_submit_bio_hook: bio: ffff8dffe9940bb0 bi_iter.bi_size = 134184960 bi_vcn: 28 bi_vcnt_max: 256
[  188.367129] kworker/-658     2.... 34946536us : btrfs_submit_bio_hook: bio: ffff8dffe9940370 bi_iter.bi_size = 134246400 bi_vcn: 28 bi_vcnt_max: 256
[  188.367714] kworker/-7       3.... 35107967us : btrfs_submit_bio_hook: bio: ffff8dffe9940bb0 bi_iter.bi_size = 134184960 bi_vcn: 30 bi_vcnt_max: 256
[  188.368319] kworker/-658     2.... 35229894us : btrfs_submit_bio_hook: bio: ffff8dffe9940370 bi_iter.bi_size = 134246400 bi_vcn: 32 bi_vcnt_max: 256
[  188.368909] kworker/-7       3.... 35374809us : btrfs_submit_bio_hook: bio: ffff8dffe9940bb0 bi_iter.bi_size = 134184960 bi_vcn: 25 bi_vcnt_max: 256
[  188.369498] kworker/-658     2.... 35516194us : btrfs_submit_bio_hook: bio: ffff8dffe9940370 bi_iter.bi_size = 134246400 bi_vcn: 31 bi_vcnt_max: 256
[  188.370086] kworker/-7       3.... 35663669us : btrfs_submit_bio_hook: bio: ffff8dffe9940bb0 bi_iter.bi_size = 134184960 bi_vcn: 32 bi_vcnt_max: 256
[  188.370696] kworker/-658     2.... 35791006us : btrfs_submit_bio_hook: bio: ffff8dffe9940370 bi_iter.bi_size = 100655104 bi_vcn: 24 bi_vcnt_max: 256
[  188.371335] kworker/-658     2.... 35816114us : btrfs_submit_bio_hook: bio: ffff8dffe99434f0 bi_iter.bi_size = 33591296 bi_vcn: 5 bi_vcnt_max: 256


So that's 127 megs in a single bio? This stems from the new merging logic:
07173c3ec276 ("block: enable multipage bvecs") made it so that physically
contiguous pages added to a bio just bump bi_iter.bi_size and the
containing bio_vec's bv_len. The page == bv->bv_page portion of the
check is no longer there.
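To illustrate, here is a minimal user-space sketch of that merge rule (hypothetical names and a simplified struct; the real check lives in the block layer's bio_add_page()/__bio_try_merge_page() path): only physical contiguity with the end of the last bvec is tested, so one bvec can now cover many pages while bi_vcnt stays small.

```c
#include <assert.h>

#define PAGE_SIZE 4096u

struct bvec { unsigned long phys; unsigned int len; };
struct bio  { struct bvec vecs[256]; unsigned int vcnt; unsigned long size; };

/* Hypothetical model of the multipage-bvec merge: if the new page is
 * physically contiguous with the end of the last bvec, just grow that
 * bvec instead of consuming a new slot. */
static int bio_try_merge(struct bio *b, unsigned long phys, unsigned int len)
{
	struct bvec *bv;

	if (!b->vcnt)
		return 0;
	bv = &b->vecs[b->vcnt - 1];
	if (bv->phys + bv->len != phys)	/* only physical contiguity matters */
		return 0;
	bv->len += len;
	b->size += len;
	return 1;
}

static void bio_add(struct bio *b, unsigned long phys, unsigned int len)
{
	if (bio_try_merge(b, phys, len))
		return;
	b->vecs[b->vcnt].phys = phys;
	b->vecs[b->vcnt].len = len;
	b->vcnt++;
	b->size += len;
}
```

Adding 32 physically contiguous pages this way leaves vcnt at 1 while size grows to 32 pages, which is why the traces above show tiny bi_vcnt next to a huge bi_size.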

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Possible bio merging breakage in mp bio rework
  2019-04-05 16:04 Possible bio merging breakage in mp bio rework Nikolay Borisov
@ 2019-04-06  0:16 ` Ming Lei
  2019-04-06  6:09   ` Nikolay Borisov
  2019-04-08  9:52   ` Johannes Thumshirn
  0 siblings, 2 replies; 8+ messages in thread
From: Ming Lei @ 2019-04-06  0:16 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: Jens Axboe, Omar Sandoval, linux-block, LKML, linux-btrfs

Hi Nikolay,

On Fri, Apr 05, 2019 at 07:04:18PM +0300, Nikolay Borisov wrote:
> Hello Ming, 
> 
> Following the mp biovec rework what is the maximum 
> data that a bio could contain? Should it be PAGE_SIZE * bio_vec 

There isn't any maximum data limit on a bio submitted from the fs;
the block layer makes the final bio sent to the driver correct
by applying all kinds of queue limits, such as max segment size,
max segment count, max sectors, ...

> or something else? Currently I can see bios as large as 127 megs 
> on sequential workloads, I got prompted to this since btrfs has a 
> memory allocation that is dependent on the data in the bio and this 
> particular memory allocation started failing with order 6 allocs. 

Could you share the code with us? I don't see why an order-6 allocation is a must.

> Further debugging showed that with the following xfs_io command line: 
> 
> 
> xfs_io -f -c "pwrite -S 0x61 -b 4m 0 10g" /media/scratch/file1
> 
> I can easily see very large bios: 
> 
> [  188.366540] kworker/-7       3.... 34847519us : btrfs_submit_bio_hook: bio: ffff8dffe9940bb0 bi_iter.bi_size = 134184960 bi_vcn: 28 bi_vcnt_max: 256
> [  188.367129] kworker/-658     2.... 34946536us : btrfs_submit_bio_hook: bio: ffff8dffe9940370 bi_iter.bi_size = 134246400 bi_vcn: 28 bi_vcnt_max: 256
> [  188.367714] kworker/-7       3.... 35107967us : btrfs_submit_bio_hook: bio: ffff8dffe9940bb0 bi_iter.bi_size = 134184960 bi_vcn: 30 bi_vcnt_max: 256
> [  188.368319] kworker/-658     2.... 35229894us : btrfs_submit_bio_hook: bio: ffff8dffe9940370 bi_iter.bi_size = 134246400 bi_vcn: 32 bi_vcnt_max: 256
> [  188.368909] kworker/-7       3.... 35374809us : btrfs_submit_bio_hook: bio: ffff8dffe9940bb0 bi_iter.bi_size = 134184960 bi_vcn: 25 bi_vcnt_max: 256
> [  188.369498] kworker/-658     2.... 35516194us : btrfs_submit_bio_hook: bio: ffff8dffe9940370 bi_iter.bi_size = 134246400 bi_vcn: 31 bi_vcnt_max: 256
> [  188.370086] kworker/-7       3.... 35663669us : btrfs_submit_bio_hook: bio: ffff8dffe9940bb0 bi_iter.bi_size = 134184960 bi_vcn: 32 bi_vcnt_max: 256
> [  188.370696] kworker/-658     2.... 35791006us : btrfs_submit_bio_hook: bio: ffff8dffe9940370 bi_iter.bi_size = 100655104 bi_vcn: 24 bi_vcnt_max: 256
> [  188.371335] kworker/-658     2.... 35816114us : btrfs_submit_bio_hook: bio: ffff8dffe99434f0 bi_iter.bi_size = 33591296 bi_vcn: 5 bi_vcnt_max: 256
> 
> 
> So that's 127 megs in a single bio? This stems from the new merging logic. 
> 07173c3ec276 ("block: enable multipage bvecs") made it so that physically 
> contiguous pages added to the bio would just modify bi_iter.bi_size and the 
> initial page's bio_vec's bv_len. There's no longer the 
> page == bv->bv_page portion of the check. 

bio_add_page() tries its best to put physically contiguous pages into one bvec, and
I don't see anything wrong in the log.

Could you show us what the real problem is?

Thanks,
Ming

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Possible bio merging breakage in mp bio rework
  2019-04-06  0:16 ` Ming Lei
@ 2019-04-06  6:09   ` Nikolay Borisov
  2019-04-06  8:00     ` Qu Wenruo
  2019-04-06 12:30     ` Ming Lei
  2019-04-08  9:52   ` Johannes Thumshirn
  1 sibling, 2 replies; 8+ messages in thread
From: Nikolay Borisov @ 2019-04-06  6:09 UTC (permalink / raw)
  To: Ming Lei; +Cc: Jens Axboe, Omar Sandoval, linux-block, LKML, linux-btrfs



On 6.04.19 г. 3:16 ч., Ming Lei wrote:
> Hi Nikolay,
> 
> On Fri, Apr 05, 2019 at 07:04:18PM +0300, Nikolay Borisov wrote:
>> Hello Ming, 
>>
>> Following the mp biovec rework what is the maximum 
>> data that a bio could contain? Should it be PAGE_SIZE * bio_vec 
> 
> There isn't any maximum data limit on the bio submitted from fs,
> and block layer will make the final bio sent to driver correct
> by applying all kinds of queue limit, such as max segment size,
> max segment number, max sectors, ...
> 
>> or something else? Currently I can see bios as large as 127 megs 
>> on sequential workloads, I got prompted to this since btrfs has a 
>> memory allocation that is dependent on the data in the bio and this 
>> particular memory allocation started failing with order 6 allocs. 
> 
> Could you share us the code? I don't see why order 6 allocs is a must.

When a bio is submitted btrfs has to calculate the checksum for it; this
happens in btrfs_csum_one_bio. Said checksums are stored in a kmalloc'ed
array, whose size is calculated as:

32 + (bio_size / btrfs' block size (usually 4k)) * 4. So for a 127mb bio
that would be 32 + (134184960 / 4096) * 4 = 131072 bytes, i.e. a 128k,
order-5 allocation, with the slightly larger bios in the trace pushing
kmalloc over into an order-6 one. Admittedly the code in btrfs should know
better than to make unbounded allocations without a fallback, but bios
suddenly becoming rather unbounded in size caught us off guard.
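For reference, that allocation size can be reproduced in a few lines (a user-space sketch assuming a 32-byte struct header and 4-byte crc32c checksums, roughly modelling btrfs_ordered_sum_size()):

```c
#include <assert.h>

#define BTRFS_BLOCK_SIZE 4096ul
#define CSUM_SIZE	 4ul	/* crc32c */
#define SUM_HEADER	 32ul	/* approx. sizeof(struct btrfs_ordered_sum) */

/* Bytes kmalloc'ed for the checksum array of a bio of bio_bytes bytes. */
static unsigned long csum_alloc_size(unsigned long bio_bytes)
{
	return SUM_HEADER + (bio_bytes / BTRFS_BLOCK_SIZE) * CSUM_SIZE;
}
```

For the 134184960-byte bio in the trace this yields exactly 131072 bytes (128k, an order-5 page allocation); the 134246400-byte bios yield 131132 bytes, which kmalloc() rounds up past 128k into an order-6 allocation.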


> 
>> Further debugging showed that with the following xfs_io command line: 
>>
>>
>> xfs_io -f -c "pwrite -S 0x61 -b 4m 0 10g" /media/scratch/file1
>>
>> I can easily see very large bios: 
>>
>> [trace snipped]
>>
>>
>> So that's 127 megs in a single bio? This stems from the new merging logic. 
>> 07173c3ec276 ("block: enable multipage bvecs") made it so that physically 
>> contiguous pages added to the bio would just modify bi_iter.bi_size and the 
>> initial page's bio_vec's bv_len. There's no longer the 
>> page == bv->bv_page portion of the check. 
> 
> bio_add_page() tries best to put physically contiguous pages into one bvec, and
> I don't see anything is wrong in the log.
> 
> Could you show us what the real problem is?
> 
> Thanks,
> Ming
> 

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Possible bio merging breakage in mp bio rework
  2019-04-06  6:09   ` Nikolay Borisov
@ 2019-04-06  8:00     ` Qu Wenruo
  2019-04-06 12:30     ` Ming Lei
  1 sibling, 0 replies; 8+ messages in thread
From: Qu Wenruo @ 2019-04-06  8:00 UTC (permalink / raw)
  To: Nikolay Borisov, Ming Lei
  Cc: Jens Axboe, Omar Sandoval, linux-block, LKML, linux-btrfs



On 2019/4/6 2:09 PM, Nikolay Borisov wrote:
>
>
> On 6.04.19 г. 3:16 ч., Ming Lei wrote:
>> Hi Nikolay,
>>
>> On Fri, Apr 05, 2019 at 07:04:18PM +0300, Nikolay Borisov wrote:
>>> Hello Ming,
>>>
>>> Following the mp biovec rework what is the maximum
>>> data that a bio could contain? Should it be PAGE_SIZE * bio_vec
>>
>> There isn't any maximum data limit on the bio submitted from fs,
>> and block layer will make the final bio sent to driver correct
>> by applying all kinds of queue limit, such as max segment size,
>> max segment number, max sectors, ...
>>
>>> or something else? Currently I can see bios as large as 127 megs
>>> on sequential workloads, I got prompted to this since btrfs has a
>>> memory allocation that is dependent on the data in the bio and this
>>> particular memory allocation started failing with order 6 allocs.
>>
>> Could you share us the code? I don't see why order 6 allocs is a must.
>
> When a bio is submitted btrfs has to calculate the checksum for it; this
> happens in btrfs_csum_one_bio. Said checksums are stored in a kmalloc'ed
> array, whose size is calculated as:
>
> 32 + (bio_size / btrfs' block size (usually 4k)) * 4. So for a 127mb bio
> that would be 32 + (134184960 / 4096) * 4 = 131072 bytes, i.e. a 128k,
> order-5 allocation, with the slightly larger bios in the trace pushing
> kmalloc over into an order-6 one. Admittedly the code in btrfs should know
> better than to make unbounded allocations without a fallback, but bios
> suddenly becoming rather unbounded in size caught us off guard.

Can we use kmalloc() for small csum arrays but switch to pages for
larger ones?

Thanks,
Qu

>
>
>>
>>> Further debugging showed that with the following xfs_io command line:
>>>
>>>
>>> xfs_io -f -c "pwrite -S 0x61 -b 4m 0 10g" /media/scratch/file1
>>>
>>> I can easily see very large bios:
>>>
>>> [trace snipped]
>>>
>>>
>>> So that's 127 megs in a single bio? This stems from the new merging logic.
>>> 07173c3ec276 ("block: enable multipage bvecs") made it so that physically
>>> contiguous pages added to the bio would just modify bi_iter.bi_size and the
>>> initial page's bio_vec's bv_len. There's no longer the
>>> page == bv->bv_page portion of the check.
>>
>> bio_add_page() tries best to put physically contiguous pages into one bvec, and
>> I don't see anything is wrong in the log.
>>
>> Could you show us what the real problem is?
>>
>> Thanks,
>> Ming
>>

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Possible bio merging breakage in mp bio rework
  2019-04-06  6:09   ` Nikolay Borisov
  2019-04-06  8:00     ` Qu Wenruo
@ 2019-04-06 12:30     ` Ming Lei
  1 sibling, 0 replies; 8+ messages in thread
From: Ming Lei @ 2019-04-06 12:30 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: Jens Axboe, Omar Sandoval, linux-block, LKML, linux-btrfs

On Sat, Apr 06, 2019 at 09:09:12AM +0300, Nikolay Borisov wrote:
> 
> 
> On 6.04.19 г. 3:16 ч., Ming Lei wrote:
> > Hi Nikolay,
> > 
> > On Fri, Apr 05, 2019 at 07:04:18PM +0300, Nikolay Borisov wrote:
> >> Hello Ming, 
> >>
> >> Following the mp biovec rework what is the maximum 
> >> data that a bio could contain? Should it be PAGE_SIZE * bio_vec 
> > 
> > There isn't any maximum data limit on the bio submitted from fs,
> > and block layer will make the final bio sent to driver correct
> > by applying all kinds of queue limit, such as max segment size,
> > max segment number, max sectors, ...
> > 
> >> or something else? Currently I can see bios as large as 127 megs 
> >> on sequential workloads, I got prompted to this since btrfs has a 
> >> memory allocation that is dependent on the data in the bio and this 
> >> particular memory allocation started failing with order 6 allocs. 
> > 
> > Could you share us the code? I don't see why order 6 allocs is a must.
> 
> When a bio is submitted btrfs has to calculate the checksum for it; this
> happens in btrfs_csum_one_bio. Said checksums are stored in a kmalloc'ed
> array, whose size is calculated as:
>
> 32 + (bio_size / btrfs' block size (usually 4k)) * 4. So for a 127mb bio
> that would be 32 + (134184960 / 4096) * 4 = 131072 bytes, i.e. a 128k,
> order-5 allocation, with the slightly larger bios in the trace pushing
> kmalloc over into an order-6 one. Admittedly the code in btrfs should know
> better than to make unbounded allocations without a fallback, but bios
> suddenly becoming rather unbounded in size caught us off guard.

OK, thanks for your explanation.

Given it is a btrfs-specific feature, I'd suggest you set a max size for
the btrfs bio. For example, suppose the max checksum array is 4k; then the
max bio size can be calculated as:

	((4k - 32) / csum size) * btrfs' block size

which should be big enough.
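With the same assumed 32-byte header and 4-byte crc32c checksums as earlier in the thread, that bound works out to just under 4 megs (sketch):

```c
#include <assert.h>

#define BTRFS_BLOCK_SIZE 4096ul
#define CSUM_SIZE	 4ul	/* crc32c */
#define SUM_HEADER	 32ul	/* approx. sizeof(struct btrfs_ordered_sum) */

/* Largest bio whose checksum array still fits in max_array bytes. */
static unsigned long max_bio_bytes(unsigned long max_array)
{
	return ((max_array - SUM_HEADER) / CSUM_SIZE) * BTRFS_BLOCK_SIZE;
}
```

A 4k array leaves room for 1016 checksums, capping the bio at 1016 blocks, i.e. 4161536 bytes.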

Thanks,
Ming

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Possible bio merging breakage in mp bio rework
  2019-04-06  0:16 ` Ming Lei
  2019-04-06  6:09   ` Nikolay Borisov
@ 2019-04-08  9:52   ` Johannes Thumshirn
  2019-04-08 10:19     ` Ming Lei
  1 sibling, 1 reply; 8+ messages in thread
From: Johannes Thumshirn @ 2019-04-08  9:52 UTC (permalink / raw)
  To: Ming Lei, Nikolay Borisov
  Cc: Jens Axboe, Omar Sandoval, linux-block, LKML, linux-btrfs

On 06/04/2019 02:16, Ming Lei wrote:
> Hi Nikolay,
> 
> On Fri, Apr 05, 2019 at 07:04:18PM +0300, Nikolay Borisov wrote:
>> Hello Ming, 
>>
>> Following the mp biovec rework what is the maximum 
>> data that a bio could contain? Should it be PAGE_SIZE * bio_vec 
> 
> There isn't any maximum data limit on the bio submitted from fs,
> and block layer will make the final bio sent to driver correct
> by applying all kinds of queue limit, such as max segment size,
> max segment number, max sectors, ...

Naive question: why are we creating possibly huge bios just to split
them according to the LLDD's limits afterwards?

Can't we look at the limits in e.g. bio_add_page() and decide if we need
to split there?

This is just me thinking about it; I haven't yet thought about whether
there are any resulting performance penalties.

Byte,
	Johannes
-- 
Johannes Thumshirn                            SUSE Labs Filesystems
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Possible bio merging breakage in mp bio rework
  2019-04-08  9:52   ` Johannes Thumshirn
@ 2019-04-08 10:19     ` Ming Lei
  2019-04-08 10:22       ` Johannes Thumshirn
  0 siblings, 1 reply; 8+ messages in thread
From: Ming Lei @ 2019-04-08 10:19 UTC (permalink / raw)
  To: Johannes Thumshirn
  Cc: Nikolay Borisov, Jens Axboe, Omar Sandoval, linux-block, LKML,
	linux-btrfs

On Mon, Apr 08, 2019 at 11:52:59AM +0200, Johannes Thumshirn wrote:
> On 06/04/2019 02:16, Ming Lei wrote:
> > Hi Nikolay,
> > 
> > On Fri, Apr 05, 2019 at 07:04:18PM +0300, Nikolay Borisov wrote:
> >> Hello Ming, 
> >>
> >> Following the mp biovec rework what is the maximum 
> >> data that a bio could contain? Should it be PAGE_SIZE * bio_vec 
> > 
> > There isn't any maximum data limit on the bio submitted from fs,
> > and block layer will make the final bio sent to driver correct
> > by applying all kinds of queue limit, such as max segment size,
> > max segment number, max sectors, ...
> 
> Naive question, why are we creating possibly huge bios just to split
> them according the the LLDD's limits afterwards?

bio split is one important IO model in the block layer, which simplifies
stacked drivers (dm, md, bcache, ...) a lot.

It is very reasonable to apply the queue limits in the queue's
.make_request_fn().

Otherwise, it would cause a huge mess in stacking drivers if queue limits
were applied in bio_add_page(); see the previous .merge_bvec_fn
implementations in these stacking drivers.

Not only bio_add_page(); bio cloning is involved too.

> 
> Can't we look at the limits in e.g. bio_add_page() and decide if we need
> to split there?

bio_add_page() is absolutely the fast path, and it is much more efficient
to apply the limits just once in the queue's .make_request_fn.
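As a toy model of that split-at-submission approach (hypothetical names and sizes; in the real kernel this is blk_queue_split()/bio_split() working with the queue limits), the limits are applied once per submitted bio rather than on every bio_add_page() call:

```c
#include <assert.h>

/* Chop one oversized bio into children no larger than max_bytes, the
 * way a queue's make_request path would before handing off to the LLDD. */
static unsigned int split_bio(unsigned long bio_bytes, unsigned long max_bytes,
			      unsigned long *child, unsigned int cap)
{
	unsigned int n = 0;

	while (bio_bytes && n < cap) {
		unsigned long c = bio_bytes < max_bytes ? bio_bytes : max_bytes;

		child[n++] = c;	/* record this child's size */
		bio_bytes -= c;
	}
	return n;
}
```

Splitting the 134184960-byte bio from the trace at an assumed 1280k limit produces 103 children in a single pass at submission time, with no per-page limit checks on the build-up path.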

Thanks,
Ming

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Possible bio merging breakage in mp bio rework
  2019-04-08 10:19     ` Ming Lei
@ 2019-04-08 10:22       ` Johannes Thumshirn
  0 siblings, 0 replies; 8+ messages in thread
From: Johannes Thumshirn @ 2019-04-08 10:22 UTC (permalink / raw)
  To: Ming Lei
  Cc: Nikolay Borisov, Jens Axboe, Omar Sandoval, linux-block, LKML,
	linux-btrfs

On 08/04/2019 12:19, Ming Lei wrote:
> bio_add_page() is absolutely the fast path, and it is much more efficient
> to apply the limit just once in the queue's .make_request_fn.

You're right, this makes sense.

Thanks,
	Johannes
-- 
Johannes Thumshirn                            SUSE Labs Filesystems
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2019-04-08 10:22 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-04-05 16:04 Possible bio merging breakage in mp bio rework Nikolay Borisov
2019-04-06  0:16 ` Ming Lei
2019-04-06  6:09   ` Nikolay Borisov
2019-04-06  8:00     ` Qu Wenruo
2019-04-06 12:30     ` Ming Lei
2019-04-08  9:52   ` Johannes Thumshirn
2019-04-08 10:19     ` Ming Lei
2019-04-08 10:22       ` Johannes Thumshirn
