* New dm-bufio with shrinker API
@ 2011-09-02 21:34 Mikulas Patocka
  2011-09-05  9:04 ` Joe Thornber
  0 siblings, 1 reply; 14+ messages in thread
From: Mikulas Patocka @ 2011-09-02 21:34 UTC (permalink / raw)
  To: Edward Thornber; +Cc: Christoph Hellwig, dm-devel

Hi

I created new dm-bufio that uses shrinker API and placed it here:
http://people.redhat.com/mpatocka/patches/kernel/dm-thinp-bufio/

Mikulas


* Re: New dm-bufio with shrinker API
  2011-09-02 21:34 New dm-bufio with shrinker API Mikulas Patocka
@ 2011-09-05  9:04 ` Joe Thornber
  2011-09-05 14:49   ` Joe Thornber
  0 siblings, 1 reply; 14+ messages in thread
From: Joe Thornber @ 2011-09-05  9:04 UTC (permalink / raw)
  To: Mikulas Patocka; +Cc: Christoph Hellwig, dm-devel

On Fri, Sep 02, 2011 at 05:34:09PM -0400, Mikulas Patocka wrote:
> Hi
> 
> I created new dm-bufio that uses shrinker API and placed it here:
> http://people.redhat.com/mpatocka/patches/kernel/dm-thinp-bufio/

Thanks Mikulas, I'll merge over the next day or two.

- Joe


* Re: New dm-bufio with shrinker API
  2011-09-05  9:04 ` Joe Thornber
@ 2011-09-05 14:49   ` Joe Thornber
  2011-09-05 15:07     ` Christoph Hellwig
  2011-09-05 16:01     ` Mikulas Patocka
  0 siblings, 2 replies; 14+ messages in thread
From: Joe Thornber @ 2011-09-05 14:49 UTC (permalink / raw)
  To: Mikulas Patocka, dm-devel, Christoph Hellwig

On Mon, Sep 05, 2011 at 10:04:29AM +0100, Joe Thornber wrote:
> On Fri, Sep 02, 2011 at 05:34:09PM -0400, Mikulas Patocka wrote:
> > Hi
> > 
> > I created new dm-bufio that uses shrinker API and placed it here:
> > http://people.redhat.com/mpatocka/patches/kernel/dm-thinp-bufio/
> 
> Thanks Mikulas, I'll merge over the next day or two.

Mikulas,

It's merged and pushed to my github repo.

I changed the test suite to reset the peak_allocated parameter before
each test and record it at the end of each test.  It's very hard to
say what is right and wrong when talking about cache sizes, since you
always have to qualify anything by saying 'for this particular load'.
However, I think bufio could be more aggressive about recycling cache
entries.  With the old block manager the test suite ran nicely with
less than 256k, from memory I think I started seeing slow down around
128k.  With bufio I'm seeing consistently larger cache sizes for the
same performance.

For instance the test_overwriting_various_thin_devices scenario from
here
(https://github.com/jthornber/thinp-test-suite/blob/master/basic_tests.rb)
has a peak use of ~1100k, if I change from using dt with random io
pattern to plain old dd then the usage drops to ~900k.  Setting the
max_age parameter to 1 second had very little effect.

With my bm I would trigger a flush if fewer than a quarter of the blocks
were free, and at that point would try to flush half the blocks (I think;
I'd have to check the exact numbers).  Presumably you're doing something
very similar, except with different numbers.  Do you think we could
publish these params to allow some experimentation, please?
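
Roughly, in code, the policy described above looks like this (a sketch
only: the structure and helper names are illustrative, and the thresholds
are the approximate numbers quoted above):

/* Old block-manager style policy: when fewer than a quarter of the
 * cached blocks are free, write back about half of them. */
static void maybe_flush(struct bm_cache *bm)		/* hypothetical type */
{
	if (bm->nr_free < bm->nr_blocks / 4)
		flush_blocks(bm, bm->nr_blocks / 2);	/* hypothetical helper */
}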

The allocated_* parameter files always seem to contain 0, even when
cache_size is non-zero.

I don't understand the correspondence between 'cache_size' and
'peak_allocated'.  I just ran a test and got these numbers:

cache_size:     7475200
peak_allocated: 974848

Is the cache_size correct?  7M seems an awful lot.

Can we really not avoid using dm-io to submit the I/Os?  I was surprised
to see that in there when scanning the code for parameter names.

- Joe


* Re: New dm-bufio with shrinker API
  2011-09-05 14:49   ` Joe Thornber
@ 2011-09-05 15:07     ` Christoph Hellwig
  2011-09-06  8:50       ` Joe Thornber
                         ` (2 more replies)
  2011-09-05 16:01     ` Mikulas Patocka
  1 sibling, 3 replies; 14+ messages in thread
From: Christoph Hellwig @ 2011-09-05 15:07 UTC (permalink / raw)
  To: Mikulas Patocka, dm-devel, Christoph Hellwig

On Mon, Sep 05, 2011 at 03:49:14PM +0100, Joe Thornber wrote:
> I changed the test suite to reset the peak_allocated parameter before
> each test and record it at the end of each test.  It's very hard to
> say what is right and wrong when talking about cache sizes, since you
> always have to qualify anything by saying 'for this particular load'.
> However, I think bufio could be more aggressive about recycling cache
> entries.  With the old block manager the test suite ran nicely with
> less than 256k, from memory I think I started seeing slow down around
> 128k.  With bufio I'm seeing consistently larger cache sizes for the
> same performance.

Is there any reason you'll need a fixed size?  This is fairly similar in
concept to the XFS buffer cache, which does perfectly well by allocating
memory as needed and letting the shrinker reclaim buffers when under
memory pressure.
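
For context, the shrinker interface being discussed looks roughly like
this in kernels of this era; the callback body and the dm_bufio_client
fields below are illustrative assumptions, not the actual dm-bufio code:

/* ~2.6.39/3.0-style shrinker: one callback that is asked to drop
 * sc->nr_to_scan objects and to report how many reclaimable objects
 * remain, so the VM can apply pressure proportionally. */
static int bufio_shrink(struct shrinker *s, struct shrink_control *sc)
{
	struct dm_bufio_client *c =
		container_of(s, struct dm_bufio_client, shrinker);

	if (sc->nr_to_scan)
		drop_clean_buffers(c, sc->nr_to_scan);	/* illustrative helper */

	return count_reclaimable_buffers(c);		/* illustrative helper */
}

/* registered once per client, e.g. at client-creation time: */
c->shrinker.shrink = bufio_shrink;
c->shrinker.seeks = DEFAULT_SEEKS;
register_shrinker(&c->shrinker);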


* Re: New dm-bufio with shrinker API
  2011-09-05 14:49   ` Joe Thornber
  2011-09-05 15:07     ` Christoph Hellwig
@ 2011-09-05 16:01     ` Mikulas Patocka
  2011-09-06  9:33       ` Joe Thornber
  1 sibling, 1 reply; 14+ messages in thread
From: Mikulas Patocka @ 2011-09-05 16:01 UTC (permalink / raw)
  To: Joe Thornber; +Cc: Christoph Hellwig, dm-devel

> I don't understand the correspondence between 'cache_size' and
> 'peak_allocated'.  I just ran a test and got these numbers:
> 
> cache_size:     7475200
> peak_allocated: 974848
> 
> Is the cache_size correct?  7M seems an awful lot.

"cache_size" is the value that you set as a maximum cache size. The 
default is 2% of memory or 25% of vmalloc area.

"cache_size" doesn't change with benchmark that you run. You can set 
cache size manually by writing the value to the file.

"peak_allocated" is the maximum number of bytes that was actually in use. 
"peak_allocated" grows as more cache is allocated, but it is never shrunk.

> With the old block manager the test suite ran nicely with
> less than 256k, from memory I think I started seeing slow down around
> 128k.  With bufio I'm seeing consistently larger cache sizes for the
> same performance.

So, reduce cache_size to 256k (or whatever value you want to test) and see 
how it performs.

> For instance the test_overwriting_various_thin_devices scenario from
> here
> (https://github.com/jthornber/thinp-test-suite/blob/master/basic_tests.rb)
> has a peak use of ~1100k, if I change from using dt with random io
> pattern to plain old dd then the usage drops to ~900k. Setting the
> max_age parameter to 1 second had very little effect.

Reduce cache_size and try it.

> Can we really not avoid using dm-io to submit the I/Os?  I was surprised
> to see that in there when scanning the code for parameter names.

If we didn't use dm-io, then we'd have to submit and complete several bios
in parallel ourselves. It is possible to avoid dm-io, but it makes no
sense, because we would then be duplicating dm-io's logic in dm-bufio.
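
For reference, what dm-io provides is submission and completion of a
(possibly multi-page) region as a single request; a rough sketch, where
the buffer/client fields and the completion callback are illustrative and
only struct dm_io_region, struct dm_io_request and dm_io() are the real
dm-io API:

/* Illustrative completion callback (io_notify_fn signature). */
static void read_endio(unsigned long error, void *context)
{
	/* mark the buffer up to date, record the error, wake waiters ... */
}

/* Sketch: read one buffer's data; dm-io splits this into however many
 * bios are needed and completes asynchronously via the callback. */
static void read_buffer(struct dm_bufio_client *c, struct dm_buffer *b)
{
	struct dm_io_region region = {
		.bdev	= c->bdev,			/* illustrative field */
		.sector	= b->block * (c->block_size >> 9),
		.count	= c->block_size >> 9,		/* 512-byte sectors */
	};
	struct dm_io_request req = {
		.bi_rw		= READ,
		.mem.type	= DM_IO_VMA,		/* data may be vmalloc'ed */
		.mem.ptr.vma	= b->data,
		.notify.fn	= read_endio,
		.notify.context	= b,
		.client		= c->dm_io,		/* illustrative field */
	};

	(void)dm_io(&req, 1, &region, NULL);
}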

Mikulas

> - Joe
> 


* Re: New dm-bufio with shrinker API
  2011-09-05 15:07     ` Christoph Hellwig
@ 2011-09-06  8:50       ` Joe Thornber
  2011-09-06  9:53       ` Joe Thornber
  2011-09-06 15:57       ` Mikulas Patocka
  2 siblings, 0 replies; 14+ messages in thread
From: Joe Thornber @ 2011-09-06  8:50 UTC (permalink / raw)
  To: device-mapper development; +Cc: Christoph Hellwig, Mikulas Patocka

On Mon, Sep 05, 2011 at 11:07:15AM -0400, Christoph Hellwig wrote:
> On Mon, Sep 05, 2011 at 03:49:14PM +0100, Joe Thornber wrote:
> > I changed the test suite to reset the peak_allocated parameter before
> > each test and record it at the end of each test.  It's very hard to
> > say what is right and wrong when talking about cache sizes, since you
> > always have to qualify anything by saying 'for this particular load'.
> > However, I think bufio could be more aggressive about recycling cache
> > entries.  With the old block manager the test suite ran nicely with
> > less than 256k, from memory I think I started seeing slow down around
> > 128k.  With bufio I'm seeing consistently larger cache sizes for the
> > same performance.
> 
> Is there any reason you'll need a fixed size?  This is fairly similar in
> concept to the XFS buffer cache, which does perfectly well by allocating
> memory as needed and letting the shrinker reclaim buffers when under
> memory pressure.

This is exactly what we're trying to do.

- Joe


* Re: New dm-bufio with shrinker API
  2011-09-05 16:01     ` Mikulas Patocka
@ 2011-09-06  9:33       ` Joe Thornber
  2011-09-06 16:08         ` Mikulas Patocka
  0 siblings, 1 reply; 14+ messages in thread
From: Joe Thornber @ 2011-09-06  9:33 UTC (permalink / raw)
  To: Mikulas Patocka; +Cc: Christoph Hellwig, dm-devel

On Mon, Sep 05, 2011 at 12:01:28PM -0400, Mikulas Patocka wrote:
> "cache_size" is the value that you set as a maximum cache size. The 
> default is 2% of memory or 25% of vmalloc area.

Ah, could you rename this variable to 'max_allocated' then, please, to
match the 'total_allocated' field (which I presume gives the current
cache size)?

> > With the old block manager the test suite ran nicely with
> > less than 256k, from memory I think I started seeing slow down around
> > 128k.  With bufio I'm seeing consistently larger cache sizes for the
> > same performance.
> 
> So, reduce cache_size to 256k (or whatever value you want to test) and see 
> how it performs.

But then I'm limited to 256k; my point is that we want scaling _and_ lower
memory use.  We cannot tell our users to experiment to find the right
setting for this depending on the number of pools they're running and the
usage of each pool.

> > For instance the test_overwriting_various_thin_devices scenario from
> > here
> > (https://github.com/jthornber/thinp-test-suite/blob/master/basic_tests.rb)
> > has a peak use of ~1100k, if I change from using dt with random io
> > pattern to plain old dd then the usage drops to ~900k. Setting the
> > max_age parameter to 1 second had very little effect.
> 
> Reduce cache_size and try it.

Here are the numbers (best of 3 runs):

| Test                        | 256k cache (M/s) | 2M cache (M/s) |
| unprovisioned thin          |             74.4 |             75 |
| provisioned thin            |             72.8 |           72.6 |
| new snap (complete sharing) |             73.7 |           73.8 |
| old snap (no sharing)       |             72.2 |           72.8 |

So I think that proves my point.  We're getting no benefit from that
extra memory; is there a subsystem that could be making better use of it
(e.g. the page cache)?  Or are you telling me that nobody else would have
been using that memory?

(This is all just tweaking, bufio is working very well).

- Joe


* Re: New dm-bufio with shrinker API
  2011-09-05 15:07     ` Christoph Hellwig
  2011-09-06  8:50       ` Joe Thornber
@ 2011-09-06  9:53       ` Joe Thornber
  2011-09-06 15:57       ` Mikulas Patocka
  2 siblings, 0 replies; 14+ messages in thread
From: Joe Thornber @ 2011-09-06  9:53 UTC (permalink / raw)
  To: device-mapper development; +Cc: Christoph Hellwig, Mikulas Patocka

On Mon, Sep 05, 2011 at 11:07:15AM -0400, Christoph Hellwig wrote:
> On Mon, Sep 05, 2011 at 03:49:14PM +0100, Joe Thornber wrote:
> > I changed the test suite to reset the peak_allocated parameter before
> > each test and record it at the end of each test.  It's very hard to
> > say what is right and wrong when talking about cache sizes, since you
> > always have to qualify anything by saying 'for this particular load'.
> > However, I think bufio could be more aggressive about recycling cache
> > entries.  With the old block manager the test suite ran nicely with
> > less than 256k, from memory I think I started seeing slow down around
> > 128k.  With bufio I'm seeing consistently larger cache sizes for the
> > same performance.
> 
> Is there any reason you'll need a fixed size?  This is fairly similar in
> concept to the XFS buffer cache, which does perfectly well by allocating
> memory as needed and letting the shrinker reclaim buffers when under
> memory pressure.

Well, if the shrinker does such a good job, do we really need to set a
maximum value for the cache size at all?  (Perhaps this was your question
and I'm being slow.)

- Joe


* Re: New dm-bufio with shrinker API
  2011-09-05 15:07     ` Christoph Hellwig
  2011-09-06  8:50       ` Joe Thornber
  2011-09-06  9:53       ` Joe Thornber
@ 2011-09-06 15:57       ` Mikulas Patocka
  2011-09-06 16:08         ` Christoph Hellwig
  2 siblings, 1 reply; 14+ messages in thread
From: Mikulas Patocka @ 2011-09-06 15:57 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: dm-devel

On Mon, 5 Sep 2011, Christoph Hellwig wrote:

> On Mon, Sep 05, 2011 at 03:49:14PM +0100, Joe Thornber wrote:
> > I changed the test suite to reset the peak_allocated parameter before
> > each test and record it at the end of each test.  It's very hard to
> > say what is right and wrong when talking about cache sizes, since you
> > always have to qualify anything by saying 'for this particular load'.
> > However, I think bufio could be more aggressive about recycling cache
> > entries.  With the old block manager the test suite ran nicely with
> > less than 256k, from memory I think I started seeing slow down around
> > 128k.  With bufio I'm seeing consistently larger cache sizes for the
> > same performance.
> 
> Is there any reason you'll need a fixed size?  This is fairly similar in
> concept to the XFS buffer cache, which does perfectly well by allocating
> memory as needed and letting the shrinker reclaim buffers when under
> memory pressure.

It is possible to make the size unlimited --- the question is: does the
shrinker run when we exhaust the vmalloc arena?

The dm-bufio cache uses the vmalloc arena under some circumstances. On some
architectures (for example i386), the vmalloc arena is smaller than main
memory, and therefore it may be exhausted before main memory is.

What does XFS do when the vmalloc arena is exhausted?

Mikulas


* Re: New dm-bufio with shrinker API
  2011-09-06 15:57       ` Mikulas Patocka
@ 2011-09-06 16:08         ` Christoph Hellwig
  2011-09-07 18:47           ` Mikulas Patocka
  0 siblings, 1 reply; 14+ messages in thread
From: Christoph Hellwig @ 2011-09-06 16:08 UTC (permalink / raw)
  To: Mikulas Patocka; +Cc: dm-devel

On Tue, Sep 06, 2011 at 11:57:00AM -0400, Mikulas Patocka wrote:
> > Is there any reason you'll need a fixed size?  This is fairly similar in
> > concept to the XFS buffer cache, which does perfectly well by allocating
> > memory as needed and letting the shrinker reclaim buffers when under
> > memory pressure.
> 
> It is possible to make unlimited size. --- the question: is the shrinker 
> run when we exhaust vmalloc arena?
> 
> dm-bufio cache uses vmalloc arena under some circumstances. On some 
> architectures (for example i386), vmalloc arena is smaller than main 
> memory, therefore it may overflow before main memory does.
> 
> What does XFS do when vmalloc arena is exhausted?

At this point shrinkers do not handle vmalloc space, although we could
add them.  In the default configuration XFS uses very little vmalloc
space in the buffer cache - only the 8 log buffers are vmapped, and
those can't be reclaimed anyway.  During log recovery or if using the
non-standard larger directory block mkfs option it can consume a larger
amount of vmalloc space, and we have run into problems because of that,
e.g. take a look at the loop around vm_map_ram() in _xfs_buf_map_pages()
that we had to add as a workaround, and the commit introducing it for
more details (a19fb380).
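
The workaround referred to is, roughly, a retry loop of the following
shape (paraphrased from memory rather than quoted verbatim; pages and
page_count stand for the buffer's page array):

/* If vm_map_ram() cannot find space, flush lazily-freed vmap areas
 * with vm_unmap_aliases() and retry once before giving up. */
void *addr = NULL;
int retried = 0;

do {
	addr = vm_map_ram(pages, page_count, -1, PAGE_KERNEL);
	if (addr)
		break;
	vm_unmap_aliases();
} while (retried++ <= 1);

if (!addr)
	return -ENOMEM;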

Just curious, why do you need the buffers to be vmapped?  If we'd design
the dir2 format these days we'd make sure it is aligned in a way that
we could deal with individually mapped pages.


* Re: New dm-bufio with shrinker API
  2011-09-06  9:33       ` Joe Thornber
@ 2011-09-06 16:08         ` Mikulas Patocka
  2011-09-07  9:46           ` Joe Thornber
  0 siblings, 1 reply; 14+ messages in thread
From: Mikulas Patocka @ 2011-09-06 16:08 UTC (permalink / raw)
  To: Joe Thornber; +Cc: Christoph Hellwig, dm-devel



On Tue, 6 Sep 2011, Joe Thornber wrote:

> On Mon, Sep 05, 2011 at 12:01:28PM -0400, Mikulas Patocka wrote:
> > "cache_size" is the value that you set as a maximum cache size. The 
> > default is 2% of memory or 25% of vmalloc area.
> 
> ah, could you rename this variable to 'max_allocated' then please, to
> match with the 'total_allocated' field (which I presume gives the
> current cache size?).

OK.

> > > With the old block manager the test suite ran nicely with
> > > less than 256k, from memory I think I started seeing slow down around
> > > 128k.  With bufio I'm seeing consistently larger cache sizes for the
> > > same performance.
> > 
> > So, reduce cache_size to 256k (or whatever value you want to test) and see 
> > how it performs.
> 
> But then I'm limited to 256k, my point is we want scaling _and_ to use
> less memory.  We cannot tell our users to experiment to find the right
> setting for this depending on the number of pools they're running and
> the usage of each pool.
> 
> > > For instance the test_overwriting_various_thin_devices scenario from
> > > here
> > > (https://github.com/jthornber/thinp-test-suite/blob/master/basic_tests.rb)
> > > has a peak use of ~1100k, if I change from using dt with random io
> > > pattern to plain old dd then the usage drops to ~900k. Setting the
> > > max_age parameter to 1 second had very little effect.
> > 
> > Reduce cache_size and try it.
> 
> Here are the numbers (best of 3 runs):
> 
> | Test                        | 256k cache (M/s) | 2M cache (M/s) |
> | unprovisioned thin          |             74.4 |             75 |
> | provisioned thin            |             72.8 |           72.6 |
> | new snap (complete sharing) |             73.7 |           73.8 |
> | old snap (no sharing)       |             72.2 |           72.8 |
> 
> So I think that proves my point.  We're getting no benefit from that
> extra memory, is there a subsystem that could be making better use of
> it? (eg, page cache?).  Or are you telling me that nobody else would
> have been using that memory?
> 
> (This is all just tweaking, bufio is working very well).
> 
> - Joe

So, I can implement a call "void dm_bufio_discard_buffer(struct
dm_bufio_client *c, sector_t block)"; this call will discard a specific
buffer at a specific location (the call is not guaranteed to succeed; it
will not discard if, for example, someone is holding the buffer). It would
be like a "bforget" function for filesystem buffers.

You will call this function on metadata sectors that you are freeing.

Do you agree with this interface?
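
For clarity, the proposed interface and a hypothetical call site (the
caller-side names are made up for illustration):

/* Proposed: best-effort drop of a buffer so the cache stops holding data
 * for a block that the caller has just freed.  May silently do nothing. */
void dm_bufio_discard_buffer(struct dm_bufio_client *c, sector_t block);

/* Hypothetical caller, e.g. when a metadata block is released: */
static void metadata_block_freed(struct dm_bufio_client *c, sector_t block)
{
	dm_bufio_discard_buffer(c, block);
}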


Other than this, I don't know how to reduce the cache size; I don't know
of any algorithm that would guess the right cache size automatically. In
operating systems, caches usually grow without limit, regardless of
whether anyone actually needs the cached data or not.

Mikulas


* Re: New dm-bufio with shrinker API
  2011-09-06 16:08         ` Mikulas Patocka
@ 2011-09-07  9:46           ` Joe Thornber
  2011-09-07 18:39             ` Mikulas Patocka
  0 siblings, 1 reply; 14+ messages in thread
From: Joe Thornber @ 2011-09-07  9:46 UTC (permalink / raw)
  To: Mikulas Patocka; +Cc: Christoph Hellwig, dm-devel

On Tue, Sep 06, 2011 at 12:08:51PM -0400, Mikulas Patocka wrote:
> So, I can implement a call "void dm_bufio_discard_buffer(struct 
> dm_bufio_client *c, sector_t block)" and this call will discard a specific 
> buffer at a specific location (the call is not guaranteed to succeed, it 
> would not discard if someone is holding the buffer or so). It would be 
> like a "bforget" function for filesystem buffers.
> 
> You will call this function on metadata sectors that you are freeing.
> 
> Do you agree with this interface?

That's not really what I was after, but it's a good idea.  Don't do
anything about it now, and I'll add instrumentation to see if I can drive
it effectively.  I try to recycle freed blocks as quickly as possible to
avoid fragmenting free space, which may mean there is little benefit.

> Other than this, I don't know how to reduce cache size, I don't know about 
> any algorithm that would guess cache size automatically. In operating 
> systems, caches usually grow without limit regardless of whether someone 
> needs the cache or not.

ok, let's go with things as they are.  Thx for your hard work.

One other optimisation to think about: As you know, if a non-blocking
lookup of the thinp mapping fails, the bio gets handed across to a
worker thread to do the blocking lookup.  Is there any way you
could make dm_bm_read_try_lock() pass a preload hint to bufio, since
we know that block is going to be required in the near future?

- Joe


* Re: New dm-bufio with shrinker API
  2011-09-07  9:46           ` Joe Thornber
@ 2011-09-07 18:39             ` Mikulas Patocka
  0 siblings, 0 replies; 14+ messages in thread
From: Mikulas Patocka @ 2011-09-07 18:39 UTC (permalink / raw)
  To: Joe Thornber; +Cc: Christoph Hellwig, dm-devel

> One other optimisation to think about: As you know, if a non-blocking
> lookup of the thinp mapping fails, the bio gets handed across to a
> worker thread to do the blocking lookup.  Is there any way you
> could make dm_bm_read_try_lock() pass a preload hint to bufio, since
> we know that block is going to be required in the near future?
> 
> - Joe

No, you can't submit any I/Os in the request handler (i.e. in dm's "map"
function). Such I/Os are queued and delayed until the request handler
exits. If I did it, there would be an I/O left hanging that would never
finish, and a possibility of deadlock.

For example:
* process 1 submits a bio in the request handler; the bio submission waits
until the request handler exits
* process 2 takes the dm-bufio mutex and waits for this bio to be finished 
(thus, it waits for the request handler of process 1 to finish)
* process 1 tries to take the dm-bufio mutex again in the same request 
handler, waiting for process 2, which waits for process 1 --- deadlock.

Mikulas


* Re: New dm-bufio with shrinker API
  2011-09-06 16:08         ` Christoph Hellwig
@ 2011-09-07 18:47           ` Mikulas Patocka
  0 siblings, 0 replies; 14+ messages in thread
From: Mikulas Patocka @ 2011-09-07 18:47 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: dm-devel



On Tue, 6 Sep 2011, Christoph Hellwig wrote:

> On Tue, Sep 06, 2011 at 11:57:00AM -0400, Mikulas Patocka wrote:
> > > Is there any reason you'll need a fixed size?  This is fairly similar in
> > > concept to the XFS buffer cache, which does perfectly well by allocating
> > > memory as needed and letting the shrinker reclaim buffers when under
> > > memory pressure.
> > 
> > It is possible to make unlimited size. --- the question: is the shrinker 
> > run when we exhaust vmalloc arena?
> > 
> > dm-bufio cache uses vmalloc arena under some circumstances. On some 
> > architectures (for example i386), vmalloc arena is smaller than main 
> > memory, therefore it may overflow before main memory does.
> > 
> > What does XFS do when vmalloc arena is exhausted?
> 
> At this point shrinkers do not handle vmalloc space, although we could
> add them.  In the default configuration XFS uses very little vmalloc
> space in the buffer cache - only the 8 log buffers are vmapped, and
> those can't be reclaimed anyway.  During log recovery or if using the
> non-standard larger directory block mkfs option it can consume a larger
> amount of vmalloc space, and we have run into problems because of that,
> e.g. take a look at the loop around vm_map_ram() in _xfs_buf_map_pages()
> that we had to add as a workaround, and the commit introducing it for
> more details (a19fb380).

I see --- but shouldn't vm_map_ram() do its own cleanup and call
vm_unmap_aliases() accordingly? Do you mean that any function that
allocates something from the vmalloc area needs to call vm_unmap_aliases()
and retry in case of failure?

> Just curious, why do you need the buffers to be vmapped?  If we'd design
> the dir2 format these days we'd make sure it is aligned in a way that
> we could deal with individually mapped pages.

I need it when we use buffers larger than a page. They are allocated with
get_free_pages, but that is unreliable and has its own limits, so I use
vmalloc as a reliable fallback.
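
A rough sketch of that allocation strategy (the helper and its exact
shape are illustrative, not the posted code):

/* Try the physically-contiguous allocator first and fall back to
 * vmalloc; the fallback is what forces some buffers to be vmapped. */
static void *alloc_buffer_data(unsigned long size, gfp_t gfp, bool *vmapped)
{
	void *p = (void *)__get_free_pages(gfp | __GFP_NOWARN, get_order(size));

	if (p) {
		*vmapped = false;
		return p;
	}

	*vmapped = true;
	return __vmalloc(size, gfp, PAGE_KERNEL);
}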

Mikulas

