linux-bcache.vger.kernel.org archive mirror
* Re: Bcache upstreaming
@ 2013-01-19  8:41 Steven Haigh
       [not found] ` <50FA5C38.60301-tY1ak9Q0PTWHXe+LvDLADg@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Steven Haigh @ 2013-01-19  8:41 UTC (permalink / raw)
  To: linux-bcache-u79uwXL29TY76Z2rM5mHXA

[-- Attachment #1: Type: text/plain, Size: 1347 bytes --]

> From: Kent Overstreet <koverstreet@... 
> <http://gmane.org/get-address.php?address=koverstreet%2dhpIqsD4AKlfQT0dZR%2bAlfA%40public.gmane.org>>
> Subject: Bcache upstreaming
> <http://news.gmane.org/find-root.php?message_id=%3c20130104235040.GA26407%40google.com%3e>
> Newsgroups: gmane.linux.kernel.bcache.devel
> <http://news.gmane.org/gmane.linux.kernel.bcache.devel>
> Date: 2013-01-04 23:50:40 GMT (2 weeks, 8 hours and 47 minutes ago)
> I've (finally!) got a bcache branch hacked up that ought to be suitable
> to go upstream, possibly in staging initially.
>
> It's currently closer to the dev branch than the stable branch, plus
> some additional minor changes to make it all more self contained. The
> code has seen a decent amount of testing and I think it's in good shape,
> but I'd like it if it could see a bit more testing before I see about
> pushing it upstream.
>
> If anyone wants to try it out, check out the bcache-for-staging branch.
> It's against Linux 3.7.

Hi Kent,

Just wondering if you maintain a patch for this vs kernel 3.7.x?

I build EL6 RPMs for use as a Xen Dom0 and I'd like to include bcache in 
them for testing....

-- 
Steven Haigh

Email: netwiz-tY1ak9Q0PTWHXe+LvDLADg@public.gmane.org
Web: http://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299



[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4965 bytes --]

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: Bcache upstreaming
       [not found] ` <50FA5C38.60301-tY1ak9Q0PTWHXe+LvDLADg@public.gmane.org>
@ 2013-01-19 10:35   ` Kent Overstreet
       [not found]     ` <CAC7rs0v=zA-6Lf9kH5jmXxySci6GTLMu_Tq1pZFhHDpYcj0APQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Kent Overstreet @ 2013-01-19 10:35 UTC (permalink / raw)
  To: Steven Haigh; +Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA

Yeah, the bcache master branch is currently against 3.7.

On Sat, Jan 19, 2013 at 12:41 AM, Steven Haigh <netwiz-tY1ak9Q0PTWHXe+LvDLADg@public.gmane.org> wrote:
>> From: Kent Overstreet <koverstreet@...
>> <http://gmane.org/get-address.php?address=koverstreet%2dhpIqsD4AKlfQT0dZR%2bAlfA%40public.gmane.org>>
>> Subject:Bcache upstreaming
>> <http://news.gmane.org/find-root.php?message_id=%3c20130104235040.GA26407%40google.com%3e>
>> Newsgroups:gmane.linux.kernel.bcache.devel
>> <http://news.gmane.org/gmane.linux.kernel.bcache.devel>
>> Date: 2013-01-04 23:50:40 GMT (2 weeks, 8 hours and 47 minutes ago)
>>
>> I've (finally!) got a bcache branch hacked up that ought to be suitable
>> to go upstream, possibly in staging initially.
>>
>> It's currently closer to the dev branch than the stable branch, plus
>> some additional minor changes to make it all more self contained. The
>> code has seen a decent amount of testing and I think it's in good shape,
>> but I'd like it if it could see a bit more testing before I see about
>> pushing it upstream.
>>
>> If anyone wants to try it out, checkout the bcache-for-staging branch.
>> It's against Linux 3.7.
>
>
> Hi Kent,
>
> Just wondering if you maintain a patch for this vs kernel 3.7.x?
>
> I build EL6 RPMs for use as a Xen Dom0 and I'd like to include bcache in
> them for testing....
>
> --
> Steven Haigh
>
> Email: netwiz-tY1ak9Q0PTWHXe+LvDLADg@public.gmane.org
> Web: http://www.crc.id.au
> Phone: (03) 9001 6090 - 0412 935 897
> Fax: (03) 8338 0299
>
>


* Re: Bcache upstreaming
       [not found]     ` <CAC7rs0v=zA-6Lf9kH5jmXxySci6GTLMu_Tq1pZFhHDpYcj0APQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2013-01-19 10:42       ` Steven Haigh
  0 siblings, 0 replies; 48+ messages in thread
From: Steven Haigh @ 2013-01-19 10:42 UTC (permalink / raw)
  To: linux-bcache-u79uwXL29TY76Z2rM5mHXA

Does this include all the kernel source or just a patch?

I'd like to just put a single patch into my kernel RPM to make 
management easier...

--
Steven Haigh

Email: netwiz-tY1ak9Q0PTWHXe+LvDLADg@public.gmane.org
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299

On 19/01/2013 9:35 PM, Kent Overstreet wrote:
> Yeah, the bcache master branch is currently against 3.7.
>
> On Sat, Jan 19, 2013 at 12:41 AM, Steven Haigh <netwiz-tY1ak9Q0PTWHXe+LvDLADg@public.gmane.org> wrote:
>>> From: Kent Overstreet <koverstreet@...
>>> <http://gmane.org/get-address.php?address=koverstreet%2dhpIqsD4AKlfQT0dZR%2bAlfA%40public.gmane.org>>
>>> Subject:Bcache upstreaming
>>> <http://news.gmane.org/find-root.php?message_id=%3c20130104235040.GA26407%40google.com%3e>
>>> Newsgroups:gmane.linux.kernel.bcache.devel
>>> <http://news.gmane.org/gmane.linux.kernel.bcache.devel>
>>> Date: 2013-01-04 23:50:40 GMT (2 weeks, 8 hours and 47 minutes ago)
>>>
>>> I've (finally!) got a bcache branch hacked up that ought to be suitable
>>> to go upstream, possibly in staging initially.
>>>
>>> It's currently closer to the dev branch than the stable branch, plus
>>> some additional minor changes to make it all more self contained. The
>>> code has seen a decent amount of testing and I think it's in good shape,
>>> but I'd like it if it could see a bit more testing before I see about
>>> pushing it upstream.
>>>
>>> If anyone wants to try it out, checkout the bcache-for-staging branch.
>>> It's against Linux 3.7.
>>
>> Hi Kent,
>>
>> Just wondering if you maintain a patch for this vs kernel 3.7.x?
>>
>> I build EL6 RPMs for use as a Xen Dom0 and I'd like to include bcache in
>> them for testing....
>>
>> --
>> Steven Haigh
>>
>> Email: netwiz-tY1ak9Q0PTWHXe+LvDLADg@public.gmane.org
>> Web: http://www.crc.id.au
>> Phone: (03) 9001 6090 - 0412 935 897
>> Fax: (03) 8338 0299
>>
>>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-bcache" in
> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: Bcache upstreaming
       [not found]                                                                                                           ` <20130201203229.GA21110-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2013-02-01 20:43                                                                                                             ` Tejun Heo
  0 siblings, 0 replies; 48+ messages in thread
From: Tejun Heo @ 2013-02-01 20:43 UTC (permalink / raw)
  To: Mike Snitzer
  Cc: Kent Overstreet, Lars Ellenberg, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	linux-bcache-u79uwXL29TY76Z2rM5mHXA

Hello,

On Fri, Feb 01, 2013 at 03:32:30PM -0500, Mike Snitzer wrote:
> The need for the same holder refcount is like I thought: a DM device's
> active and inactive tables can open the same block devices.  I looked at
> the prospect of pushing the refcount into DM but I don't think it is as
> clean as having the bd_holder_disk struct continue to provide the

It's a layering thing.  It's dm which is sharing exclusive open.  It
should be dm's responsibility to keep track of who's using what.

> refcount.  Pushing it into DM would still require an explicit call to
> bd_unlink_disk_holder.

While I don't know the code, I can't see why it has to be that way.
If a dm device is holding a device, it'll maintain the link between
old and new tables.  If it's being transferred to another device or
whatnot, it really should release the exclusive open and then
reacquire for the new use.

> The refcount is really pretty benign; so I'm inclined to leave things as
> is.

Yeah, the code isn't horribly complex but it's conceptually pretty
ugly.  If dm can back out of it, it would be awesome.  If that's not
something readily obtainable, ah well, another cruft we have to keep
around, I guess.

Thanks.

-- 
tejun


* Re: Bcache upstreaming
       [not found]                                                                                                       ` <20130201161809.GB31863-9pTldWuhBndy/B6EtB590w@public.gmane.org>
@ 2013-02-01 20:32                                                                                                         ` Mike Snitzer
       [not found]                                                                                                           ` <20130201203229.GA21110-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Mike Snitzer @ 2013-02-01 20:32 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Kent Overstreet, Lars Ellenberg, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	linux-bcache-u79uwXL29TY76Z2rM5mHXA

On Fri, Feb 01 2013 at 11:18am -0500,
Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org> wrote:

> Hello, Kent.
> 
> On Fri, Feb 01, 2013 at 08:15:47AM -0800, Kent Overstreet wrote:
> > Eww, not a flag. I meant completely separate functions, rip out the
> > refcounting entirely and have the refcounting-manipulating versions
> > available as
> 
> No, I mean, internally there needs to be a way to tell whether the
> currently existing linkage is from the old or the new interface, so that
> exclusive close can decide whether it can remove it or not.  Anyways,
> let's wait for Mike for now.

The need for the same holder refcount is like I thought: a DM device's
active and inactive tables can open the same block devices.  I looked at
the prospect of pushing the refcount into DM but I don't think it is as
clean as having the bd_holder_disk struct continue to provide the
refcount.  Pushing it into DM would still require an explicit call to
bd_unlink_disk_holder.

The refcount is really pretty benign; so I'm inclined to leave things as
is.


* Re: Bcache upstreaming
       [not found]                                                                                                   ` <20130201161547.GY26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
@ 2013-02-01 16:18                                                                                                     ` Tejun Heo
       [not found]                                                                                                       ` <20130201161809.GB31863-9pTldWuhBndy/B6EtB590w@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Tejun Heo @ 2013-02-01 16:18 UTC (permalink / raw)
  To: Kent Overstreet
  Cc: Mike Snitzer, Lars Ellenberg, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	linux-bcache-u79uwXL29TY76Z2rM5mHXA

Hello, Kent.

On Fri, Feb 01, 2013 at 08:15:47AM -0800, Kent Overstreet wrote:
> Eww, not a flag. I meant completely separate functions, rip out the
> refcounting entirely and have the refcounting-manipulating versions
> available as

No, I mean, internally there needs to be a way to tell whether the
currently existing linkage is from the old or the new interface, so that
exclusive close can decide whether it can remove it or not.  Anyways,
let's wait for Mike for now.

Thanks.

-- 
tejun


* Re: Bcache upstreaming
       [not found]                                                                               ` <20130201161227.GA19245-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2013-02-01 16:17                                                                                 ` Kent Overstreet
  0 siblings, 0 replies; 48+ messages in thread
From: Kent Overstreet @ 2013-02-01 16:17 UTC (permalink / raw)
  To: Mike Snitzer
  Cc: Kent Overstreet, linux-bcache-u79uwXL29TY76Z2rM5mHXA,
	device-mapper development

On Fri, Feb 01, 2013 at 11:12:27AM -0500, Mike Snitzer wrote:
> > Was there still a directory in /sys/fs/bcache?
> 
> Yes, after unmount the /dev/bcacheX device is deleted but the
> associated /sys/fs/bcache/<uuid> still exists.
> 
> Echoing 1 to /sys/fs/bcache/<uuid>/stop cleared it up.

Yep, that's how it's supposed to work.
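[Editor's note: the intended two-step teardown can be sketched as shell. To keep the sketch runnable anywhere it operates on a mock of the sysfs tree; on a real system the writes go to the actual `/sys/block/<dev>/bcache/stop` and `/sys/fs/bcache/<uuid>/stop` files named in this thread. The device name and UUID below are placeholders.]

```shell
set -e
sys=$(mktemp -d)   # mock root standing in for /sys
mkdir -p "$sys/block/bcache0/bcache" "$sys/fs/bcache/7e9f-uuid"
: > "$sys/block/bcache0/bcache/stop"
: > "$sys/fs/bcache/7e9f-uuid/stop"

# 1) After unmounting the filesystem, stop the bcache device itself;
#    this tears down /dev/bcache0 but NOT the cache set.
echo 1 > "$sys/block/bcache0/bcache/stop"

# 2) The cache set has a separate lifetime (many backing devices can be
#    attached to one cache), so it must be stopped explicitly, otherwise
#    the cache device stays held open.
echo 1 > "$sys/fs/bcache/7e9f-uuid/stop"
```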


* Re: Bcache upstreaming
       [not found]                                                                                               ` <20130201160820.GA31863-9pTldWuhBndy/B6EtB590w@public.gmane.org>
@ 2013-02-01 16:15                                                                                                 ` Kent Overstreet
       [not found]                                                                                                   ` <20130201161547.GY26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Kent Overstreet @ 2013-02-01 16:15 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Mike Snitzer, Lars Ellenberg, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	linux-bcache-u79uwXL29TY76Z2rM5mHXA

On Fri, Feb 01, 2013 at 08:08:20AM -0800, Tejun Heo wrote:
> Hey,
> 
> On Fri, Feb 01, 2013 at 07:33:18AM -0800, Kent Overstreet wrote:
> > Could add a new, fixed version that doesn't do the refcounting, bcache
> > and I imagine md could use that right away (maybe even just split the
> > refcounting out into different functions and have dm call those
> > directly, probably an easy way to refactor it anyways)
> 
> I don't know.  We then would have two interfaces doing about the same
> thing and a flag indicating whether the new or old one was used to
> create the link so that exclusive close can decide to remove it or
> not, which seems a bit complicated. 

Eww, not a flag. I meant completely separate functions, rip out the
refcounting entirely and have the refcounting-manipulating versions
available as

bd_link_disk_holder_broken()
bd_unlink_disk_holder_broken()

or somesuch.

> Let's see whether Mike can remove
> the weirdness from dm side.

That'd be best, but if it can't happen right away it's just a way to
isolate the weirdness.


* Re: Bcache upstreaming
       [not found]                                                                           ` <20130201153936.GX26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
@ 2013-02-01 16:12                                                                             ` Mike Snitzer
       [not found]                                                                               ` <20130201161227.GA19245-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Mike Snitzer @ 2013-02-01 16:12 UTC (permalink / raw)
  To: Kent Overstreet
  Cc: Kent Overstreet, linux-bcache-u79uwXL29TY76Z2rM5mHXA,
	device-mapper development

On Fri, Feb 01 2013 at 10:39am -0500,
Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:

> On Thu, Jan 31, 2013 at 10:38:11PM -0500, Mike Snitzer wrote:
> > I'm just seeing extra newlines (empty lines), e.g.:
> > bcache: run_cache_set() invalidating existing data
> 
> Oh I see it, I screwed that up when I converted a bunch of stuff to
> pr_*() macros and my brain was so used to reading the same stuff I
> didn't notice it. I'll have that fixed in a minute.
> 
> > > > - bcache doesn't appear to be establishing proper holders on the devices
> > > >   it uses for the backing and cache devices.
> > > >   - lsblk doesn't show any associations with bcache devices.
> > > 
> > > How's that created - what function am I looking for?
> > 
> > bd_link_disk_holder and bd_unlink_disk_holder
> 
> Ok, that's done - it's in the testing branch
> 
> > 
> > > >   - the fio utility isn't able to get any stats for the bcache device or
> > > >     the devices bcache uses.
> > > 
> > > I'd been meaning to fix that, never got around to figuring out how those
> > > stats are generated. Function/file you can point me to?
> > 
> > I think you'll get the stats via genhd (add_disk) -- the stats are
> > the 'disk_stats dkstats' member of the genhd's hd_struct.  But as of
> > now you'll notice that /sys/block/bcacheX/stat only ever contains 0s.
> 
> > 
> > part_stat_inc, part_stat_add are the low-level methods for driving the
> > counters up.  But I'm not sure why bcache isn't getting these stats  --
> > disk stats are something we get for free with DM devices so I haven't
> > really had to dig into these mechanics in detail.
> 
> Ah, I remember coming across that before now. Got that fixed too, it's
> in both the stable and testing branches.
> 
> And dm and md are both calling it from their make_request functions,
> which is why bcache wasn't getting it.

stats and lsblk look good.

> > > > - if I 'stop' a bcache device (using sysfs) while it is mounted; once I
> > > >   unmount the filesystem the device that bcache was using as a cache
> > > >   still has an open count of 1 but the bcache device then no longer
> > > >   exists
> > > 
> > > You mean the backing device isn't open, just the cache device?
> > >
> > > That's intended behaviour, backing and cache devices have separate
> > > lifetimes (and you can attach many backing devices to a single cache).
> > > 
> > > You just have to stop the cache set separately, via
> > > /sys/fs/bcache/<uuid>/stop or /sys/block/<cache device>/bcache/set/stop
> > 
> > The /dev/bcacheX device was mounted.  I issued 1 to
> > /sys/block/#{bcache_name}/bcache/stop before unmounting.  I then
> > unmounted /dev/bcacheX, the remaining bcache device infrastructure was
> > torn down.  But after that the device that was the cache device was
> > still held open -- seems like a reference leaked.
> 
> Was there still a directory in /sys/fs/bcache?

Yes, after unmount the /dev/bcacheX device is deleted but the
associated /sys/fs/bcache/<uuid> still exists.

Echoing 1 to /sys/fs/bcache/<uuid>/stop cleared it up.


* Re: Bcache upstreaming
       [not found]                                                                                           ` <20130201153318.GW26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
@ 2013-02-01 16:08                                                                                             ` Tejun Heo
       [not found]                                                                                               ` <20130201160820.GA31863-9pTldWuhBndy/B6EtB590w@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Tejun Heo @ 2013-02-01 16:08 UTC (permalink / raw)
  To: Kent Overstreet
  Cc: Mike Snitzer, Lars Ellenberg, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	linux-bcache-u79uwXL29TY76Z2rM5mHXA

Hey,

On Fri, Feb 01, 2013 at 07:33:18AM -0800, Kent Overstreet wrote:
> Could add a new, fixed version that doesn't do the refcounting, bcache
> and I imagine md could use that right away (maybe even just split the
> refcounting out into different functions and have dm call those
> directly, probably an easy way to refactor it anyways)

I don't know.  We then would have two interfaces doing about the same
thing and a flag indicating whether the new or old one was used to
create the link so that exclusive close can decide to remove it or
not, which seems a bit complicated.  Let's see whether Mike can remove
the weirdness from dm side.

Thanks.

-- 
tejun


* Re: Bcache upstreaming
       [not found]                                                                       ` <20130201033810.GA14867-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2013-02-01 15:39                                                                         ` Kent Overstreet
       [not found]                                                                           ` <20130201153936.GX26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Kent Overstreet @ 2013-02-01 15:39 UTC (permalink / raw)
  To: Mike Snitzer
  Cc: Kent Overstreet, linux-bcache-u79uwXL29TY76Z2rM5mHXA,
	device-mapper development

On Thu, Jan 31, 2013 at 10:38:11PM -0500, Mike Snitzer wrote:
> I'm just seeing extra newlines (empty lines), e.g.:
> bcache: run_cache_set() invalidating existing data

Oh I see it, I screwed that up when I converted a bunch of stuff to
pr_*() macros and my brain was so used to reading the same stuff I
didn't notice it. I'll have that fixed in a minute.

> > > - bcache doesn't appear to be establishing proper holders on the devices
> > >   it uses for the backing and cache devices.
> > >   - lsblk doesn't show any associations with bcache devices.
> > 
> > How's that created - what function am I looking for?
> 
> bd_link_disk_holder and bd_unlink_disk_holder

Ok, that's done - it's in the testing branch

> 
> > >   - the fio utility isn't able to get any stats for the bcache device or
> > >     the devices bcache uses.
> > 
> > I'd been meaning to fix that, never got around to figuring out how those
> > stats are generated. Function/file you can point me to?
> 
> I think you'll get the stats via genhd (add_disk) -- the stats are
> the 'disk_stats dkstats' member of the genhd's hd_struct.  But as of
> now you'll notice that /sys/block/bcacheX/stat only ever contains 0s.

> 
> part_stat_inc, part_stat_add are the low-level methods for driving the
> counters up.  But I'm not sure why bcache isn't getting these stats  --
> disk stats are something we get for free with DM devices so I haven't
> really had to dig into these mechanics in detail.

Ah, I remember coming across that before now. Got that fixed too, it's
in both the stable and testing branches.

And dm and md are both calling it from their make_request functions,
which is why bcache wasn't getting it.

> > > - if I 'stop' a bcache device (using sysfs) while it is mounted; once I
> > >   unmount the filesystem the device that bcache was using as a cache
> > >   still has an open count of 1 but the bcache device then no longer
> > >   exists
> > 
> > You mean the backing device isn't open, just the cache device?
> >
> > That's intended behaviour, backing and cache devices have separate
> > lifetimes (and you can attach many backing devices to a single cache).
> > 
> > You just have to stop the cache set separately, via
> > /sys/fs/bcache/<uuid>/stop or /sys/block/<cache device>/bcache/set/stop
> 
> The /dev/bcacheX device was mounted.  I issued 1 to
> /sys/block/#{bcache_name}/bcache/stop before unmounting.  I then
> unmounted /dev/bcacheX, the remaining bcache device infrastructure was
> torn down.  But after that the device that was the cache device was
> still held open -- seems like a reference leaked.

Was there still a directory in /sys/fs/bcache?


* Re: Bcache upstreaming
       [not found]                                                                                       ` <20130201153019.GT6824-9pTldWuhBndy/B6EtB590w@public.gmane.org>
@ 2013-02-01 15:33                                                                                         ` Kent Overstreet
       [not found]                                                                                           ` <20130201153318.GW26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Kent Overstreet @ 2013-02-01 15:33 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Mike Snitzer, Lars Ellenberg, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	linux-bcache-u79uwXL29TY76Z2rM5mHXA

On Fri, Feb 01, 2013 at 07:30:19AM -0800, Tejun Heo wrote:
> On Fri, Feb 01, 2013 at 07:27:43AM -0800, Kent Overstreet wrote:
> > > > Kent was talking about using MD (and though he isn't opposed to DM he
> > > > doesn't care to integrate with DM himself).  Either DM or MD would
> > > > implicitly enable bcache to use this interface.  But in the near-term I
> > > > cannot see why Kent shouldn't be able to use bd_link_disk_holder too.
> > > 
> > > Being part of dm or md should make this mostly irrelevant, no?
> > 
> > Yeah, but who knows when that'll actually happen and since this is for
> > userspace I'm just going to call it. The refcounting won't affect me,
> > and using it in bcache won't affect ripping that out.
> 
> Yeah, I don't see any problem regarding user-visible behavior.  Please
> go ahead.  It's just gross internally and I wanted someone to do
> something about it before spreading its misuse (depending on the
> refs).

Could add a new, fixed version that doesn't do the refcounting, bcache
and I imagine md could use that right away (maybe even just split the
refcounting out into different functions and have dm call those
directly, probably an easy way to refactor it anyways)


* Re: Bcache upstreaming
       [not found]                                                                                   ` <20130201152743.GV26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
@ 2013-02-01 15:30                                                                                     ` Tejun Heo
       [not found]                                                                                       ` <20130201153019.GT6824-9pTldWuhBndy/B6EtB590w@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Tejun Heo @ 2013-02-01 15:30 UTC (permalink / raw)
  To: Kent Overstreet
  Cc: Mike Snitzer, Lars Ellenberg, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	linux-bcache-u79uwXL29TY76Z2rM5mHXA

On Fri, Feb 01, 2013 at 07:27:43AM -0800, Kent Overstreet wrote:
> > > Kent was talking about using MD (and though he isn't opposed to DM he
> > > doesn't care to integrate with DM himself).  Either DM or MD would
> > > implicitly enable bcache to use this interface.  But in the near-term I
> > > cannot see why Kent shouldn't be able to use bd_link_disk_holder too.
> > 
> > Being part of dm or md should make this mostly irrelevant, no?
> 
> Yeah, but who knows when that'll actually happen and since this is for
> userspace I'm just going to call it. The refcounting won't affect me,
> and using it in bcache won't affect ripping that out.

Yeah, I don't see any problem regarding user-visible behavior.  Please
go ahead.  It's just gross internally and I wanted someone to do
something about it before spreading its misuse (depending on the
refs).

Thanks.

-- 
tejun


* Re: Bcache upstreaming
       [not found]                                                                               ` <20130201145504.GS6824-9pTldWuhBndy/B6EtB590w@public.gmane.org>
  2013-02-01 15:16                                                                                 ` Mike Snitzer
@ 2013-02-01 15:27                                                                                 ` Kent Overstreet
       [not found]                                                                                   ` <20130201152743.GV26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
  1 sibling, 1 reply; 48+ messages in thread
From: Kent Overstreet @ 2013-02-01 15:27 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Mike Snitzer, Lars Ellenberg, dm-devel-H+wXaHxf7aLQT0dZR+AlfA,
	linux-bcache-u79uwXL29TY76Z2rM5mHXA

On Fri, Feb 01, 2013 at 06:55:04AM -0800, Tejun Heo wrote:
> Hey, Mike.
> 
> On Fri, Feb 01, 2013 at 09:10:03AM -0500, Mike Snitzer wrote:
> > Well judging by the header for commit 49731baa41df404c2c3f44555869ab387363af43  
> > ("block: restore multiple bd_link_disk_holder() support") it just looks
> > like Tejun hates the fact that DM and MD are using this interface.  No
> > alternative is provided; so the "DON'T USE THIS UNLESS YOU'RE ALREADY
> > USING IT." rings hollow.
> 
> The original code was gross regarding kobj handling there so I might
> have overreacted.  Ah right, the refcnt doesn't belong there.  The
> caller should already own both the master and slave devices (creator
> of the former, exclusive opener of the latter) and that really should
> be the extent of ownership that block layer tracks.
> bk_[un]link_disk_holder() implements completely isolated refcnting
> because dm somehow calls the function for the same pair multiple
> times.
> 
> ISTR the problem w/ block layer was that because this adds a separate
> layer of refcnting, it can't be tied to the usual rule of block device
> access.  ie. we really shouldn't need bd_unlink_disk_holder() but the
> linkage's lifetime should be bound to the exclusive open of the slave
> device, which can't currently be done.

Ah ok. Yeah, that refcounting is odd.

> IIRC, there was only one case where this happens in dm, would you be
> interested in tracking that down?  I'd be happy to lose the extra
> refcnting code and tie it back to bdev exclusive open.
> 
> > Kent was talking about using MD (and though he isn't opposed to DM he
> > doesn't care to integrate with DM himself).  Either DM or MD would
> > implicitly enable bcache to use this interface.  But in the near-term I
> > cannot see why Kent shouldn't be able to use bd_link_disk_holder too.
> 
> Being part of dm or md should make this mostly irrelevant, no?

Yeah, but who knows when that'll actually happen and since this is for
userspace I'm just going to call it. The refcounting won't affect me,
and using it in bcache won't affect ripping that out.


* Re: Bcache upstreaming
       [not found]                                                                               ` <20130201145504.GS6824-9pTldWuhBndy/B6EtB590w@public.gmane.org>
@ 2013-02-01 15:16                                                                                 ` Mike Snitzer
  2013-02-01 15:27                                                                                 ` Kent Overstreet
  1 sibling, 0 replies; 48+ messages in thread
From: Mike Snitzer @ 2013-02-01 15:16 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Lars Ellenberg, dm-devel-H+wXaHxf7aLQT0dZR+AlfA, Kent Overstreet,
	linux-bcache-u79uwXL29TY76Z2rM5mHXA

On Fri, Feb 01 2013 at  9:55am -0500,
Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org> wrote:

> Hey, Mike.
> 
> On Fri, Feb 01, 2013 at 09:10:03AM -0500, Mike Snitzer wrote:
> > Well judging by the header for commit 49731baa41df404c2c3f44555869ab387363af43  
> > ("block: restore multiple bd_link_disk_holder() support") it just looks
> > like Tejun hates the fact that DM and MD are using this interface.  No
> > alternative is provided; so the "DON'T USE THIS UNLESS YOU'RE ALREADY
> > USING IT." rings hollow.
> 
> The original code was gross regarding kobj handling there so I might
> have overreacted.  Ah right, the refcnt doesn't belong there.  The
> caller should already own both the master and slave devices (creator
> of the former, exclusive opener of the latter) and that really should
> be the extent of ownership that block layer tracks.
> bd_[un]link_disk_holder() implements completely isolated refcnting
> because dm somehow calls the function for the same pair multiple
> times.

You're likely referring to how DM can load an inactive table while a
table is already active.  These active and inactive DM tables can have
the same block devices associated with them.  Loading a table causes the
devices to be opened exclusively with blkdev_get_by_dev.  See
open_dev and close_dev in drivers/md/dm-table.c

> ISTR the problem w/ block layer was that because this adds a separate
> layer of refcnting, it can't be tied to the usual rule of block device
> access.  ie. we really shouldn't need bd_unlink_disk_holder() but the
> linkage's lifetime should be bound to the exclusive open of the slave
> device, which can't currently be done.
> 
> IIRC, there was only one case where this happens in dm, would you be
> interested in tracking that down?  I'd be happy to lose the extra
> refcnting code and tie it back to bdev exclusive open.

I'll have a closer look.

> > Kent was talking about using MD (and though he isn't opposed to DM he
> > doesn't care to integrate with DM himself).  Either DM or MD would
> > implicitly enable bcache to use this interface.  But in the near-term I
> > cannot see why Kent shouldn't be able to use bd_link_disk_holder too.
> 
> Being part of dm or md should make this mostly irrelevant, no?

Sure, but I don't pretend to know when bcache will make use of either.

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: Bcache upstreaming
       [not found]                                                                           ` <20130201141003.GA18095-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2013-02-01 14:55                                                                             ` Tejun Heo
       [not found]                                                                               ` <20130201145504.GS6824-9pTldWuhBndy/B6EtB590w@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Tejun Heo @ 2013-02-01 14:55 UTC (permalink / raw)
  To: Mike Snitzer
  Cc: Lars Ellenberg, dm-devel-H+wXaHxf7aLQT0dZR+AlfA, Kent Overstreet,
	linux-bcache-u79uwXL29TY76Z2rM5mHXA

Hey, Mike.

On Fri, Feb 01, 2013 at 09:10:03AM -0500, Mike Snitzer wrote:
> Well judging by the header for commit 49731baa41df404c2c3f44555869ab387363af43  
> ("block: restore multiple bd_link_disk_holder() support") it just looks
> like Tejun hates the fact that DM and MD are using this interface.  No
> alternative is provided; so the "DON'T USE THIS UNLESS YOU'RE ALREADY
> USING IT." rings hollow.

The original code was gross regarding kobj handling there so I might
have overreacted.  Ah right, the refcnt doesn't belong there.  The
caller should already own both the master and slave devices (creator
of the former, exclusive opener of the latter) and that really should
be the extent of ownership that block layer tracks.
bd_[un]link_disk_holder() implements completely isolated refcnting
because dm somehow calls the function for the same pair multiple
times.

ISTR the problem w/ block layer was that because this adds a separate
layer of refcnting, it can't be tied to the usual rule of block device
access.  ie. we really shouldn't need bd_unlink_disk_holder() but the
linkage's lifetime should be bound to the exclusive open of the slave
device, which can't currently be done.

IIRC, there was only one case where this happens in dm, would you be
interested in tracking that down?  I'd be happy to lose the extra
refcnting code and tie it back to bdev exclusive open.

> Kent was talking about using MD (and though he isn't opposed to DM he
> doesn't care to integrate with DM himself).  Either DM or MD would
> implicitly enable bcache to use this interface.  But in the near-term I
> cannot see why Kent shouldn't be able to use bd_link_disk_holder too.

Being part of dm or md should make this mostly irrelevant, no?

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: Bcache upstreaming
       [not found]                                                                       ` <20130201103944.GM8837@soda.linbit>
@ 2013-02-01 14:10                                                                         ` Mike Snitzer
       [not found]                                                                           ` <20130201141003.GA18095-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Mike Snitzer @ 2013-02-01 14:10 UTC (permalink / raw)
  To: Lars Ellenberg; +Cc: linux-bcache, tj, dm-devel, Kent Overstreet

[cc'ing Tejun, adding Kent and linux-bcache back to the cc]

On Fri, Feb 01 2013 at  5:39am -0500,
Lars Ellenberg <lars.ellenberg@linbit.com> wrote:

> On Thu, Jan 31, 2013 at 10:38:11PM -0500, Mike Snitzer wrote:
> > On Thu, Jan 31 2013 at  7:33pm -0500,
> > Kent Overstreet <koverstreet@google.com> wrote:
> > 
> > > On Thu, Jan 31, 2013 at 06:08:00PM -0500, Mike Snitzer wrote:
> > > > On Thu, Jan 31 2013 at  5:25pm -0500,
> > > > Kent Overstreet <kent.overstreet@gmail.com> wrote:
> > > > 
> > > > > On Thu, Jan 31, 2013 at 2:17 PM, Mike Snitzer <snitzer@redhat.com> wrote:
> > > > > > Ah, yeah I had a typo in my script.  When I fixed it I get a BUG (with
> > > > > > your latest bcache code) when I try to mkfs.xfs /dev/bcache0:
> > > > > 
> > > > > Heh, that's the dev branch - you don't want to be running the dev
> > > > > branch, there's a lot of buggy crap in there and it almost definitely
> > > > > corrupts data. Testing branch should be good, though.
> > > > 
> > > > OK, I'll pick up changes from -testing until directed elsewhere.
> > > > 
> > > > BTW, here are couple things I've noticed with bcache:
> > > > 
> > > > - The log messages seem to have an extra newline at the end.
> > > 
> > > How are you seeing that/which log messages? I hadn't noticed that
> > > myself (do they not get printed somehow?)
> > 
> > I'm just seeing extra newlines (empty lines), e.g.:
> > bcache: run_cache_set() invalidating existing data
> > 
> > bcache: bch_cached_dev_attach() Caching dm-4 as bcache1 on set 5c0ba0d0-df36-4684-acc5-45f5b4683788
> > 
> > bcache: register_cache() registered cache device dm-3
> > 
> > EXT4-fs (bcache1): mounted filesystem with ordered data mode. Opts: discard
> > 
> > > > - bcache doesn't appear to be establishing proper holders on the devices
> > > >   it uses for the backing and cache devices.
> > > >   - lsblk doesn't show any associations with bcache devices.
> > > 
> > > How's that created - what function am I looking for?
> > 
> > bd_link_disk_holder and bd_unlink_disk_holder
> 
> Upstream says:
>  * bd_link_disk_holder - create symlinks between holding disk and slave bdev
>  * @bdev: the claimed slave bdev
>  * @disk: the holding disk
>  *
>  * DON'T USE THIS UNLESS YOU'RE ALREADY USING IT.
> ...
> 
>  ?

Well judging by the header for commit 49731baa41df404c2c3f44555869ab387363af43  
("block: restore multiple bd_link_disk_holder() support") it just looks
like Tejun hates the fact that DM and MD are using this interface.  No
alternative is provided; so the "DON'T USE THIS UNLESS YOU'RE ALREADY
USING IT." rings hollow.

Kent was talking about using MD (and though he isn't opposed to DM he
doesn't care to integrate with DM himself).  Either DM or MD would
implicitly enable bcache to use this interface.  But in the near-term I
cannot see why Kent shouldn't be able to use bd_link_disk_holder too.

> > > >   - the fio utility isn't able to get any stats for the bcache device or
> > > >     the devices bcache uses.
> > > 
> > > I'd been meaning to fix that, never got around to figuring out how those
> > > stats are generated. Function/file you can point me to?
> > 
> > I think you'll get the stats via genhd (add_disk) -- the stats are
> > the 'disk_stats dkstats' member of the genhd's hd_struct.  But as of
> > now you'll notice that /sys/block/bcacheX/stat only ever contains 0s.
> > 
> > part_stat_inc, part_stat_add are the low-level methods for driving the
> > counters up.  But I'm not sure why bcache isn't getting these stats  --
> > disk stats are something we get for free with DM devices so I haven't
> > really had to dig into these mechanics in detail.
> 
> dm core does that for you.
> for other block devices, you'll have to do that yourself.
> 
> My understanding is that you need to do,
> in your make_request, where "disk" is your own struct gendisk*:
>         const int rw = bio_data_dir(bio);
>         int cpu;
>         cpu = part_stat_lock();
>         part_round_stats(cpu, &disk->part0);
>         part_stat_inc(cpu, &disk->part0, ios[rw]);
>         part_stat_add(cpu, &disk->part0, sectors[rw], bio_sectors(bio));
>         part_inc_in_flight(&disk->part0, rw);
>         part_stat_unlock();
> 
> and on completion,
> (you would need to track the "start time" jiffies somewhere):
> 	int rw = bio_data_dir(bio);
>         unsigned long duration = jiffies - start_time;
>         int cpu;
>         cpu = part_stat_lock();
>         part_stat_add(cpu, &disk->part0, ticks[rw], duration);
>         part_round_stats(cpu, &disk->part0);
>         part_dec_in_flight(&disk->part0, rw);
>         part_stat_unlock();

Thanks for clearing that up.
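For reference, Lars's two fragments combine into a single sketch for a bio-based driver. This is a hedged illustration only: "my_start_io_acct", "my_end_io_acct", and the idea of stashing start_time per bio are assumed names/conventions, not anything from bcache itself, and the part_stat helpers shown are the 3.x-era interface quoted above (the API has changed in later kernels).

```
/* Hedged sketch of generic disk accounting in a bio-based driver,
 * using the ~3.x part_stat helpers quoted above. Function names and
 * the start_time bookkeeping are illustrative, not from bcache.
 */
static void my_start_io_acct(struct gendisk *disk, struct bio *bio)
{
	const int rw = bio_data_dir(bio);
	int cpu = part_stat_lock();

	part_round_stats(cpu, &disk->part0);
	part_stat_inc(cpu, &disk->part0, ios[rw]);
	part_stat_add(cpu, &disk->part0, sectors[rw], bio_sectors(bio));
	part_inc_in_flight(&disk->part0, rw);
	part_stat_unlock();
}

/* On completion; start_time is the jiffies value saved at submission. */
static void my_end_io_acct(struct gendisk *disk, struct bio *bio,
			   unsigned long start_time)
{
	const int rw = bio_data_dir(bio);
	unsigned long duration = jiffies - start_time;
	int cpu = part_stat_lock();

	part_stat_add(cpu, &disk->part0, ticks[rw], duration);
	part_round_stats(cpu, &disk->part0);
	part_dec_in_flight(&disk->part0, rw);
	part_stat_unlock();
}
```

With this in place, /sys/block/<dev>/stat should start reporting non-zero counters, which is what fio and lsblk read.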

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: Bcache upstreaming
       [not found]                                                                   ` <20130201003311.GJ12631-jC9Py7bek1znysI04z7BkA@public.gmane.org>
@ 2013-02-01  3:38                                                                     ` Mike Snitzer
       [not found]                                                                       ` <20130201103944.GM8837@soda.linbit>
       [not found]                                                                       ` <20130201033810.GA14867-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 2 replies; 48+ messages in thread
From: Mike Snitzer @ 2013-02-01  3:38 UTC (permalink / raw)
  To: Kent Overstreet
  Cc: Kent Overstreet, linux-bcache-u79uwXL29TY76Z2rM5mHXA,
	device-mapper development

On Thu, Jan 31 2013 at  7:33pm -0500,
Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:

> On Thu, Jan 31, 2013 at 06:08:00PM -0500, Mike Snitzer wrote:
> > On Thu, Jan 31 2013 at  5:25pm -0500,
> > Kent Overstreet <kent.overstreet-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> > 
> > > On Thu, Jan 31, 2013 at 2:17 PM, Mike Snitzer <snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> > > > Ah, yeah I had a typo in my script.  When I fixed it I get a BUG (with
> > > > your latest bcache code) when I try to mkfs.xfs /dev/bcache0:
> > > 
> > > Heh, that's the dev branch - you don't want to be running the dev
> > > branch, there's a lot of buggy crap in there and it almost definitely
> > > corrupts data. Testing branch should be good, though.
> > 
> > OK, I'll pick up changes from -testing until directed elsewhere.
> > 
> > BTW, here are a couple of things I've noticed with bcache:
> > 
> > - The log messages seem to have an extra newline at the end.
> 
> How are you seeing that/which log messages? I hadn't noticed that
> myself (do they not get printed somehow?)

I'm just seeing extra newlines (empty lines), e.g.:
bcache: run_cache_set() invalidating existing data

bcache: bch_cached_dev_attach() Caching dm-4 as bcache1 on set 5c0ba0d0-df36-4684-acc5-45f5b4683788

bcache: register_cache() registered cache device dm-3

EXT4-fs (bcache1): mounted filesystem with ordered data mode. Opts: discard

> > - bcache doesn't appear to be establishing proper holders on the devices
> >   it uses for the backing and cache devices.
> >   - lsblk doesn't show any associations with bcache devices.
> 
> How's that created - what function am I looking for?

bd_link_disk_holder and bd_unlink_disk_holder

> >   - the fio utility isn't able to get any stats for the bcache device or
> >     the devices bcache uses.
> 
> I'd been meaning to fix that, never got around to figuring out how those
> stats are generated. Function/file you can point me to?

I think you'll get the stats via genhd (add_disk) -- the stats are
the 'disk_stats dkstats' member of the genhd's hd_struct.  But as of
now you'll notice that /sys/block/bcacheX/stat only ever contains 0s.

part_stat_inc, part_stat_add are the low-level methods for driving the
counters up.  But I'm not sure why bcache isn't getting these stats  --
disk stats are something we get for free with DM devices so I haven't
really had to dig into these mechanics in detail.

> > - if I 'stop' a bcache device (using sysfs) while it is mounted; once I
> >   unmount the filesystem the device that bcache was using as a cache
> >   still has an open count of 1 but the bcache device then no longer
> >   exists
> 
> You mean the backing device isn't open, just the cache device?
>
> That's intended behaviour, backing and cache devices have separate
> lifetimes (and you can attach many backing devices to a single cache).
> 
> You just have to stop the cache set separately, via
> /sys/fs/bcache/<uuid>/stop or /sys/block/<cache device>/bcache/set/stop

The /dev/bcacheX device was mounted.  I issued 1 to
/sys/block/#{bcache_name}/bcache/stop before unmounting.  I then
unmounted /dev/bcacheX, the remaining bcache device infrastructure was
torn down.  But after that the device that was the cache device was
still held open -- seems like a reference leaked.

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: Bcache upstreaming
       [not found]                                                               ` <20130131230800.GB13540-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2013-02-01  0:33                                                                 ` Kent Overstreet
       [not found]                                                                   ` <20130201003311.GJ12631-jC9Py7bek1znysI04z7BkA@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Kent Overstreet @ 2013-02-01  0:33 UTC (permalink / raw)
  To: Mike Snitzer
  Cc: Kent Overstreet, linux-bcache-u79uwXL29TY76Z2rM5mHXA,
	device-mapper development

On Thu, Jan 31, 2013 at 06:08:00PM -0500, Mike Snitzer wrote:
> On Thu, Jan 31 2013 at  5:25pm -0500,
> Kent Overstreet <kent.overstreet-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> 
> > On Thu, Jan 31, 2013 at 2:17 PM, Mike Snitzer <snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> > > Ah, yeah I had a typo in my script.  When I fixed it I get a BUG (with
> > > your latest bcache code) when I try to mkfs.xfs /dev/bcache0:
> > 
> > Heh, that's the dev branch - you don't want to be running the dev
> > branch, there's a lot of buggy crap in there and it almost definitely
> > corrupts data. Testing branch should be good, though.
> 
> OK, I'll pick up changes from -testing until directed elsewhere.
> 
> BTW, here are a couple of things I've noticed with bcache:
> 
> - The log messages seem to have an extra newline at the end.

How are you seeing that/which log messages? I hadn't noticed that
myself (do they not get printed somehow?)

> - bcache doesn't appear to be establishing proper holders on the devices
>   it uses for the backing and cache devices.
>   - lsblk doesn't show any associations with bcache devices.

How's that created - what function am I looking for?

>   - the fio utility isn't able to get any stats for the bcache device or
>     the devices bcache uses.

I'd been meaning to fix that, never got around to figuring out how those
stats are generated. Function/file you can point me to?

> - if I 'stop' a bcache device (using sysfs) while it is mounted; once I
>   unmount the filesystem the device that bcache was using as a cache
>   still has an open count of 1 but the bcache device then no longer
>   exists

You mean the backing device isn't open, just the cache device?

That's intended behaviour, backing and cache devices have separate
lifetimes (and you can attach many backing devices to a single cache).

You just have to stop the cache set separately, via
/sys/fs/bcache/<uuid>/stop or /sys/block/<cache device>/bcache/set/stop
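Put together, the teardown Kent describes looks roughly like this in script form (a sketch: bcache0, <uuid>, and <cache device> are placeholders, and the sysfs knobs are exactly the ones named in this thread, not verified against any particular kernel version):

```
# Stop the bcache device itself (tears down /dev/bcache0 once unmounted):
echo 1 > /sys/block/bcache0/bcache/stop

# Separately stop the cache set, which releases the cache device:
echo 1 > /sys/fs/bcache/<uuid>/stop
# (equivalently: echo 1 > /sys/block/<cache device>/bcache/set/stop)
```

Note the two lifetimes are independent: stopping only the bcache device, as in the report above, leaves the cache set (and its open count on the cache device) in place by design.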

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: Bcache upstreaming
       [not found]                                                           ` <CAC7rs0ue6YgqrX9Nc18GdnVtJd558F6W=BZiMXZdRqig3s7sBA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2013-01-31 23:08                                                             ` Mike Snitzer
       [not found]                                                               ` <20130131230800.GB13540-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Mike Snitzer @ 2013-01-31 23:08 UTC (permalink / raw)
  To: Kent Overstreet
  Cc: Kent Overstreet, linux-bcache-u79uwXL29TY76Z2rM5mHXA,
	device-mapper development

On Thu, Jan 31 2013 at  5:25pm -0500,
Kent Overstreet <kent.overstreet-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:

> On Thu, Jan 31, 2013 at 2:17 PM, Mike Snitzer <snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> > Ah, yeah I had a typo in my script.  When I fixed it I get a BUG (with
> > your latest bcache code) when I try to mkfs.xfs /dev/bcache0:
> 
> Heh, that's the dev branch - you don't want to be running the dev
> branch, there's a lot of buggy crap in there and it almost definitely
> corrupts data. Testing branch should be good, though.

OK, I'll pick up changes from -testing until directed elsewhere.

BTW, here are a couple of things I've noticed with bcache:

- The log messages seem to have an extra newline at the end.

- bcache doesn't appear to be establishing proper holders on the devices
  it uses for the backing and cache devices.
  - lsblk doesn't show any associations with bcache devices.
  - the fio utility isn't able to get any stats for the bcache device or
    the devices bcache uses.

- if I 'stop' a bcache device (using sysfs) while it is mounted; once I
  unmount the filesystem the device that bcache was using as a cache
  still has an open count of 1 but the bcache device then no longer
  exists

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: Bcache upstreaming
       [not found]                                                       ` <20130131221711.GA13540-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2013-01-31 22:25                                                         ` Kent Overstreet
       [not found]                                                           ` <CAC7rs0ue6YgqrX9Nc18GdnVtJd558F6W=BZiMXZdRqig3s7sBA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Kent Overstreet @ 2013-01-31 22:25 UTC (permalink / raw)
  To: Mike Snitzer
  Cc: Kent Overstreet, linux-bcache-u79uwXL29TY76Z2rM5mHXA,
	device-mapper development

On Thu, Jan 31, 2013 at 2:17 PM, Mike Snitzer <snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> Ah, yeah I had a typo in my script.  When I fixed it I get a BUG (with
> your latest bcache code) when I try to mkfs.xfs /dev/bcache0:

Heh, that's the dev branch - you don't want to be running the dev
branch, there's a lot of buggy crap in there and it almost definitely
corrupts data. Testing branch should be good, though.

>
> bcache: uuid_inode_write_new_fn() inserting inode 0, unused_inode_hint now 1
> bcache: bch_cached_dev_attach() attached inode 0
> bcache: bch_cached_dev_attach() Caching dm-12 as bcache0 on set 71a7eb63-ed26-48df-8147-59a7d366f242
>
> bcache: register_cache() registered cache device dm-15
>
> ------------[ cut here ]------------
> kernel BUG at drivers/md/bcache/btree.c:1728!
> invalid opcode: 0000 [#1] SMP
> Modules linked in: bcache(O) ebtable_nat ebtables xt_CHECKSUM iptable_mangle bridge autofs4 target_core_iblock target_core_file target_core_pscsi target_core_mod configfs bnx2fc fcoe libfcoe libfc 8021q garp scsi_transport_fc stp llc scsi_tgt sunrpc cpufreq_ondemand acpi_cpufreq freq_table mperf ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables bnx2i cnic uio ipv6 cxgb4i cxgb4 cxgb3i libcxgbi cxgb3 iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi dm_mirror dm_region_hash dm_log dm_round_robin dm_multipath vhost_net macvtap macvlan tun iTCO_wdt iTCO_vendor_support kvm_intel kvm microcode i2c_i801 i2c_core lpc_ich mfd_core igb i7core_edac edac_core iomemory_vsl(O) skd(O) ixgbe dca ptp pps_core mdio ses enclosure dm_mod sg ext4 mbcache jbd2 sr_mod cdrom sd_mod crc_t10dif pata_acpi ata_generic ata_piix megaraid_sas
> CPU 9
> Pid: 100, comm: kworker/9:1 Tainted: G        W  O 3.8.0-rc4.snitm+ #39 FUJITSU                          PRIMERGY RX300 S6             /D2619
> RIP: 0010:[<ffffffffa0703258>]  [<ffffffffa0703258>] subtract_dirty.45211+0xa8/0xb0 [bcache]
> RSP: 0018:ffff88032dd758b8  EFLAGS: 00010246
> RAX: 0000000000000200 RBX: 0000000000000000 RCX: 0000000000000000
> RDX: ffff8802f4080200 RSI: 0000000000000000 RDI: ffff8802f4080220
> RBP: ffff88032dd758d8 R08: 0000000000000300 R09: 0000000000000010
> R10: ffff88032dd759a8 R11: 0000000000000000 R12: ffff8802f4080220
> R13: ffff88032dd759a8 R14: 0000000000004010 R15: 0000000000000020
> FS:  0000000000000000(0000) GS:ffff88033fd20000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
> CR2: 000000000041e450 CR3: 00000002f9a88000 CR4: 00000000000007e0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> Process kworker/9:1 (pid: 100, threadinfo ffff88032dd74000, task ffff88032dcd20c0)
> Stack:
>  00ffffffffffffff ffff8802f4080220 00ffffffffffffff ffff8803301b4e08
>  ffff88032dd759e8 ffffffffa07081a0 ffff88032dd75948 ffffffff810018d7
>  000000002ca26c00 ffff88032e1a75c8 000000092dd75958 0000000000000020
> Call Trace:
>  [<ffffffffa07081a0>] fix_overlapping_extents+0x360/0x640 [bcache]
>  [<ffffffff810018d7>] ? __switch_to+0x157/0x4f0
>  [<ffffffff810521dc>] ? lock_timer_base+0x3c/0x70
>  [<ffffffffa070cd77>] ? __bch_btree_iter_init+0x87/0xd0 [bcache]
>  [<ffffffffa070860d>] btree_insert_key+0x18d/0x680 [bcache]
>  [<ffffffff814f3eda>] ? schedule_timeout+0x13a/0x220
>  [<ffffffff81250a39>] ? cpumask_next_and+0x29/0x50
>  [<ffffffff81250a39>] ? cpumask_next_and+0x29/0x50
>  [<ffffffffa0708c6b>] bch_btree_insert_keys+0x16b/0x350 [bcache]
>  [<ffffffffa070959b>] bch_btree_insert_node+0xbb/0x240 [bcache]
>  [<ffffffffa0709a30>] bch_btree_insert_recurse+0x140/0x190 [bcache]
>  [<ffffffffa0709b8a>] bch_btree_insert+0x10a/0x1a0 [bcache]
>  [<ffffffffa0714fc2>] bch_data_insert_keys+0x52/0x150 [bcache]
>  [<ffffffff8105f327>] process_one_work+0x177/0x430
>  [<ffffffffa0714f70>] ? bch_data_insert_endio+0xc0/0xc0 [bcache]
>  [<ffffffff810612be>] worker_thread+0x12e/0x380
>  [<ffffffff81061190>] ? manage_workers+0x180/0x180
>  [<ffffffff8106652e>] kthread+0xce/0xe0
>  [<ffffffff81066460>] ? kthread_freezable_should_stop+0x70/0x70
>  [<ffffffff814ff1ac>] ret_from_fork+0x7c/0xb0
>  [<ffffffff81066460>] ? kthread_freezable_should_stop+0x70/0x70
> Code: 63 db f0 48 29 98 28 01 00 00 49 8b 45 00 48 8b 80 80 00 00 00 66 83 80 d0 0c 00 00 01 48 8b 5d e8 4c 8b 65 f0 4c 8b 6d f8 c9 c3 <0f> 0b eb fe 0f 1f 40 00 55 48 89 e5 48 83 ec 10 66 66 66 66 90
> RIP  [<ffffffffa0703258>] subtract_dirty.45211+0xa8/0xb0 [bcache]
>  RSP <ffff88032dd758b8>
> ---[ end trace 66c7f74f4fc71e5f ]---

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: Bcache upstreaming
       [not found]                                                   ` <CAC7rs0u_aJS5BsJ0E7wH98z2VxXr=SK1z8yL0-m0Pc85ncJNHg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2013-01-31 22:17                                                     ` Mike Snitzer
       [not found]                                                       ` <20130131221711.GA13540-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Mike Snitzer @ 2013-01-31 22:17 UTC (permalink / raw)
  To: Kent Overstreet
  Cc: Kent Overstreet, linux-bcache-u79uwXL29TY76Z2rM5mHXA,
	device-mapper development

On Thu, Jan 31 2013 at  4:08pm -0500,
Kent Overstreet <kent.overstreet-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:

> On Thu, Jan 31, 2013 at 11:02 AM, Mike Snitzer <snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> > On Wed, Jan 30 2013 at  8:48pm -0500,
> > Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> >>
> >> This is going to take some thought. For now, just disable the shrinker:
> >>
> >> echo 1 > /sys/fs/bcache/<uuid>/internal/btree_shrinker_disabled
> >
> > Oddly I don't have a /sys/fs/bcache/<uuid> even though I have created
> > /dev/bcache0
> >
> > The only files I have in /sys/fs/bcache/ are: register  register_quiet
> 
> That means you have a backing device registered, but not a cache device

Ah, yeah I had a typo in my script.  When I fixed it I get a BUG (with
your latest bcache code) when I try to mkfs.xfs /dev/bcache0:

bcache: uuid_inode_write_new_fn() inserting inode 0, unused_inode_hint now 1
bcache: bch_cached_dev_attach() attached inode 0
bcache: bch_cached_dev_attach() Caching dm-12 as bcache0 on set 71a7eb63-ed26-48df-8147-59a7d366f242

bcache: register_cache() registered cache device dm-15

------------[ cut here ]------------
kernel BUG at drivers/md/bcache/btree.c:1728!
invalid opcode: 0000 [#1] SMP 
Modules linked in: bcache(O) ebtable_nat ebtables xt_CHECKSUM iptable_mangle bridge autofs4 target_core_iblock target_core_file target_core_pscsi target_core_mod configfs bnx2fc fcoe libfcoe libfc 8021q garp scsi_transport_fc stp llc scsi_tgt sunrpc cpufreq_ondemand acpi_cpufreq freq_table mperf ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables bnx2i cnic uio ipv6 cxgb4i cxgb4 cxgb3i libcxgbi cxgb3 iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi dm_mirror dm_region_hash dm_log dm_round_robin dm_multipath vhost_net macvtap macvlan tun iTCO_wdt iTCO_vendor_support kvm_intel kvm microcode i2c_i801 i2c_core lpc_ich mfd_core igb i7core_edac edac_core iomemory_vsl(O) skd(O) ixgbe
  dca ptp pps_core mdio ses enclosure dm_mod sg ext4 mbcache jbd2 sr_mod cdrom sd_mod crc_t10dif pata_acpi ata_generic ata_piix megaraid_sas
CPU 9 
Pid: 100, comm: kworker/9:1 Tainted: G        W  O 3.8.0-rc4.snitm+ #39 FUJITSU                          PRIMERGY RX300 S6             /D2619
RIP: 0010:[<ffffffffa0703258>]  [<ffffffffa0703258>] subtract_dirty.45211+0xa8/0xb0 [bcache]
RSP: 0018:ffff88032dd758b8  EFLAGS: 00010246
RAX: 0000000000000200 RBX: 0000000000000000 RCX: 0000000000000000
RDX: ffff8802f4080200 RSI: 0000000000000000 RDI: ffff8802f4080220
RBP: ffff88032dd758d8 R08: 0000000000000300 R09: 0000000000000010
R10: ffff88032dd759a8 R11: 0000000000000000 R12: ffff8802f4080220
R13: ffff88032dd759a8 R14: 0000000000004010 R15: 0000000000000020
FS:  0000000000000000(0000) GS:ffff88033fd20000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 000000000041e450 CR3: 00000002f9a88000 CR4: 00000000000007e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process kworker/9:1 (pid: 100, threadinfo ffff88032dd74000, task ffff88032dcd20c0)
Stack:
 00ffffffffffffff ffff8802f4080220 00ffffffffffffff ffff8803301b4e08
 ffff88032dd759e8 ffffffffa07081a0 ffff88032dd75948 ffffffff810018d7
 000000002ca26c00 ffff88032e1a75c8 000000092dd75958 0000000000000020
Call Trace:
 [<ffffffffa07081a0>] fix_overlapping_extents+0x360/0x640 [bcache]
 [<ffffffff810018d7>] ? __switch_to+0x157/0x4f0
 [<ffffffff810521dc>] ? lock_timer_base+0x3c/0x70
 [<ffffffffa070cd77>] ? __bch_btree_iter_init+0x87/0xd0 [bcache]
 [<ffffffffa070860d>] btree_insert_key+0x18d/0x680 [bcache]
 [<ffffffff814f3eda>] ? schedule_timeout+0x13a/0x220
 [<ffffffff81250a39>] ? cpumask_next_and+0x29/0x50
 [<ffffffff81250a39>] ? cpumask_next_and+0x29/0x50
 [<ffffffffa0708c6b>] bch_btree_insert_keys+0x16b/0x350 [bcache]
 [<ffffffffa070959b>] bch_btree_insert_node+0xbb/0x240 [bcache]
 [<ffffffffa0709a30>] bch_btree_insert_recurse+0x140/0x190 [bcache]
 [<ffffffffa0709b8a>] bch_btree_insert+0x10a/0x1a0 [bcache]
 [<ffffffffa0714fc2>] bch_data_insert_keys+0x52/0x150 [bcache]
 [<ffffffff8105f327>] process_one_work+0x177/0x430
 [<ffffffffa0714f70>] ? bch_data_insert_endio+0xc0/0xc0 [bcache]
 [<ffffffff810612be>] worker_thread+0x12e/0x380
 [<ffffffff81061190>] ? manage_workers+0x180/0x180
 [<ffffffff8106652e>] kthread+0xce/0xe0
 [<ffffffff81066460>] ? kthread_freezable_should_stop+0x70/0x70
 [<ffffffff814ff1ac>] ret_from_fork+0x7c/0xb0
 [<ffffffff81066460>] ? kthread_freezable_should_stop+0x70/0x70
Code: 63 db f0 48 29 98 28 01 00 00 49 8b 45 00 48 8b 80 80 00 00 00 66 83 80 d0 0c 00 00 01 48 8b 5d e8 4c 8b 65 f0 4c 8b 6d f8 c9 c3 <0f> 0b eb fe 0f 1f 40 00 55 48 89 e5 48 83 ec 10 66 66 66 66 90 
RIP  [<ffffffffa0703258>] subtract_dirty.45211+0xa8/0xb0 [bcache]
 RSP <ffff88032dd758b8>
---[ end trace 66c7f74f4fc71e5f ]---

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: Bcache upstreaming
       [not found]                                       ` <20130131012747.GG12631-jC9Py7bek1znysI04z7BkA@public.gmane.org>
  2013-01-31  1:48                                         ` Kent Overstreet
@ 2013-01-31 22:01                                         ` Kent Overstreet
  1 sibling, 0 replies; 48+ messages in thread
From: Kent Overstreet @ 2013-01-31 22:01 UTC (permalink / raw)
  To: Mike Snitzer
  Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA, device-mapper development

On Wed, Jan 30, 2013 at 05:27:47PM -0800, Kent Overstreet wrote:
> On Wed, Jan 30, 2013 at 05:26:27PM -0800, Kent Overstreet wrote:
> > On Wed, Jan 30, 2013 at 07:10:21PM -0500, Mike Snitzer wrote:
> > > On Wed, Jan 30 2013 at  6:36pm -0500,
> > > Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> > > 
> > > > On Thu, Jan 17, 2013 at 03:41:04AM -0800, Kent Overstreet wrote:
> > > > > On Wed, Jan 16, 2013 at 09:27:28PM -0500, Mike Snitzer wrote:
> > > > > > On Mon, Jan 14 2013 at  5:53pm -0500,
> > > > > > Mike Snitzer <snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> > > > > > 
> > > > > > > On Mon, Jan 14 2013 at  5:37pm -0500,
> > > > > > > Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> > > > > > > 
> > > > > > > > Want to try again with the latest bcache-for-upstream branch? I fixed
> > > > > > > > all that sysfs stuff, but I wasn't seeing the original build error so
> > > > > > > > I'd appreciate if you verify I did in fact fix that issue.
> > > > > > > 
> > > > > > > Will do, thanks Kent.
> > > > > > 
> > > > > > I hit the crash below if I do this in a script:
> > > > > > 
> > > > > > echo 1 > /sys/block/bcache0/bcache/cache/unregister
> > > > > > echo 1 > /sys/block/bcache0/bcache/stop
> > > > > 
> > > > > Thanks - I reproduced it, trying to figure out the sanest way to fix it.
> > > > > This is not the prettiest area of the code :P
> > > > 
> > > > And, I finally have a fix for it up.
> > > > 
> > > > Fixed a bunch of other bugs today too... notably the bug where it'd
> > > > crash if you enabled discards. Was there anything else you or anyone
> > > > else was hitting?
> > > 
> > > Great, thanks, the other outstanding report was this one:
> > > https://lkml.org/lkml/2013/1/17/554
> > 
> > Yep, that was the discard bug I just fixed.
> 
> Err I misread - nope, I missed that one. Taking a look now.

And, it's fixed.

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: Bcache upstreaming
       [not found]                                               ` <20130131190249.GA12786-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2013-01-31 21:08                                                 ` Kent Overstreet
       [not found]                                                   ` <CAC7rs0u_aJS5BsJ0E7wH98z2VxXr=SK1z8yL0-m0Pc85ncJNHg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Kent Overstreet @ 2013-01-31 21:08 UTC (permalink / raw)
  To: Mike Snitzer
  Cc: Kent Overstreet, linux-bcache-u79uwXL29TY76Z2rM5mHXA,
	device-mapper development

On Thu, Jan 31, 2013 at 11:02 AM, Mike Snitzer <snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> On Wed, Jan 30 2013 at  8:48pm -0500,
> Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
>
>> On Wed, Jan 30, 2013 at 05:27:47PM -0800, Kent Overstreet wrote:
>> > On Wed, Jan 30, 2013 at 05:26:27PM -0800, Kent Overstreet wrote:
>> > > On Wed, Jan 30, 2013 at 07:10:21PM -0500, Mike Snitzer wrote:
>> > > > On Wed, Jan 30 2013 at  6:36pm -0500,
>> > > > Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
>> > > >
>> > > > > Fixed a bunch of other bugs today too... notably the bug where it'd
>> > > > > crash if you enabled discards. Was there anything else you or anyone
>> > > > > else was hitting?
>> > > >
>> > > > Great, thanks, the other outstanding report was this one:
>> > > > https://lkml.org/lkml/2013/1/17/554
>> > >
>> > > Yep, that was the discard bug I just fixed.
>> >
>> > Err I misread - nope, I missed that one. Taking a look now.
>>
>> Fucking shrinkers, I swear that's one of the most nonsensical APIs I've
>> yet encountered.
>>
>> This is going to take some thought. For now, just disable the shrinker:
>>
>> echo 1 > /sys/fs/bcache/<uuid>/internal/btree_shrinker_disabled
>
> Oddly I don't have a /sys/fs/bcache/<uuid> even though I have created
> /dev/bcache0
>
> The only files I have in /sys/fs/bcache/ are: register  register_quiet

That means you have a backing device registered, but not a cache device.
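
For reference, the <uuid> directory only shows up once a cache device (not just a backing device) is registered. Roughly, per the bcache documentation (the device name below is a placeholder; make-bcache comes from bcache-tools):

```shell
make-bcache -C /dev/sdX                   # format the SSD as a cache device
echo /dev/sdX > /sys/fs/bcache/register   # /sys/fs/bcache/<cset-uuid> appears
# then attach the backing device to that cache set:
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
```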


* Re: Bcache upstreaming
       [not found]                                           ` <20130131014835.GH12631-jC9Py7bek1znysI04z7BkA@public.gmane.org>
@ 2013-01-31 19:02                                             ` Mike Snitzer
       [not found]                                               ` <20130131190249.GA12786-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Mike Snitzer @ 2013-01-31 19:02 UTC (permalink / raw)
  To: Kent Overstreet
  Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA, device-mapper development

On Wed, Jan 30 2013 at  8:48pm -0500,
Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:

> On Wed, Jan 30, 2013 at 05:27:47PM -0800, Kent Overstreet wrote:
> > On Wed, Jan 30, 2013 at 05:26:27PM -0800, Kent Overstreet wrote:
> > > On Wed, Jan 30, 2013 at 07:10:21PM -0500, Mike Snitzer wrote:
> > > > On Wed, Jan 30 2013 at  6:36pm -0500,
> > > > Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> > > > 
> > > > > Fixed a bunch of other bugs today too... notably the bug where it'd
> > > > > crash if you enabled discards. Was there anything else you or anyone
> > > > > else was hitting?
> > > > 
> > > > Great, thanks, the other outstanding report was this one:
> > > > https://lkml.org/lkml/2013/1/17/554
> > > 
> > > Yep, that was the discard bug I just fixed.
> > 
> > Err I misread - nope, I missed that one. Taking a look now.
> 
> Fucking shrinkers, I swear that's one of the most nonsensical APIs I've
> yet encountered.
> 
> This is going to take some thought. For now, just disable the shrinker:
> 
> echo 1 > /sys/fs/bcache/<uuid>/internal/btree_shrinker_disabled

Oddly I don't have a /sys/fs/bcache/<uuid> even though I have created
/dev/bcache0

The only files I have in /sys/fs/bcache/ are: register  register_quiet


* Re: Bcache upstreaming
       [not found]                                   ` <20130131170103.GT26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
@ 2013-01-31 17:26                                     ` Mike Snitzer
  0 siblings, 0 replies; 48+ messages in thread
From: Mike Snitzer @ 2013-01-31 17:26 UTC (permalink / raw)
  To: Kent Overstreet
  Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA, device-mapper development

On Thu, Jan 31 2013 at 12:01pm -0500,
Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:

> On Thu, Jan 31, 2013 at 11:52:23AM -0500, Mike Snitzer wrote:
> > On Wed, Jan 30 2013 at  6:36pm -0500,
> > Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> > > 
> > > And, I finally have a fix for it up.
> > > 
> > > Fixed a bunch of other bugs today too...
> > 
> > Hey Kent,
> > 
> > Which branch did you push your latest fixes to?  I cannot seem to find
> > them.
> 
> They're in the master bcache branch (also -dev and -testing)

Ah ok, I just missed them when I looked with 'git log' because they
weren't in chronological order.
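
A date filter can sidestep that when hunting for recent fixes. A self-contained illustration on a throwaway repo (the commit names and dates are placeholders, not the real bcache history):

```shell
repo=$(mktemp -d) && cd "$repo" && git init -q
export GIT_AUTHOR_NAME=t GIT_AUTHOR_EMAIL=t@example.com \
       GIT_COMMITTER_NAME=t GIT_COMMITTER_EMAIL=t@example.com

# Two commits whose dates are deliberately far apart:
GIT_AUTHOR_DATE=2013-01-20T12:00:00 GIT_COMMITTER_DATE=2013-01-20T12:00:00 \
    git commit -q --allow-empty -m "older work"
GIT_AUTHOR_DATE=2013-01-30T12:00:00 GIT_COMMITTER_DATE=2013-01-30T12:00:00 \
    git commit -q --allow-empty -m "recent fix"

# --since prunes commits older than the cutoff, so only the recent
# one is listed even when the branch order looks surprising:
git log --since=2013-01-29 --format=%s    # prints: recent fix
```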


* Re: Bcache upstreaming
       [not found]                               ` <20130131165223.GB11894-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2013-01-31 17:01                                 ` Kent Overstreet
       [not found]                                   ` <20130131170103.GT26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Kent Overstreet @ 2013-01-31 17:01 UTC (permalink / raw)
  To: Mike Snitzer
  Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA, device-mapper development

On Thu, Jan 31, 2013 at 11:52:23AM -0500, Mike Snitzer wrote:
> On Wed, Jan 30 2013 at  6:36pm -0500,
> Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> 
> > On Thu, Jan 17, 2013 at 03:41:04AM -0800, Kent Overstreet wrote:
> > > On Wed, Jan 16, 2013 at 09:27:28PM -0500, Mike Snitzer wrote:
> > > > On Mon, Jan 14 2013 at  5:53pm -0500,
> > > > Mike Snitzer <snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> > > > 
> > > > > On Mon, Jan 14 2013 at  5:37pm -0500,
> > > > > Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> > > > > 
> > > > > > Want to try again with the latest bcache-for-upstream branch? I fixed
> > > > > > all that sysfs stuff, but I wasn't seeing the original build error so
> > > > > > I'd appreciate if you verify I did in fact fix that issue.
> > > > > 
> > > > > Will do, thanks Kent.
> > > > 
> > > > I hit the crash below if I do this in a script:
> > > > 
> > > > echo 1 > /sys/block/bcache0/bcache/cache/unregister
> > > > echo 1 > /sys/block/bcache0/bcache/stop
> > > 
> > > Thanks - I reproduced it, trying to figure out the sanest way to fix it.
> > > This is not the prettiest area of the code :P
> > 
> > And, I finally have a fix for it up.
> > 
> > Fixed a bunch of other bugs today too...
> 
> Hey Kent,
> 
> Which branch did you push your latest fixes to?  I cannot seem to find
> them.

They're in the master bcache branch (also -dev and -testing)


* Re: Bcache upstreaming
       [not found]                           ` <20130130233643.GD12631-jC9Py7bek1znysI04z7BkA@public.gmane.org>
  2013-01-30 23:48                             ` Joseph Glanville
  2013-01-31  0:10                             ` Mike Snitzer
@ 2013-01-31 16:52                             ` Mike Snitzer
       [not found]                               ` <20130131165223.GB11894-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  2 siblings, 1 reply; 48+ messages in thread
From: Mike Snitzer @ 2013-01-31 16:52 UTC (permalink / raw)
  To: Kent Overstreet
  Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA, device-mapper development

On Wed, Jan 30 2013 at  6:36pm -0500,
Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:

> On Thu, Jan 17, 2013 at 03:41:04AM -0800, Kent Overstreet wrote:
> > On Wed, Jan 16, 2013 at 09:27:28PM -0500, Mike Snitzer wrote:
> > > On Mon, Jan 14 2013 at  5:53pm -0500,
> > > Mike Snitzer <snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> > > 
> > > > On Mon, Jan 14 2013 at  5:37pm -0500,
> > > > Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> > > > 
> > > > > Want to try again with the latest bcache-for-upstream branch? I fixed
> > > > > all that sysfs stuff, but I wasn't seeing the original build error so
> > > > > I'd appreciate if you verify I did in fact fix that issue.
> > > > 
> > > > Will do, thanks Kent.
> > > 
> > > I hit the crash below if I do this in a script:
> > > 
> > > echo 1 > /sys/block/bcache0/bcache/cache/unregister
> > > echo 1 > /sys/block/bcache0/bcache/stop
> > 
> > Thanks - I reproduced it, trying to figure out the sanest way to fix it.
> > This is not the prettiest area of the code :P
> 
> And, I finally have a fix for it up.
> 
> Fixed a bunch of other bugs today too...

Hey Kent,

Which branch did you push your latest fixes to?  I cannot seem to find
them.


* Re: Bcache upstreaming
       [not found]                                       ` <20130131012747.GG12631-jC9Py7bek1znysI04z7BkA@public.gmane.org>
@ 2013-01-31  1:48                                         ` Kent Overstreet
       [not found]                                           ` <20130131014835.GH12631-jC9Py7bek1znysI04z7BkA@public.gmane.org>
  2013-01-31 22:01                                         ` Kent Overstreet
  1 sibling, 1 reply; 48+ messages in thread
From: Kent Overstreet @ 2013-01-31  1:48 UTC (permalink / raw)
  To: Mike Snitzer
  Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA, device-mapper development

On Wed, Jan 30, 2013 at 05:27:47PM -0800, Kent Overstreet wrote:
> On Wed, Jan 30, 2013 at 05:26:27PM -0800, Kent Overstreet wrote:
> > On Wed, Jan 30, 2013 at 07:10:21PM -0500, Mike Snitzer wrote:
> > > On Wed, Jan 30 2013 at  6:36pm -0500,
> > > Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> > > 
> > > > On Thu, Jan 17, 2013 at 03:41:04AM -0800, Kent Overstreet wrote:
> > > > > On Wed, Jan 16, 2013 at 09:27:28PM -0500, Mike Snitzer wrote:
> > > > > > On Mon, Jan 14 2013 at  5:53pm -0500,
> > > > > > Mike Snitzer <snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> > > > > > 
> > > > > > > On Mon, Jan 14 2013 at  5:37pm -0500,
> > > > > > > Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> > > > > > > 
> > > > > > > > Want to try again with the latest bcache-for-upstream branch? I fixed
> > > > > > > > all that sysfs stuff, but I wasn't seeing the original build error so
> > > > > > > > I'd appreciate if you verify I did in fact fix that issue.
> > > > > > > 
> > > > > > > Will do, thanks Kent.
> > > > > > 
> > > > > > I hit the crash below if I do this in a script:
> > > > > > 
> > > > > > echo 1 > /sys/block/bcache0/bcache/cache/unregister
> > > > > > echo 1 > /sys/block/bcache0/bcache/stop
> > > > > 
> > > > > Thanks - I reproduced it, trying to figure out the sanest way to fix it.
> > > > > This is not the prettiest area of the code :P
> > > > 
> > > > And, I finally have a fix for it up.
> > > > 
> > > > Fixed a bunch of other bugs today too... notably the bug where it'd
> > > > crash if you enabled discards. Was there anything else you or anyone
> > > > else was hitting?
> > > 
> > > Great, thanks, the other outstanding report was this one:
> > > https://lkml.org/lkml/2013/1/17/554
> > 
> > Yep, that was the discard bug I just fixed.
> 
> Err I misread - nope, I missed that one. Taking a look now.

Fucking shrinkers, I swear that's one of the most nonsensical APIs I've
yet encountered.

This is going to take some thought. For now, just disable the shrinker:

echo 1 > /sys/fs/bcache/<uuid>/internal/btree_shrinker_disabled
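
On a box with more than one cache set, the same knob can be flipped for each uuid directory. A sketch, exercised against a mock of the sysfs layout so it runs unprivileged (the uuid is borrowed from the oops in this thread purely as a mock directory name; on a real system the glob root would be /sys/fs/bcache):

```shell
sysroot=$(mktemp -d)    # stand-in for /sys/fs/bcache
mkdir -p "$sysroot/4e51cb30-a889-48b2-88cd-a61bd788eac0/internal"
echo 0 > "$sysroot/4e51cb30-a889-48b2-88cd-a61bd788eac0/internal/btree_shrinker_disabled"

# Disable the btree shrinker on every registered cache set:
for knob in "$sysroot"/*/internal/btree_shrinker_disabled; do
    [ -e "$knob" ] && echo 1 > "$knob"
done

cat "$sysroot"/*/internal/btree_shrinker_disabled    # prints: 1
```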


* Re: Bcache upstreaming
       [not found]                                   ` <20130131012627.GF12631-jC9Py7bek1znysI04z7BkA@public.gmane.org>
@ 2013-01-31  1:27                                     ` Kent Overstreet
       [not found]                                       ` <20130131012747.GG12631-jC9Py7bek1znysI04z7BkA@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Kent Overstreet @ 2013-01-31  1:27 UTC (permalink / raw)
  To: Mike Snitzer
  Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA, device-mapper development

On Wed, Jan 30, 2013 at 05:26:27PM -0800, Kent Overstreet wrote:
> On Wed, Jan 30, 2013 at 07:10:21PM -0500, Mike Snitzer wrote:
> > On Wed, Jan 30 2013 at  6:36pm -0500,
> > Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> > 
> > > On Thu, Jan 17, 2013 at 03:41:04AM -0800, Kent Overstreet wrote:
> > > > On Wed, Jan 16, 2013 at 09:27:28PM -0500, Mike Snitzer wrote:
> > > > > On Mon, Jan 14 2013 at  5:53pm -0500,
> > > > > Mike Snitzer <snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> > > > > 
> > > > > > On Mon, Jan 14 2013 at  5:37pm -0500,
> > > > > > Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> > > > > > 
> > > > > > > Want to try again with the latest bcache-for-upstream branch? I fixed
> > > > > > > all that sysfs stuff, but I wasn't seeing the original build error so
> > > > > > > I'd appreciate if you verify I did in fact fix that issue.
> > > > > > 
> > > > > > Will do, thanks Kent.
> > > > > 
> > > > > I hit the crash below if I do this in a script:
> > > > > 
> > > > > echo 1 > /sys/block/bcache0/bcache/cache/unregister
> > > > > echo 1 > /sys/block/bcache0/bcache/stop
> > > > 
> > > > Thanks - I reproduced it, trying to figure out the sanest way to fix it.
> > > > This is not the prettiest area of the code :P
> > > 
> > > And, I finally have a fix for it up.
> > > 
> > > Fixed a bunch of other bugs today too... notably the bug where it'd
> > > crash if you enabled discards. Was there anything else you or anyone
> > > else was hitting?
> > 
> > Great, thanks, the other outstanding report was this one:
> > https://lkml.org/lkml/2013/1/17/554
> 
> Yep, that was the discard bug I just fixed.

Err I misread - nope, I missed that one. Taking a look now.


* Re: Bcache upstreaming
       [not found]                               ` <20130131001020.GA7541-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2013-01-31  1:26                                 ` Kent Overstreet
       [not found]                                   ` <20130131012627.GF12631-jC9Py7bek1znysI04z7BkA@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Kent Overstreet @ 2013-01-31  1:26 UTC (permalink / raw)
  To: Mike Snitzer
  Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA, device-mapper development

On Wed, Jan 30, 2013 at 07:10:21PM -0500, Mike Snitzer wrote:
> On Wed, Jan 30 2013 at  6:36pm -0500,
> Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> 
> > On Thu, Jan 17, 2013 at 03:41:04AM -0800, Kent Overstreet wrote:
> > > On Wed, Jan 16, 2013 at 09:27:28PM -0500, Mike Snitzer wrote:
> > > > On Mon, Jan 14 2013 at  5:53pm -0500,
> > > > Mike Snitzer <snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> > > > 
> > > > > On Mon, Jan 14 2013 at  5:37pm -0500,
> > > > > Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> > > > > 
> > > > > > Want to try again with the latest bcache-for-upstream branch? I fixed
> > > > > > all that sysfs stuff, but I wasn't seeing the original build error so
> > > > > > I'd appreciate if you verify I did in fact fix that issue.
> > > > > 
> > > > > Will do, thanks Kent.
> > > > 
> > > > I hit the crash below if I do this in a script:
> > > > 
> > > > echo 1 > /sys/block/bcache0/bcache/cache/unregister
> > > > echo 1 > /sys/block/bcache0/bcache/stop
> > > 
> > > Thanks - I reproduced it, trying to figure out the sanest way to fix it.
> > > This is not the prettiest area of the code :P
> > 
> > And, I finally have a fix for it up.
> > 
> > Fixed a bunch of other bugs today too... notably the bug where it'd
> > crash if you enabled discards. Was there anything else you or anyone
> > else was hitting?
> 
> Great, thanks, the other outstanding report was this one:
> https://lkml.org/lkml/2013/1/17/554

Yep, that was the discard bug I just fixed.


* Re: Bcache upstreaming
       [not found]                               ` <CAOzFzEho6Jn8nd+vSZXEQR5_oxPEZRej=6mivJDz0MsAj5VAZg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2013-01-31  1:25                                 ` Kent Overstreet
  0 siblings, 0 replies; 48+ messages in thread
From: Kent Overstreet @ 2013-01-31  1:25 UTC (permalink / raw)
  To: Joseph Glanville
  Cc: Mike Snitzer, linux-bcache-u79uwXL29TY76Z2rM5mHXA,
	device-mapper development

On Thu, Jan 31, 2013 at 10:48:59AM +1100, Joseph Glanville wrote:
> On 31 January 2013 10:36, Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> > On Thu, Jan 17, 2013 at 03:41:04AM -0800, Kent Overstreet wrote:
> >> On Wed, Jan 16, 2013 at 09:27:28PM -0500, Mike Snitzer wrote:
> >> > On Mon, Jan 14 2013 at  5:53pm -0500,
> >> > Mike Snitzer <snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> >> >
> >> > > On Mon, Jan 14 2013 at  5:37pm -0500,
> >> > > Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> >> > >
> >> > > > Want to try again with the latest bcache-for-upstream branch? I fixed
> >> > > > all that sysfs stuff, but I wasn't seeing the original build error so
> >> > > > I'd appreciate if you verify I did in fact fix that issue.
> >> > >
> >> > > Will do, thanks Kent.
> >> >
> >> > I hit the crash below if I do this in a script:
> >> >
> >> > echo 1 > /sys/block/bcache0/bcache/cache/unregister
> >> > echo 1 > /sys/block/bcache0/bcache/stop
> >>
> >> Thanks - I reproduced it, trying to figure out the sanest way to fix it.
> >> This is not the prettiest area of the code :P
> >
> > And, I finally have a fix for it up.
> >
> > Fixed a bunch of other bugs today too... notably the bug where it'd
> > crash if you enabled discards. Was there anything else you or anyone
> > else was hitting?
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-bcache" in
> > the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 
> Last time I checked, dirty data could still appear as a negative
> number, did you get around to fixing that one?

I did!


* Re: Bcache upstreaming
       [not found]                           ` <20130130233643.GD12631-jC9Py7bek1znysI04z7BkA@public.gmane.org>
  2013-01-30 23:48                             ` Joseph Glanville
@ 2013-01-31  0:10                             ` Mike Snitzer
       [not found]                               ` <20130131001020.GA7541-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  2013-01-31 16:52                             ` Mike Snitzer
  2 siblings, 1 reply; 48+ messages in thread
From: Mike Snitzer @ 2013-01-31  0:10 UTC (permalink / raw)
  To: Kent Overstreet
  Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA, device-mapper development

On Wed, Jan 30 2013 at  6:36pm -0500,
Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:

> On Thu, Jan 17, 2013 at 03:41:04AM -0800, Kent Overstreet wrote:
> > On Wed, Jan 16, 2013 at 09:27:28PM -0500, Mike Snitzer wrote:
> > > On Mon, Jan 14 2013 at  5:53pm -0500,
> > > Mike Snitzer <snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> > > 
> > > > On Mon, Jan 14 2013 at  5:37pm -0500,
> > > > Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> > > > 
> > > > > Want to try again with the latest bcache-for-upstream branch? I fixed
> > > > > all that sysfs stuff, but I wasn't seeing the original build error so
> > > > > I'd appreciate if you verify I did in fact fix that issue.
> > > > 
> > > > Will do, thanks Kent.
> > > 
> > > I hit the crash below if I do this in a script:
> > > 
> > > echo 1 > /sys/block/bcache0/bcache/cache/unregister
> > > echo 1 > /sys/block/bcache0/bcache/stop
> > 
> > Thanks - I reproduced it, trying to figure out the sanest way to fix it.
> > This is not the prettiest area of the code :P
> 
> And, I finally have a fix for it up.
> 
> Fixed a bunch of other bugs today too... notably the bug where it'd
> crash if you enabled discards. Was there anything else you or anyone
> else was hitting?

Great, thanks, the other outstanding report was this one:
https://lkml.org/lkml/2013/1/17/554


* Re: Bcache upstreaming
       [not found]                           ` <20130130233643.GD12631-jC9Py7bek1znysI04z7BkA@public.gmane.org>
@ 2013-01-30 23:48                             ` Joseph Glanville
       [not found]                               ` <CAOzFzEho6Jn8nd+vSZXEQR5_oxPEZRej=6mivJDz0MsAj5VAZg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2013-01-31  0:10                             ` Mike Snitzer
  2013-01-31 16:52                             ` Mike Snitzer
  2 siblings, 1 reply; 48+ messages in thread
From: Joseph Glanville @ 2013-01-30 23:48 UTC (permalink / raw)
  To: Kent Overstreet
  Cc: Mike Snitzer, linux-bcache-u79uwXL29TY76Z2rM5mHXA,
	device-mapper development

On 31 January 2013 10:36, Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> On Thu, Jan 17, 2013 at 03:41:04AM -0800, Kent Overstreet wrote:
>> On Wed, Jan 16, 2013 at 09:27:28PM -0500, Mike Snitzer wrote:
>> > On Mon, Jan 14 2013 at  5:53pm -0500,
>> > Mike Snitzer <snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
>> >
>> > > On Mon, Jan 14 2013 at  5:37pm -0500,
>> > > Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
>> > >
>> > > > Want to try again with the latest bcache-for-upstream branch? I fixed
>> > > > all that sysfs stuff, but I wasn't seeing the original build error so
>> > > > I'd appreciate if you verify I did in fact fix that issue.
>> > >
>> > > Will do, thanks Kent.
>> >
>> > I hit the crash below if I do this in a script:
>> >
>> > echo 1 > /sys/block/bcache0/bcache/cache/unregister
>> > echo 1 > /sys/block/bcache0/bcache/stop
>>
>> Thanks - I reproduced it, trying to figure out the sanest way to fix it.
>> This is not the prettiest area of the code :P
>
> And, I finally have a fix for it up.
>
> Fixed a bunch of other bugs today too... notably the bug where it'd
> crash if you enabled discards. Was there anything else you or anyone
> else was hitting?
> --
> To unsubscribe from this list: send the line "unsubscribe linux-bcache" in
> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

Last time I checked, dirty data could still appear as a negative
number, did you get around to fixing that one?

-- 
CTO | Orion Virtualisation Solutions | www.orionvm.com.au
Phone: 1300 56 99 52 | Mobile: 0428 754 846


* Re: Bcache upstreaming
       [not found]                       ` <20130117114104.GJ10411-jC9Py7bek1znysI04z7BkA@public.gmane.org>
@ 2013-01-30 23:36                         ` Kent Overstreet
       [not found]                           ` <20130130233643.GD12631-jC9Py7bek1znysI04z7BkA@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Kent Overstreet @ 2013-01-30 23:36 UTC (permalink / raw)
  To: Mike Snitzer
  Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA, device-mapper development

On Thu, Jan 17, 2013 at 03:41:04AM -0800, Kent Overstreet wrote:
> On Wed, Jan 16, 2013 at 09:27:28PM -0500, Mike Snitzer wrote:
> > On Mon, Jan 14 2013 at  5:53pm -0500,
> > Mike Snitzer <snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> > 
> > > On Mon, Jan 14 2013 at  5:37pm -0500,
> > > Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> > > 
> > > > Want to try again with the latest bcache-for-upstream branch? I fixed
> > > > all that sysfs stuff, but I wasn't seeing the original build error so
> > > > I'd appreciate if you verify I did in fact fix that issue.
> > > 
> > > Will do, thanks Kent.
> > 
> > I hit the crash below if I do this in a script:
> > 
> > echo 1 > /sys/block/bcache0/bcache/cache/unregister
> > echo 1 > /sys/block/bcache0/bcache/stop
> 
> Thanks - I reproduced it, trying to figure out the sanest way to fix it.
> This is not the prettiest area of the code :P

And, I finally have a fix for it up.

Fixed a bunch of other bugs today too... notably the bug where it'd
crash if you enabled discards. Was there anything else you or anyone
else was hitting?


* Re: Bcache upstreaming
       [not found]                   ` <20130117022728.GA16148-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2013-01-17 11:41                     ` Kent Overstreet
       [not found]                       ` <20130117114104.GJ10411-jC9Py7bek1znysI04z7BkA@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Kent Overstreet @ 2013-01-17 11:41 UTC (permalink / raw)
  To: Mike Snitzer
  Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA, device-mapper development

On Wed, Jan 16, 2013 at 09:27:28PM -0500, Mike Snitzer wrote:
> On Mon, Jan 14 2013 at  5:53pm -0500,
> Mike Snitzer <snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> 
> > On Mon, Jan 14 2013 at  5:37pm -0500,
> > Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> > 
> > > Want to try again with the latest bcache-for-upstream branch? I fixed
> > > all that sysfs stuff, but I wasn't seeing the original build error so
> > > I'd appreciate if you verify I did in fact fix that issue.
> > 
> > Will do, thanks Kent.
> 
> I hit the crash below if I do this in a script:
> 
> echo 1 > /sys/block/bcache0/bcache/cache/unregister
> echo 1 > /sys/block/bcache0/bcache/stop

Thanks - I reproduced it, trying to figure out the sanest way to fix it.
This is not the prettiest area of the code :P

> 
> I have a workaround for this issue (just wait a few seconds between
> commands).  I'm still carrying on testing and will share some
> preliminary performance results vs dm-cache soon.
> 
> ------------[ cut here ]------------
> bcache: bcache0 stopped
> WARNING: at fs/sysfs/inode.c:324 sysfs_hash_and_remove+0xa4/0xb0()
> Hardware name: PRIMERGY RX300 S6
> sysfs: can not remove 'cache', no directory
> Modules linked in: dm_cache_cleaner(O) dm_cache_mq(O) dm_cache_basic(O) dm_cache(O) dm_thin_pool(O) dm_bio_prison dm_persistent_data(O) dm_bufio libcrc32c dm_mod(O) bc
> ache ebtable_nat ebtables xt_CHECKSUM iptable_mangle bridge autofs4 target_core_iblock target_core_file target_core_pscsi target_core_mod configfs bnx2fc fcoe libfcoe
> 8021q libfc garp stp llc scsi_transport_fc scsi_tgt sunrpc cpufreq_ondemand acpi_cpufreq freq_table mperf ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip
> _tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables bnx2i cnic uio ipv6 cxgb4i cxgb4 cxgb3i libcxgbi cxgb3 iscsi_tcp
> libiscsi_tcp libiscsi scsi_transport_iscsi vhost_net macvtap macvlan tun iTCO_wdt iTCO_vendor_support kvm_intel kvm microcode i2c_i801 i2c_core lpc_ich mfd_core igb i7
> core_edac edac_core iomemory_vsl(O) skd(O) ixgbe dca ptp pps_core mdio sg ses enclosure ext4 mbcache jbd2 sr_mod cdrom sd_mod crc_t10dif pata_acpi ata_generic ata_piix
>  megaraid_sas [last unloaded: dm_cache_basic]
> Pid: 81, comm: kworker/8:1 Tainted: G        W  O 3.8.0-rc3.snitm+ #37
> Call Trace:
>  [<ffffffff810423bf>] warn_slowpath_common+0x7f/0xc0
>  [<ffffffff810424b6>] warn_slowpath_fmt+0x46/0x50
>  [<ffffffff811db784>] sysfs_hash_and_remove+0xa4/0xb0
>  [<ffffffff811de476>] sysfs_remove_link+0x26/0x30
>  [<ffffffffa0719e26>] cached_dev_detach_finish+0x86/0x150 [bcache]
>  [<ffffffff81236507>] ? ioc_release_fn+0x87/0xc0
>  [<ffffffff8105f307>] process_one_work+0x177/0x430
>  [<ffffffffa0719da0>] ? flash_dev_free+0x30/0x30 [bcache]
>  [<ffffffff8106129e>] worker_thread+0x12e/0x380
>  [<ffffffff81061170>] ? manage_workers+0x180/0x180
>  [<ffffffff8106650e>] kthread+0xce/0xe0
>  [<ffffffff81066440>] ? kthread_freezable_should_stop+0x70/0x70
>  [<ffffffff814ff06c>] ret_from_fork+0x7c/0xb0
>  [<ffffffff81066440>] ? kthread_freezable_should_stop+0x70/0x70
> ---[ end trace 50d16ffe964021b2 ]---
> bcache: Cache set 4e51cb30-a889-48b2-88cd-a61bd788eac0 unregistered
> BUG: unable to handle kernel NULL pointer dereference at 0000000000000cc8
> IP: [<ffffffffa0719bc7>] bcache_device_detach+0x77/0xb0 [bcache]
> PGD 0
> Oops: 0000 [#1] SMP
> Modules linked in: dm_cache_cleaner(O) dm_cache_mq(O) dm_cache_basic(O) dm_cache(O) dm_thin_pool(O) dm_bio_prison dm_persistent_data(O) dm_bufio libcrc32c dm_mod(O) bc
> ache ebtable_nat ebtables xt_CHECKSUM iptable_mangle bridge autofs4 target_core_iblock target_core_file target_core_pscsi target_core_mod configfs bnx2fc fcoe libfcoe
> 8021q libfc garp stp llc scsi_transport_fc scsi_tgt sunrpc cpufreq_ondemand acpi_cpufreq freq_table mperf ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip
> _tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables bnx2i cnic uio ipv6 cxgb4i cxgb4 cxgb3i libcxgbi cxgb3 iscsi_tcp
> libiscsi_tcp libiscsi scsi_transport_iscsi vhost_net macvtap macvlan tun iTCO_wdt iTCO_vendor_support kvm_intel kvm microcode i2c_i801 i2c_core lpc_ich mfd_core igb i7
> core_edac edac_core iomemory_vsl(O) skd(O) ixgbe dca ptp pps_core mdio sg ses enclosure ext4 mbcache jbd2 sr_mod cdrom sd_mod crc_t10dif pata_acpi ata_generic ata_piix
>  megaraid_sas [last unloaded: dm_cache_basic]
> CPU 8
> Pid: 81, comm: kworker/8:1 Tainted: G        W  O 3.8.0-rc3.snitm+ #37 FUJITSU                          PRIMERGY RX300 S6             /D2619
> RIP: 0010:[<ffffffffa0719bc7>]  [<ffffffffa0719bc7>] bcache_device_detach+0x77/0xb0 [bcache]
> RSP: 0018:ffff88032dc69d38  EFLAGS: 00010246
> RAX: 0000000000000000 RBX: ffff88031c950010 RCX: 0000000000000000
> RDX: 0000000000000000 RSI: ffff880331a2a0c0 RDI: ffff88031c950010
> RBP: ffff88032dc69d48 R08: ffff88032dc68000 R09: 0000000000000000
> R10: 0000000000000000 R11: 0000000000000000 R12: ffff88031c950aa8
> R13: ffff88032dc69d58 R14: 0000000000000000 R15: ffff88033fd15c05
> FS:  0000000000000000(0000) GS:ffff88033fd00000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
> CR2: 0000000000000cc8 CR3: 0000000001a0c000 CR4: 00000000000007e0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> Process kworker/8:1 (pid: 81, threadinfo ffff88032dc68000, task ffff880331a2a0c0)
> Stack:
>  ffff88031c950000 ffff88031c950aa8 ffff88032dc69dd8 ffffffffa0719e76
>  0000000000000000 ffff880331a2a0c0 0000000000000000 0000000000000000
>  0000000000000000 00000000a0200001 ffff88032c22d5d8 0000000000000292
> Call Trace:
>  [<ffffffffa0719e76>] cached_dev_detach_finish+0xd6/0x150 [bcache]
>  [<ffffffff81236507>] ? ioc_release_fn+0x87/0xc0
>  [<ffffffff8105f307>] process_one_work+0x177/0x430
>  [<ffffffffa0719da0>] ? flash_dev_free+0x30/0x30 [bcache]
>  [<ffffffff8106129e>] worker_thread+0x12e/0x380
>  [<ffffffff81061170>] ? manage_workers+0x180/0x180
>  [<ffffffff8106650e>] kthread+0xce/0xe0
>  [<ffffffff81066440>] ? kthread_freezable_should_stop+0x70/0x70
>  [<ffffffff814ff06c>] ret_from_fork+0x7c/0xb0
>  [<ffffffff81066440>] ? kthread_freezable_should_stop+0x70/0x70
> Code: 00 00 49 89 44 24 08 e8 f8 ae 97 e0 41 89 44 24 38 48 8b 7b 70 e8 5a ff ff ff c7 83 94 00 00 00 00 00 00 00 48 8b 43 70 8b 53 78 <48> 8b 80 c8 0c 00 00 48 c7 04
> d0 00 00 00 00 48 8b 7b 70 48 81
> RIP  [<ffffffffa0719bc7>] bcache_device_detach+0x77/0xb0 [bcache]
>  RSP <ffff88032dc69d38>
> CR2: 0000000000000cc8
> ---[ end trace 50d16ffe964021b3 ]---
> BUG: unable to handle kernel paging request at ffffffffffffffd8
> IP: [<ffffffff81065e40>] kthread_data+0x10/0x20
> PGD 1a0e067 PUD 1a0f067 PMD 0
> Oops: 0000 [#2] SMP
> Modules linked in: dm_cache_cleaner(O) dm_cache_mq(O) dm_cache_basic(O) dm_cache(O) dm_thin_pool(O) dm_bio_prison dm_persistent_data(O) dm_bufio libcrc32c dm_mod(O) bc
> ache ebtable_nat ebtables xt_CHECKSUM iptable_mangle bridge autofs4 target_core_iblock target_core_file target_core_pscsi target_core_mod configfs bnx2fc fcoe libfcoe
> 8021q libfc garp stp llc scsi_transport_fc scsi_tgt sunrpc cpufreq_ondemand acpi_cpufreq freq_table mperf ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip
> _tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables bnx2i cnic uio ipv6 cxgb4i cxgb4 cxgb3i libcxgbi cxgb3 iscsi_tcp
> libiscsi_tcp libiscsi scsi_transport_iscsi vhost_net macvtap macvlan tun iTCO_wdt iTCO_vendor_support kvm_intel kvm microcode i2c_i801 i2c_core lpc_ich mfd_core igb i7
> core_edac edac_core iomemory_vsl(O) skd(O) ixgbe dca ptp pps_core mdio sg ses enclosure ext4 mbcache jbd2 sr_mod cdrom sd_mod crc_t10dif pata_acpi ata_generic ata_piix
>  megaraid_sas [last unloaded: dm_cache_basic]
> CPU 8
> Pid: 81, comm: kworker/8:1 Tainted: G      D W  O 3.8.0-rc3.snitm+ #37 FUJITSU                          PRIMERGY RX300 S6             /D2619
> RIP: 0010:[<ffffffff81065e40>]  [<ffffffff81065e40>] kthread_data+0x10/0x20
> RSP: 0018:ffff88032dc69968  EFLAGS: 00010086
> RAX: 0000000000000000 RBX: ffff88033fd12980 RCX: ffffffff81d8e3a0
> RDX: 000000000000000d RSI: 0000000000000008 RDI: ffff880331a2a0c0
> RBP: ffff88032dc69968 R08: ffff880331a2a130 R09: 0000000000000001
> R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000008
> R13: 0000000000000008 R14: 0000000000000001 R15: 0000000000000000
> FS:  0000000000000000(0000) GS:ffff88033fd00000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
> CR2: ffffffffffffffd8 CR3: 0000000001a0c000 CR4: 00000000000007e0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> Process kworker/8:1 (pid: 81, threadinfo ffff88032dc68000, task ffff880331a2a0c0)
> Stack:
>  ffff88032dc69998 ffffffff8105e571 0000000000000008 ffff88033fd12980
>  0000000000000008 ffff880331a2a690 ffff88032dc69a28 ffffffff814f55f3
>  ffff88032dc69fd8 0000000000012980 ffff88032dc68010 0000000000012980
> Call Trace:
>  [<ffffffff8105e571>] wq_worker_sleeping+0x21/0xa0
>  [<ffffffff814f55f3>] __schedule+0x5a3/0x710
>  [<ffffffff814f5a99>] schedule+0x29/0x70
>  [<ffffffff81048515>] do_exit+0x2c5/0x470
>  [<ffffffff814f790c>] oops_end+0xac/0xf0
>  [<ffffffff81035e0e>] no_context+0x11e/0x1f0
>  [<ffffffff8103601d>] __bad_area_nosemaphore+0x13d/0x220
>  [<ffffffff81081d98>] ? load_balance+0x128/0x670
>  [<ffffffff81036113>] bad_area_nosemaphore+0x13/0x20
>  [<ffffffff814fa65a>] __do_page_fault+0x27a/0x490
>  [<ffffffff810018d7>] ? __switch_to+0x157/0x4f0
>  [<ffffffff810827f0>] ? idle_balance+0x1c0/0x320
>  [<ffffffff814fa87e>] do_page_fault+0xe/0x10
>  [<ffffffff814f6d48>] page_fault+0x28/0x30
>  [<ffffffffa0719bc7>] ? bcache_device_detach+0x77/0xb0 [bcache]
>  [<ffffffffa0719e76>] cached_dev_detach_finish+0xd6/0x150 [bcache]
>  [<ffffffff81236507>] ? ioc_release_fn+0x87/0xc0
>  [<ffffffff8105f307>] process_one_work+0x177/0x430
>  [<ffffffffa0719da0>] ? flash_dev_free+0x30/0x30 [bcache]
>  [<ffffffff8106129e>] worker_thread+0x12e/0x380
>  [<ffffffff81061170>] ? manage_workers+0x180/0x180
>  [<ffffffff8106650e>] kthread+0xce/0xe0
>  [<ffffffff81066440>] ? kthread_freezable_should_stop+0x70/0x70
>  [<ffffffff814ff06c>] ret_from_fork+0x7c/0xb0
>  [<ffffffff81066440>] ? kthread_freezable_should_stop+0x70/0x70
> Code: 78 05 00 00 48 8b 40 c8 c9 48 c1 e8 02 83 e0 01 c3 66 2e 0f 1f 84 00 00 00 00 00 55 48 89 e5 66 66 66 66 90 48 8b 87 78 05 00 00 <48> 8b 40 d8 c9 c3 66 2e 0f 1f
> 84 00 00 00 00 00 55 48 89 e5 66
> RIP  [<ffffffff81065e40>] kthread_data+0x10/0x20
>  RSP <ffff88032dc69968>
> CR2: ffffffffffffffd8
> ---[ end trace 50d16ffe964021b4 ]---
> Fixing recursive fault but reboot is needed!

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: Bcache upstreaming
       [not found]               ` <20130114225330.GA1365-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2013-01-17  2:27                 ` Mike Snitzer
       [not found]                   ` <20130117022728.GA16148-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Mike Snitzer @ 2013-01-17  2:27 UTC (permalink / raw)
  To: Kent Overstreet
  Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA, device-mapper development

On Mon, Jan 14 2013 at  5:53pm -0500,
Mike Snitzer <snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:

> On Mon, Jan 14 2013 at  5:37pm -0500,
> Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> 
> > Want to try again with the latest bcache-for-upstream branch? I fixed
> > all that sysfs stuff, but I wasn't seeing the original build error, so
> > I'd appreciate it if you could verify that I did in fact fix that issue.
> 
> Will do, thanks Kent.

I hit the crash below if I do this in a script:

echo 1 > /sys/block/bcache0/bcache/cache/unregister
echo 1 > /sys/block/bcache0/bcache/stop

I have a workaround for this issue (just wait a few seconds between
commands).  I'm still carrying on testing and will share some
preliminary performance results vs dm-cache soon.
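
The fixed-sleep workaround above can be made deterministic by polling sysfs until the first teardown actually completes before issuing the second write. A minimal sketch, assuming the device names from the script above; the `wait_gone` helper name and timeout are mine, not from the thread:

```shell
#!/bin/sh
# wait_gone: poll until a path disappears (or a timeout expires), so the
# second sysfs write is only issued once the first teardown has finished.
wait_gone() {
    path=$1
    tries=${2:-50}              # default: 50 * 0.1s = 5s timeout
    while [ -e "$path" ] && [ "$tries" -gt 0 ]; do
        sleep 0.1
        tries=$((tries - 1))
    done
    [ ! -e "$path" ]            # succeed only if the path is really gone
}

# Intended use (as root, against the device names from the script above):
#   echo 1 > /sys/block/bcache0/bcache/cache/unregister
#   wait_gone /sys/block/bcache0/bcache/cache
#   echo 1 > /sys/block/bcache0/bcache/stop
```

This only papers over the race from userspace, of course; the underlying detach/stop ordering still needs fixing in the kernel.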

------------[ cut here ]------------
bcache: bcache0 stopped
WARNING: at fs/sysfs/inode.c:324 sysfs_hash_and_remove+0xa4/0xb0()
Hardware name: PRIMERGY RX300 S6
sysfs: can not remove 'cache', no directory
Modules linked in: dm_cache_cleaner(O) dm_cache_mq(O) dm_cache_basic(O) dm_cache(O) dm_thin_pool(O) dm_bio_prison dm_persistent_data(O) dm_bufio libcrc32c dm_mod(O) bc
ache ebtable_nat ebtables xt_CHECKSUM iptable_mangle bridge autofs4 target_core_iblock target_core_file target_core_pscsi target_core_mod configfs bnx2fc fcoe libfcoe
8021q libfc garp stp llc scsi_transport_fc scsi_tgt sunrpc cpufreq_ondemand acpi_cpufreq freq_table mperf ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip
_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables bnx2i cnic uio ipv6 cxgb4i cxgb4 cxgb3i libcxgbi cxgb3 iscsi_tcp
libiscsi_tcp libiscsi scsi_transport_iscsi vhost_net macvtap macvlan tun iTCO_wdt iTCO_vendor_support kvm_intel kvm microcode i2c_i801 i2c_core lpc_ich mfd_core igb i7
core_edac edac_core iomemory_vsl(O) skd(O) ixgbe dca ptp pps_core mdio sg ses enclosure ext4 mbcache jbd2 sr_mod cdrom sd_mod crc_t10dif pata_acpi ata_generic ata_piix
 megaraid_sas [last unloaded: dm_cache_basic]
Pid: 81, comm: kworker/8:1 Tainted: G        W  O 3.8.0-rc3.snitm+ #37
Call Trace:
 [<ffffffff810423bf>] warn_slowpath_common+0x7f/0xc0
 [<ffffffff810424b6>] warn_slowpath_fmt+0x46/0x50
 [<ffffffff811db784>] sysfs_hash_and_remove+0xa4/0xb0
 [<ffffffff811de476>] sysfs_remove_link+0x26/0x30
 [<ffffffffa0719e26>] cached_dev_detach_finish+0x86/0x150 [bcache]
 [<ffffffff81236507>] ? ioc_release_fn+0x87/0xc0
 [<ffffffff8105f307>] process_one_work+0x177/0x430
 [<ffffffffa0719da0>] ? flash_dev_free+0x30/0x30 [bcache]
 [<ffffffff8106129e>] worker_thread+0x12e/0x380
 [<ffffffff81061170>] ? manage_workers+0x180/0x180
 [<ffffffff8106650e>] kthread+0xce/0xe0
 [<ffffffff81066440>] ? kthread_freezable_should_stop+0x70/0x70
 [<ffffffff814ff06c>] ret_from_fork+0x7c/0xb0
 [<ffffffff81066440>] ? kthread_freezable_should_stop+0x70/0x70
---[ end trace 50d16ffe964021b2 ]---
bcache: Cache set 4e51cb30-a889-48b2-88cd-a61bd788eac0 unregistered
BUG: unable to handle kernel NULL pointer dereference at 0000000000000cc8
IP: [<ffffffffa0719bc7>] bcache_device_detach+0x77/0xb0 [bcache]
PGD 0
Oops: 0000 [#1] SMP
Modules linked in: dm_cache_cleaner(O) dm_cache_mq(O) dm_cache_basic(O) dm_cache(O) dm_thin_pool(O) dm_bio_prison dm_persistent_data(O) dm_bufio libcrc32c dm_mod(O) bc
ache ebtable_nat ebtables xt_CHECKSUM iptable_mangle bridge autofs4 target_core_iblock target_core_file target_core_pscsi target_core_mod configfs bnx2fc fcoe libfcoe
8021q libfc garp stp llc scsi_transport_fc scsi_tgt sunrpc cpufreq_ondemand acpi_cpufreq freq_table mperf ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip
_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables bnx2i cnic uio ipv6 cxgb4i cxgb4 cxgb3i libcxgbi cxgb3 iscsi_tcp
libiscsi_tcp libiscsi scsi_transport_iscsi vhost_net macvtap macvlan tun iTCO_wdt iTCO_vendor_support kvm_intel kvm microcode i2c_i801 i2c_core lpc_ich mfd_core igb i7
core_edac edac_core iomemory_vsl(O) skd(O) ixgbe dca ptp pps_core mdio sg ses enclosure ext4 mbcache jbd2 sr_mod cdrom sd_mod crc_t10dif pata_acpi ata_generic ata_piix
 megaraid_sas [last unloaded: dm_cache_basic]
CPU 8
Pid: 81, comm: kworker/8:1 Tainted: G        W  O 3.8.0-rc3.snitm+ #37 FUJITSU                          PRIMERGY RX300 S6             /D2619
RIP: 0010:[<ffffffffa0719bc7>]  [<ffffffffa0719bc7>] bcache_device_detach+0x77/0xb0 [bcache]
RSP: 0018:ffff88032dc69d38  EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff88031c950010 RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffff880331a2a0c0 RDI: ffff88031c950010
RBP: ffff88032dc69d48 R08: ffff88032dc68000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000000 R12: ffff88031c950aa8
R13: ffff88032dc69d58 R14: 0000000000000000 R15: ffff88033fd15c05
FS:  0000000000000000(0000) GS:ffff88033fd00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 0000000000000cc8 CR3: 0000000001a0c000 CR4: 00000000000007e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process kworker/8:1 (pid: 81, threadinfo ffff88032dc68000, task ffff880331a2a0c0)
Stack:
 ffff88031c950000 ffff88031c950aa8 ffff88032dc69dd8 ffffffffa0719e76
 0000000000000000 ffff880331a2a0c0 0000000000000000 0000000000000000
 0000000000000000 00000000a0200001 ffff88032c22d5d8 0000000000000292
Call Trace:
 [<ffffffffa0719e76>] cached_dev_detach_finish+0xd6/0x150 [bcache]
 [<ffffffff81236507>] ? ioc_release_fn+0x87/0xc0
 [<ffffffff8105f307>] process_one_work+0x177/0x430
 [<ffffffffa0719da0>] ? flash_dev_free+0x30/0x30 [bcache]
 [<ffffffff8106129e>] worker_thread+0x12e/0x380
 [<ffffffff81061170>] ? manage_workers+0x180/0x180
 [<ffffffff8106650e>] kthread+0xce/0xe0
 [<ffffffff81066440>] ? kthread_freezable_should_stop+0x70/0x70
 [<ffffffff814ff06c>] ret_from_fork+0x7c/0xb0
 [<ffffffff81066440>] ? kthread_freezable_should_stop+0x70/0x70
Code: 00 00 49 89 44 24 08 e8 f8 ae 97 e0 41 89 44 24 38 48 8b 7b 70 e8 5a ff ff ff c7 83 94 00 00 00 00 00 00 00 48 8b 43 70 8b 53 78 <48> 8b 80 c8 0c 00 00 48 c7 04
d0 00 00 00 00 48 8b 7b 70 48 81
RIP  [<ffffffffa0719bc7>] bcache_device_detach+0x77/0xb0 [bcache]
 RSP <ffff88032dc69d38>
CR2: 0000000000000cc8
---[ end trace 50d16ffe964021b3 ]---
BUG: unable to handle kernel paging request at ffffffffffffffd8
IP: [<ffffffff81065e40>] kthread_data+0x10/0x20
PGD 1a0e067 PUD 1a0f067 PMD 0
Oops: 0000 [#2] SMP
Modules linked in: dm_cache_cleaner(O) dm_cache_mq(O) dm_cache_basic(O) dm_cache(O) dm_thin_pool(O) dm_bio_prison dm_persistent_data(O) dm_bufio libcrc32c dm_mod(O) bc
ache ebtable_nat ebtables xt_CHECKSUM iptable_mangle bridge autofs4 target_core_iblock target_core_file target_core_pscsi target_core_mod configfs bnx2fc fcoe libfcoe
8021q libfc garp stp llc scsi_transport_fc scsi_tgt sunrpc cpufreq_ondemand acpi_cpufreq freq_table mperf ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip
_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables bnx2i cnic uio ipv6 cxgb4i cxgb4 cxgb3i libcxgbi cxgb3 iscsi_tcp
libiscsi_tcp libiscsi scsi_transport_iscsi vhost_net macvtap macvlan tun iTCO_wdt iTCO_vendor_support kvm_intel kvm microcode i2c_i801 i2c_core lpc_ich mfd_core igb i7
core_edac edac_core iomemory_vsl(O) skd(O) ixgbe dca ptp pps_core mdio sg ses enclosure ext4 mbcache jbd2 sr_mod cdrom sd_mod crc_t10dif pata_acpi ata_generic ata_piix
 megaraid_sas [last unloaded: dm_cache_basic]
CPU 8
Pid: 81, comm: kworker/8:1 Tainted: G      D W  O 3.8.0-rc3.snitm+ #37 FUJITSU                          PRIMERGY RX300 S6             /D2619
RIP: 0010:[<ffffffff81065e40>]  [<ffffffff81065e40>] kthread_data+0x10/0x20
RSP: 0018:ffff88032dc69968  EFLAGS: 00010086
RAX: 0000000000000000 RBX: ffff88033fd12980 RCX: ffffffff81d8e3a0
RDX: 000000000000000d RSI: 0000000000000008 RDI: ffff880331a2a0c0
RBP: ffff88032dc69968 R08: ffff880331a2a130 R09: 0000000000000001
R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000008
R13: 0000000000000008 R14: 0000000000000001 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff88033fd00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: ffffffffffffffd8 CR3: 0000000001a0c000 CR4: 00000000000007e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process kworker/8:1 (pid: 81, threadinfo ffff88032dc68000, task ffff880331a2a0c0)
Stack:
 ffff88032dc69998 ffffffff8105e571 0000000000000008 ffff88033fd12980
 0000000000000008 ffff880331a2a690 ffff88032dc69a28 ffffffff814f55f3
 ffff88032dc69fd8 0000000000012980 ffff88032dc68010 0000000000012980
Call Trace:
 [<ffffffff8105e571>] wq_worker_sleeping+0x21/0xa0
 [<ffffffff814f55f3>] __schedule+0x5a3/0x710
 [<ffffffff814f5a99>] schedule+0x29/0x70
 [<ffffffff81048515>] do_exit+0x2c5/0x470
 [<ffffffff814f790c>] oops_end+0xac/0xf0
 [<ffffffff81035e0e>] no_context+0x11e/0x1f0
 [<ffffffff8103601d>] __bad_area_nosemaphore+0x13d/0x220
 [<ffffffff81081d98>] ? load_balance+0x128/0x670
 [<ffffffff81036113>] bad_area_nosemaphore+0x13/0x20
 [<ffffffff814fa65a>] __do_page_fault+0x27a/0x490
 [<ffffffff810018d7>] ? __switch_to+0x157/0x4f0
 [<ffffffff810827f0>] ? idle_balance+0x1c0/0x320
 [<ffffffff814fa87e>] do_page_fault+0xe/0x10
 [<ffffffff814f6d48>] page_fault+0x28/0x30
 [<ffffffffa0719bc7>] ? bcache_device_detach+0x77/0xb0 [bcache]
 [<ffffffffa0719e76>] cached_dev_detach_finish+0xd6/0x150 [bcache]
 [<ffffffff81236507>] ? ioc_release_fn+0x87/0xc0
 [<ffffffff8105f307>] process_one_work+0x177/0x430
 [<ffffffffa0719da0>] ? flash_dev_free+0x30/0x30 [bcache]
 [<ffffffff8106129e>] worker_thread+0x12e/0x380
 [<ffffffff81061170>] ? manage_workers+0x180/0x180
 [<ffffffff8106650e>] kthread+0xce/0xe0
 [<ffffffff81066440>] ? kthread_freezable_should_stop+0x70/0x70
 [<ffffffff814ff06c>] ret_from_fork+0x7c/0xb0
 [<ffffffff81066440>] ? kthread_freezable_should_stop+0x70/0x70
Code: 78 05 00 00 48 8b 40 c8 c9 48 c1 e8 02 83 e0 01 c3 66 2e 0f 1f 84 00 00 00 00 00 55 48 89 e5 66 66 66 66 90 48 8b 87 78 05 00 00 <48> 8b 40 d8 c9 c3 66 2e 0f 1f
84 00 00 00 00 00 55 48 89 e5 66
RIP  [<ffffffff81065e40>] kthread_data+0x10/0x20
 RSP <ffff88032dc69968>
CR2: ffffffffffffffd8
---[ end trace 50d16ffe964021b4 ]---
Fixing recursive fault but reboot is needed!

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: Bcache upstreaming
       [not found]           ` <20130114223722.GZ26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
@ 2013-01-14 22:53             ` Mike Snitzer
       [not found]               ` <20130114225330.GA1365-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Mike Snitzer @ 2013-01-14 22:53 UTC (permalink / raw)
  To: Kent Overstreet
  Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA, device-mapper development

On Mon, Jan 14 2013 at  5:37pm -0500,
Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:

> On Thu, Jan 10, 2013 at 09:59:54AM -0800, Kent Overstreet wrote:
> > On Wed, Jan 09, 2013 at 10:49:04AM -0500, Mike Snitzer wrote:
> > > Hey Kent,
> > > 
> > > On Fri, Jan 4, 2013 at 6:50 PM, Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>wrote:
> > > 
> > > > I've (finally!) got a bcache branch hacked up that ought to be suitable
> > > > to go upstream, possibly in staging initially.
> > > >
> > > > It's currently closer to the dev branch than the stable branch, plus
> > > > some additional minor changes to make it all more self contained. The
> > > > code has seen a decent amount of testing and I think it's in good shape,
> > > > but I'd like it if it could see a bit more testing before I see about
> > > > pushing it upstream.
> > > >
> > > > If anyone wants to try it out, checkout the bcache-for-staging branch.
> > > > It's against Linux 3.7.
> > > 
> > > 
> > > I pulled your 'bcache-for-staging' code into a 'dm-devel-cache-bcache'
> > > branch on my github:
> > > https://github.com/snitm/linux
> > > 
> > > Purpose is to have a single kernel to compare dm-cache and bcache.  My
> > > branch is against 3.8-rc2.  While importing your code I needed the
> > > following change to get bcache to compile:
> > > https://github.com/snitm/linux/commit/400b1257e93975864fd6c4b827537a0234551253
> > > 
> > > It now builds without issue but I haven't tested the resulting bcache to
> > > know if I broke the sysfs interface due to s/cache/bcache/ on some local
> > > variables, I don't think I did but I'll defer to you.  (BTW those crafty
> > > sysfs macros you have were pretty opaque; not really seeing what they buy
> > > in the grand scheme.  And #include "sysfs.c" is different than any code
> > > I've seen in the kernel).
> > 
> > Yeah, it was an ugly hack when I pulled the sysfs code out of super.c so
> > I could avoid adding a bunch of non-static symbols. But apparently the
> > various functions weren't even static in the first place, heh. I'll fix
> > this the right way, thanks.
> 
> Want to try again with the latest bcache-for-upstream branch? I fixed
> all that sysfs stuff, but I wasn't seeing the original build error, so
> I'd appreciate it if you could verify that I did in fact fix that issue.

Will do, thanks Kent.

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: Bcache upstreaming
       [not found]       ` <20130110175954.GR26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
@ 2013-01-14 22:37         ` Kent Overstreet
       [not found]           ` <20130114223722.GZ26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Kent Overstreet @ 2013-01-14 22:37 UTC (permalink / raw)
  To: Mike Snitzer
  Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA, device-mapper development

On Thu, Jan 10, 2013 at 09:59:54AM -0800, Kent Overstreet wrote:
> On Wed, Jan 09, 2013 at 10:49:04AM -0500, Mike Snitzer wrote:
> > Hey Kent,
> > 
> > On Fri, Jan 4, 2013 at 6:50 PM, Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>wrote:
> > 
> > > I've (finally!) got a bcache branch hacked up that ought to be suitable
> > > to go upstream, possibly in staging initially.
> > >
> > > It's currently closer to the dev branch than the stable branch, plus
> > > some additional minor changes to make it all more self contained. The
> > > code has seen a decent amount of testing and I think it's in good shape,
> > > but I'd like it if it could see a bit more testing before I see about
> > > pushing it upstream.
> > >
> > > If anyone wants to try it out, checkout the bcache-for-staging branch.
> > > It's against Linux 3.7.
> > 
> > 
> > I pulled your 'bcache-for-staging' code into a 'dm-devel-cache-bcache'
> > branch on my github:
> > https://github.com/snitm/linux
> > 
> > Purpose is to have a single kernel to compare dm-cache and bcache.  My
> > branch is against 3.8-rc2.  While importing your code I needed the
> > following change to get bcache to compile:
> > https://github.com/snitm/linux/commit/400b1257e93975864fd6c4b827537a0234551253
> > 
> > It now builds without issue but I haven't tested the resulting bcache to
> > know if I broke the sysfs interface due to s/cache/bcache/ on some local
> > variables, I don't think I did but I'll defer to you.  (BTW those crafty
> > sysfs macros you have were pretty opaque; not really seeing what they buy
> > in the grand scheme.  And #include "sysfs.c" is different than any code
> > I've seen in the kernel).
> 
> Yeah, it was an ugly hack when I pulled the sysfs code out of super.c so
> I could avoid adding a bunch of non-static symbols. But apparently the
> various functions weren't even static in the first place, heh. I'll fix
> this the right way, thanks.

Want to try again with the latest bcache-for-upstream branch? I fixed
all that sysfs stuff, but I wasn't seeing the original build error, so
I'd appreciate it if you could verify that I did in fact fix that issue.

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: Bcache upstreaming
       [not found]             ` <20130110181424.GS26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
@ 2013-01-14 22:36               ` Kent Overstreet
  0 siblings, 0 replies; 48+ messages in thread
From: Kent Overstreet @ 2013-01-14 22:36 UTC (permalink / raw)
  To: Mike Snitzer
  Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA

On Thu, Jan 10, 2013 at 10:14:24AM -0800, Kent Overstreet wrote:
> On Thu, Jan 10, 2013 at 11:47:04AM -0500, Mike Snitzer wrote:
> > On Wed, Jan 09 2013 at 11:12am -0500,
> > Mike Snitzer <snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> > 
> > > (take3 with feeling.. I reverted to Gmail's old compose so all
> > > should be right in my plain-text gmail world... apologies to Kent and
> > > dm-devel for the redundant messages)
> > > 
> > > Hey Kent,
> > > 
> > > On Fri, Jan 4, 2013 at 6:50 PM, Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> > > > I've (finally!) got a bcache branch hacked up that ought to be suitable
> > > > to go upstream, possibly in staging initially.
> > > >
> > > > It's currently closer to the dev branch than the stable branch, plus
> > > > some additional minor changes to make it all more self contained. The
> > > > code has seen a decent amount of testing and I think it's in good shape,
> > > > but I'd like it if it could see a bit more testing before I see about
> > > > pushing it upstream.
> > > >
> > > > If anyone wants to try it out, checkout the bcache-for-staging branch.
> > > > It's against Linux 3.7.
> > > 
> > > I pulled your 'bcache-for-staging' code into a 'dm-devel-cache-bcache'
> > > branch on my github:
> > > https://github.com/snitm/linux
> > > 
> > > Purpose is to have a single kernel to compare dm-cache and bcache.  My
> > > branch is against 3.8-rc2.  While importing your code I needed the
> > > following change to get bcache to compile:
> > > https://github.com/snitm/linux/commit/400b1257e93975864fd6c4b827537a0234551253
> > > 
> > > It now builds without issue but I haven't tested the resulting bcache
> > 
> > Just tried to use bcache and it locked up:
> 
> Interesting, this is a new bug...
> 
> The main bcache branch is also on top of 3.7, and it doesn't have this
> new allocation code and should be fine if you want to try that (there
> were also a few bugs I fixed in the master branch without updating the
> staging branch, but this looks like something new).
> 
> Gonna try and reproduce this, after I fix that sysfs code. Hrm.

FYI, this is fixed (I missed it because my test scripts were using cache
replacement policy = random, to better stress other stuff. Doh.)
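
For reference, the replacement policy Kent mentions is switchable at runtime through sysfs. A hedged sketch: the knob path and the lru/fifo/random value set below are my assumptions based on bcache's documentation, not details from this thread:

```shell
#!/bin/sh
# set_policy: validate a bcache cache replacement policy, then write it to
# the given sysfs knob. Refuses unknown policies and missing knobs.
set_policy() {
    knob=$1
    policy=$2
    case "$policy" in
        lru|fifo|random) ;;     # the three policies bcache documents
        *) echo "unknown policy: $policy" >&2; return 1 ;;
    esac
    [ -w "$knob" ] || { echo "not writable: $knob" >&2; return 1; }
    echo "$policy" > "$knob"
}

# Intended use (as root; <set-uuid> is your cache set's UUID):
#   set_policy /sys/fs/bcache/<set-uuid>/cache0/cache_replacement_policy random
```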

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: Bcache upstreaming
       [not found]         ` <20130110164704.GA30790-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  2013-01-10 17:56           ` Mike Snitzer
@ 2013-01-10 18:14           ` Kent Overstreet
       [not found]             ` <20130110181424.GS26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
  1 sibling, 1 reply; 48+ messages in thread
From: Kent Overstreet @ 2013-01-10 18:14 UTC (permalink / raw)
  To: Mike Snitzer
  Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA

On Thu, Jan 10, 2013 at 11:47:04AM -0500, Mike Snitzer wrote:
> On Wed, Jan 09 2013 at 11:12am -0500,
> Mike Snitzer <snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> 
> > (take3 with feeling.. I reverted to Gmail's old compose so all
> > should be right in my plain-text gmail world... apologies to Kent and
> > dm-devel for the redundant messages)
> > 
> > Hey Kent,
> > 
> > On Fri, Jan 4, 2013 at 6:50 PM, Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> > > I've (finally!) got a bcache branch hacked up that ought to be suitable
> > > to go upstream, possibly in staging initially.
> > >
> > > It's currently closer to the dev branch than the stable branch, plus
> > > some additional minor changes to make it all more self contained. The
> > > code has seen a decent amount of testing and I think it's in good shape,
> > > but I'd like it if it could see a bit more testing before I see about
> > > pushing it upstream.
> > >
> > > If anyone wants to try it out, checkout the bcache-for-staging branch.
> > > It's against Linux 3.7.
> > 
> > I pulled your 'bcache-for-staging' code into a 'dm-devel-cache-bcache'
> > branch on my github:
> > https://github.com/snitm/linux
> > 
> > Purpose is to have a single kernel to compare dm-cache and bcache.  My
> > branch is against 3.8-rc2.  While importing your code I needed the
> > following change to get bcache to compile:
> > https://github.com/snitm/linux/commit/400b1257e93975864fd6c4b827537a0234551253
> > 
> > It now builds without issue but I haven't tested the resulting bcache
> 
> Just tried to use bcache and it locked up:

Interesting, this is a new bug...

The main bcache branch is also on top of 3.7, and it doesn't have this
new allocation code and should be fine if you want to try that (there
were also a few bugs I fixed in the master branch without updating the
staging branch, but this looks like something new).

Gonna try and reproduce this, after I fix that sysfs code. Hrm.

> 
> # make-bcache -B /dev/striped_vg/bcache_origin -C /dev/stec/bcache_data
> UUID:                   edaef824-3b1c-4d14-a8fb-07fe7faa51e3
> Set UUID:               9808e4a4-0da4-49b1-8a33-0fe097ba2d59
> nbuckets:               32768
> block_size:             1
> bucket_size:            1024
> nr_in_set:              1
> nr_this_dev:            0
> first_bucket:           1
> UUID:                   0281d6a9-4ce0-4570-89e5-16bac3006fa2
> Set UUID:               9808e4a4-0da4-49b1-8a33-0fe097ba2d59
> nbuckets:               2048
> block_size:             1
> bucket_size:            1024
> nr_in_set:              1
> nr_this_dev:            0
> first_bucket:           1
> 
> [root@rhel-storage-02 ~]# echo /dev/striped_vg/bcache_origin > /sys/fs/bcache/register
> [root@rhel-storage-02 ~]# echo /dev/stec/bcache_data > /sys/fs/bcache/register
> 
> bcache: invalidating existing data                                                                                                                                     
> BUG: soft lockup - CPU#2 stuck for 22s! [kworker/2:1:9723]                                                                                                             
> Modules linked in: skd(O) dm_cache_cleaner dm_cache_mq dm_cache_basic dm_cache dm_thin_pool dm_bio_prison dm_persistent_data dm_bufio dm_mod xfs exportfs libcrc32c iom
> emory_vsl(O) bcache ebtable_nat ebtables xt_CHECKSUM iptable_mangle bridge autofs4 target_core_iblock target_core_file target_core_pscsi target_core_mod configfs bnx2f
> c fcoe 8021q libfcoe garp libfc stp llc scsi_transport_fc scsi_tgt sunrpc acpi_cpufreq freq_table mperf ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip_t
> ables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables bnx2i cnic uio ipv6 cxgb4i cxgb4 cxgb3i libcxgbi cxgb3 iscsi_tcp li
> biscsi_tcp libiscsi scsi_transport_iscsi vhost_net macvtap macvlan tun iTCO_wdt iTCO_vendor_support kvm_intel kvm microcode i2c_i801 i2c_core lpc_ich mfd_core igb ses 
> enclosure sg ixgbe dca ptp pps_core mdio i7core_edac edac_core ext4 mbcache jbd2 sr_mod cdrom sd_mod crc_t10dif pata_acpi ata_generic ata_piix megaraid_sas [last unloa
> ded: dm_cache_basic]                                                                                                                                                   
> CPU 2                                                                                                                                                                  
> Pid: 9723, comm: kworker/2:1 Tainted: P        W  O 3.8.0-rc2.snitm+ #34 FUJITSU                          PRIMERGY RX300 S6             /D2619                         
> RIP: 0010:[<ffffffffa05f9af0>]  [<ffffffffa05f9af0>] invalidate_buckets_lru+0x60/0x7a0 [bcache]                                                                        
> RSP: 0018:ffff880203315ca8  EFLAGS: 00000287                                                                                                                           
> RAX: ffffc90003c12000 RBX: 0000000000000000 RCX: 0000000000000000                                                                                                      
> RDX: 0000000000000800 RSI: ffffc90003c12000 RDI: ffff880205522000                                                                                                      
> RBP: ffff880203315ce8 R08: ffff8802055229c8 R09: 0000000000000000                                                                                                      
> R10: 0000000000000000 R11: 0000000000000001 R12: 0000000000000000                                                                                                      
> R13: ffff880203315c98 R14: 0000000000000000 R15: 0000000000000000                                                                                                      
> FS:  0000000000000000(0000) GS:ffff88033fc40000(0000) knlGS:0000000000000000                                                                                           
> CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b                                                                                                                      
> CR2: 00007f00998c0008 CR3: 0000000286afc000 CR4: 00000000000007e0                                                                                                      
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000                                                                                                      
> DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400                                                                                                      
> Process kworker/2:1 (pid: 9723, threadinfo ffff880203314000, task ffff88032f418a80)                                                                                    
> Stack:                                                                                                                                                                 
>  0000000000000000 ffff8802055229c8 0000000000000000 ffff880205522000                                                                                                   
>  ffff880203315d78 ffff880205522aec ffff88032f418a80 ffff88032f418a80                                                                                                   
>  ffff880203315d08 ffffffffa05fa350 ffff880205522000 ffff880203315d78                                                                                                   
> Call Trace:                                                                                                                                                            
>  [<ffffffffa05fa350>] invalidate_buckets+0x30/0x110 [bcache]                                                                                                           
>  [<ffffffffa05fa8e7>] bch_allocator_thread+0x4b7/0x720 [bcache]                                                                                                        
>  [<ffffffff810018d7>] ? __switch_to+0x157/0x4f0                                                                                                                        
>  [<ffffffff81082850>] ? idle_balance+0x1c0/0x320                                                                                                                       
>  [<ffffffff81066dd0>] ? wake_up_bit+0x40/0x40                                                                                                                          
>  [<ffffffff814f4b75>] ? __schedule+0x3f5/0x710                                                                                                                         
>  [<ffffffff8105f347>] process_one_work+0x177/0x430                                                                                                                     
>  [<ffffffffa05fa430>] ? invalidate_buckets+0x110/0x110 [bcache]                                                                                                        
>  [<ffffffff810612de>] worker_thread+0x12e/0x380                                                                                                                        
>  [<ffffffff810611b0>] ? manage_workers+0x180/0x180                                                                                                                     
>  [<ffffffff8106654e>] kthread+0xce/0xe0                                                                                                                                
>  [<ffffffff81066480>] ? kthread_freezable_should_stop+0x70/0x70                                                                                                        
>  [<ffffffff814fe7ac>] ret_from_fork+0x7c/0xb0                                                                                                                          
>  [<ffffffff81066480>] ? kthread_freezable_should_stop+0x70/0x70                                                                                                        
> Code: 52 4c 8d 24 90 48 8b 97 c0 00 00 00 48 8d 0c 52 48 8d 34 88 31 c9 31 c0 49 39 f4 0f 83 ba 02 00 00 66 2e 0f 1f 84 00 00 00 00 00 <41> 0f b7 44 24 0a a8 03 0f 85 7b 02 00 00 41 8b 0c 24 85 c9 0f


* Re: Bcache upstreaming
       [not found]   ` <CAMM=eLeeh6jb28KXGE9ZBbkMV1ysE-6NH2BjfpTsQcHAawEs+w-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2013-01-10 17:59     ` Kent Overstreet
       [not found]       ` <20130110175954.GR26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Kent Overstreet @ 2013-01-10 17:59 UTC (permalink / raw)
  To: Mike Snitzer
  Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA, device-mapper development

On Wed, Jan 09, 2013 at 10:49:04AM -0500, Mike Snitzer wrote:
> Hey Kent,
> 
> > On Fri, Jan 4, 2013 at 6:50 PM, Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> 
> > I've (finally!) got a bcache branch hacked up that ought to be suitable
> > to go upstream, possibly in staging initially.
> >
> > It's currently closer to the dev branch than the stable branch, plus
> > some additional minor changes to make it all more self contained. The
> > code has seen a decent amount of testing and I think it's in good shape,
> > but I'd like it if it could see a bit more testing before I see about
> > pushing it upstream.
> >
> > If anyone wants to try it out, check out the bcache-for-staging branch.
> > It's against Linux 3.7.
> 
> 
> I pulled your 'bcache-for-staging' code into a 'dm-devel-cache-bcache'
> branch on my github:
> https://github.com/snitm/linux
> 
> Purpose is to have a single kernel to compare dm-cache and bcache.  My
> branch is against 3.8-rc2.  While importing your code I needed the
> following change to get bcache to compile:
> https://github.com/snitm/linux/commit/400b1257e93975864fd6c4b827537a0234551253
> 
> It now builds without issue but I haven't tested the resulting bcache to
> know if I broke the sysfs interface due to s/cache/bcache/ on some local
> variables; I don't think I did, but I'll defer to you.  (BTW those crafty
> sysfs macros you have were pretty opaque; not really seeing what they buy
> in the grand scheme.  And #include "sysfs.c" is different than any code
> I've seen in the kernel).

Yeah, it was an ugly hack when I pulled the sysfs code out of super.c so
I could avoid adding a bunch of non-static symbols. But apparently the
various functions weren't even static in the first place, heh. I'll fix
this the right way, thanks.


* Re: Bcache upstreaming
       [not found]         ` <20130110164704.GA30790-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2013-01-10 17:56           ` Mike Snitzer
  2013-01-10 18:14           ` Kent Overstreet
  1 sibling, 0 replies; 48+ messages in thread
From: Mike Snitzer @ 2013-01-10 17:56 UTC (permalink / raw)
  To: Kent Overstreet
  Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA

On Thu, Jan 10 2013 at 11:47am -0500,
Mike Snitzer <snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:

> On Wed, Jan 09 2013 at 11:12am -0500,
> Mike Snitzer <snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> 
> > (take 3 with feeling... I reverted to gmail's old compose so all
> > should be right in my plain-text gmail world... apologies to Kent and
> > dm-devel for the redundant messages)
> > 
> > Hey Kent,
> > 
> > On Fri, Jan 4, 2013 at 6:50 PM, Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> > > I've (finally!) got a bcache branch hacked up that ought to be suitable
> > > to go upstream, possibly in staging initially.
> > >
> > > It's currently closer to the dev branch than the stable branch, plus
> > > some additional minor changes to make it all more self contained. The
> > > code has seen a decent amount of testing and I think it's in good shape,
> > > but I'd like it if it could see a bit more testing before I see about
> > > pushing it upstream.
> > >
> > > If anyone wants to try it out, check out the bcache-for-staging branch.
> > > It's against Linux 3.7.
> > 
> > I pulled your 'bcache-for-staging' code into a 'dm-devel-cache-bcache'
> > branch on my github:
> > https://github.com/snitm/linux
> > 
> > Purpose is to have a single kernel to compare dm-cache and bcache.  My
> > branch is against 3.8-rc2.  While importing your code I needed the
> > following change to get bcache to compile:
> > https://github.com/snitm/linux/commit/400b1257e93975864fd6c4b827537a0234551253
> > 
> > It now builds without issue but I haven't tested the resulting bcache
> 
> Just tried to use bcache and it locked up:

And I get the same lockup when I try v3.7 with the following config:

CONFIG_BCACHE=m
# CONFIG_BCACHE_DEBUG is not set
# CONFIG_BCACHE_EDEBUG is not set
# CONFIG_BCACHE_CLOSURES_DEBUG is not set
# CONFIG_CGROUP_BCACHE is not set
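For what it's worth, the fragment above has every bcache debugging aid disabled; a build meant to chase a lockup like this one might flip them on instead (same option names as above — whether they would catch this particular hang is only a guess):

```
CONFIG_BCACHE=m
CONFIG_BCACHE_DEBUG=y
CONFIG_BCACHE_EDEBUG=y
CONFIG_BCACHE_CLOSURES_DEBUG=y
```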


* Re: Bcache upstreaming
       [not found]     ` <CAMM=eLdxz17qG8=Px5VoRpv2pGsGhVn3erCQLrcr=Lm-vCOrWw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2013-01-10 16:47       ` Mike Snitzer
       [not found]         ` <20130110164704.GA30790-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Mike Snitzer @ 2013-01-10 16:47 UTC (permalink / raw)
  To: Kent Overstreet
  Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA

On Wed, Jan 09 2013 at 11:12am -0500,
Mike Snitzer <snitzer-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:

> (take 3 with feeling... I reverted to gmail's old compose so all
> should be right in my plain-text gmail world... apologies to Kent and
> dm-devel for the redundant messages)
> 
> Hey Kent,
> 
> On Fri, Jan 4, 2013 at 6:50 PM, Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> > I've (finally!) got a bcache branch hacked up that ought to be suitable
> > to go upstream, possibly in staging initially.
> >
> > It's currently closer to the dev branch than the stable branch, plus
> > some additional minor changes to make it all more self contained. The
> > code has seen a decent amount of testing and I think it's in good shape,
> > but I'd like it if it could see a bit more testing before I see about
> > pushing it upstream.
> >
> > If anyone wants to try it out, check out the bcache-for-staging branch.
> > It's against Linux 3.7.
> 
> I pulled your 'bcache-for-staging' code into a 'dm-devel-cache-bcache'
> branch on my github:
> https://github.com/snitm/linux
> 
> Purpose is to have a single kernel to compare dm-cache and bcache.  My
> branch is against 3.8-rc2.  While importing your code I needed the
> following change to get bcache to compile:
> https://github.com/snitm/linux/commit/400b1257e93975864fd6c4b827537a0234551253
> 
> It now builds without issue but I haven't tested the resulting bcache

Just tried to use bcache and it locked up:

# make-bcache -B /dev/striped_vg/bcache_origin -C /dev/stec/bcache_data
UUID:                   edaef824-3b1c-4d14-a8fb-07fe7faa51e3
Set UUID:               9808e4a4-0da4-49b1-8a33-0fe097ba2d59
nbuckets:               32768
block_size:             1
bucket_size:            1024
nr_in_set:              1
nr_this_dev:            0
first_bucket:           1
UUID:                   0281d6a9-4ce0-4570-89e5-16bac3006fa2
Set UUID:               9808e4a4-0da4-49b1-8a33-0fe097ba2d59
nbuckets:               2048
block_size:             1
bucket_size:            1024
nr_in_set:              1
nr_this_dev:            0
first_bucket:           1

[root@rhel-storage-02 ~]# echo /dev/striped_vg/bcache_origin > /sys/fs/bcache/register
[root@rhel-storage-02 ~]# echo /dev/stec/bcache_data > /sys/fs/bcache/register

bcache: invalidating existing data                                                                                                                                     
BUG: soft lockup - CPU#2 stuck for 22s! [kworker/2:1:9723]                                                                                                             
Modules linked in: skd(O) dm_cache_cleaner dm_cache_mq dm_cache_basic dm_cache dm_thin_pool dm_bio_prison dm_persistent_data dm_bufio dm_mod xfs exportfs libcrc32c iomemory_vsl(O) bcache ebtable_nat ebtables xt_CHECKSUM iptable_mangle bridge autofs4 target_core_iblock target_core_file target_core_pscsi target_core_mod configfs bnx2fc fcoe 8021q libfcoe garp libfc stp llc scsi_transport_fc scsi_tgt sunrpc acpi_cpufreq freq_table mperf ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables bnx2i cnic uio ipv6 cxgb4i cxgb4 cxgb3i libcxgbi cxgb3 iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi vhost_net macvtap macvlan tun iTCO_wdt iTCO_vendor_support kvm_intel kvm microcode i2c_i801 i2c_core lpc_ich mfd_core igb ses enclosure sg ixgbe dca ptp pps_core mdio i7core_edac edac_core ext4 mbcache jbd2 sr_mod cdrom sd_mod crc_t10dif pata_acpi ata_generic ata_piix megaraid_sas [last unloaded: dm_cache_basic]
CPU 2                                                                                                                                                                  
Pid: 9723, comm: kworker/2:1 Tainted: P        W  O 3.8.0-rc2.snitm+ #34 FUJITSU                          PRIMERGY RX300 S6             /D2619                         
RIP: 0010:[<ffffffffa05f9af0>]  [<ffffffffa05f9af0>] invalidate_buckets_lru+0x60/0x7a0 [bcache]                                                                        
RSP: 0018:ffff880203315ca8  EFLAGS: 00000287                                                                                                                           
RAX: ffffc90003c12000 RBX: 0000000000000000 RCX: 0000000000000000                                                                                                      
RDX: 0000000000000800 RSI: ffffc90003c12000 RDI: ffff880205522000                                                                                                      
RBP: ffff880203315ce8 R08: ffff8802055229c8 R09: 0000000000000000                                                                                                      
R10: 0000000000000000 R11: 0000000000000001 R12: 0000000000000000                                                                                                      
R13: ffff880203315c98 R14: 0000000000000000 R15: 0000000000000000                                                                                                      
FS:  0000000000000000(0000) GS:ffff88033fc40000(0000) knlGS:0000000000000000                                                                                           
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b                                                                                                                      
CR2: 00007f00998c0008 CR3: 0000000286afc000 CR4: 00000000000007e0                                                                                                      
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000                                                                                                      
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400                                                                                                      
Process kworker/2:1 (pid: 9723, threadinfo ffff880203314000, task ffff88032f418a80)                                                                                    
Stack:                                                                                                                                                                 
 0000000000000000 ffff8802055229c8 0000000000000000 ffff880205522000                                                                                                   
 ffff880203315d78 ffff880205522aec ffff88032f418a80 ffff88032f418a80                                                                                                   
 ffff880203315d08 ffffffffa05fa350 ffff880205522000 ffff880203315d78                                                                                                   
Call Trace:                                                                                                                                                            
 [<ffffffffa05fa350>] invalidate_buckets+0x30/0x110 [bcache]                                                                                                           
 [<ffffffffa05fa8e7>] bch_allocator_thread+0x4b7/0x720 [bcache]                                                                                                        
 [<ffffffff810018d7>] ? __switch_to+0x157/0x4f0                                                                                                                        
 [<ffffffff81082850>] ? idle_balance+0x1c0/0x320                                                                                                                       
 [<ffffffff81066dd0>] ? wake_up_bit+0x40/0x40                                                                                                                          
 [<ffffffff814f4b75>] ? __schedule+0x3f5/0x710                                                                                                                         
 [<ffffffff8105f347>] process_one_work+0x177/0x430                                                                                                                     
 [<ffffffffa05fa430>] ? invalidate_buckets+0x110/0x110 [bcache]                                                                                                        
 [<ffffffff810612de>] worker_thread+0x12e/0x380                                                                                                                        
 [<ffffffff810611b0>] ? manage_workers+0x180/0x180                                                                                                                     
 [<ffffffff8106654e>] kthread+0xce/0xe0                                                                                                                                
 [<ffffffff81066480>] ? kthread_freezable_should_stop+0x70/0x70                                                                                                        
 [<ffffffff814fe7ac>] ret_from_fork+0x7c/0xb0                                                                                                                          
 [<ffffffff81066480>] ? kthread_freezable_should_stop+0x70/0x70                                                                                                        
Code: 52 4c 8d 24 90 48 8b 97 c0 00 00 00 48 8d 0c 52 48 8d 34 88 31 c9 31 c0 49 39 f4 0f 83 ba 02 00 00 66 2e 0f 1f 84 00 00 00 00 00 <41> 0f b7 44 24 0a a8 03 0f 85 7b 02 00 00 41 8b 0c 24 85 c9 0f
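For anyone trying to reproduce the report, the steps above boil down to the following sequence. Device paths and the cache-set UUID are taken from the report itself; these commands need root plus the bcache-tools userspace, and they destroy data on the named devices:

```shell
# Format the backing and cache devices (paths from the report above).
make-bcache -B /dev/striped_vg/bcache_origin
make-bcache -C /dev/stec/bcache_data

# Register both with the running kernel.
echo /dev/striped_vg/bcache_origin > /sys/fs/bcache/register
echo /dev/stec/bcache_data > /sys/fs/bcache/register

# Attach the backing device to the cache set by its "Set UUID"
# (printed by make-bcache); the reported lockup happens before this step.
echo 9808e4a4-0da4-49b1-8a33-0fe097ba2d59 > /sys/block/bcache0/bcache/attach
```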


* Re: Bcache upstreaming
       [not found] ` <20130104235040.GA26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
@ 2013-01-09 16:12   ` Mike Snitzer
       [not found]     ` <CAMM=eLdxz17qG8=Px5VoRpv2pGsGhVn3erCQLrcr=Lm-vCOrWw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 48+ messages in thread
From: Mike Snitzer @ 2013-01-09 16:12 UTC (permalink / raw)
  To: Kent Overstreet
  Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA, dm-devel-H+wXaHxf7aLQT0dZR+AlfA

(take 3 with feeling... I reverted to gmail's old compose so all
should be right in my plain-text gmail world... apologies to Kent and
dm-devel for the redundant messages)

Hey Kent,

On Fri, Jan 4, 2013 at 6:50 PM, Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> I've (finally!) got a bcache branch hacked up that ought to be suitable
> to go upstream, possibly in staging initially.
>
> It's currently closer to the dev branch than the stable branch, plus
> some additional minor changes to make it all more self contained. The
> code has seen a decent amount of testing and I think it's in good shape,
> but I'd like it if it could see a bit more testing before I see about
> pushing it upstream.
>
> If anyone wants to try it out, check out the bcache-for-staging branch.
> It's against Linux 3.7.

I pulled your 'bcache-for-staging' code into a 'dm-devel-cache-bcache'
branch on my github:
https://github.com/snitm/linux

Purpose is to have a single kernel to compare dm-cache and bcache.  My
branch is against 3.8-rc2.  While importing your code I needed the
following change to get bcache to compile:
https://github.com/snitm/linux/commit/400b1257e93975864fd6c4b827537a0234551253

It now builds without issue but I haven't tested the resulting bcache
to know if I broke the sysfs interface due to s/cache/bcache/ on some
local variables; I don't think I did, but I'll defer to you.  (BTW
those crafty sysfs macros you have were pretty opaque; not really
seeing what they buy in the grand scheme.  And #include "sysfs.c" is
different than any code I've seen in the kernel).


* Re: Bcache upstreaming
  2013-01-04 23:50 Kent Overstreet
  2013-01-09 15:49 ` Mike Snitzer
@ 2013-01-09 16:01 ` Mike Snitzer
       [not found] ` <20130104235040.GA26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
  2 siblings, 0 replies; 48+ messages in thread
From: Mike Snitzer @ 2013-01-09 16:01 UTC (permalink / raw)
  To: Kent Overstreet; +Cc: linux-bcache, device-mapper development



(take 2 since linux-bcache rejected the mail due to HTML parts... gmail's
new interface is a dream ;)

Hey Kent,

On Fri, Jan 4, 2013 at 6:50 PM, Kent Overstreet <koverstreet@google.com>
wrote:
>
> I've (finally!) got a bcache branch hacked up that ought to be suitable
> to go upstream, possibly in staging initially.
>
> It's currently closer to the dev branch than the stable branch, plus
> some additional minor changes to make it all more self contained. The
> code has seen a decent amount of testing and I think it's in good shape,
> but I'd like it if it could see a bit more testing before I see about
> pushing it upstream.
>
> If anyone wants to try it out, check out the bcache-for-staging branch.
> It's against Linux 3.7.


I pulled your 'bcache-for-staging' code into a 'dm-devel-cache-bcache'
branch on my github:
https://github.com/snitm/linux

Purpose is to have a single kernel to compare dm-cache and bcache.  My
branch is against 3.8-rc2.  While importing your code I needed the
following change to get bcache to compile:
https://github.com/snitm/linux/commit/400b1257e93975864fd6c4b827537a0234551253

It now builds without issue but I haven't tested the resulting bcache to
know if I broke the sysfs interface due to s/cache/bcache/ on some local
variables; I don't think I did, but I'll defer to you.  (BTW those crafty
sysfs macros you have were pretty opaque; not really seeing what they buy
in the grand scheme.  And #include "sysfs.c" is different than any code
I've seen in the kernel).





* Re: Bcache upstreaming
  2013-01-04 23:50 Kent Overstreet
@ 2013-01-09 15:49 ` Mike Snitzer
       [not found]   ` <CAMM=eLeeh6jb28KXGE9ZBbkMV1ysE-6NH2BjfpTsQcHAawEs+w-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2013-01-09 16:01 ` Mike Snitzer
       [not found] ` <20130104235040.GA26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
  2 siblings, 1 reply; 48+ messages in thread
From: Mike Snitzer @ 2013-01-09 15:49 UTC (permalink / raw)
  To: Kent Overstreet; +Cc: linux-bcache, device-mapper development



Hey Kent,

On Fri, Jan 4, 2013 at 6:50 PM, Kent Overstreet <koverstreet@google.com> wrote:

> I've (finally!) got a bcache branch hacked up that ought to be suitable
> to go upstream, possibly in staging initially.
>
> It's currently closer to the dev branch than the stable branch, plus
> some additional minor changes to make it all more self contained. The
> code has seen a decent amount of testing and I think it's in good shape,
> but I'd like it if it could see a bit more testing before I see about
> pushing it upstream.
>
> If anyone wants to try it out, check out the bcache-for-staging branch.
> It's against Linux 3.7.


I pulled your 'bcache-for-staging' code into a 'dm-devel-cache-bcache'
branch on my github:
https://github.com/snitm/linux

Purpose is to have a single kernel to compare dm-cache and bcache.  My
branch is against 3.8-rc2.  While importing your code I needed the
following change to get bcache to compile:
https://github.com/snitm/linux/commit/400b1257e93975864fd6c4b827537a0234551253

It now builds without issue but I haven't tested the resulting bcache to
know if I broke the sysfs interface due to s/cache/bcache/ on some local
variables; I don't think I did, but I'll defer to you.  (BTW those crafty
sysfs macros you have were pretty opaque; not really seeing what they buy
in the grand scheme.  And #include "sysfs.c" is different than any code
I've seen in the kernel).





* Bcache upstreaming
@ 2013-01-04 23:50 Kent Overstreet
  2013-01-09 15:49 ` Mike Snitzer
                   ` (2 more replies)
  0 siblings, 3 replies; 48+ messages in thread
From: Kent Overstreet @ 2013-01-04 23:50 UTC (permalink / raw)
  To: linux-bcache-u79uwXL29TY76Z2rM5mHXA

I've (finally!) got a bcache branch hacked up that ought to be suitable
to go upstream, possibly in staging initially.

It's currently closer to the dev branch than the stable branch, plus
some additional minor changes to make it all more self contained. The
code has seen a decent amount of testing and I think it's in good shape,
but I'd like it if it could see a bit more testing before I see about
pushing it upstream.

If anyone wants to try it out, check out the bcache-for-staging branch.
It's against Linux 3.7.
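A minimal sketch of following that suggestion — the announcement doesn't name the repository, so the remote URL below is a placeholder to fill in yourself:

```shell
# Hypothetical: <bcache-repo-url> stands in for the repository, which
# this message does not specify.
git remote add bcache <bcache-repo-url>
git fetch bcache

# Create a local branch tracking the announced one (based on v3.7).
git checkout -b bcache-for-staging bcache/bcache-for-staging
```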


end of thread, other threads:[~2013-02-01 20:43 UTC | newest]

Thread overview: 48+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-01-19  8:41 Bcache upstreaming Steven Haigh
     [not found] ` <50FA5C38.60301-tY1ak9Q0PTWHXe+LvDLADg@public.gmane.org>
2013-01-19 10:35   ` Kent Overstreet
     [not found]     ` <CAC7rs0v=zA-6Lf9kH5jmXxySci6GTLMu_Tq1pZFhHDpYcj0APQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-01-19 10:42       ` Steven Haigh
  -- strict thread matches above, loose matches on Subject: below --
2013-01-04 23:50 Kent Overstreet
2013-01-09 15:49 ` Mike Snitzer
     [not found]   ` <CAMM=eLeeh6jb28KXGE9ZBbkMV1ysE-6NH2BjfpTsQcHAawEs+w-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-01-10 17:59     ` Kent Overstreet
     [not found]       ` <20130110175954.GR26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
2013-01-14 22:37         ` Kent Overstreet
     [not found]           ` <20130114223722.GZ26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
2013-01-14 22:53             ` Mike Snitzer
     [not found]               ` <20130114225330.GA1365-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2013-01-17  2:27                 ` Mike Snitzer
     [not found]                   ` <20130117022728.GA16148-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2013-01-17 11:41                     ` Kent Overstreet
     [not found]                       ` <20130117114104.GJ10411-jC9Py7bek1znysI04z7BkA@public.gmane.org>
2013-01-30 23:36                         ` Kent Overstreet
     [not found]                           ` <20130130233643.GD12631-jC9Py7bek1znysI04z7BkA@public.gmane.org>
2013-01-30 23:48                             ` Joseph Glanville
     [not found]                               ` <CAOzFzEho6Jn8nd+vSZXEQR5_oxPEZRej=6mivJDz0MsAj5VAZg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-01-31  1:25                                 ` Kent Overstreet
2013-01-31  0:10                             ` Mike Snitzer
     [not found]                               ` <20130131001020.GA7541-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2013-01-31  1:26                                 ` Kent Overstreet
     [not found]                                   ` <20130131012627.GF12631-jC9Py7bek1znysI04z7BkA@public.gmane.org>
2013-01-31  1:27                                     ` Kent Overstreet
     [not found]                                       ` <20130131012747.GG12631-jC9Py7bek1znysI04z7BkA@public.gmane.org>
2013-01-31  1:48                                         ` Kent Overstreet
     [not found]                                           ` <20130131014835.GH12631-jC9Py7bek1znysI04z7BkA@public.gmane.org>
2013-01-31 19:02                                             ` Mike Snitzer
     [not found]                                               ` <20130131190249.GA12786-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2013-01-31 21:08                                                 ` Kent Overstreet
     [not found]                                                   ` <CAC7rs0u_aJS5BsJ0E7wH98z2VxXr=SK1z8yL0-m0Pc85ncJNHg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-01-31 22:17                                                     ` Mike Snitzer
     [not found]                                                       ` <20130131221711.GA13540-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2013-01-31 22:25                                                         ` Kent Overstreet
     [not found]                                                           ` <CAC7rs0ue6YgqrX9Nc18GdnVtJd558F6W=BZiMXZdRqig3s7sBA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-01-31 23:08                                                             ` Mike Snitzer
     [not found]                                                               ` <20130131230800.GB13540-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2013-02-01  0:33                                                                 ` Kent Overstreet
     [not found]                                                                   ` <20130201003311.GJ12631-jC9Py7bek1znysI04z7BkA@public.gmane.org>
2013-02-01  3:38                                                                     ` Mike Snitzer
     [not found]                                                                       ` <20130201103944.GM8837@soda.linbit>
2013-02-01 14:10                                                                         ` Mike Snitzer
     [not found]                                                                           ` <20130201141003.GA18095-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2013-02-01 14:55                                                                             ` Tejun Heo
     [not found]                                                                               ` <20130201145504.GS6824-9pTldWuhBndy/B6EtB590w@public.gmane.org>
2013-02-01 15:16                                                                                 ` Mike Snitzer
2013-02-01 15:27                                                                                 ` Kent Overstreet
     [not found]                                                                                   ` <20130201152743.GV26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
2013-02-01 15:30                                                                                     ` Tejun Heo
     [not found]                                                                                       ` <20130201153019.GT6824-9pTldWuhBndy/B6EtB590w@public.gmane.org>
2013-02-01 15:33                                                                                         ` Kent Overstreet
     [not found]                                                                                           ` <20130201153318.GW26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
2013-02-01 16:08                                                                                             ` Tejun Heo
     [not found]                                                                                               ` <20130201160820.GA31863-9pTldWuhBndy/B6EtB590w@public.gmane.org>
2013-02-01 16:15                                                                                                 ` Kent Overstreet
     [not found]                                                                                                   ` <20130201161547.GY26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
2013-02-01 16:18                                                                                                     ` Tejun Heo
     [not found]                                                                                                       ` <20130201161809.GB31863-9pTldWuhBndy/B6EtB590w@public.gmane.org>
2013-02-01 20:32                                                                                                         ` Mike Snitzer
     [not found]                                                                                                           ` <20130201203229.GA21110-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2013-02-01 20:43                                                                                                             ` Tejun Heo
     [not found]                                                                       ` <20130201033810.GA14867-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2013-02-01 15:39                                                                         ` Kent Overstreet
     [not found]                                                                           ` <20130201153936.GX26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
2013-02-01 16:12                                                                             ` Mike Snitzer
     [not found]                                                                               ` <20130201161227.GA19245-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2013-02-01 16:17                                                                                 ` Kent Overstreet
2013-01-31 22:01                                         ` Kent Overstreet
2013-01-31 16:52                             ` Mike Snitzer
     [not found]                               ` <20130131165223.GB11894-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2013-01-31 17:01                                 ` Kent Overstreet
     [not found]                                   ` <20130131170103.GT26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
2013-01-31 17:26                                     ` Mike Snitzer
2013-01-09 16:01 ` Mike Snitzer
     [not found] ` <20130104235040.GA26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
2013-01-09 16:12   ` Mike Snitzer
     [not found]     ` <CAMM=eLdxz17qG8=Px5VoRpv2pGsGhVn3erCQLrcr=Lm-vCOrWw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-01-10 16:47       ` Mike Snitzer
     [not found]         ` <20130110164704.GA30790-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2013-01-10 17:56           ` Mike Snitzer
2013-01-10 18:14           ` Kent Overstreet
     [not found]             ` <20130110181424.GS26407-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
2013-01-14 22:36               ` Kent Overstreet

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).