* bcache_gc: BUG: soft lockup
@ 2015-11-13 12:09 Yannis Aribaud
  2015-11-13 12:37 ` Johannes Thumshirn
  2015-11-13 12:55 ` Yannis Aribaud
  0 siblings, 2 replies; 23+ messages in thread
From: Yannis Aribaud @ 2015-11-13 12:09 UTC (permalink / raw)
  To: linux-bcache

Hi,

I recently tried bcache on a vanilla 4.1.12 kernel and ran into strange soft lockup issues.
They happen only at system shutdown, but not every time, so I suspect they are related to the bcache devices stopping.

My setup uses 3 bcache devices, each on dedicated hardware (1 HDD and 1 SSD); cache_mode is writeback and the bucket/block sizes are the defaults.
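
For reference, the mode was set on each device roughly like this (a sketch; bcache0 stands in for each of the three devices):

  echo writeback > /sys/block/bcache0/bcache/cache_mode
  cat /sys/block/bcache0/bcache/cache_mode   # the active mode is shown in brackets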

I saw that a similar problem was fixed in 3.17, so maybe this is a regression.

Any idea how to fix this?

Regards,
-- 
Open is better


* Re: bcache_gc: BUG: soft lockup
  2015-11-13 12:09 bcache_gc: BUG: soft lockup Yannis Aribaud
@ 2015-11-13 12:37 ` Johannes Thumshirn
  2015-11-13 12:55 ` Yannis Aribaud
  1 sibling, 0 replies; 23+ messages in thread
From: Johannes Thumshirn @ 2015-11-13 12:37 UTC (permalink / raw)
  To: Yannis Aribaud, linux-bcache

Hi Yannis,
On Fri, 2015-11-13 at 12:09 +0000, Yannis Aribaud wrote:
> Hi,
> 
> I recently tried bcache on a vanilla 4.1.12 kernel and ran into
> strange soft lockup issues.
> They happen only at system shutdown, but not every time, so I
> suspect they are related to the bcache devices stopping.
> 
> My setup uses 3 bcache devices, each on dedicated hardware
> (1 HDD and 1 SSD); cache_mode is writeback and the bucket/block
> sizes are the defaults.
> 
> I saw that a similar problem was fixed in 3.17, so maybe this is
> a regression.

do you have a stack trace, so we can see where it's locking up?

Thanks,
Johannes


* Re: bcache_gc: BUG: soft lockup
  2015-11-13 12:09 bcache_gc: BUG: soft lockup Yannis Aribaud
  2015-11-13 12:37 ` Johannes Thumshirn
@ 2015-11-13 12:55 ` Yannis Aribaud
  2015-11-13 13:05   ` Johannes Thumshirn
                     ` (2 more replies)
  1 sibling, 3 replies; 23+ messages in thread
From: Yannis Aribaud @ 2015-11-13 12:55 UTC (permalink / raw)
  To: Johannes Thumshirn, linux-bcache

On November 13, 2015 at 13:37, "Johannes Thumshirn" <jthumshirn@suse.de> wrote:
> Hi Yannis,

Hi Johannes,

> [...]
> 
> do you have a stack trace, so we can see where it's locking up?

There is no stack trace, as the kernel hasn't crashed. But the system seems to be waiting for something that never happens and keeps throwing the same message to the console (BUG: soft lockup - CPU#22 stuck for 23s!).


-- 
Open is better


* Re: bcache_gc: BUG: soft lockup
  2015-11-13 12:55 ` Yannis Aribaud
@ 2015-11-13 13:05   ` Johannes Thumshirn
  2015-11-13 13:27   ` Yannis Aribaud
       [not found]   ` <9c96132d5fce4a5a77b1b086f7c6095d@rcube.hebserv.net>
  2 siblings, 0 replies; 23+ messages in thread
From: Johannes Thumshirn @ 2015-11-13 13:05 UTC (permalink / raw)
  To: Yannis Aribaud, linux-bcache

On Fri, 2015-11-13 at 12:55 +0000, Yannis Aribaud wrote:
> On November 13, 2015 at 13:37, "Johannes Thumshirn" <jthumshirn@suse.de> wrote:
> > Hi Yannis,
> 
> Hi Johannes,
> 
> > [...]
> > 
> > do you have a stack trace, so we can see where it's locking up?
> 
> There is no stack trace, as the kernel hasn't crashed. But the system
> seems to be waiting for something that never happens and keeps throwing
> the same message to the console (BUG: soft lockup - CPU#22 stuck for
> 23s!).
> 
> 

I see. Can you then please force a panic on soft lockup, either via
softlockup_panic=1 passed to the kernel as a boot parameter or via
sysctl -w kernel.softlockup_panic=1 on a running system?
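
For example (a sketch; both variants need root):

  # on the running system, effective immediately:
  sysctl -w kernel.softlockup_panic=1
  cat /proc/sys/kernel/softlockup_panic   # verify: should print 1
  # or add softlockup_panic=1 to the kernel command line for the next boot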

Without any information on where it actually locked up, I see no
chance of fixing it.

Thanks,
	Johannes


* Re: bcache_gc: BUG: soft lockup
  2015-11-13 12:55 ` Yannis Aribaud
  2015-11-13 13:05   ` Johannes Thumshirn
@ 2015-11-13 13:27   ` Yannis Aribaud
       [not found]   ` <9c96132d5fce4a5a77b1b086f7c6095d@rcube.hebserv.net>
  2 siblings, 0 replies; 23+ messages in thread
From: Yannis Aribaud @ 2015-11-13 13:27 UTC (permalink / raw)
  To: Johannes Thumshirn, linux-bcache

On November 13, 2015 at 14:05, "Johannes Thumshirn" <jthumshirn@suse.de> wrote:
> I see. Can you then please force a panic on soft lockup, either via
> softlockup_panic=1 passed to the kernel as a boot parameter or via
> sysctl -w kernel.softlockup_panic=1 on a running system?

OK, I'll try to reproduce it with this option set.


-- 
Open is better


* Re: bcache_gc: BUG: soft lockup
       [not found]   ` <9c96132d5fce4a5a77b1b086f7c6095d@rcube.hebserv.net>
@ 2015-11-16  8:09     ` Johannes Thumshirn
  2015-11-16 10:26     ` Yannis Aribaud
                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 23+ messages in thread
From: Johannes Thumshirn @ 2015-11-16  8:09 UTC (permalink / raw)
  To: Yannis Aribaud, linux-bcache

On Fri, 2015-11-13 at 16:42 +0000, Yannis Aribaud wrote:
> Well,
> 
> I managed to get the issue again. So here is the stack trace.
> 
> I hope it will help.
> 
> 

I'll see what I can do, but I can't promise anything.

Byte,
	Johannes


* Re: bcache_gc: BUG: soft lockup
       [not found]   ` <9c96132d5fce4a5a77b1b086f7c6095d@rcube.hebserv.net>
  2015-11-16  8:09     ` Johannes Thumshirn
@ 2015-11-16 10:26     ` Yannis Aribaud
  2015-11-27 12:23     ` Johannes Thumshirn
  2015-11-27 12:32     ` Yannis Aribaud
  3 siblings, 0 replies; 23+ messages in thread
From: Yannis Aribaud @ 2015-11-16 10:26 UTC (permalink / raw)
  To: Johannes Thumshirn, linux-bcache

Hi,

I got this stack trace on another node running the same 4.1.12 kernel, but this time on a live node (not during shutdown).


[1181894.806808] Modules linked in: cbc rbd libceph ipmi_si mpt2sas raid_class scsi_transport_sas ipmi_devintf dell_rbu tun nfsd auth_rpcgss oid_registry nfs_acl nfs lockd grace fscache sunrpc bridge 8021q garp mrp stp llc bonding xfs libcrc32c bcache uhci_hcd ohci_hcd joydev hid_generic usbhid hid iTCO_wdt iTCO_vendor_support x86_pkg_temp_thermal coretemp kvm_intel dcdbas kvm shpchp aesni_intel aes_x86_64 ablk_helper cryptd lrw gf128mul glue_helper microcode evdev ehci_pci ehci_hcd sb_edac usbcore edac_core usb_common lpc_ich ipmi_msghandler mfd_core acpi_cpufreq processor wmi thermal_sys acpi_power_meter button ext4 crc16 mbcache jbd2 btrfs xor raid6_pq dm_mod sg sd_mod crc32c_intel igb megaraid_sas i2c_algo_bit i2c_core dca ptp scsi_mod pps_core [last unloaded: ipmi_si]
[1181894.806839] CPU: 13 PID: 74 Comm: migration/13 Tainted: G             L  4.1.12-ig1 #2
[1181894.806840] Hardware name: Dell Inc. PowerEdge R730xd/0H21J3, BIOS 1.1.4 11/03/2014
[1181894.806841] task: ffff8820786894b0 ti: ffff882078698000 task.ti: ffff882078698000
[1181894.806842] RIP: 0010:[<ffffffff810a72e6>]  [<ffffffff810a72e6>] multi_cpu_stop+0x52/0x99
[1181894.806846] RSP: 0000:ffff88207869bdb8  EFLAGS: 00000293
[1181894.806847] RAX: 0000000000000000 RBX: 0000000000000187 RCX: ffff88207eccfd18
[1181894.806848] RDX: 0000000000000001 RSI: 0000000000000282 RDI: ffff88202f1e3bc0
[1181894.806849] RBP: 0000000000000001 R08: ffff882078698000 R09: ffff882078698000
[1181894.806850] R10: 0000000000000000 R11: ffff8820361280d0 R12: 0000000000000000
[1181894.806851] R13: ffff88207ecd4f00 R14: 0000000d00000004 R15: ffff88107f454f00
[1181894.806853] FS:  0000000000000000(0000) GS:ffff88207ecc0000(0000) knlGS:0000000000000000
[1181894.806854] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[1181894.806855] CR2: 0000000016053400 CR3: 000000000160b000 CR4: 00000000001406e0
[1181894.806856] Stack:
[1181894.806857]  ffff8820361280d0 ffff88207eccfd10 ffff88202f1e3be8 ffff88202f1e3bc0
[1181894.806859]  ffffffff810a7294 ffffffff810a718a ffff882078689a08 ffff8820786894b0
[1181894.806860]  ffff88207ecd4f00 ffffffff8105bae5 ffff88207ecd4f00 ffff8820786894b0
[1181894.806862] Call Trace:
[1181894.806865]  [<ffffffff810a7294>] ? cpu_stop_should_run+0x3d/0x3d
[1181894.806866]  [<ffffffff810a718a>] ? cpu_stopper_thread+0x68/0xde
[1181894.806868]  [<ffffffff8105bae5>] ? finish_task_switch+0x51/0xcb
[1181894.806871]  [<ffffffff8138ff4f>] ? console_conditional_schedule+0xf/0xf
[1181894.806872]  [<ffffffff8138e43f>] ? __schedule+0x3f5/0x4e0
[1181894.806875]  [<ffffffff81058604>] ? sort_range+0x19/0x19
[1181894.806877]  [<ffffffff8105872a>] ? smpboot_thread_fn+0x126/0x13e
[1181894.806878]  [<ffffffff81056225>] ? kthread+0x99/0xa1
[1181894.806880]  [<ffffffff8105618c>] ? __kthread_parkme+0x58/0x58
[1181894.806882]  [<ffffffff81390d12>] ? ret_from_fork+0x42/0x70
[1181894.806883]  [<ffffffff8105618c>] ? __kthread_parkme+0x58/0x58
[1181894.806884] Code: 02 00 00 e8 51 60 12 00 39 c5 41 0f 94 c4 eb 0e 89 ed 48 0f a3 28 19 ed 85 ed 41 0f 95 c4 31 c0 31 d2 eb 02 89 ea f3 90 8b 6b 20 <39> d5 74 20 83 fd 02 74 07 83 fd 03 75 10 eb 03 fa eb 0b 45 84 

dmesg keeps filling with such messages and the devices are unresponsive. I was forced to reboot.

Regards,
-- 
Open is better


* Re: bcache_gc: BUG: soft lockup
       [not found]   ` <9c96132d5fce4a5a77b1b086f7c6095d@rcube.hebserv.net>
  2015-11-16  8:09     ` Johannes Thumshirn
  2015-11-16 10:26     ` Yannis Aribaud
@ 2015-11-27 12:23     ` Johannes Thumshirn
  2015-11-27 12:32     ` Yannis Aribaud
  3 siblings, 0 replies; 23+ messages in thread
From: Johannes Thumshirn @ 2015-11-27 12:23 UTC (permalink / raw)
  To: Yannis Aribaud, linux-bcache

On Fri, 2015-11-13 at 16:42 +0000, Yannis Aribaud wrote:
> Well,
> 
> I managed to get the issue again. So here is the stack trace.
> 
> I hope it will help.
> 

Sorry to disappoint you, but I couldn't find anything that could be related to
your lockup.

	Johannes


* Re: bcache_gc: BUG: soft lockup
       [not found]   ` <9c96132d5fce4a5a77b1b086f7c6095d@rcube.hebserv.net>
                       ` (2 preceding siblings ...)
  2015-11-27 12:23     ` Johannes Thumshirn
@ 2015-11-27 12:32     ` Yannis Aribaud
  2015-11-30  1:49       ` Eric Wheeler
  3 siblings, 1 reply; 23+ messages in thread
From: Yannis Aribaud @ 2015-11-27 12:32 UTC (permalink / raw)
  To: Johannes Thumshirn, linux-bcache

On November 27, 2015 at 13:23, "Johannes Thumshirn" <jthumshirn@suse.de> wrote:
> Sorry to disappoint you, but I couldn't find anything that could be related to
> your lockup.

Well, even if you didn't find anything, thank you for your time.

I just upgraded my kernel to 4.2.6 vanilla to see if this lockup occurs again.

Regards,
-- 
Open is better


* Re: bcache_gc: BUG: soft lockup
  2015-11-27 12:32     ` Yannis Aribaud
@ 2015-11-30  1:49       ` Eric Wheeler
  2015-11-30  7:07         ` Johannes Thumshirn
                           ` (3 more replies)
  0 siblings, 4 replies; 23+ messages in thread
From: Eric Wheeler @ 2015-11-30  1:49 UTC (permalink / raw)
  To: Yannis Aribaud; +Cc: Johannes Thumshirn, linux-bcache



(intentional top-post)

There are a number of stability patches that haven't found their way to 
mainline yet.  We have been using bcache for over a year now with great 
stability and no data loss with these patches.  Try pulling the branch that 
I just set up to maintain these commits:

https://github.com/ewheelerinc/linux/commits/bcache-patches-for-3.17

git remote add ewheelerinc https://github.com/ewheelerinc/linux.git
git fetch ewheelerinc
git merge ewheelerinc/bcache-patches-for-3.17

This is a clone of Linus's tree circa 3.17-rc1, so git merge should bring 
this in cleanly to any later branch.  You could cherry-pick as well.  
I've tested that this also merges cleanly into v4.1.13.
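
To list just the carried patches, something like this should work (a sketch, assuming the remote was added as above):

git log --oneline v3.17-rc1..ewheelerinc/bcache-patches-for-3.17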

Note that I've not written any of these patches, I just keep them around 
since they make the difference between stable and unstable for bcache.  
The original authors and their discussions are included in the commit 
notes.




--
Eric Wheeler, President           eWheeler, Inc. dba Global Linux Security
888-LINUX26 (888-546-8926)        Fax: 503-716-3878           PO Box 25107
www.GlobalLinuxSecurity.pro       Linux since 1996!     Portland, OR 97298

On Fri, 27 Nov 2015, Yannis Aribaud wrote:

> On November 27, 2015 at 13:23, "Johannes Thumshirn" <jthumshirn@suse.de> wrote:
> > Sorry to disappoint you, but I couldn't find anything that could be related to
> > your lockup.
> 
> Well, even if you didn't find anything, thank you for your time.
> 
> I just upgraded my kernel to 4.2.6 vanilla to see if this lockup occurs again.
> 
> Regards,
> -- 
> Open is better


* Re: bcache_gc: BUG: soft lockup
  2015-11-30  1:49       ` Eric Wheeler
@ 2015-11-30  7:07         ` Johannes Thumshirn
  2015-11-30  9:59         ` Yannis Aribaud
                           ` (2 subsequent siblings)
  3 siblings, 0 replies; 23+ messages in thread
From: Johannes Thumshirn @ 2015-11-30  7:07 UTC (permalink / raw)
  To: Eric Wheeler; +Cc: Yannis Aribaud, linux-bcache

Hi Eric,

Quoting Eric Wheeler <bcache@lists.ewheeler.net>:

>
> (intentional top-post)
>
> There are a number of stability patches that haven't found their way to
> mainline yet.  We have been using bcache for over a year now with great
> stability and no data loss with these patches.  Try pulling the branch that
> I just set up to maintain these commits:
>

Thanks for pointing us at these patches. Yannis, can you give them a try?
Regarding the maintenance situation, does anyone know when/if Kent  
will pick up these patches? If not, we should try to get them to Linus  
via some other path (maybe Andrew or Jens).


> https://github.com/ewheelerinc/linux/commits/bcache-patches-for-3.17
>
> git remote add ewheelerinc https://github.com/ewheelerinc/linux.git
> git fetch ewheelerinc
> git merge ewheelerinc/bcache-patches-for-3.17
>
> This is a clone of Linus's tree circa 3.17-rc1, so git merge should bring
> this in cleanly to any later branch.  You could cherry-pick as well.
> I've tested that this also merges cleanly into v4.1.13.
>
> Note that I've not written any of these patches, I just keep them around
> since they make the difference between stable and unstable for bcache.
> The original authors and their discussions are included in the commit
> notes.
>
>
>
>
> --
> Eric Wheeler, President           eWheeler, Inc. dba Global Linux Security
> 888-LINUX26 (888-546-8926)        Fax: 503-716-3878           PO Box 25107
> www.GlobalLinuxSecurity.pro       Linux since 1996!     Portland, OR 97298
>
> On Fri, 27 Nov 2015, Yannis Aribaud wrote:
>
>> On November 27, 2015 at 13:23, "Johannes Thumshirn" <jthumshirn@suse.de> wrote:
>> > Sorry to disappoint you, but I couldn't find anything that could
>> > be related to your lockup.
>>
>> Well, even if you didn't find anything, thank you for your time.
>>
>> I just upgraded my kernel to 4.2.6 vanilla to see if this lockup  
>> occurs again.
>>
>> Regards,
>> --
>> Open is better


* Re: bcache_gc: BUG: soft lockup
  2015-11-30  1:49       ` Eric Wheeler
  2015-11-30  7:07         ` Johannes Thumshirn
@ 2015-11-30  9:59         ` Yannis Aribaud
  2015-12-07 10:35         ` Yannis Aribaud
  2016-01-27 14:57         ` Yannis Aribaud
  3 siblings, 0 replies; 23+ messages in thread
From: Yannis Aribaud @ 2015-11-30  9:59 UTC (permalink / raw)
  To: Johannes Thumshirn, Eric Wheeler; +Cc: linux-bcache

On November 30, 2015 at 08:07, "Johannes Thumshirn" <jthumshirn@suse.de> wrote:
> Hi Eric,
> 
> Quoting Eric Wheeler <bcache@lists.ewheeler.net>:
> 
>> (intentional top-post)
>> 
>> There are a number of stability patches that haven't found their way to
>> mainline yet. We have been using bcache for over a year now with great
>> stability and no data loss with these patches. Try pulling the branch that
>> I just set up to maintain these commits:
> 
> Thanks for pointing us at these patches. Yannis, can you give them a try?

I will.

-- 
Open is better


* Re: bcache_gc: BUG: soft lockup
  2015-11-30  1:49       ` Eric Wheeler
  2015-11-30  7:07         ` Johannes Thumshirn
  2015-11-30  9:59         ` Yannis Aribaud
@ 2015-12-07 10:35         ` Yannis Aribaud
  2016-01-27 14:57         ` Yannis Aribaud
  3 siblings, 0 replies; 23+ messages in thread
From: Yannis Aribaud @ 2015-12-07 10:35 UTC (permalink / raw)
  To: Yannis Aribaud, Johannes Thumshirn, Eric Wheeler; +Cc: linux-bcache

Hi everyone,

It's been one week now that I've been running a 4.2.6 kernel merged with the Bcache patches from Ewheeler, and there is no sign of any of the trouble I had before.
Thus it seems your patches fix my soft lockup issue.
It's currently running on one of my Ceph nodes; I will certainly push it to the others during the next weeks.

It would be great to merge those patches upstream, since it seems that using Bcache in production requires those fixes.

Anyway, thanks to all of you for your time, advice and work on Bcache. I'll keep you updated.

Regards,
-- 
Open is better


* Re: bcache_gc: BUG: soft lockup
  2015-11-30  1:49       ` Eric Wheeler
                           ` (2 preceding siblings ...)
  2015-12-07 10:35         ` Yannis Aribaud
@ 2016-01-27 14:57         ` Yannis Aribaud
  2016-01-27 15:16           ` Johannes Thumshirn
                             ` (3 more replies)
  3 siblings, 4 replies; 23+ messages in thread
From: Yannis Aribaud @ 2016-01-27 14:57 UTC (permalink / raw)
  To: Johannes Thumshirn, Eric Wheeler; +Cc: linux-bcache

Hi,

After several weeks of using the 4.2.6 kernel + patches from Ewheeler, we just ran into a crash again.
This time the kernel was still running and the server was responsive, but it was not able to do any IO on the bcache devices.

[696983.683498] bcache_writebac D ffffffff810643df     0  5741      2 0x00000000
[696983.683505]  ffff88103d01f180 0000000000000046 ffff88107842d000 ffffffff811a95cd
[696983.683510]  0000000000000000 ffff8810388c4000 ffff88103d01f180 0000000000000001
[696983.683514]  ffff882034ae0c10 0000000000000000 ffff882034ae0000 ffffffff8139601e
[696983.683518] Call Trace:
[696983.683530]  [<ffffffff811a95cd>] ? blk_queue_bio+0x262/0x279
[696983.683539]  [<ffffffff8139601e>] ? schedule+0x6b/0x78
[696983.683553]  [<ffffffffa032ce9b>] ? closure_sync+0x66/0x91 [bcache]
[696983.683563]  [<ffffffffa033c89f>] ? bch_writeback_thread+0x622/0x6b5 [bcache]
[696983.683569]  [<ffffffff8100265c>] ? __switch_to+0x1de/0x3f7
[696983.683578]  [<ffffffffa033c89f>] ? bch_writeback_thread+0x622/0x6b5 [bcache]
[696983.683586]  [<ffffffffa033c27d>] ? write_dirty_finish+0x1bf/0x1bf [bcache]
[696983.683594]  [<ffffffff810589d6>] ? kthread+0x99/0xa1
[696983.683598]  [<ffffffff8105893d>] ? kthread_parkme+0x16/0x16
[696983.683603]  [<ffffffff813986df>] ? ret_from_fork+0x3f/0x70
[696983.683607]  [<ffffffff8105893d>] ? kthread_parkme+0x16/0x16

Don't know if this helps.
Unfortunately I think that we will roll back and stop using Bcache unless this is really fixed :/

Regards,

On December 7, 2015 at 11:35, "Yannis Aribaud" <bugs@d6bell.net> wrote:
> Hi everyone,
> 
> It's been one week now that I've been running a 4.2.6 kernel merged with the Bcache patches from
> Ewheeler, and there is no sign of any of the trouble I had before.
> Thus it seems your patches fix my soft lockup issue.
> It's currently running on one of my Ceph nodes; I will certainly push it to the others during the
> next weeks.
> 
> It would be great to merge those patches upstream, since it seems that using Bcache in production
> requires those fixes.
> 
> Anyway, thanks to all of you for your time, advice and work on Bcache. I'll keep you updated.
> 
> Regards,
> -- 
> Open is better
-- 
Open is better


* Re: bcache_gc: BUG: soft lockup
  2016-01-27 14:57         ` Yannis Aribaud
@ 2016-01-27 15:16           ` Johannes Thumshirn
  2016-01-29 11:54           ` Johannes Thumshirn
                             ` (2 subsequent siblings)
  3 siblings, 0 replies; 23+ messages in thread
From: Johannes Thumshirn @ 2016-01-27 15:16 UTC (permalink / raw)
  To: Yannis Aribaud; +Cc: Eric Wheeler, linux-bcache

On Wed, Jan 27, 2016 at 02:57:25PM +0000, Yannis Aribaud wrote:
> Hi,
> 
> After several weeks of using the 4.2.6 kernel + patches from Ewheeler, we just ran into a crash again.
> This time the kernel was still running and the server was responsive, but it was not able to do any IO on the bcache devices.
> 
> [696983.683498] bcache_writebac D ffffffff810643df     0  5741      2 0x00000000
> [696983.683505]  ffff88103d01f180 0000000000000046 ffff88107842d000 ffffffff811a95cd
> [696983.683510]  0000000000000000 ffff8810388c4000 ffff88103d01f180 0000000000000001
> [696983.683514]  ffff882034ae0c10 0000000000000000 ffff882034ae0000 ffffffff8139601e
> [696983.683518] Call Trace:
> [696983.683530]  [<ffffffff811a95cd>] ? blk_queue_bio+0x262/0x279
> [696983.683539]  [<ffffffff8139601e>] ? schedule+0x6b/0x78
> [696983.683553]  [<ffffffffa032ce9b>] ? closure_sync+0x66/0x91 [bcache]
> [696983.683563]  [<ffffffffa033c89f>] ? bch_writeback_thread+0x622/0x6b5 [bcache]
> [696983.683569]  [<ffffffff8100265c>] ? __switch_to+0x1de/0x3f7
> [696983.683578]  [<ffffffffa033c89f>] ? bch_writeback_thread+0x622/0x6b5 [bcache]
> [696983.683586]  [<ffffffffa033c27d>] ? write_dirty_finish+0x1bf/0x1bf [bcache]
> [696983.683594]  [<ffffffff810589d6>] ? kthread+0x99/0xa1
> [696983.683598]  [<ffffffff8105893d>] ? kthread_parkme+0x16/0x16
> [696983.683603]  [<ffffffff813986df>] ? ret_from_fork+0x3f/0x70
> [696983.683607]  [<ffffffff8105893d>] ? kthread_parkme+0x16/0x16
> 
> Don't know if this helps.
> Unfortunately I think that we will roll back and stop using Bcache unless this is really fixed :/
> 


Hmm, OK.
I don't have a bcache setup running at the moment, but I'll have a look at it
again once I find some spare time. If anyone else wants to jump on the
grenade (Eric?), go ahead.

> Regards,
> 
> On December 7, 2015 at 11:35, "Yannis Aribaud" <bugs@d6bell.net> wrote:
> > Hi everyone,
> > 
> > It's been one week now that I've been running a 4.2.6 kernel merged with the Bcache patches from
> > Ewheeler, and there is no sign of any of the trouble I had before.
> > Thus it seems your patches fix my soft lockup issue.
> > It's currently running on one of my Ceph nodes; I will certainly push it to the others during the
> > next weeks.
> > 
> > It would be great to merge those patches upstream, since it seems that using Bcache in production
> > requires those fixes.
> > 
> > Anyway, thanks to all of you for your time, advice and work on Bcache. I'll keep you updated.
> > 
> > Regards,
> > -- 
> > Open is better
> -- 
> Open is better

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850


* Re: bcache_gc: BUG: soft lockup
  2016-01-27 14:57         ` Yannis Aribaud
  2016-01-27 15:16           ` Johannes Thumshirn
@ 2016-01-29 11:54           ` Johannes Thumshirn
  2016-01-29 12:54           ` Yannis Aribaud
       [not found]           ` <b91ce8156337b782c82317d25b3228bd@rcube.hebserv.net>
  3 siblings, 0 replies; 23+ messages in thread
From: Johannes Thumshirn @ 2016-01-29 11:54 UTC (permalink / raw)
  To: Yannis Aribaud; +Cc: Eric Wheeler, linux-bcache, Kent Overstreet

[ +cc Kent ]

On Wed, Jan 27, 2016 at 02:57:25PM +0000, Yannis Aribaud wrote:
> Hi,
> 
> After several weeks of using the 4.2.6 kernel + patches from Ewheeler, we just ran into a crash again.
> This time the kernel was still running and the server was responsive, but it was not able to do any IO on the bcache devices.
> 
> [696983.683498] bcache_writebac D ffffffff810643df     0  5741      2 0x00000000
> [696983.683505]  ffff88103d01f180 0000000000000046 ffff88107842d000 ffffffff811a95cd
> [696983.683510]  0000000000000000 ffff8810388c4000 ffff88103d01f180 0000000000000001
> [696983.683514]  ffff882034ae0c10 0000000000000000 ffff882034ae0000 ffffffff8139601e
> [696983.683518] Call Trace:
> [696983.683530]  [<ffffffff811a95cd>] ? blk_queue_bio+0x262/0x279
> [696983.683539]  [<ffffffff8139601e>] ? schedule+0x6b/0x78
> [696983.683553]  [<ffffffffa032ce9b>] ? closure_sync+0x66/0x91 [bcache]
> [696983.683563]  [<ffffffffa033c89f>] ? bch_writeback_thread+0x622/0x6b5 [bcache]
> [696983.683569]  [<ffffffff8100265c>] ? __switch_to+0x1de/0x3f7
> [696983.683578]  [<ffffffffa033c89f>] ? bch_writeback_thread+0x622/0x6b5 [bcache]
> [696983.683586]  [<ffffffffa033c27d>] ? write_dirty_finish+0x1bf/0x1bf [bcache]
> [696983.683594]  [<ffffffff810589d6>] ? kthread+0x99/0xa1
> [696983.683598]  [<ffffffff8105893d>] ? kthread_parkme+0x16/0x16
> [696983.683603]  [<ffffffff813986df>] ? ret_from_fork+0x3f/0x70
> [696983.683607]  [<ffffffff8105893d>] ? kthread_parkme+0x16/0x16
> 
> Don't know if this helps.
> Unfortunately I think that we will roll back and stop using Bcache unless this is really fixed :/
> 

Hi Yannis,

Do you have a machine with a bcache setup running where you can reproduce the
error? Or do you know a method to reproduce the error?

What I'd be interested in is which locks are held when it locks up (you can
acquire this information with SysRq+d or echo d > /proc/sysrq-trigger).
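
For example (a sketch; the lock dump needs a kernel built with CONFIG_LOCKDEP):

  # enable all SysRq functions, then dump the held locks into the kernel log
  echo 1 > /proc/sys/kernel/sysrq
  echo d > /proc/sysrq-trigger
  dmesg | tail -n 50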

Kent, do you have an idea what's happening here?

> Regards,
> 
> On December 7, 2015 at 11:35, "Yannis Aribaud" <bugs@d6bell.net> wrote:
> > Hi everyone,
> > 
> > It's been one week now that I've been running a 4.2.6 kernel merged with the Bcache patches from
> > Ewheeler, and there is no sign of any of the trouble I had before.
> > Thus it seems your patches fix my soft lockup issue.
> > It's currently running on one of my Ceph nodes; I will certainly push it to the others during the
> > next weeks.
> > 
> > It would be great to merge those patches upstream, since it seems that using Bcache in production
> > requires those fixes.
> > 
> > Anyway, thanks to all of you for your time, advice and work on Bcache. I'll keep you updated.
> > 
> > Regards,
> > -- 
> > Open is better
> -- 
> Open is better

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850


* Re: bcache_gc: BUG: soft lockup
  2016-01-27 14:57         ` Yannis Aribaud
  2016-01-27 15:16           ` Johannes Thumshirn
  2016-01-29 11:54           ` Johannes Thumshirn
@ 2016-01-29 12:54           ` Yannis Aribaud
       [not found]           ` <b91ce8156337b782c82317d25b3228bd@rcube.hebserv.net>
  3 siblings, 0 replies; 23+ messages in thread
From: Yannis Aribaud @ 2016-01-29 12:54 UTC (permalink / raw)
  To: Johannes Thumshirn; +Cc: Eric Wheeler, linux-bcache, Kent Overstreet

On January 29, 2016 at 12:54, "Johannes Thumshirn" <jthumshirn@suse.de> wrote:
> Hi Yannis,

Hi Johannes,

> Do you have a machine with a bcache setup running where you can reproduce the
> error? Or do you know a method to reproduce the error?

I don't know how to reproduce this issue. As I said, the server had been running 
correctly in production for more than 6 weeks when the problem appeared.
This server is still running Bcache in production for now...

No idea what triggered this :/

> What I'd be interested in is which locks are held when it locks up (you can
> acquire this information with SysRq+d or echo d > /proc/sysrq-trigger).

I'll try to get that information if the bug strikes again.

Regards,

-- 
Open is better


* Re: bcache_gc: BUG: soft lockup
       [not found]           ` <b91ce8156337b782c82317d25b3228bd@rcube.hebserv.net>
@ 2016-05-11  1:11             ` Eric Wheeler
  2016-05-11  9:33               ` Jens-U. Mozdzen
  2016-05-16 11:01             ` Yannis Aribaud
  1 sibling, 1 reply; 23+ messages in thread
From: Eric Wheeler @ 2016-05-11  1:11 UTC (permalink / raw)
  To: Yannis Aribaud; +Cc: Johannes Thumshirn, linux-bcache, Kent Overstreet

On Mon, 2 May 2016, Yannis Aribaud wrote:

> Hi everyone,
> 
> Once again I got a crash on one of my servers running Bcache.
> This time the server is running a vanilla 4.4.7 kernel.
> 
> I'm using one SSD as cache for multiple devices (Ceph OSD devices) and got this:
> I attached an extensive dmesg output to this email.

Can you describe your disk stack in more detail?  What is below Ceph?

I think this is the first time I've heard of Ceph being used in a bcache 
stack on the list.  Are there any others out there with success?  If so, 
what kernel versions and disk stack configuration?

So you are using one cache for multiple backing devices in a single cache 
set?  I remember seeing a thread on the list about someone having a 
similar issue (multiple backends, but not Ceph).  I put some time into 
looking for the thread; it might be this one:  
  bcache: Fix writeback_thread never writing back incomplete stripes. 

but there was a patch for that, which should have been in 4.4.y since March.

Make sure you have this commit: 
  
https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?id=a556b804dfa654f054f3d304c2c4d274ffe81f92
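
To check whether a tree already contains it, something like this should work (a sketch; run inside the kernel source checkout, with the commit id abbreviated from the URL above):

git merge-base --is-ancestor a556b804dfa6 HEAD && echo present || echo missing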

Also, do your backing device(s) set raid_partial_stripes_expensive=1 in 
queue_limits (e.g., md raid5/6)?  I've seen bugs around that flag that might 
not be fixed yet.

--
Eric Wheeler


> 
> PS: This server isn't the same hardware as in my previous bcache issues.
> 
> If any of you has an idea...
> 
> Best regards,
> -- 
> Yannis Aribaud
> 


* Re: bcache_gc: BUG: soft lockup
  2016-05-11  1:11             ` Eric Wheeler
@ 2016-05-11  9:33               ` Jens-U. Mozdzen
  2016-05-11 18:26                 ` Eric Wheeler
  0 siblings, 1 reply; 23+ messages in thread
From: Jens-U. Mozdzen @ 2016-05-11  9:33 UTC (permalink / raw)
  To: Eric Wheeler
  Cc: Yannis Aribaud, Johannes Thumshirn, linux-bcache, Kent Overstreet

Hi *,

Quoting Eric Wheeler <bcache@lists.ewheeler.net>:
> On Mon, 2 May 2016, Yannis Aribaud wrote:
> [...]
> I think this is the first time I've heard of Ceph being used in a bcache
> stack on the list.  Are there any others out there with success?  If so,
> what kernel versions and disk stack configuration?

After an extensive test period, we have just started a production Ceph  
environment on our bcache-based SAN servers:

- MD-RAID6 (several SAS disks) as bcache backing device
- MD-RAID1 (two SAS SSDs) as bcache cache device, only for that single  
backing device
- LVM on top of /dev/bcache0
- LVs, xfs-formatted, mounted at a convenient place, used by OSDs

The kernel on our SAN nodes is 4.1.13-5-default (64-bit), as distributed  
by openSUSE Leap 42.1 (SUSE makes sure vital bcache patches are  
included, amongst others).

We're planning to later switch to a setup similar to the one the OP is  
running, using separate disks with a common bcache caching device for  
OSDs.

While we have not yet stressed the Ceph part on the production system  
(there's plenty of other data served by SCST/FC, NFS, Samba and  
others), we have not run into problems, and especially no kernel  
crashes.

> [...]
> Also, does your backing device(s) set raid_partial_stripes_expensive=1 in
> queue_limits (eg, md raid5/6)?  I've seen bugs around that flag that might
> not be fixed yet.

This does sound disturbing to me; could you please give more details,  
perhaps in a new thread?

Regards,
Jens


* Re: bcache_gc: BUG: soft lockup
  2016-05-11  9:33               ` Jens-U. Mozdzen
@ 2016-05-11 18:26                 ` Eric Wheeler
  2016-05-12 11:51                   ` Jens-U. Mozdzen
  0 siblings, 1 reply; 23+ messages in thread
From: Eric Wheeler @ 2016-05-11 18:26 UTC (permalink / raw)
  To: Jens-U. Mozdzen
  Cc: Eric Wheeler, Yannis Aribaud, Johannes Thumshirn, linux-bcache,
	Kent Overstreet

On Wed, 11 May 2016, Jens-U. Mozdzen wrote:

> Hi *,
> 
> Quoting Eric Wheeler <bcache@lists.ewheeler.net>:
> >On Mon, 2 May 2016, Yannis Aribaud wrote:
> >[...]
> >I think this is the first time I've heard of Ceph being used in a bcache
> >stack on the list.  Are there any others out there with success?  If so,
> >what kernel versions and disk stack configuration?
> 
> After an extensive test period, we have just started a production Ceph
> environment on our bcache-based SAN servers:
> 
> - MD-RAID6 (several SAS disks) as bcache backing device
> - MD-RAID1 (two SAS SSDs) as bcache cache device, only for that single
> backing device
> - LVM on top of /dev/bcache0
> - LVs, xfs-formatted, mounted at a convenient place, used by OSDs

So no Ceph here?  

If you're using 4.4.y then you definitely need the patch from Ming Lei.  
Read this (rather long) thread if you want the details:
  "block: make sure big bio is splitted into at most 256 bvecs"
This affects 4.3 and newer, IIRC.  

OTOH, 4.1 is rock solid.  As of 4.1.21 or so it has all of the bcache 
stability fixes to date.


> kernel on our SAN nodes is 4.1.13-5-default (64 bit), as distributed by
> OpenSUSE Leap 42.1 (SUSE makes sure vital bcache patches are included, amongst
> others).
> 
> We're planning to later switch to a similar setup like the OP is running,
> using separate disks with a common bcache caching device for OSDs.
> 
> While we have not stressed the Ceph part yet on the productive system (there's
> plenty of other data served by SCST/FC, NFS, SaMBa and others), we did not yet
> run into problems and especially no kernel crashes.
> 
> >[...]
> >Also, do your backing device(s) set raid_partial_stripes_expensive=1 in
> >queue_limits (e.g., md raid5/6)?  I've seen bugs around that flag that might
> >not be fixed yet.
> 
> This does sound disturbing to me - could you please give more details,
> probably in a new thread?

Same as above, I think.  When bcache writes back in opt_io-sized writes that 
exceed 256 bvecs, then you run into issues.  It only does that if 
raid_partial_stripes_expensive=1 (as with raid5/6), when it tries to prevent 
re-writes to the same stripe.  
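
The write size bcache aims for is visible in sysfs, e.g. (a sketch; md0 stands in for the backing device):

cat /sys/block/md0/queue/optimal_io_size   # raid5/6 report the full-stripe size here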

--
Eric Wheeler


> 
> Regards,
> Jens
> 


* Re: bcache_gc: BUG: soft lockup
  2016-05-11 18:26                 ` Eric Wheeler
@ 2016-05-12 11:51                   ` Jens-U. Mozdzen
  0 siblings, 0 replies; 23+ messages in thread
From: Jens-U. Mozdzen @ 2016-05-12 11:51 UTC (permalink / raw)
  To: Eric Wheeler; +Cc: linux-bcache

Hi Eric,

Quoting Eric Wheeler <bcache@lists.ewheeler.net>:
> On Wed, 11 May 2016, Jens-U. Mozdzen wrote:
>
>> Hi *,
>>
>> Quoting Eric Wheeler <bcache@lists.ewheeler.net>:
>> >[...]
>> >I think this is the first time I've heard of Ceph being used in a bcache
>> >stack on the list.  Are there any others out there with success?  If so,
>> >what kernel versions and disk stack configuration?
>>
>> After an extensive test period, we have just started a productive Ceph
>> environment on our bcache-based SAN servers:
>>
>> - MD-RAID6 (several SAS disks) as bcache backing device
>> - MD-RAID1 (two SAS SSDs) as bcache cache device, only for that
>> single backing device
>> - LVM on top of /dev/bcache0
>> - LVs, xfs-formatted, mounted at a convenient place, used by OSDs
>
> So no ceph here?

Ceph OSDs. This differs from the OP's situation in that the bcache  
device isn't given to Ceph directly. But IIRC, Ceph does the same  
(partition the device and put XFS file systems on these partitions).  
And as already pointed out, we (currently) don't use a single SSD  
for multiple backing stores on these systems (but we do so on other  
devices with high I/O load).

I responded because you seemed to be asking for any stack setup  
involving bcache and Ceph.

> If you're using 4.4.y then you definitely need the patch from Ming Lei.
> [...]
> OTOH, 4.1 is rock solid.  As of 4.1.21 or so it has all of the bcache
> stability fixes to date.
> [...]
> Same as above, I think.  When bcache writes back in opt_io-sized writes that
> exceed 256 bvecs, then you run into issues.  It only does that if
> raid_partial_stripes_expensive=1 (as with raid5/6), when it tries to prevent
> re-writes to the same stripe.

>> kernel on our SAN nodes is 4.1.13-5-default (64 bit)

As mentioned, we're currently on 4.1.13, so no need for me to worry.  
Thank you for clarifying!

Regards,
Jens


* Re: bcache_gc: BUG: soft lockup
       [not found]           ` <b91ce8156337b782c82317d25b3228bd@rcube.hebserv.net>
  2016-05-11  1:11             ` Eric Wheeler
@ 2016-05-16 11:01             ` Yannis Aribaud
  2016-05-19 23:26               ` Eric Wheeler
  1 sibling, 1 reply; 23+ messages in thread
From: Yannis Aribaud @ 2016-05-16 11:01 UTC (permalink / raw)
  To: Eric Wheeler; +Cc: Johannes Thumshirn, linux-bcache, Kent Overstreet

Hi,


On May 11, 2016 at 03:10, "Eric Wheeler" <bcache@lists.ewheeler.net> wrote:
> Can you describe your disk stack in more detail? What is below ceph?
> 
> I think this is the first time I've heard of Ceph being used in a bcache
> stack on the list. Are there any others out there with success? If so,
> what kernel versions and disk stack configuration?

I'm using one SSD as the caching device (one cache set) and several SATA drives
as backing devices.
No RAID, no MD, no LVM, nothing fancy, only bare devices and Bcache.

The Bcache devices are XFS-formatted and used by Ceph OSDs (journaling on the same 
device). One OSD per bcache device.
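
For reference, the devices were set up along these lines (a sketch; the device names are hypothetical):

make-bcache -C /dev/sdk                    # the SSD, one cache set
make-bcache -B /dev/sdb /dev/sdc /dev/sdd  # the SATA backing devices
# attach each backing device to the cache set by its UUID:
echo <cset-uuid> > /sys/block/bcache0/bcache/attach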


> So you are using one cache for multiple backing devices in a single cache
> set? I remember seeing a thread on the list about someone having a
> similar issue (multiple backends, but not Ceph). I put some time into
> looking for the thread; it might be this one:
> bcache: Fix writeback_thread never writing back incomplete stripes.
> 
> but there was a patch for that, which should have been in 4.4.y since March.
> 
> Make sure you have this commit:
> 
> https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?id=a556b804dfa654f054f3d304c2c4d274ffe81f92

My kernel already has this commit.

> Also, do your backing device(s) set raid_partial_stripes_expensive=1 in
> queue_limits (e.g., md raid5/6)? I've seen bugs around that flag that might
> not be fixed yet.

Nope. This is set to 0.

Regards,
-- 
Yannis Aribaud


* Re: bcache_gc: BUG: soft lockup
  2016-05-16 11:01             ` Yannis Aribaud
@ 2016-05-19 23:26               ` Eric Wheeler
  0 siblings, 0 replies; 23+ messages in thread
From: Eric Wheeler @ 2016-05-19 23:26 UTC (permalink / raw)
  To: Yannis Aribaud; +Cc: Johannes Thumshirn, linux-bcache, Kent Overstreet


On Mon, 16 May 2016, Yannis Aribaud wrote:

> Hi,
> 
> 
> On May 11, 2016 at 03:10, "Eric Wheeler" <bcache@lists.ewheeler.net> wrote:
> > Can you describe your disk stack in more detail? What is below ceph?
> > 
> > I think this is the first time I've heard of Ceph being used in a bcache
> > stack on the list. Are there any others out there with success? If so,
> > what kernel versions and disk stack configuration?
> 
> I'm using one SSD as the caching device (one cache set) and several SATA drives
> as backing devices.
> No RAID, no MD, no LVM, nothing fancy, only bare devices and Bcache.
> 
> The Bcache devices are XFS-formatted and used by Ceph OSDs (journaling on the same 
> device). One OSD per bcache device.
 
What bcache bucket size are you using?  Review this thread and see if it 
sounds similar:
  http://www.spinics.net/lists/linux-bcache/msg03796.html

I wonder if XFS is sending writes down that are too large, as speculated 
in the thread above.
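
The bucket size is recorded in the cache device's superblock; bcache-super-show from bcache-tools should print it (a sketch; /dev/sdk stands in for the cache device):

bcache-super-show /dev/sdk | grep bucket_size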


Please also try these two patches and see if they help:

  https://lkml.org/lkml/2016/4/5/1046

  http://www.spinics.net/lists/raid/msg51830.html

--
Eric Wheeler

> 
> > So you are using one cache for multiple backing devices in a single cache
> > set? I remember seeing a thread on the list about someone having a
> > similar issue (multiple backends, but not Ceph). I put some time into
> > looking for the thread; it might be this one:
> > bcache: Fix writeback_thread never writing back incomplete stripes.
> > 
> > but there was a patch for that, which should have been in 4.4.y since March.
> > 
> > Make sure you have this commit:
> > 
> > https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?id=a556b804dfa654f054f3d304c2c4d274ffe81f92
> 
> My kernel already has this commit.
> 
> > Also, do your backing device(s) set raid_partial_stripes_expensive=1 in
> > queue_limits (e.g., md raid5/6)? I've seen bugs around that flag that might
> > not be fixed yet.
> 
> Nope. This is set to 0.





> 
> Regards,
> -- 
> Yannis Aribaud
> 

