* how to remove and re-add broken mon?
@ 2013-09-04 10:40 Bernhard Glomm
       [not found] ` <ee15755bb1292a720a84f58fdc80a31f7a75fd64-uSuf018nij3sq35pWSNszA@public.gmane.org>
  0 siblings, 1 reply; 3+ messages in thread
From: Bernhard Glomm @ 2013-09-04 10:40 UTC (permalink / raw)
  To: ceph-users-idqoXFIVOFJgJs9I8MT0rw; +Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA


[-- Attachment #1.1: Type: text/plain, Size: 16457 bytes --]

Hi all,

after some days of successfully creating and destroying RBDs, snapshots and
clones, and migrating image formats, one of the monitors suddenly stopped working.
I tried to remove and re-add the monitor to the cluster, but that doesn't seem to work either.

Here's the situation:
I chose to use ceph-deploy to set up the cluster.
Due to the broken ceph-create-keys in dumpling I switched to the gitbuilder version
and am now running

ceph version 0.67.2-23-g24f2669 (24f2669783e2eb9d9af5ecbe106efed93366ba63)

on up-to-date Raring systems.
All of a sudden the host from which I ran ceph-deploy, which should be one of the
5 monitors (2 of which also serve as OSDs), has fallen out of the quorum, as you can see here
(yes, time is in sync on all nodes):

------------------
root@nuke36[/0]:~ # ceph -s
2013-09-04 11:09:56.039547 7f7a8820f700  1 -- :/0 messenger.start
2013-09-04 11:09:56.040646 7f7a8820f700  1 -- :/1016260 --> 192.168.242.92:6789/0 -- auth(proto 0 30 bytes epoch 0) v1 -- ?+0 0x7f7a8000e8f0 con 0x7f7a8000e4e0
2013-09-04 11:09:56.041304 7f7a84a08700  1 -- 192.168.242.36:0/1016260 learned my addr 192.168.242.36:0/1016260
2013-09-04 11:09:56.042843 7f7a86a0c700  1 -- 192.168.242.36:0/1016260 <== mon.3 192.168.242.92:6789/0 1 ==== mon_map v1 ==== 776+0+0 (2241333437 0 0) 0x7f7a70000c30 con 0x7f7a8000e4e0
2013-09-04 11:09:56.043038 7f7a86a0c700  1 -- 192.168.242.36:0/1016260 <== mon.3 192.168.242.92:6789/0 2 ==== auth_reply(proto 2 0 Success) v1 ==== 33+0+0 (2063715990 0 0) 0x7f7a70001060 con 0x7f7a8000e4e0
2013-09-04 11:09:56.043324 7f7a86a0c700  1 -- 192.168.242.36:0/1016260 --> 192.168.242.92:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- ?+0 0x7f7a74001af0 con 0x7f7a8000e4e0
2013-09-04 11:09:56.044197 7f7a86a0c700  1 -- 192.168.242.36:0/1016260 <== mon.3 192.168.242.92:6789/0 3 ==== auth_reply(proto 2 0 Success) v1 ==== 206+0+0 (3910749728 0 0) 0x7f7a70001060 con 0x7f7a8000e4e0
2013-09-04 11:09:56.044375 7f7a86a0c700  1 -- 192.168.242.36:0/1016260 --> 192.168.242.92:6789/0 -- auth(proto 2 165 bytes epoch 0) v1 -- ?+0 0x7f7a740020d0 con 0x7f7a8000e4e0
2013-09-04 11:09:56.045376 7f7a86a0c700  1 -- 192.168.242.36:0/1016260 <== mon.3 192.168.242.92:6789/0 4 ==== auth_reply(proto 2 0 Success) v1 ==== 393+0+0 (3802320753 0 0) 0x7f7a700008f0 con 0x7f7a8000e4e0
2013-09-04 11:09:56.045457 7f7a86a0c700  1 -- 192.168.242.36:0/1016260 --> 192.168.242.92:6789/0 -- mon_subscribe({monmap=0+}) v2 -- ?+0 0x7f7a8000ed80 con 0x7f7a8000e4e0
2013-09-04 11:09:56.045550 7f7a8820f700  1 -- 192.168.242.36:0/1016260 --> 192.168.242.92:6789/0 -- mon_subscribe({monmap=2+,osdmap=0}) v2 -- ?+0 0x7f7a800079f0 con 0x7f7a8000e4e0
2013-09-04 11:09:56.045559 7f7a8820f700  1 -- 192.168.242.36:0/1016260 --> 192.168.242.92:6789/0 -- mon_subscribe({monmap=2+,osdmap=0}) v2 -- ?+0 0x7f7a8000fa10 con 0x7f7a8000e4e0
2013-09-04 11:09:56.046376 7f7a86a0c700  1 -- 192.168.242.36:0/1016260 <== mon.3 192.168.242.92:6789/0 5 ==== mon_map v1 ==== 776+0+0 (2241333437 0 0) 0x7f7a70001290 con 0x7f7a8000e4e0
2013-09-04 11:09:56.046417 7f7a86a0c700  1 -- 192.168.242.36:0/1016260 <== mon.3 192.168.242.92:6789/0 6 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0 (1524320885 0 0) 0x7f7a70001480 con 0x7f7a8000e4e0
2013-09-04 11:09:56.046429 7f7a86a0c700  1 -- 192.168.242.36:0/1016260 <== mon.3 192.168.242.92:6789/0 7 ==== osd_map(22..22 src has 1..22) v3 ==== 2355+0+0 (11792226 0 0) 0x7f7a70001f70 con 0x7f7a8000e4e0
2013-09-04 11:09:56.046828 7f7a86a0c700  1 -- 192.168.242.36:0/1016260 <== mon.3 192.168.242.92:6789/0 8 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0 (1524320885 0 0) 0x7f7a700008f0 con 0x7f7a8000e4e0
2013-09-04 11:09:56.046948 7f7a86a0c700  1 -- 192.168.242.36:0/1016260 <== mon.3 192.168.242.92:6789/0 9 ==== osd_map(22..22 src has 1..22) v3 ==== 2355+0+0 (11792226 0 0) 0x7f7a700008c0 con 0x7f7a8000e4e0
2013-09-04 11:09:56.047071 7f7a86a0c700  1 -- 192.168.242.36:0/1016260 <== mon.3 192.168.242.92:6789/0 10 ==== mon_subscribe_ack(300s) v1 ==== 20+0+0 (1524320885 0 0) 0x7f7a70000df0 con 0x7f7a8000e4e0
2013-09-04 11:09:56.047547 7f7a8820f700  1 -- 192.168.242.36:0/1016260 --> 192.168.242.92:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) v1 -- ?+0 0x7f7a8000b0f0 con 0x7f7a8000e4e0
2013-09-04 11:09:56.050938 7f7a86a0c700  1 -- 192.168.242.36:0/1016260 <== mon.3 192.168.242.92:6789/0 11 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0  v0) v1 ==== 72+0+24040 (1092875540 0 2922658865) 0x7f7a700008c0 con 0x7f7a8000e4e0
2013-09-04 11:09:56.089981 7f7a8820f700  1 -- 192.168.242.36:0/1016260 --> 192.168.242.92:6789/0 -- mon_command({"prefix": "status"} v 0) v1 -- ?+0 0x7f7a8000b0d0 con 0x7f7a8000e4e0
2013-09-04 11:09:56.091348 7f7a86a0c700  1 -- 192.168.242.36:0/1016260 <== mon.3 192.168.242.92:6789/0 12 ==== mon_command_ack([{"prefix": "status"}]=0  v0) v1 ==== 54+0+558 (1155462804 0 1174924833) 0x7f7a70000db0 con 0x7f7a8000e4e0
  cluster b085fba3-8e17-443c-bb61-7758504538f8
   health HEALTH_WARN 1 mons down, quorum 0,1,3,4 atom01,atom02,ping,pong
   monmap e1: 5 mons at {atom01=192.168.242.31:6789/0,atom02=192.168.242.32:6789/0,nuke36=192.168.242.36:6789/0,ping=192.168.242.92:6789/0,pong=192.168.242.93:6789/0}, election epoch 26, quorum 0,1,3,4 atom01,atom02,ping,pong
   osdmap e22: 2 osds: 2 up, 2 in
    pgmap v46761: 1192 pgs: 1192 active+clean; 7806 MB data, 20618 MB used, 3702 GB / 3722 GB avail; 9756B/s wr, 0op/s
   mdsmap e17: 1/1/1 up {0=pong=up:active}, 1 up:standby

2013-09-04 11:09:56.096071 7f7a8820f700  1 -- 192.168.242.36:0/1016260 mark_down 0x7f7a8000e4e0 -- 0x7f7a8000e280
2013-09-04 11:09:56.096516 7f7a8820f700  1 -- 192.168.242.36:0/1016260 mark_down_all
2013-09-04 11:09:56.097065 7f7a8820f700  1 -- 192.168.242.36:0/1016260 shutdown complete.
------------------

This is the ceph.conf that was generated by ceph-deploy (I added the debug lines, obviously):

------------------
root@nuke36[/0]:~ # cat /etc/ceph/ceph.conf 
[global]
fsid = b085fba3-8e17-443c-bb61-7758504538f8
mon_initial_members = ping, pong, nuke36, atom01, atom02
mon_host = 192.168.242.92,192.168.242.93,192.168.242.36,192.168.242.31,192.168.242.32
auth_supported = cephx
osd_journal_size = 1024
filestore_xattr_use_omap = true
debug ms = 1
debug mon = 20
------------------

After a reboot of the node I now find:

------------------
root@nuke36[/1]:~ # ps ax | egrep ceph
  855 ?        Ssl    0:00 /usr/bin/ceph-mon --cluster=ceph -i nuke36 -f
  856 ?        Ss     0:00 /usr/bin/python /usr/sbin/ceph-create-keys --cluster=ceph -i nuke36
 1813 pts/1    R+     0:00 egrep --color=auto ceph
------------------

with ceph-create-keys running indefinitely,
even though the key files already exist:

------------------
root@nuke36[/1]:~ # ls -lh /etc/ceph/
total 88K
-rw-r--r-- 1 root root  72 Aug 30 15:54 ceph.bootstrap-mds.keyring
-rw-r--r-- 1 root root  72 Aug 30 15:54 ceph.bootstrap-osd.keyring
-rw------- 1 root root  64 Aug 30 15:54 ceph.client.admin.keyring
-rw-r--r-- 1 root root 303 Sep  4 10:19 ceph.conf
-rw-r--r-- 1 root root 59K Sep  3 10:15 ceph.log
-rw-r--r-- 1 root root  73 Aug 30 15:53 ceph.mon.keyring
-rw-r--r-- 1 root root  92 Aug 30 00:03 rbdmap
------------------

and

------------------
root@nuke36[/1]:~ # tree -pfugiAD /var/lib/ceph/
/var/lib/ceph
[drwxr-xr-x root     root     Aug 30 15:54]  /var/lib/ceph/bootstrap-mds
[-rw------- root     root     Aug 30 15:54]  /var/lib/ceph/bootstrap-mds/ceph.keyring
[drwxr-xr-x root     root     Aug 30 15:54]  /var/lib/ceph/bootstrap-osd
[-rw------- root     root     Aug 30 15:54]  /var/lib/ceph/bootstrap-osd/ceph.keyring
[drwxr-xr-x root     root     Aug 30  0:00]  /var/lib/ceph/mds
[drwxr-xr-x root     root     Aug 30 15:53]  /var/lib/ceph/mon
[drwxr-xr-x root     root     Aug 30 15:53]  /var/lib/ceph/mon/ceph-nuke36
[-rw-r--r-- root     root     Aug 30 15:53]  /var/lib/ceph/mon/ceph-nuke36/done
[-rw-r--r-- root     root     Aug 30 15:53]  /var/lib/ceph/mon/ceph-nuke36/keyring
[drwxr-xr-x root     root     Sep  4 11:16]  /var/lib/ceph/mon/ceph-nuke36/store.db
[-rw-r--r-- root     root     Sep  3  6:21]  /var/lib/ceph/mon/ceph-nuke36/store.db/007726.sst
[-rw-r--r-- root     root     Sep  3  6:21]  /var/lib/ceph/mon/ceph-nuke36/store.db/007727.sst
[-rw-r--r-- root     root     Sep  3  6:21]  /var/lib/ceph/mon/ceph-nuke36/store.db/007728.sst
[-rw-r--r-- root     root     Sep  3  6:21]  /var/lib/ceph/mon/ceph-nuke36/store.db/007729.sst
[-rw-r--r-- root     root     Sep  3  6:21]  /var/lib/ceph/mon/ceph-nuke36/store.db/007730.sst
[-rw-r--r-- root     root     Sep  4 10:14]  /var/lib/ceph/mon/ceph-nuke36/store.db/007767.sst
[-rw-r--r-- root     root     Sep  4 10:14]  /var/lib/ceph/mon/ceph-nuke36/store.db/007768.sst
[-rw-r--r-- root     root     Sep  4 10:14]  /var/lib/ceph/mon/ceph-nuke36/store.db/007769.sst
[-rw-r--r-- root     root     Sep  4 10:14]  /var/lib/ceph/mon/ceph-nuke36/store.db/007770.sst
[-rw-r--r-- root     root     Sep  4 10:14]  /var/lib/ceph/mon/ceph-nuke36/store.db/007772.sst
[-rw-r--r-- root     root     Sep  4 10:14]  /var/lib/ceph/mon/ceph-nuke36/store.db/007773.sst
[-rw-r--r-- root     root     Sep  4 10:14]  /var/lib/ceph/mon/ceph-nuke36/store.db/007774.sst
[-rw-r--r-- root     root     Sep  4 10:14]  /var/lib/ceph/mon/ceph-nuke36/store.db/007775.sst
[-rw-r--r-- root     root     Sep  4 10:18]  /var/lib/ceph/mon/ceph-nuke36/store.db/007777.sst
[-rw-r--r-- root     root     Sep  4 10:19]  /var/lib/ceph/mon/ceph-nuke36/store.db/007780.sst
[-rw-r--r-- root     root     Sep  4 11:16]  /var/lib/ceph/mon/ceph-nuke36/store.db/007783.sst
[-rw-r--r-- root     root     Sep  4 11:16]  /var/lib/ceph/mon/ceph-nuke36/store.db/007784.log
[-rw-r--r-- root     root     Sep  4 11:16]  /var/lib/ceph/mon/ceph-nuke36/store.db/CURRENT
[-rw-r--r-- root     root     Aug 30 15:53]  /var/lib/ceph/mon/ceph-nuke36/store.db/LOCK
[-rw-r--r-- root     root     Sep  4 11:16]  /var/lib/ceph/mon/ceph-nuke36/store.db/LOG
[-rw-r--r-- root     root     Sep  4 10:19]  /var/lib/ceph/mon/ceph-nuke36/store.db/LOG.old
[-rw-r--r-- root     root     Sep  4 11:16]  /var/lib/ceph/mon/ceph-nuke36/store.db/MANIFEST-007782
[-rw-r--r-- root     root     Aug 30 15:53]  /var/lib/ceph/mon/ceph-nuke36/upstart
[drwxr-xr-x root     root     Aug 30  0:00]  /var/lib/ceph/osd
[drwxr-xr-x root     root     Aug 30 15:53]  /var/lib/ceph/tmp
------------------

Compare this with atom01, which is still running in the cluster...

------------------
root@atom01[/0]:~ # tree -pfugiAD /var/lib/ceph/
/var/lib/ceph
[drwxr-xr-x root     root     Aug 30 15:54]  /var/lib/ceph/bootstrap-mds
[-rw------- root     root     Aug 30 15:54]  /var/lib/ceph/bootstrap-mds/ceph.keyring
[drwxr-xr-x root     root     Aug 30 15:54]  /var/lib/ceph/bootstrap-osd
[-rw------- root     root     Aug 30 15:54]  /var/lib/ceph/bootstrap-osd/ceph.keyring
[drwxr-xr-x root     root     Aug 30  0:00]  /var/lib/ceph/mds
[drwxr-xr-x root     root     Aug 30 15:53]  /var/lib/ceph/mon
[drwxr-xr-x root     root     Aug 30 15:53]  /var/lib/ceph/mon/ceph-atom01
[-rw-r--r-- root     root     Aug 30 15:53]  /var/lib/ceph/mon/ceph-atom01/done
[-rw-r--r-- root     root     Aug 30 15:53]  /var/lib/ceph/mon/ceph-atom01/keyring
[drwxr-xr-x root     root     Sep  4 11:18]  /var/lib/ceph/mon/ceph-atom01/store.db
[-rw-r--r-- root     root     Sep  4 11:25]  /var/lib/ceph/mon/ceph-atom01/store.db/006339.log
[-rw-r--r-- root     root     Sep  4 11:18]  /var/lib/ceph/mon/ceph-atom01/store.db/006342.sst
[-rw-r--r-- root     root     Sep  4 11:18]  /var/lib/ceph/mon/ceph-atom01/store.db/006343.sst
[-rw-r--r-- root     root     Sep  4 11:18]  /var/lib/ceph/mon/ceph-atom01/store.db/006344.sst
[-rw-r--r-- root     root     Sep  4 11:18]  /var/lib/ceph/mon/ceph-atom01/store.db/006345.sst
[-rw-r--r-- root     root     Sep  4 11:18]  /var/lib/ceph/mon/ceph-atom01/store.db/006346.sst
[-rw-r--r-- root     root     Sep  4 11:18]  /var/lib/ceph/mon/ceph-atom01/store.db/006347.sst
[-rw-r--r-- root     root     Sep  4 11:18]  /var/lib/ceph/mon/ceph-atom01/store.db/006348.sst
[-rw-r--r-- root     root     Aug 30 15:53]  /var/lib/ceph/mon/ceph-atom01/store.db/CURRENT
[-rw-r--r-- root     root     Aug 30 15:53]  /var/lib/ceph/mon/ceph-atom01/store.db/LOCK
[-rw-r--r-- root     root     Sep  4 11:18]  /var/lib/ceph/mon/ceph-atom01/store.db/LOG
[-rw-r--r-- root     root     Aug 30 15:53]  /var/lib/ceph/mon/ceph-atom01/store.db/LOG.old
[-rw-r--r-- root     root     Sep  4 11:18]  /var/lib/ceph/mon/ceph-atom01/store.db/MANIFEST-000004
[-rw-r--r-- root     root     Aug 30 15:53]  /var/lib/ceph/mon/ceph-atom01/upstart
[drwxr-xr-x root     root     Aug 30  0:00]  /var/lib/ceph/osd
[drwxr-xr-x root     root     Aug 30 15:53]  /var/lib/ceph/tmp
------------------
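
As far as I understand, ceph-create-keys just waits for the local mon to join the quorum, so it hanging forever fits the mon being out of quorum. The mon's own view can be queried through its admin socket (assuming the default socket path; I'm only noting the commands here, not their output):

------------------
# ask the local mon daemon directly what state it is in
ceph --admin-daemon /var/run/ceph/ceph-mon.nuke36.asok mon_status
# and what it currently thinks about the quorum
ceph --admin-daemon /var/run/ceph/ceph-mon.nuke36.asok quorum_status
------------------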

The problem with ceph-create-keys seemed to have been fixed in wip-4924,
but since "rbd create" wouldn't work in that version I switched to
deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/dumpling/       raring main
(running on an up-to-date Raring).
I thought of simply removing and then re-adding the failing mon, but how do I do that?
The documentation at
http://ceph.com/docs/master/rados/operations/add-or-rm-mons/
says:
"service ceph -a stop mon.{mon-id}"

------------------
root@nuke36[/1]:~ # service ceph -a stop mon.nuke36
/etc/init.d/ceph: mon.nuke36 not found (/etc/ceph/ceph.conf defines , /var/lib/ceph defines )
root@nuke36[/1]:~ # service ceph -a stop mon.c
/etc/init.d/ceph: mon.c not found (/etc/ceph/ceph.conf defines , /var/lib/ceph defines )
root@nuke36[/1]:~ # service ceph -a stop mon.2
/etc/init.d/ceph: mon.2 not found (/etc/ceph/ceph.conf defines , /var/lib/ceph defines )
root@nuke36[/1]:~ # service ceph -a stop mon.ceph-nuke36
/etc/init.d/ceph: mon.ceph-nuke36 not found (/etc/ceph/ceph.conf defines , /var/lib/ceph defines )
------------------

That doesn't help, and manually stopping the daemon doesn't work either (it just respawns, okay),
but this combination leaves me quite curious:

------------------
root@atom01[/0]:~ # ceph health detail
HEALTH_WARN 1 mons down, quorum 0,1,3,4 atom01,atom02,ping,pong
mon.nuke36 (rank 2) addr 192.168.242.36:6789/0 is down (out of quorum)
root@atom01[/0]:~ # service ceph -a stop mon.nuke36
/etc/init.d/ceph: mon.nuke36 not found (/etc/ceph/ceph.conf defines , /var/lib/ceph defines )
/etc/init.d/ceph: mon.2 not found (/etc/ceph/ceph.conf defines , /var/lib/ceph defines )
root@atom01[/0]:~ # service ceph -a stop mon.rank2
------------------
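
Could it be that the init.d script simply does not know about mons created by ceph-deploy because they are managed by upstart (note the 'upstart' marker file in /var/lib/ceph/mon/ceph-nuke36/ above)? If so, I would guess something like this is the way to stop the daemon without it respawning:

------------------
# check and stop the upstart-managed mon job instead of using the sysvinit script
status ceph-mon id=nuke36
stop ceph-mon id=nuke36
------------------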

Something seems to be out of sync, at least with respect to the documentation?
Any hint on how to proceed from here?

TIA

Bernhard


-- 

Bernhard Glomm
IT Administration

Phone: +49 (30) 86880 134
Fax:   +49 (30) 86880 100
Skype: bernhard.glomm.ecologic

Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH

[-- Attachment #1.2: Type: text/html, Size: 39409 bytes --]

[-- Attachment #2: Type: text/plain, Size: 178 bytes --]

_______________________________________________
ceph-users mailing list
ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: how to remove and re-add broken mon?   SOLVED
       [not found] ` <ee15755bb1292a720a84f58fdc80a31f7a75fd64-uSuf018nij3sq35pWSNszA@public.gmane.org>
@ 2013-09-04 15:21   ` Bernhard Glomm
       [not found]     ` <086d868ba20ebd557ee3cf166c27295bb1c2c381-uSuf018nij3sq35pWSNszA@public.gmane.org>
  0 siblings, 1 reply; 3+ messages in thread
From: Bernhard Glomm @ 2013-09-04 15:21 UTC (permalink / raw)
  To: ceph-users-idqoXFIVOFJgJs9I8MT0rw; +Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA


[-- Attachment #1.1: Type: text/plain, Size: 19018 bytes --]

Removing the monitor as described in
http://ceph.com/docs/master/rados/deployment/ceph-deploy-mon/
doesn't work either:

----------------
root@nuke36[/1]:/etc/ceph # ceph-deploy mon destroy nuke36
[ceph_deploy.mon][DEBUG ] Removing mon from nuke36
[ceph_deploy.mon][ERROR ] ceph-mon deamon did not stop
[ceph_deploy][ERROR ] GenericError: Failed to destroy 1 monitors
----------------

Is it that I can't destroy the mon on which I run the destroy command?
(I could destroy ... but since I just added another two monitors to my cluster,
the broken first one is alive again...)

Well, that will do as a workaround for now
(since I'm using the dev branch and only do testing stuff).
Still, I would be curious whether I could have repaired it any other way.
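
For the record, this is roughly the manual route I had in mind, pieced together from the add-or-rm-mons documentation above. I have not verified it end to end, so take the exact commands, keyring path and marker files as a sketch only:

----------------
# on a mon that is still in quorum: drop the broken mon from the monmap
ceph mon remove nuke36

# on nuke36: stop the daemon (upstart job) and throw away its store
stop ceph-mon id=nuke36
rm -rf /var/lib/ceph/mon/ceph-nuke36

# rebuild the mon from the current monmap and the mon. keyring,
# then add it back to the monmap and start it
ceph mon getmap -o /tmp/monmap
ceph-mon --mkfs -i nuke36 --monmap /tmp/monmap --keyring /etc/ceph/ceph.mon.keyring
touch /var/lib/ceph/mon/ceph-nuke36/done /var/lib/ceph/mon/ceph-nuke36/upstart
ceph mon add nuke36 192.168.242.36:6789
start ceph-mon id=nuke36
----------------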

Bernhard





-- 

Bernhard Glomm
IT Administration

Phone: +49 (30) 86880 134
Fax:   +49 (30) 86880 100
Skype: bernhard.glomm.ecologic

Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH


[-- Attachment #1.2: Type: text/html, Size: 43567 bytes --]

[-- Attachment #2: Type: text/plain, Size: 178 bytes --]

_______________________________________________
ceph-users mailing list
ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: how to remove and re-add broken mon? SOLVED
       [not found]     ` <086d868ba20ebd557ee3cf166c27295bb1c2c381-uSuf018nij3sq35pWSNszA@public.gmane.org>
@ 2013-09-04 16:05       ` Alfredo Deza
  0 siblings, 0 replies; 3+ messages in thread
From: Alfredo Deza @ 2013-09-04 16:05 UTC (permalink / raw)
  To: Bernhard Glomm; +Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel


[-- Attachment #1.1: Type: text/plain, Size: 19065 bytes --]

On Wed, Sep 4, 2013 at 11:21 AM, Bernhard Glomm
<bernhard.glomm-uSuf018nij3sq35pWSNszA@public.gmane.org>wrote:

> Removing a monitor, also like described in
> http://ceph.com/docs/master/rados/deployment/ceph-deploy-mon/
> doesn't work
>
> ----------------
> root@nuke36[/1]:/etc/ceph # ceph-deploy mon destroy nuke36
> [ceph_deploy.mon][DEBUG ] Removing mon from nuke36
> [ceph_deploy.mon][ERROR ] ceph-mon deamon did not stop
> [ceph_deploy][ERROR ] GenericError: Failed to destroy 1 monitors
> ----------------
>
> Is it that I can't destroy the mon on which I run the destroy command?
> (I could destroy ... but since I just added another two monitors to my cluster,
> the broken first one is alive again...)
>

What OS is the nuke36 host?

There are a bunch of commands that `mon destroy` will call on the remote
host, but unfortunately this is one of the commands I have not yet fixed
to use the new remote logging, so it can't tell you exactly what went wrong.

Can you try to manually run this on nuke36 and see what is going on?

    sudo ceph --cluster={cluster name, defaults to ceph} -n mon. -k {path to keyring}/keyring mon remove
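
For your cluster that would presumably look something like this (the keyring path is a guess based on the tree listing above, and I have added the mon name at the end, which `mon remove` expects):

    sudo ceph --cluster=ceph -n mon. -k /var/lib/ceph/mon/ceph-nuke36/keyring mon remove nuke36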


[-- Attachment #1.2: Type: text/html, Size: 28378 bytes --]

[-- Attachment #2: Type: text/plain, Size: 178 bytes --]

_______________________________________________
ceph-users mailing list
ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 3+ messages in thread

end of thread

Thread overview: 3+ messages
2013-09-04 10:40 how to remove and re-add broken mon? Bernhard Glomm
     [not found] ` <ee15755bb1292a720a84f58fdc80a31f7a75fd64-uSuf018nij3sq35pWSNszA@public.gmane.org>
2013-09-04 15:21   ` how to remove and re-add broken mon? SOLVED Bernhard Glomm
     [not found]     ` <086d868ba20ebd557ee3cf166c27295bb1c2c381-uSuf018nij3sq35pWSNszA@public.gmane.org>
2013-09-04 16:05       ` Alfredo Deza
