* some questions about ceph deployment
@ 2010-09-04 13:45 FWDF
  2010-09-17 20:30 ` Sage Weil
  0 siblings, 1 reply; 7+ messages in thread
From: FWDF @ 2010-09-04 13:45 UTC (permalink / raw)
  To: ceph-devel

  We used 3 servers to build a Ceph test system, configured as below:
  
  Host                          IP      
  client01            192.168.1.10   
  ceph01              192.168.2.50
  ceph02              192.168.2.51   
  
  The OS is Ubuntu 10.04 LTS and the version of Ceph is v0.21.1.
  
  ceph.conf:
  [global]
          auth supported = cephx
          pid file = /var/run/ceph/$name.pid
          debug ms = 0
          keyring = /etc/ceph/keyring.bin
  [mon]
          mon data = /mnt/ceph/data/mon$id
          debug ms = 1
  [mon0]
          host = ceph01
          mon addr = 192.168.2.50:6789
  [mds]
          keyring = /etc/ceph/keyring.$name
          debug ms = 1
  [mds.ceph01]
          host = ceph01
  [mds.ceph02]
          host = ceph02
  [osd]
          sudo = true
          osd data = /mnt/ceph/osd$id/data
          keyring = /etc/ceph/keyring.$name
          osd journal = /mnt/ceph/osd$id/data/journal
          osd journal size = 100
  [osd0]
          host = ceph01
  [osd1]
          host = ceph01
  [osd2]
          host = ceph01
  [osd3]
          host = ceph01
  [osd10]
          host = ceph02
  
  There are 4 HDDs in ceph01 and each HDD holds one OSD, named osd0, osd1, osd2, and osd3; there is 1 HDD in ceph02, which holds osd10. All of these HDDs are formatted as btrfs and mounted on the mount points listed below (the preparation commands are sketched after the list):
  
  ceph01
           /dev/sdc1         /mnt/ceph/osd0/data               btrfs
           /dev/sdd1         /mnt/ceph/osd1/data               btrfs
           /dev/sde1         /mnt/ceph/osd2/data               btrfs
           /dev/sdf1         /mnt/ceph/osd3/data               btrfs
  
  ceph02
           /dev/sdb1         /mnt/ceph/osd10/data             btrfs
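
  For reference, each OSD disk was prepared roughly like this (a sketch only, using osd0 on ceph01 as the example; mkfs.btrfs options were left at their defaults):

  root@ceph01:~#  mkfs.btrfs /dev/sdc1
  root@ceph01:~#  mkdir -p /mnt/ceph/osd0/data
  root@ceph01:~#  mount -t btrfs /dev/sdc1 /mnt/ceph/osd0/data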
  
  Create the Ceph file system:
  root@ceph01:~#  mkcephfs  -c /etc/ceph/ceph.conf -a -k /etc/ceph/keyring.bin
  
  Start Ceph:
  root@ceph01:~#  /etc/init.d/ceph -a  start

         Then
  root@ceph01:~#  ceph -w
  10.09.01_17:56:19.337895   mds e17: 1/1/1 up {0=up:active}, 1 up:standby
  10.09.01_17:56:19.347184   osd e27: 5 osds: 5 up, 5 in
  10.09.01_17:56:19.349447     log … 
  10.09.01_17:56:19.373773   mon e1: 1 mons at 192.168.2.50:6789/0
  
  The Ceph file system is mounted on client01 (192.168.1.10), ceph01 (192.168.2.50), and ceph02 (192.168.2.51) at /data/ceph (the mount command is sketched below). It works fine at the beginning: ls works and files can be written and read. But after some files have been written, I find I cannot run ls -l /data/ceph until I unmount Ceph from ceph02. One day later the same problem occurred again; after I unmounted Ceph from ceph01 as well, everything was OK again.
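
  For reference, the mount on each node was done roughly like this (a sketch only; since cephx is enabled, the admin secret from /etc/ceph/keyring.bin has to be passed, and the exact mount options may differ for this kernel client version):

  root@client01:~#  mkdir -p /data/ceph
  root@client01:~#  mount -t ceph 192.168.2.50:6789:/ /data/ceph -o name=admin,secret=<admin key>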

  Q1:
         Can the Ceph file system be mounted on a member of the Ceph cluster?

         When I followed the instructions at http://ceph.newdream.net/wiki/Monitor_cluster_expansion to add a monitor on ceph02, the following error occurred:
  
  root@ceph02:~#  /etc/init.d/ceph start mon1
  [/etc/ceph/fetch_config/tmp/fetched.ceph.conf.14210] ceph.conf 100%  2565  2.5KB/s  00:00 
  === mon.1 ===
  Starting Ceph mon1 on ceph02...
   ** WARNING: Ceph is still under heavy development, and is only suitable for **
   ** testing and review.  Do not trust it with important data.  **
  terminate called after throwing an instance of 'std::logic_error'
    what():  basic_string::_S_construct NULL not valid
  Aborted (core dumped)
  failed: ' /usr/bin/cmon -i 1 -c /tmp/fetched.ceph.conf.14210 '
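
  For reference, the [mon1] section added to ceph.conf for this attempt looked roughly like this (a sketch reconstructed from the setup above; the mon addr is assumed to be ceph02's address):

  [mon1]
          host = ceph02
          mon addr = 192.168.2.51:6789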

  Q2:
  How do I add a monitor to a running Ceph system?
   
  Q3:
      Is it possible to add an MDS while the Ceph system is running? How?
  
  I used fdisk to split an HDD into two partitions, one for the journal and one for data, like this:
  /dev/sdc1 (180 GB) as data
  /dev/sdc2 (10 GB) as journal
  
  /dev/sdc1 was formatted as btrfs and mounted at /mnt/osd0/data
  /dev/sdc2 was formatted as btrfs and mounted at /mnt/osd0/journal
  
  ceph.conf:
  …
  [osd]
          osd data = /mnt/ceph/osd$id/data
          osd journal = /mnt/ceph/osd$id/journal
          ; osd journal size = 100
  …
  When I ran the mkcephfs command, I could not build the OSDs until I edited ceph.conf like this:
  
  [osd]
          osd data = /mnt/ceph/osd$id/data
          osd journal = /mnt/ceph/osd$id/data/journal
          osd journal size = 100
  …
  
  Q4:
    How do I set the journal path to a device or partition?
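
  What I would like is something roughly like the following (only a sketch of my intent; I do not know whether cosd accepts a block device as the journal path, in which case the partition would presumably need no file system, mount point, or osd journal size setting):

  [osd0]
          host = ceph01
          osd journal = /dev/sdc2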
  
  Thanks for any help and replies, and sorry for my English.
  
  Lin


