* Problem with Ceph installation inside Amazon EC2
From: Christian Baun @ 2010-04-20 13:38 UTC
  To: ceph-devel

Hi,

I want to test Ceph inside Amazon EC2. To that end, I compiled and installed ceph-0.19.1.tar.gz and ceph-kclient-0.19.1.tar.gz.

Two Ubuntu instances are running inside EC2: one server with cmon, cmds,
and cosd, and one client.

# cat /etc/ceph/ceph.conf
[global]
       pid file = /var/run/ceph/$name.pid
       debug ms = 1

[mon]
       mon data = /srv/ceph/mon

[mon0]
       host = ceph01
       mon addr = 10.194.145.203:6789

[mds]

[mds0]
       host = ceph01

[osd]
       sudo = true
       osd data = /srv/ceph/osd

[osd1]
       host = ceph01
       btrfs devs = /dev/sdc


# cat /etc/hosts
127.0.0.1 localhost
10.194.145.203   ceph01


# mkcephfs -c /etc/ceph/ceph.conf --allhosts --mkbtrfs -k ~/admin.keyring
/usr/local/bin/monmaptool --create --clobber --add 10.194.145.203:6789 --print /tmp/monmap.9681
/usr/local/bin/monmaptool: monmap file /tmp/monmap.9681
/usr/local/bin/monmaptool: generated fsid 7a008165-58df-733f-deca-90faecaef60a
epoch 1
fsid 7a008165-58df-733f-deca-90faecaef60a
last_changed 10.04.20 13:34:52.518583
created 10.04.20 13:34:52.518583
	mon0 10.194.145.203:6789/0
/usr/local/bin/monmaptool: writing epoch 1 to /tmp/monmap.9681 (1 monitors)
max osd in /etc/ceph/ceph.conf is 1, num osd is 2
/usr/local/bin/osdmaptool: osdmap file '/tmp/osdmap.9681'
/usr/local/bin/osdmaptool: writing epoch 1 to /tmp/osdmap.9681
Building admin keyring at /root/admin.keyring
creating /root/admin.keyring
Building monitor keyring with all service keys
creating /tmp/monkeyring.9681
importing contents of /root/admin.keyring into /tmp/monkeyring.9681
creating /tmp/keyring.mds.0
importing contents of /tmp/keyring.mds.0 into /tmp/monkeyring.9681
creating /tmp/keyring.osd.1
importing contents of /tmp/keyring.osd.1 into /tmp/monkeyring.9681
=== mon0 === 
10.04.20 13:34:54.805739 store(/srv/ceph/mon) mkfs
10.04.20 13:34:54.805853 store(/srv/ceph/mon) test -d /srv/ceph/mon && /bin/rm -rf /srv/ceph/mon ; mkdir -p /srv/ceph/mon
10.04.20 13:34:54.831017 mon0(starting).class v0 create_initial -- creating initial map
10.04.20 13:34:54.832277 mon0(starting).auth v0 create_initial -- creating initial map
10.04.20 13:34:54.832308 mon0(starting).auth v0 reading initial keyring 
/usr/local/bin/mkmonfs: created monfs at /srv/ceph/mon for mon0
admin.keyring                                    100%  119     0.1KB/s   00:00    
=== mds0 === 
WARNING: no keyring specified for mds0
=== osd1 === 
umount: /srv/ceph/osd: not mounted
umount: /dev/sdc: not mounted

WARNING! - Btrfs Btrfs v0.19 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

fs created label (null) on /dev/sdc
	nodesize 4096 leafsize 4096 sectorsize 4096 size 419.97GB
Btrfs Btrfs v0.19
Scanning for Btrfs filesystems
monmap.9681                                      100%  187     0.2KB/s   00:00    
 ** WARNING: Ceph is still under heavy development, and is only suitable for **
 **          testing and review.  Do not trust it with important data.       **
created object store for osd1 fsid 7a008165-58df-733f-deca-90faecaef60a on /srv/ceph/osd
WARNING: no keyring specified for osd1



# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1             10403128   4038372   5945328  41% /
none                   3804316       112   3804204   1% /dev
none                   3941904         0   3941904   0% /dev/shm
none                   3941904        60   3941844   1% /var/run
none                   3941904         0   3941904   0% /var/lock
none                   3941904         0   3941904   0% /lib/init/rw
/dev/sdb             433455904    203016 411234584   1% /mnt
/dev/sdc             440366080        32 440366048   1% /srv/ceph/osd


# mount | grep ceph
/dev/sdc on /srv/ceph/osd type btrfs (rw,flushoncommit)


At the server:
# ./ceph start
10.04.20 13:35:39.757563 7f6485c32720 -- :/10726 messenger.start
10.04.20 13:35:39.757808 7f6485c32720 -- :/10726 --> mon0 10.194.145.203:6789/0 -- auth(proto 0 30 bytes) -- ?+0 0x1e46910
10.04.20 13:35:39.757990 7f6482eb3710 -- :/10726 >> 10.194.145.203:6789/0 pipe(0x1e426a0 sd=-1 pgs=0 cs=0 l=0).fault first fault


At the client:
# mount -t ceph 10.194.145.203:/ /mnt/ceph/
mount: 10.194.145.203:/: can't read superblock


What can I do at this point?

Thanks in advance
    Christian 



* Re: Problem with Ceph installation inside Amazon EC2
From: Wido den Hollander @ 2010-04-20 14:21 UTC
  To: Christian Baun; +Cc: ceph-devel

Hi,

What does "ceph -w" show?

Do you get any "xauth" messages in your mon0 log files?
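
You could also check whether anything is listening on the monitor port at
all; a plain net-tools check (nothing Ceph-specific) would be:

# netstat -tlnp | grep 6789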

-- 
Kind regards,

Wido den Hollander
Head of System Administration / CSO
Phone support Netherlands: 0900 9633 (45 ct/min)
Phone support Belgium: 0900 70312 (45 ct/min)
Direct line: (+31) (0)20 50 60 104
Fax: +31 (0)20 50 60 111
E-mail: support@pcextreme.nl
Website: http://www.pcextreme.nl
Knowledge base: http://support.pcextreme.nl/
Network status: http://nmc.pcextreme.nl


On Tue, 2010-04-20 at 13:38 +0000, Christian Baun wrote:
> [...]



* Re: Problem with Ceph installation inside Amazon EC2
From: Christian Baun @ 2010-04-20 15:18 UTC
  To: wido; +Cc: ceph-devel

Hi,

I have only these log files in /var/log/ceph:

# ls -l /var/log/ceph/
total 4
-rw-r--r-- 1 root root 940 2010-04-20 15:12 ip-10-194-145-203.12134
lrwxrwxrwx 1 root root  23 2010-04-20 15:12 osd1 -> ip-10-194-145-203.12134

# cat ip-10-194-145-203.12134 
10.04.20 15:12:06.702434 --- opened log /var/log/ceph/ip-10-194-145-203.12134 ---
ceph version 0.19.1 (7d7f925bcd03630778e36fe633efaf92768925c7)
10.04.20 15:12:06.702550 ---- renamed symlink /var/log/ceph/osd1 -> /var/log/ceph/osd1.0 ----
10.04.20 15:12:06.702573 ---- created symlink /var/log/ceph/osd1 -> ip-10-194-145-203.12134 ----
10.04.20 15:12:06.702674 7fd59e775720 -- :/12134 messenger.start
10.04.20 15:12:06.702726 7fd59e775720 -- :/12134 shutdown complete.
10.04.20 15:12:06.702788 7fd59e775720 filestore(/srv/ceph/osd) mkfs in /srv/ceph/osd
10.04.20 15:12:06.717191 7fd59e775720 filestore(/srv/ceph/osd) mkfs done in /srv/ceph/osd
10.04.20 15:12:06.717467 7fd59e775720 filestore(/srv/ceph/osd) mount btrfs TRANS_START ioctl is supported
10.04.20 15:12:06.717481 7fd59e775720 filestore(/srv/ceph/osd) mount btrfs CLONE_RANGE ioctl is supported
10.04.20 15:12:06.717488 7fd59e775720 filestore(/srv/ceph/osd) mount detected btrfs

There is no mon0 log file.

"ceph -w" shows the same fault as "ceph start":

# ceph -w
10.04.20 15:14:31.755364 7fc9f7329720 -- :/12154 messenger.start
10.04.20 15:14:31.755598 7fc9f7329720 -- :/12154 --> mon0 10.194.145.203:6789/0 -- auth(proto 0 30 bytes) -- ?+0 0xc99de0
10.04.20 15:14:31.755777 7fc9f45aa710 -- :/12154 >> 10.194.145.203:6789/0 pipe(0xc9d2f0 sd=-1 pgs=0 cs=0 l=0).fault first fault

# mount -t ceph 10.194.145.203:/ /mnt/ceph/
mount: 10.194.145.203:/: can't read superblock

Best Regards,
   Christian 




On Tuesday, 20 April 2010, Wido den Hollander wrote:
> Hi,
> 
> What does "ceph -w" show?
> 
> Do you get any "xauth" messages in your mon0 log files?
> 




* Re: Problem with Ceph installation inside Amazon EC2
From: Wido den Hollander @ 2010-04-20 15:41 UTC
  To: Christian Baun; +Cc: ceph-devel

Hi,

Do you have the following processes running?

- cmon
- cmds
- cosd
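
A quick way to check is plain ps and grep (nothing Ceph-specific):

# ps ax | grep -E 'cmon|cmds|cosd'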

And you are trying to mount the Ceph filesystem on the same machine it
is running on, aren't you?

I also ran into some trouble mounting a Ceph filesystem locally, so I
switched to a dedicated client.

-- 
Kind regards,

Wido den Hollander
Head of System Administration / CSO
Phone support Netherlands: 0900 9633 (45 ct/min)
Phone support Belgium: 0900 70312 (45 ct/min)
Direct line: (+31) (0)20 50 60 104
Fax: +31 (0)20 50 60 111
E-mail: support@pcextreme.nl
Website: http://www.pcextreme.nl
Knowledge base: http://support.pcextreme.nl/
Network status: http://nmc.pcextreme.nl


On Tue, 2010-04-20 at 15:18 +0000, Christian Baun wrote:
> [...]



* Re: Problem with Ceph installation inside Amazon EC2
From: Christian Baun @ 2010-04-20 16:13 UTC
  To: wido; +Cc: ceph-devel

Hi,

I think it is working now.

I had to do these steps:
cp src/init-ceph /etc/init.d/ceph
chmod a+x /etc/init.d/ceph
chmod u+w /etc/init.d/ceph
/etc/init.d/ceph -c /etc/ceph/ceph.conf start
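
Presumably the daemons can be stopped the same way, if the script follows
the usual init start/stop convention:

# /etc/init.d/ceph -c /etc/ceph/ceph.conf stop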

Now the server processes are running:

# ps -aux | grep ceph
Warning: bad ps syntax, perhaps a bogus '-'? See http://procps.sf.net/faq.html
root      1948  0.0  0.0      0     0 ?        S    11:28   0:00 [ceph-msgr/0]
root      1949  0.0  0.0      0     0 ?        S    11:28   0:00 [ceph-msgr/1]
root     12467  0.0  0.0 179812  3452 ?        Ssl  15:58   0:00 /usr/local/bin/cmon -i 0 -c /etc/ceph/ceph.conf
root     12495  0.0  0.0 189532  3792 ?        Ssl  15:58   0:00 /usr/local/bin/cmds -i 0 -c /etc/ceph/ceph.conf
root     12545  0.0  0.2 247480 21676 ?        Ssl  15:58   0:00 /usr/local/bin/cosd -i 1 -c /etc/ceph/ceph.conf
root     12597  0.0  0.0   7616   920 pts/1    R+   16:01   0:00 grep --color=auto ceph

The mount command on the client machine was successful too:

# mount | grep ceph
10.194.145.203:/ on /mnt/ceph type ceph (rw)



A small performance test inside EC2:

I created two instances, one of which has an EBS volume attached; the EBS
volume is the storage for the Ceph OSD. Both instances (m1.large) are
inside EC2 us-east-1d.

Create the test data:

# dd if=/dev/zero of=/tmp/testfile bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 7.24084 s, 141 MB/s

# cat test.bat 
cp /tmp/testfile /mnt/ceph
sync
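
(The sync forces the written data out of the page cache, so the elapsed
time below includes the actual writeback over the Ceph mount, not just
the buffered copy.)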

Copy the test data from the client to the server:

# time ./test.bat 
real	0m29.211s
user	0m0.020s
sys	0m1.490s
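
That works out to roughly 1,024,000,000 bytes / 29.2 s ≈ 35 MB/s through
the Ceph mount, versus the 141 MB/s dd reported when writing the same
file to local /tmp.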

Thanks a lot for the help.

Best Regards,
   Christian 


On Tuesday, 20 April 2010, Wido den Hollander wrote:
> Hi,
> 
> Do you have the following processes running?
> 
> - cmon
> - cmds
> - cosd
> 
> And you are trying to mount the Ceph filesystem on the same machine it
> is running on, aren't you?
> 
> I also ran into some trouble mounting a Ceph filesystem locally, so I
> switched to a dedicated client.
> 



