* How to build ceph upon zfs filesystem.
@ 2012-08-26 1:39 ramu
2012-08-26 13:14 ` Mark Nelson
0 siblings, 1 reply; 7+ messages in thread
From: ramu @ 2012-08-26 1:39 UTC (permalink / raw)
To: ceph-devel
Hi all,
I want to build Ceph on the ZFS filesystem; currently I have Ceph installed
on btrfs.
Please help me build Ceph on ZFS.
Thanks,
Ramu.
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: How to build ceph upon zfs filesystem.
2012-08-26 1:39 How to build ceph upon zfs filesystem ramu
@ 2012-08-26 13:14 ` Mark Nelson
2012-08-27 4:34 ` ramu
0 siblings, 1 reply; 7+ messages in thread
From: Mark Nelson @ 2012-08-26 13:14 UTC (permalink / raw)
To: ramu; +Cc: ceph-devel
Hi Ramu,
You'll probably want to use the zfs on linux sources from here:
http://zfsonlinux.org
Testing this has been on my list but keeps getting pushed back. Someone
else on the list may have given it a try already though. Keep in mind
that Ceph won't be using any special tricks that ZFS supports as we
haven't really been targeting it. I think the xattr support for the
linux port is pretty good so hopefully that should all work fine. If
you have problems you may want to try with:
filestore xattr use omap = true
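In ceph.conf that option would sit in the [osd] section; a minimal sketch (the
section placement is an assumption on my part, not shown above):

```ini
[osd]
    ; store xattrs in the object map (leveldb) instead of
    ; filesystem xattrs, for filesystems with limited xattr support
    filestore xattr use omap = true
```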
Mark
On 08/25/2012 08:39 PM, ramu wrote:
> Hi all,
>
> I want to build Ceph on the ZFS filesystem; currently I have Ceph installed
> on btrfs.
> Please help me build Ceph on ZFS.
>
> Thanks,
> Ramu.
>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: How to build ceph upon zfs filesystem.
2012-08-26 13:14 ` Mark Nelson
@ 2012-08-27 4:34 ` ramu
2012-08-29 13:47 ` ramu eppa
0 siblings, 1 reply; 7+ messages in thread
From: ramu @ 2012-08-27 4:34 UTC (permalink / raw)
To: ceph-devel
Hi,
After installing ZFS, I created a zpool with "zpool create tank /dev/sdb2",
set the mountpoint with "zfs set mountpoint=/data/osd.1", and then ran
"zfs mount -a".
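Spelled out as a full command sequence, the steps above would look like this
(note: the quoted "zfs set" command omits the dataset name, so "tank" is
assumed here):

```shell
# create a pool named "tank" on the second partition of /dev/sdb
zpool create tank /dev/sdb2
# point the dataset's mountpoint at the OSD data directory
zfs set mountpoint=/data/osd.1 tank
# mount all ZFS datasets
zfs mount -a
```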
But when I run the mkcephfs command, I get the following error:
temp dir is /tmp/mkcephfs.Em23UGJyWd
preparing monmap in /tmp/mkcephfs.Em23UGJyWd/monmap
/usr/local/bin/monmaptool --create --clobber --add a 192.168.120.32:6789 --add b
192.168.120.32:6790 --print /tmp/mkcephfs.Em23UGJyWd/monmap
/usr/local/bin/monmaptool: monmap file /tmp/mkcephfs.Em23UGJyWd/monmap
/usr/local/bin/monmaptool: generated fsid a9d2dba0-a490-4c92-a5e1-04cce1118462
epoch 0
fsid a9d2dba0-a490-4c92-a5e1-04cce1118462
last_changed 2012-08-27 10:00:22.031670
created 2012-08-27 10:00:22.031670
0: 192.168.120.32:6789/0 mon.a
1: 192.168.120.32:6790/0 mon.b
/usr/local/bin/monmaptool: writing epoch 0 to /tmp/mkcephfs.Em23UGJyWd/monmap (2
monitors)
=== osd.0 ===
pushing conf and monmap to node32:/tmp/mkfs.ceph.5587
2012-08-27 10:00:25.408604 7f46bf574780 -1 filestore(/data/osd.0) leveldb db
created
2012-08-27 10:00:25.585815 7f46bf574780 -1 filestore(/data/osd.0) limited size
xattrs -- filestore_xattr_use_omap enabled
2012-08-27 10:00:26.207126 7f46bf574780 -1 created object store /data/osd.0
journal /data/osd.0/osd.0.journal for osd.0 fsid a9d2dba0-a490-4c92-a5e1-
04cce1118462
creating private key for osd.0 keyring /etc/ceph/keyring.osd.0
creating /etc/ceph/keyring.osd.0
collecting osd.0 key
=== osd.1 ===
pushing conf and monmap to node32:/tmp/mkfs.ceph.5587
2012-08-27 10:00:40.308853 7fb416bc2780 -1 filestore(/data/osd.1) leveldb db
created
2012-08-27 10:00:40.309546 7fb416bc2780 -1 journal FileJournal::_open: unable to
open journal: open() failed: (22) Invalid argument
2012-08-27 10:00:40.310103 7fb416bc2780 -1 OSD::mkfs: FileStore::mkfs failed
with error -22
2012-08-27 10:00:40.310291 7fb416bc2780 -1 ** ERROR: error creating empty
object store in /data/osd.1: (22) Invalid argument
failed: 'ssh root@node32 /usr/local/sbin/mkcephfs -d /tmp/mkfs.ceph.5587 --init-
daemon osd.1'
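One commonly suggested workaround when FileJournal::_open fails with (22)
Invalid argument on a filesystem without O_DIRECT support (as was the case
for ZFS on Linux at the time) is to disable direct I/O for the journal. This
is an assumption on my part, not something confirmed in this thread:

```ini
[osd]
    ; disable O_DIRECT for the journal file; filesystems without
    ; direct I/O support can make the journal open() fail with EINVAL
    journal dio = false
```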
Thanks,
Ramu.
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: How to build ceph upon zfs filesystem.
2012-08-27 4:34 ` ramu
@ 2012-08-29 13:47 ` ramu eppa
2012-08-29 13:50 ` Smart Weblications GmbH - Florian Wiessner
0 siblings, 1 reply; 7+ messages in thread
From: ramu eppa @ 2012-08-29 13:47 UTC (permalink / raw)
To: ceph-devel
Hi,
After installing ZFS, I mounted two OSDs on the ZFS filesystem and started
Ceph. After creating an image through qemu-rbd, the OSDs go down.
The error logs are:
ceph-osd.1.log:
2012-08-29 10:55:07.218136 7f6e98598780 1 journal _open
/data/osd.1/osd.1.journal fd 10: 1048576000 bytes, block size 131072 bytes,
directio = 0, aio = 0
2012-08-29 10:55:07.221183 7f6e98598780 0 filestore(/data/osd.1) mkjournal
created journal on /data/osd.1/osd.1.journal
2012-08-29 10:55:07.221312 7f6e98598780 1 filestore(/data/osd.1) mkfs done in
/data/osd.1
2012-08-29 10:55:07.384739 7f6e98598780 -1 filestore(/data/osd.1) _detect_fs
unable to create /data/osd.1/xattr_test: (28) No space left on device
2012-08-29 10:55:07.385077 7f6e98598780 -1 OSD::mkfs: couldn't mount FileStore:
error -28
2012-08-29 10:55:07.385252 7f6e98598780 -1 ^[[0;31m ** ERROR: error creating
empty object store in /data/osd.1: (28) No space left on device^[[0m
and ceph-osd.2.log:
0> 2012-08-29 10:07:19.786818 7fb5da7f4700 -1 *** Caught signal (Aborted) **
in thread 7fb5da7f4700
ceph version 0.47.2 (commit:8bf9fde89bd6ebc4b0645b2fe02dadb1c17ad372)
1: /usr/local/bin/ceph-osd() [0x6ea78a]
2: (()+0xfcb0) [0x7fb5f86bfcb0]
3: (gsignal()+0x35) [0x7fb5f6e1d445]
4: (abort()+0x17b) [0x7fb5f6e20bab]
5: (__gnu_cxx::__verbose_terminate_handler()+0x11d) [0x7fb5f776b69d]
6: (()+0xb5846) [0x7fb5f7769846]
7: (()+0xb5873) [0x7fb5f7769873]
8: (()+0xb596e) [0x7fb5f776996e]
9: (object_info_t::decode(ceph::buffer::list::iterator&)+0x544) [0x7fc064]
10: (object_info_t::object_info_t(ceph::buffer::list&)+0x16e) [0x57b27e]
11: (ReplicatedPG::get_object_context(hobject_t const&, object_locator_t
const&, bool)+0xdd) [0x54652d]
12: (ReplicatedPG::find_object_context(hobject_t const&, object_locator_t
const&, ReplicatedPG::ObjectContext**, bool, snapid_t*)+0x517) [0x547de7]
13: (ReplicatedPG::do_op(std::tr1::shared_ptr<OpRequest>)+0x753) [0x56b4c3]
14: (PG::do_request(std::tr1::shared_ptr<OpRequest>)+0x199) [0x6051c9]
15: (OSD::dequeue_op(PG*)+0x238) [0x5c73a8]
16: (ThreadPool::worker()+0x605) [0x794e35]
17: (ThreadPool::WorkThread::entry()+0xd) [0x5dd8ad]
18: (()+0x7e9a) [0x7fb5f86b7e9a]
19: (clone()+0x6d) [0x7fb5f6ed94bd]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to
interpret this.
Thanks,
Ramu.
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: How to build ceph upon zfs filesystem.
2012-08-29 13:47 ` ramu eppa
@ 2012-08-29 13:50 ` Smart Weblications GmbH - Florian Wiessner
2012-08-30 3:54 ` ramu eppa
0 siblings, 1 reply; 7+ messages in thread
From: Smart Weblications GmbH - Florian Wiessner @ 2012-08-29 13:50 UTC (permalink / raw)
To: ramu eppa; +Cc: ceph-devel
On 29.08.2012 15:47, ramu eppa wrote:
> 2012-08-29 10:55:07.384739 7f6e98598780 -1 filestore(/data/osd.1) _detect_fs
> unable to create /data/osd.1/xattr_test: (28) No space left on device
> 2012-08-29 10:55:07.385077 7f6e98598780 -1 OSD::mkfs: couldn't mount FileStore:
> error -28
> 2012-08-29 10:55:07.385252 7f6e98598780 -1 ^[[0;31m ** ERROR: error creating
> empty object store in /data/osd.1: (28) No space left on device^[[0m
no space left on device?! - zfs full?
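If the pool itself is not full, one other thing worth checking (an assumption
of mine, not raised in the thread) is how the dataset stores xattrs, since
ZFS on Linux's directory-based xattrs can fail where system-attribute-based
ones work:

```shell
# check free space on the pool and how xattrs are stored
zpool list tank
zfs get xattr,available tank
# store xattrs as system attributes instead of hidden directories
zfs set xattr=sa tank
```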
--
Best regards,
Florian Wiessner
Smart Weblications GmbH
Martinsberger Str. 1
D-95119 Naila
fon.: +49 9282 9638 200
fax.: +49 9282 9638 205
24/7: +49 900 144 000 00 - 0,99 EUR/Min*
http://www.smart-weblications.de
--
Registered office: Naila
Managing director: Florian Wiessner
Commercial register no.: HRB 3840, Amtsgericht Hof
*from a German landline; prices from mobile networks may differ
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: How to build ceph upon zfs filesystem.
2012-08-29 13:50 ` Smart Weblications GmbH - Florian Wiessner
@ 2012-08-30 3:54 ` ramu eppa
2014-03-12 18:48 ` sellers
0 siblings, 1 reply; 7+ messages in thread
From: ramu eppa @ 2012-08-30 3:54 UTC (permalink / raw)
To: ceph-devel
Hi Wiessner,
Actually I created the zpool on /dev/sdb and then set the mountpoint for the
OSD. The space shown is 144 GB, but the OSDs go down while the rbd image is
running.
Thanks,
Ramu.
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: How to build ceph upon zfs filesystem.
2012-08-30 3:54 ` ramu eppa
@ 2014-03-12 18:48 ` sellers
0 siblings, 0 replies; 7+ messages in thread
From: sellers @ 2014-03-12 18:48 UTC (permalink / raw)
To: ceph-devel
ramu eppa <ramu.freesystems <at> gmail.com> writes:
>
> Hi Wiessner,
>
> Actually I created the zpool on /dev/sdb and then set the mountpoint for the
> OSD. The space shown is 144 GB, but the OSDs go down while the rbd image is
> running.
>
> Thanks,
> Ramu.
>
>
>
I found I get a similar message if I allocate an iSCSI target device
pre-partitioned with a single primary partition using the entire device. If
I leave some space and create an extended partition, it works fine.
YMMV
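For reference, a sketch of the layout that worked for me; the device name and
partition boundaries are illustrative only, not the exact values I used:

```shell
# partition the iSCSI target, leaving headroom instead of giving
# one primary partition the whole device
parted /dev/sdX mklabel msdos
parted /dev/sdX mkpart primary 1MiB 90%
parted /dev/sdX mkpart extended 90% 100%
```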
^ permalink raw reply [flat|nested] 7+ messages in thread
end of thread, other threads:[~2014-03-12 19:00 UTC | newest]
Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-08-26 1:39 How to build ceph upon zfs filesystem ramu
2012-08-26 13:14 ` Mark Nelson
2012-08-27 4:34 ` ramu
2012-08-29 13:47 ` ramu eppa
2012-08-29 13:50 ` Smart Weblications GmbH - Florian Wiessner
2012-08-30 3:54 ` ramu eppa
2014-03-12 18:48 ` sellers
This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.