* CEPHFS mount error !!!
@ 2013-02-06  5:37 femi anjorin
  2013-02-06  6:55 ` Dan Mick
  0 siblings, 1 reply; 8+ messages in thread
From: femi anjorin @ 2013-02-06  5:37 UTC (permalink / raw)
  To: ceph-devel; +Cc: Joao Eduardo Luis, Martin B Nielsen, Ross Turk

Linux 2.6.32-279.19.1.el6.x86_64 x86_64 CentOS 6.3

Please, can somebody help? This command is not working on CentOS 6.3:

mount -t ceph 172.16.0.25:6789:/ /mnt/mycephfs -o
name=admin,secret=AQCqnw9RmBPpOBAAeJjkIgKGYvyRGlekTpUPog==

 FATAL: Module ceph not found.
 mount.ceph: modprobe failed, exit status 1
 mount error: ceph filesystem not supported by the system
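
From what I can tell, the stock CentOS 6.3 kernel (2.6.32) may simply not
ship the ceph kernel module, so modprobe has nothing to load. Is the check
below the right way to confirm that, and would ceph-fuse be a reasonable
workaround until I can move to a newer kernel? (Just a sketch; the monitor
address and paths are from my setup above.)

 # check whether the running kernel provides ceph.ko
 modinfo ceph
 grep CEPH /boot/config-$(uname -r)

 # fallback: mount via the FUSE client instead of the kernel client
 ceph-fuse -m 172.16.0.25:6789 /mnt/mycephfs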


Regards,
Femi





On Tue, Feb 5, 2013 at 1:49 PM, femi anjorin <femi.anjorin@gmail.com> wrote:
>
> Hi ...
>
> Thanks. I set --debug-ms 0. The result is HEALTH_OK, but I get an
> error when trying to set up client access to CephFS.
> ----------------------------------------------------------------------------------------
> I tried setting up another server which should act as a client:
> - I installed Ceph on it.
> - I copied the configuration file from the cluster servers to the new
> server:  /etc/ceph/ceph.conf
> - I ran:  mkdir /mnt/mycephfs
> - I copied the key from ceph.keyring and used it in the command below.
> - I tried to run this command: mount -t ceph 172.16.0.25:6789:/
> /mnt/mycephfs -o
> name=admin,secret=AQCqnw9RmBPpOBAAeJjkIgKGYvyRGlekTpUPog==
> Here is the result I got:
>
> [root@testclient]# mount -t ceph 172.16.0.25:6789:/ /mnt/mycephfs -o
> name=admin,secret=AQCqnw9RmBPpOBAAeJjkIgKGYvyRGlekTpUPog==
> FATAL: Module ceph not found.
> mount.ceph: modprobe failed, exit status 1
> mount error: ceph filesystem not supported by the system
>
> Regards,
> Femi.
>
> On Mon, Feb 4, 2013 at 3:27 PM, Joao Eduardo Luis <joao.luis@inktank.com> wrote:
> > This wasn't obvious amid all the debug output, but here's why
> > 'ceph health' wasn't replying with HEALTH_OK:
> >
> >
> > On 02/04/2013 12:21 PM, femi anjorin wrote:
> >>
> >> 2013-02-04 12:56:15.818985 7f149bfff700  1 HEALTH_WARN 4987 pgs
> >> peering; 4987 pgs stuck inactive; 5109 pgs stuck unclean
> >
> >
> > Furthermore, in your other email in which you ran 'ceph health detail', this
> > appears to have gone away, as it is replying with HEALTH_OK again.
> >
> > You might want to set '--debug-ms 0' when you run 'ceph', or set it in your
> > ceph.conf, leaving it at a higher level only for daemons (i.e., under [mds],
> > [mon], [osd]...).  The resulting output will be clearer and more easily
> > understandable.
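> >
> > For example, something along these lines in ceph.conf (just a sketch;
> > pick the sections and levels you actually need):
> >
> >   [global]
> >       debug ms = 0
> >
> >   [osd]
> >       debug ms = 1
> >
> >   [mon]
> >       debug ms = 1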
> >
> >   -Joao
> >
