* Multiple named clusters on same nodes
From: Amon Ott @ 2012-05-23  9:00 UTC
  To: ceph-devel

Hello all!

We would like to run two independent clusters on the same cluster nodes, 
specifically one for user home directories (called homeuser) and one for 
backups (backup). The reason is that if the homeuser cephfs breaks (as it has 
done several times in our tests), we still have independent storage of the 
backups without needing separate server nodes.

So I started experimenting with the new "cluster" variable, but it does not 
seem to be well supported so far. mkcephfs does not even know about it and 
always uses "ceph" as the cluster name. Setting a value for "cluster" in the 
global section of ceph.conf (homeuser.conf, backup.conf, ...) does not work 
either: the variable is not even expanded within the same config file, it 
keeps the fixed value "ceph".
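
To make this concrete, here is roughly what we tried in the per-cluster config 
files (the mon section and paths below are only an example of our layout):

  # /etc/ceph/homeuser.conf -- backup.conf looks the same with "backup"
  [global]
      cluster = homeuser                       # ignored, $cluster stays "ceph"
      keyring = /etc/ceph/$cluster.keyring     # so this expands to ceph.keyring

  [mon.a]
      mon data = /var/lib/ceph/mon/$cluster-a  # same problem here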

My questions:
1) Has someone here ever tried such a setup?
2) Is there an official (documented) way to at least set up Ceph clusters 
with individual cluster names?
3) Are there plans to support cluster names in mkcephfs?

Amon Ott
-- 
Dr. Amon Ott
m-privacy GmbH           Tel: +49 30 24342334
Am Köllnischen Park 1    Fax: +49 30 24342336
10179 Berlin             http://www.m-privacy.de

Amtsgericht Charlottenburg, HRB 84946

Geschäftsführer:
 Dipl.-Kfm. Holger Maczkowsky,
 Roman Maczkowsky

GnuPG-Key-ID: 0x2DD3A649

* Re: Multiple named clusters on same nodes
From: Tommi Virtanen @ 2012-05-23 18:12 UTC
  To: Amon Ott; +Cc: ceph-devel

On Wed, May 23, 2012 at 2:00 AM, Amon Ott <a.ott@m-privacy.de> wrote:
> So I started experimenting with the new "cluster" variable, but it does not
> seem to be well supported so far. mkcephfs does not even know about it and
> always uses "ceph" as cluster name. Setting a value for "cluster" in global
> section of ceph.conf (homeuser.conf, backup.conf, ...) does not work, it is
> not even used in the same config file, instead it has the fixed value "ceph".

"cluster" is not meant to be set in the config file; that's too late,
the config file is read from /etc/ceph/$cluster.conf. Instead, you
pass in --cluster=foo, and that is early enough to influence what
config file is read.
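
To sketch it (cluster names taken from your mail, exact daemon options may
vary with your version):

  /etc/ceph/homeuser.conf   # read by anything started with --cluster=homeuser
  /etc/ceph/backup.conf     # read by anything started with --cluster=backup

  ceph-mon --cluster=homeuser -i a   # picks up /etc/ceph/homeuser.conf
  ceph-osd --cluster=backup -i 0     # picks up /etc/ceph/backup.conf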

The --cluster argument will be more useful with the new-style Chef
deployment. All the new daemon-running infrastructure will already
happily run osds and mons from multiple clusters, even though several
locations still have "ceph" hardcoded, purely due to time pressure.

I don't think anyone is likely to fix mkcephfs to work with it -- I'm
personally trying to get mkcephfs declared obsolete. It's
fundamentally the wrong tool; for example, it cannot expand or
reconfigure an existing cluster.


* Re: Multiple named clusters on same nodes
From: Amon Ott @ 2012-05-24  7:59 UTC
  To: Tommi Virtanen; +Cc: ceph-devel

[-- Attachment #1: Type: text/plain, Size: 1422 bytes --]

On Wednesday 23 May 2012, Tommi Virtanen wrote:
> On Wed, May 23, 2012 at 2:00 AM, Amon Ott <a.ott@m-privacy.de> wrote:
> > So I started experimenting with the new "cluster" variable, but it does
> > not seem to be well supported so far. mkcephfs does not even know about
> > it and always uses "ceph" as cluster name. Setting a value for "cluster"
> > in global section of ceph.conf (homeuser.conf, backup.conf, ...) does not
> > work, it is not even used in the same config file, instead it has the
> > fixed value "ceph".
> [...]
> I don't think anyone is likely to fix mkcephfs to work with it -- I'm
> personally trying to get mkcephfs declared obsolete. It's
> fundamentally the wrong tool; for example, it cannot expand or
> reconfigure an existing cluster.

Attached is a patch against the current git stable branch that makes mkcephfs 
work fine for me with "--cluster name". ceph-mon uses the wrong mkfs path for 
"mon data" (it builds the default from "ceph" instead of the supplied cluster 
name), so I put in a workaround.
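
With the patch applied, this is roughly how we run it for our two clusters 
(the explicit keyring path is just our local choice):

  ./mkcephfs -a --cluster homeuser -c /etc/ceph/homeuser.conf \
      -k /etc/ceph/homeuser.keyring
  ./mkcephfs -a --cluster backup    # -c defaults to /etc/ceph/backup.conf,
                                    # -k to /etc/ceph/backup.keyring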

Please have a look and consider including it, as well as fixing the mon data 
path. Thanks.

Amon Ott
-- 
Dr. Amon Ott
m-privacy GmbH           Tel: +49 30 24342334
Am Köllnischen Park 1    Fax: +49 30 24342336
10179 Berlin             http://www.m-privacy.de

Amtsgericht Charlottenburg, HRB 84946

Geschäftsführer:
 Dipl.-Kfm. Holger Maczkowsky,
 Roman Maczkowsky

GnuPG-Key-ID: 0x2DD3A649

[-- Attachment #2: mkcephfs-with-cluster-names.diff --]
[-- Type: text/x-diff, Size: 4283 bytes --]

commit fc394c63b9fd4f5fea4bc3a430f57164a96dc543
Author: Amon Ott <ao@rsbac.org>
Date:   Thu May 24 09:48:29 2012 +0200

    mkcephfs: Support "--cluster name" for cluster naming
    
    The current mkcephfs can only create clusters named "ceph".
    This patch allows specifying the cluster name and fixes some default
    paths to point to the new $cluster-based locations.
    The --conf parameter is now optional and defaults to /etc/ceph/$cluster.conf.
    
    Signed-off-by: Amon Ott <a.ott@m-privacy.de>

diff --git a/src/mkcephfs.in b/src/mkcephfs.in
index 17b6014..e1c061e 100644
--- a/src/mkcephfs.in
+++ b/src/mkcephfs.in
@@ -60,7 +60,7 @@ else
 fi
 
 usage_exit() {
-    echo "usage: $0 -a -c ceph.conf [-k adminkeyring] [--mkbtrfs]"
+    echo "usage: $0 [--cluster name] -a [-c ceph.conf] [-k adminkeyring] [--mkbtrfs]"
     echo "   to generate a new ceph cluster on all nodes; for advanced usage see man page"
     echo "   ** be careful, this WILL clobber old data; check your ceph.conf carefully **"
     exit
@@ -89,6 +89,7 @@ moreargs=""
 auto_action=0
 manual_action=0
 nocopyconf=0
+cluster="ceph"
 
 while [ $# -ge 1 ]; do
 case $1 in
@@ -141,6 +142,11 @@ case $1 in
 	    shift
 	    conf=$1
 	    ;;
+    --cluster | -C)
+	    [ -z "$2" ] && usage_exit
+	    shift
+	    cluster=$1
+	    ;;
     --numosd)
 	    [ -z "$2" ] && usage_exit
 	    shift
@@ -181,6 +187,8 @@ done
 
 [ -z "$conf" ] && [ -n "$dir" ] && conf="$dir/conf"
 
+[ -z "$conf" ] && conf="/etc/ceph/$cluster.conf"
+
 if [ $manual_action -eq 0 ]; then
     if [ $auto_action -eq 0 ]; then
         echo "You must specify an action. See man page."
@@ -245,19 +253,19 @@ if [ -n "$initdaemon" ]; then
     name="$type.$id"
     
     # create /var/run/ceph (or wherever pid file and/or admin socket live)
-    get_conf pid_file "/var/run/ceph/$name.pid" "pid file"
+    get_conf pid_file "/var/run/ceph/$type/$cluster-$id.pid" "pid file"
     rundir=`dirname $pid_file`
     if [ "$rundir" != "." ] && [ ! -d "$rundir" ]; then
 	mkdir -p $rundir
     fi
-    get_conf asok_file "/var/run/ceph/$name.asok" "admin socket"
+    get_conf asok_file "/var/run/ceph/$type/$cluster-$id.asok" "admin socket"
     rundir=`dirname $asok_file`
     if [ "$rundir" != "." ] && [ ! -d "$rundir" ]; then
 	mkdir -p $rundir
     fi
 
     if [ $type = "osd" ]; then
-	$BINDIR/ceph-osd -c $conf --monmap $dir/monmap -i $id --mkfs
+	$BINDIR/ceph-osd --cluster $cluster -c $conf --monmap $dir/monmap -i $id --mkfs
 	create_private_key
     fi
     
@@ -266,7 +274,9 @@ if [ -n "$initdaemon" ]; then
     fi
 
     if [ $type = "mon" ]; then
-	$BINDIR/ceph-mon -c $conf --mkfs -i $id --monmap $dir/monmap --osdmap $dir/osdmap -k $dir/keyring.mon
+        get_conf mondata "" "mon data"
+        test -z "$mondata" && mondata="/var/lib/ceph/mon/$cluster-$id"
+	$BINDIR/ceph-mon --cluster $cluster -c $conf --mon-data=$mondata --mkfs -i $id --monmap $dir/monmap --osdmap $dir/osdmap -k $dir/keyring.mon
     fi
     
     exit 0
@@ -442,14 +452,14 @@ if [ $allhosts -eq 1 ]; then
 
 	    if [ $nocopyconf -eq 0 ]; then
 		# also put conf at /etc/ceph/ceph.conf
-		scp -q $dir/conf $host:/etc/ceph/ceph.conf
+		scp -q $dir/conf $host:/etc/ceph/$cluster.conf
 	    fi
 	else
 	    rdir=$dir
 
 	    if [ $nocopyconf -eq 0 ]; then
 		# also put conf at /etc/ceph/ceph.conf
-		cp $dir/conf /etc/ceph/ceph.conf
+		cp $dir/conf /etc/ceph/$cluster.conf
 	    fi
 	fi
 	
@@ -486,15 +496,15 @@ if [ $allhosts -eq 1 ]; then
 	    scp -q $dir/* $host:$rdir
 
 	    if [ $nocopyconf -eq 0 ]; then
-		# also put conf at /etc/ceph/ceph.conf
-		scp -q $dir/conf $host:/etc/ceph/ceph.conf
+		# also put conf at /etc/ceph/$cluster.conf
+		scp -q $dir/conf $host:/etc/ceph/$cluster.conf
 	    fi
 	else
 	    rdir=$dir
 
 	    if [ $nocopyconf -eq 0 ]; then
 	        # also put conf at /etc/ceph/ceph.conf
-		cp $dir/conf /etc/ceph/ceph.conf
+		cp $dir/conf /etc/ceph/$cluster.conf
 	    fi
 	fi
 	
@@ -503,7 +513,7 @@ if [ $allhosts -eq 1 ]; then
 
     # admin keyring
     if [ -z "$adminkeyring" ]; then
-	get_conf adminkeyring "/etc/ceph/keyring" "keyring" global
+	get_conf adminkeyring "/etc/ceph/$cluster.keyring" "keyring" global
     fi
     echo "placing client.admin keyring in $adminkeyring"
     cp $dir/keyring.admin $adminkeyring


* Re: Multiple named clusters on same nodes
From: Amon Ott @ 2012-05-24  8:58 UTC
  To: Tommi Virtanen; +Cc: ceph-devel

[-- Attachment #1: Type: text/plain, Size: 762 bytes --]

On Thursday 24 May 2012, Amon Ott wrote:
> Attached is a patch based on current git stable that makes mkcephfs work
> fine for me with --cluster name. ceph-mon uses the wrong mkfs path for "mon
> data" (default "ceph" instead of supplied cluster name), so I put in a
> workaround.
>
> Please have a look and consider inclusion as well as fixing mon data path.
> Thanks.

And another patch for the init script to handle multiple clusters.
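
With it applied, roughly (assuming the script gets installed as 
/etc/init.d/ceph):

  /etc/init.d/ceph --cluster backup start  # start only the backup cluster
  /etc/init.d/ceph stop                    # no --cluster: walks /etc/ceph/*.conf
                                           # and stops every cluster found there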

Amon Ott
-- 
Dr. Amon Ott
m-privacy GmbH           Tel: +49 30 24342334
Am Köllnischen Park 1    Fax: +49 30 24342336
10179 Berlin             http://www.m-privacy.de

Amtsgericht Charlottenburg, HRB 84946

Geschäftsführer:
 Dipl.-Kfm. Holger Maczkowsky,
 Roman Maczkowsky

GnuPG-Key-ID: 0x2DD3A649

[-- Attachment #2: init-ceph-with-cluster-names.diff --]
[-- Type: text/x-diff, Size: 5476 bytes --]

commit d446077dc93894784348f7560ee29eaf6e3ce272
Author: Amon Ott <ao@rsbac.org>
Date:   Thu May 24 10:55:27 2012 +0200

    Make the init script init-ceph.in cluster-name aware.
    
    Add a "--cluster clustername" parameter to start/stop/etc. a specific
    cluster, with the default config file /etc/ceph/$cluster.conf.
    If no cluster name is given, walk through /etc/ceph/*.conf and try to
    start/stop/etc. them all, taking the cluster name from the conf basename.
    
    Signed-off-by: Amon Ott <a.ott@m-privacy.de>

diff --git a/src/init-ceph.in b/src/init-ceph.in
index f2702e3..6efe7f0 100644
--- a/src/init-ceph.in
+++ b/src/init-ceph.in
@@ -28,6 +28,7 @@ fi
 
 usage_exit() {
     echo "usage: $0 [options] {start|stop|restart} [mon|osd|mds]..."
+    printf "\t--cluster clustername\n"
     printf "\t-c ceph.conf\n"
     printf "\t--valgrind\trun via valgrind\n"
     printf "\t--hostname [hostname]\toverride hostname lookup\n"
@@ -36,6 +37,8 @@ usage_exit() {
 
 . $LIBDIR/ceph_common.sh
 
+conf=""
+
 EXIT_STATUS=0
 
 signal_daemon() {
@@ -45,7 +48,7 @@ signal_daemon() {
     signal=$4
     action=$5
     [ -z "$action" ] && action="Stopping"
-    echo -n "$action Ceph $name on $host..."
+    echo -n "$action Ceph $cluster $name on $host..."
     do_cmd "if [ -e $pidfile ]; then
         pid=`cat $pidfile`
         if [ -e /proc/\$pid ] && grep -q $daemon /proc/\$pid/cmdline ; then
@@ -75,7 +78,7 @@ stop_daemon() {
     signal=$4
     action=$5
     [ -z "$action" ] && action="Stopping"
-    echo -n "$action Ceph $name on $host..."
+    echo -n "$action Ceph $cluster $name on $host..."
     do_cmd "while [ 1 ]; do 
 	[ -e $pidfile ] || break
 	pid=\`cat $pidfile\`
@@ -103,6 +106,7 @@ monaddr=
 dobtrfs=1
 dobtrfsumount=0
 verbose=0
+cluster=""
 
 while echo $1 | grep -q '^-'; do     # FIXME: why not '^-'?
 case $1 in
@@ -151,6 +155,12 @@ case $1 in
 	    shift
 	    hostname=$1
             ;;
+    --cluster )
+	    [ -z "$2" ] && usage_exit
+	    options="$options $1"
+	    shift
+	    cluster=$1
+            ;;
     *)
 	    echo unrecognized option \'$1\'
 	    usage_exit
@@ -160,11 +170,25 @@ options="$options $1"
 shift
 done
 
-verify_conf
-
 command=$1
 [ -n "$*" ] && shift
 
+if test -z "$cluster"
+then
+    for c in /etc/ceph/*.conf
+    do
+        test -f $c && $0 --cluster "$(basename $c .conf)" "$command" "$@"
+    done
+    exit 0
+fi
+
+if test -z "$conf"
+then
+    conf="/etc/ceph/$cluster.conf"
+fi
+
+verify_conf
+
 get_name_list "$@"
 
 for name in $what; do
@@ -176,9 +200,9 @@ for name in $what; do
     check_host || continue
 
     binary="$BINDIR/ceph-$type"
-    cmd="$binary -i $id"
+    cmd="$binary --cluster $cluster -i $id"
 
-    get_conf pid_file "$RUN_DIR/$type.$id.pid" "pid file"
+    get_conf pid_file "$RUN_DIR/$type/$cluster-$id.pid" "pid file"
     if [ -n "$pid_file" ]; then
 	do_cmd "mkdir -p "`dirname $pid_file`
 	cmd="$cmd --pid-file $pid_file"
@@ -191,13 +215,13 @@ for name in $what; do
         get_conf auto_start "" "auto start"
         if [ "$auto_start" = "no" ] || [ "$auto_start" = "false" ] || [ "$auto_start" = "0" ]; then
             if [ -z "$@" ]; then
-                echo "Skipping Ceph $name on $host... auto start is disabled"
+                echo "Skipping Ceph $cluster $name on $host... auto start is disabled"
                 continue
             fi
         fi
 
 	if daemon_is_running $name ceph-$type $id $pid_file; then
-	    echo "Starting Ceph $name on $host...already running"
+	    echo "Starting Ceph $cluster $name on $host...already running"
 	    continue
 	fi
 
@@ -228,7 +252,7 @@ for name in $what; do
     fi
 
     # do lockfile, if RH
-    get_conf lockfile "/var/lock/subsys/ceph" "lock file"
+    get_conf lockfile "/var/lock/subsys/ceph/$cluster" "lock file"
     lockdir=`dirname $lockfile`
     if [ ! -d "$lockdir" ]; then
 	lockfile=""
@@ -270,7 +294,7 @@ for name in $what; do
 		echo Mounting Btrfs on $host:$btrfs_path
 		do_root_cmd "modprobe btrfs ; btrfs device scan || btrfsctl -a ; egrep -q '^[^ ]+ $btrfs_path' /proc/mounts || mount -t btrfs $btrfs_opt $first_dev $btrfs_path"
 	    fi
-	    echo Starting Ceph $name on $host...
+	    echo Starting Ceph $cluster $name on $host...
 	    mkdir -p $RUN_DIR
 	    get_conf pre_start_eval "" "pre start eval"
 	    [ -n "$pre_start_eval" ] && $pre_start_eval
@@ -297,14 +321,14 @@ for name in $what; do
 
 	status)
 	    if daemon_is_running $name ceph-$type $id $pid_file; then
-                echo "$name: running..."
+                echo "$cluster $name: running..."
             elif [ -e "$pid_file" ]; then
                 # daemon is dead, but pid file still exists
-                echo "$name: dead."
+                echo "$cluster $name: dead."
                 EXIT_STATUS=1
             else
                 # daemon is dead, and pid file is gone
-                echo "$name: not running."
+                echo "$cluster $name: not running."
                 EXIT_STATUS=3
             fi
 	    ;;
@@ -329,7 +353,7 @@ for name in $what; do
 	    ;;
 	
 	force-reload | reload)
-	    signal_daemon $name ceph-$type $pid_file -1 "Reloading"
+	    signal_daemon $name ceph-$type $pid_file -1 "$cluster Reloading"
 	    ;;
 
 	restart)
@@ -339,7 +363,7 @@ for name in $what; do
 
 	cleanlogs)
 	    echo removing logs
-	    [ -n "$log_dir" ] && do_cmd "rm -f $log_dir/$type.$id.*"
+	    [ -n "$log_dir" ] && do_cmd "rm -f $log_dir/$cluster-$type.$id.*"
 	    ;;
 
 	cleanalllogs)


* Re: Multiple named clusters on same nodes
From: Greg Farnum @ 2012-05-29 18:54 UTC
  To: Amon Ott; +Cc: Tommi Virtanen, ceph-devel

On Thursday, May 24, 2012 at 1:58 AM, Amon Ott wrote:
> On Thursday 24 May 2012 wrote Amon Ott:
> > Attached is a patch based on current git stable that makes mkcephfs work
> > fine for me with --cluster name. ceph-mon uses the wrong mkfs path for "mon
> > data" (default "ceph" instead of supplied cluster name), so I put in a
> > workaround.
> > 
> > Please have a look and consider inclusion as well as fixing mon data path.
> > Thanks.
> 
> And another patch for the init script to handle multiple clusters.

Amon:
Thanks for the patches! Unfortunately nobody who's competent to review these (i.e., not me) has time to look into them right now, but they're in the queue for when TV or Sage gets some time. :)
-Greg



