From: Sage Weil <sage@newdream.net>
To: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Sage Weil <sage@newdream.net>
Subject: [PATCH 01/20] ceph: documentation
Date: Wed, 15 Jul 2009 14:24:31 -0700
Message-ID: <1247693090-27796-2-git-send-email-sage@newdream.net>
In-Reply-To: <1247693090-27796-1-git-send-email-sage@newdream.net>

Mount options, syntax.

Signed-off-by: Sage Weil <sage@newdream.net>
---
 Documentation/filesystems/ceph.txt |  181 ++++++++++++++++++++++++++++++++++++
 1 files changed, 181 insertions(+), 0 deletions(-)
 create mode 100644 Documentation/filesystems/ceph.txt

diff --git a/Documentation/filesystems/ceph.txt b/Documentation/filesystems/ceph.txt
new file mode 100644
index 0000000..26b014e
--- /dev/null
+++ b/Documentation/filesystems/ceph.txt
@@ -0,0 +1,181 @@
+Ceph Distributed File System
+============================
+
+Ceph is a distributed network file system designed to provide good
+performance, reliability, and scalability.
+
+Basic features include:
+
+ * POSIX semantics
+ * Seamless scaling from 1 to many thousands of nodes
+ * High availability and reliability.  No single point of failure.
+ * N-way replication of data across storage nodes
+ * Fast recovery from node failures
+ * Automatic rebalancing of data on node addition/removal
+ * Easy deployment: most FS components are userspace daemons
+
+Also,
+ * Flexible snapshots (on any directory)
+ * Recursive accounting (nested files, directories, bytes)
+
+In contrast to cluster filesystems like GFS, OCFS2, and GPFS that rely
+on symmetric access by all clients to shared block devices, Ceph
+separates data and metadata management into independent server
+clusters, similar to Lustre.  Unlike Lustre, however, metadata and
+storage nodes run entirely as user space daemons.  Storage nodes
+utilize btrfs to store data objects, leveraging its advanced features
+(checksumming, metadata replication, etc.).  File data is striped
+across storage nodes in large chunks to distribute workload and
+facilitate high throughput.  When storage nodes fail, data is
+re-replicated in a distributed fashion by the storage nodes themselves
+(with some minimal coordination from a cluster monitor), making the
+system extremely efficient and scalable.
+
+Metadata servers effectively form a large, consistent, distributed
+in-memory cache above the file namespace that is extremely scalable,
+dynamically redistributes metadata in response to workload changes,
+and can tolerate arbitrary (well, non-Byzantine) node failures.  The
+metadata server takes a somewhat unconventional approach to metadata
+storage to significantly improve performance for common workloads.  In
+particular, inodes with only a single link are embedded in
+directories, allowing entire directories of dentries and inodes to be
+loaded into the metadata server's cache with a single I/O operation.
+The contents of extremely large directories can be fragmented and
+managed by independent metadata servers, allowing scalable concurrent
+access.
+
+The system offers automatic data rebalancing/migration when scaling
+from a small cluster of just a few nodes to many hundreds, without
+requiring an administrator to carve the data set into static volumes
+or go through the tedious process of migrating data between servers.
+When the file system approaches full capacity, new nodes can be
+easily added and things will "just work."
+
+Ceph includes a flexible snapshot mechanism that allows a user to create
+a snapshot on any subdirectory (and its nested contents) in the
+system.  Snapshot creation and deletion are as simple as 'mkdir
+.snap/foo' and 'rmdir .snap/foo'.
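+
+For example, assuming the file system is mounted at /mnt/ceph (the
+directory and snapshot names below are only illustrative):
+
+ # cd /mnt/ceph/mydir
+ # mkdir .snap/my_snapshot      # create a snapshot of mydir
+ # ls .snap                     # list mydir's snapshots
+ # rmdir .snap/my_snapshot      # remove the snapshot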
+
+Ceph also provides some recursive accounting on directories for nested
+files and bytes.  That is, a 'getfattr -d foo' on any directory in the
+system will reveal the total number of nested regular files and
+subdirectories, and a summation of all nested file sizes.  This makes
+the identification of large disk space consumers relatively quick, as
+no 'du' or similar recursive scan of the file system is required.
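+
+For example (the attribute names shown here are illustrative and may
+differ; 'getfattr -d' will list whatever your version exposes):
+
+ # getfattr -d /mnt/ceph/projects
+ # file: mnt/ceph/projects
+ user.ceph.rfiles="1024"        <- nested regular files
+ user.ceph.rsubdirs="32"        <- nested subdirectories
+ user.ceph.rbytes="104857600"   <- total size of nested files, in bytes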
+
+
+Mount Syntax
+============
+
+The basic mount syntax is:
+
+ # mount -t ceph monip[:port][,monip2[:port]...]:/[subdir] mnt
+
+You only need to specify a single monitor, as the client will get the
+full list when it connects.  (However, if the monitor you specify
+happens to be down, the mount won't succeed.)  The port can be left
+off if the monitor is using the default.  So if the monitor is at
+1.2.3.4,
+
+ # mount -t ceph 1.2.3.4:/ /mnt/ceph
+
+is sufficient.  If /sbin/mount.ceph is installed, a hostname can be
+used instead of an IP address.
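+
+Per the syntax above, more than one monitor can be listed (improving
+availability if one happens to be down at mount time), and a
+subdirectory of the file system can be mounted in place of the root:
+
+ # mount -t ceph 1.2.3.4,1.2.3.5:/home /mnt/ceph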
+
+
+
+Mount Options
+=============
+
+  ip=A.B.C.D[:N]
+  port=N
+	Specify the IP and/or port the client should bind to locally.
+	There is normally not much reason to do this.  If the IP is not
+	specified, the client's IP address is determined by looking at the
+	address its connection to the monitor originates from.
+
+  wsize=X
+	Specify the maximum write size in bytes.  By default there is no
+	maximum.  Ceph will normally size writes based on the file stripe
+	size.
+
+  rsize=X
+	Specify the maximum readahead size, in bytes.
+
+  mount_timeout=X
+	Specify the timeout value for mount (in seconds), in the case
+	of a non-responsive Ceph file system.  The default is 30
+	seconds.
+
+  rbytes
+	When stat() is called on a directory, set st_size to 'rbytes',
+	the summation of file sizes over all files nested beneath that
+	directory.  This is the default.
+
+  norbytes
+	When stat() is called on a directory, set st_size to the
+	number of entries in that directory.
+
+  nocrc
+	Disable CRC32C calculation for data writes.  If set, the OSD
+	must rely on TCP's checksum to detect corruption in the data
+	payload.
+
+  noasyncreaddir
+	Disable the client's use of its local cache to satisfy readdir
+	requests.  (This does not change correctness; the client uses
+	cached metadata only when a lease or capability ensures it is
+	valid.)
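+
+For example, to combine several of the options above in a single
+mount:
+
+ # mount -t ceph 1.2.3.4:/ /mnt/ceph -o norbytes,mount_timeout=60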
+
+
+Debugging options (these are also changeable via debugfs):
+
+  debug=N
+	Specify the level of debug output for the Ceph client.  Larger
+	values mean more output, and range from 0 to 50.  The default
+	is 1 (high-level informational messages only).
+
+  debug_console=N
+	If non-zero, debug messages will be printk'ed with KERN_ERR,
+	causing them to appear on the system console.  Otherwise,
+	messages will be printed with KERN_DEBUG and will appear in
+	the system log.
+
+  debug_msgr=N
+	Debug level for the messaging/communications layer, if >= 0.
+	Default is -1.
+
+  debug_mdsc=N
+	Debug level for the MDS client, if >= 0.
+
+  debug_osdc=N
+	Debug level for the OSD client, if >= 0.
+
+  debug_addr=N
+	Debug level for address space operations, if >= 0.
+
+  debug_file=N
+	Debug level for file operations, if >= 0.
+
+  debug_inode=N
+	Debug level for inode operations, if >= 0.
+
+  debug_caps=N
+	Debug level for file capability operations, if >= 0.
+
+  debug_snap=N
+	Debug level for snapshot operations, if >= 0.
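+
+For example, to enable verbose output from the client as a whole and
+from the MDS client specifically at mount time:
+
+ # mount -t ceph 1.2.3.4:/ /mnt/ceph -o debug=20,debug_mdsc=10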
+
+
+
+
+More Information
+================
+
+For more information on Ceph, see the home page at
+	http://ceph.newdream.net/
+
+The Linux kernel client source tree is available at
+	git://ceph.newdream.net/linux-ceph-client.git
+
+and the source for the full system is at
+	git://ceph.newdream.net/ceph.git
-- 
1.5.6.5

