* [PATCH 00/20] ceph distributed file system client
@ 2009-09-08 22:56 Sage Weil
  2009-09-08 22:56 ` [PATCH 01/20] ceph: documentation Sage Weil
  2009-09-08 23:10 ` [PATCH 00/20] ceph distributed file system client Daniel Walker
  0 siblings, 2 replies; 27+ messages in thread
From: Sage Weil @ 2009-09-08 22:56 UTC (permalink / raw)
  To: linux-fsdevel, linux-kernel; +Cc: Sage Weil

Hi,

This is v0.14 of the Ceph distributed file system client.  Changes since
v0.12 (the last release posted) include:

 - refactored, simplified network message library
   - now strictly client/server
   - simplified callback vector
   - fewer memory allocations
 - improved client/monitor protocol
 - fixed EOF vs (short) read behavior with multi-client sharing
 - cleanup, refactoring in osd reply code
 - bug fixes

The biggest change is in the message library.  A whole class of
potential memory allocations has been removed, the groundwork has
been laid to reserve memory for incoming messages, and a lot of code
was removed in the process.

This is mainly motivated by the desire to eliminate any memory
allocations during writeback; we're pretty close to having that
resolved.  If there are other areas of concern (general or specific)
with this patchset, please speak up.
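
To make the reservation idea concrete, here is a minimal userspace
sketch: a fixed pool of preallocated message buffers that are handed
out and returned without ever touching the allocator, so nothing on
the writeback path can block on memory.  The names, sizes, and layout
below are hypothetical and do not correspond to the actual
fs/ceph/messenger.c interfaces.

/*
 * Illustrative sketch only: a fixed pool of preallocated message
 * buffers, so the send/receive path never has to allocate memory.
 * Names and sizes are hypothetical, not the real messenger.c API.
 */
#include <string.h>

#define MSG_POOL_SIZE 16
#define MSG_MAX_LEN   4096

struct msg {
	size_t len;
	unsigned char data[MSG_MAX_LEN];
	int in_use;
};

static struct msg msg_pool[MSG_POOL_SIZE];

/* Reserve a message from the pool; NULL if the reserve is exhausted. */
static struct msg *msg_get(void)
{
	for (int i = 0; i < MSG_POOL_SIZE; i++) {
		if (!msg_pool[i].in_use) {
			msg_pool[i].in_use = 1;
			msg_pool[i].len = 0;
			return &msg_pool[i];
		}
	}
	return NULL;	/* caller must throttle, never allocate here */
}

/* Return a message to the pool once it has been processed. */
static void msg_put(struct msg *m)
{
	memset(m->data, 0, m->len);
	m->in_use = 0;
}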

We would like to see this merged soon.  What is the next step here?

Thanks,
sage


Kernel client git tree:
        git://ceph.newdream.net/linux-ceph-client.git

System:
	git://ceph.newdream.net/ceph.git

---
 Documentation/filesystems/ceph.txt |  140 ++
 fs/Kconfig                         |    1 +
 fs/Makefile                        |    1 +
 fs/ceph/Kconfig                    |   26 +
 fs/ceph/Makefile                   |   35 +
 fs/ceph/addr.c                     | 1170 +++++++++++++++
 fs/ceph/buffer.h                   |   93 ++
 fs/ceph/caps.c                     | 2768 ++++++++++++++++++++++++++++++++++
 fs/ceph/ceph_debug.h               |   34 +
 fs/ceph/ceph_fs.h                  |  935 ++++++++++++
 fs/ceph/ceph_ver.h                 |    6 +
 fs/ceph/crush/crush.c              |  140 ++
 fs/ceph/crush/crush.h              |  188 +++
 fs/ceph/crush/hash.h               |   90 ++
 fs/ceph/crush/mapper.c             |  588 ++++++++
 fs/ceph/crush/mapper.h             |   20 +
 fs/ceph/debugfs.c                  |  455 ++++++
 fs/ceph/decode.h                   |  136 ++
 fs/ceph/dir.c                      | 1175 +++++++++++++++
 fs/ceph/export.c                   |  235 +++
 fs/ceph/file.c                     |  916 +++++++++++
 fs/ceph/inode.c                    | 2398 +++++++++++++++++++++++++++++
 fs/ceph/ioctl.c                    |   98 ++
 fs/ceph/ioctl.h                    |   20 +
 fs/ceph/mds_client.c               | 2913 ++++++++++++++++++++++++++++++++++++
 fs/ceph/mds_client.h               |  320 ++++
 fs/ceph/mdsmap.c                   |  139 ++
 fs/ceph/mdsmap.h                   |   47 +
 fs/ceph/messenger.c                | 1815 ++++++++++++++++++++++
 fs/ceph/messenger.h                |  263 ++++
 fs/ceph/mon_client.c               |  651 ++++++++
 fs/ceph/mon_client.h               |  102 ++
 fs/ceph/msgr.h                     |  158 ++
 fs/ceph/osd_client.c               | 1278 ++++++++++++++++
 fs/ceph/osd_client.h               |  142 ++
 fs/ceph/osdmap.c                   |  871 +++++++++++
 fs/ceph/osdmap.h                   |   94 ++
 fs/ceph/rados.h                    |  427 ++++++
 fs/ceph/snap.c                     |  896 +++++++++++
 fs/ceph/super.c                    | 1035 +++++++++++++
 fs/ceph/super.h                    |  961 ++++++++++++
 fs/ceph/types.h                    |   27 +
 42 files changed, 23807 insertions(+), 0 deletions(-)

* [PATCH 00/20] ceph: Ceph distributed file system client v0.10
@ 2009-07-15 21:24 Sage Weil
  2009-07-15 21:24 ` [PATCH 01/20] ceph: documentation Sage Weil
  0 siblings, 1 reply; 27+ messages in thread
From: Sage Weil @ 2009-07-15 21:24 UTC (permalink / raw)
  To: linux-fsdevel, linux-kernel; +Cc: Sage Weil

This is v0.10 of the Ceph distributed file system client.

Changes since v0.9:
 - fixed unaligned memory access (thanks to Stefan Richter for the heads-up)
 - a few code cleanups
 - MDS reconnect and op replay bugfixes.  (The main milestone here is
   stable handling of MDS server failures and restarts, tested by
   running various workloads with the servers in restart loops.)

What would people like to see for this to be merged into fs/?

Thanks-
sage




---

Ceph is a distributed file system designed for reliability, scalability, 
and performance.  The storage system consists of some (potentially 
large) number of storage servers (bricks), a smaller set of metadata 
server daemons, and a few monitor daemons for managing cluster 
membership and state.  The storage daemons rely on btrfs for storing 
data (and take advantage of btrfs' internal transactions to keep the 
local data set in a consistent state).  This makes the storage cluster 
simple to deploy, while providing scalability not currently available 
from block-based Linux cluster file systems.

Additionally, Ceph brings a few new things to Linux.  Directory-granularity
snapshots allow users to create a read-only snapshot of any
directory (and its nested contents) with 'mkdir .snap/my_snapshot' [1]. 
Deletion is similarly trivial ('rmdir .snap/old_snapshot').  Ceph also 
maintains recursive accounting statistics on the number of nested files, 
directories, and file sizes for each directory, making it much easier 
for an administrator to manage usage [2].
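
For example, taking and deleting a snapshot from a program is just
mkdir(2) and rmdir(2) inside the magic .snap directory; the mount
point and directory names below are arbitrary examples.

/*
 * Creating and removing a Ceph snapshot from user space is just a
 * mkdir/rmdir inside the .snap directory, as described above.
 */
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	/* take a read-only snapshot of /mnt/ceph/mydir */
	if (mkdir("/mnt/ceph/mydir/.snap/my_snapshot", 0755) != 0)
		perror("mkdir snapshot");

	/* ... later, delete an old snapshot again ... */
	if (rmdir("/mnt/ceph/mydir/.snap/old_snapshot") != 0)
		perror("rmdir snapshot");

	return 0;
}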

Basic features include:

 * Strong data and metadata consistency between clients
 * High availability and reliability.  No single point of failure.
 * N-way replication of all data across storage nodes
 * Scalability from 1 to potentially many thousands of nodes
 * Fast recovery from node failures
 * Automatic rebalancing of data on node addition/removal
 * Easy deployment: most FS components are userspace daemons

In contrast to cluster filesystems like GFS2 and OCFS2 that rely on 
symmetric access by all clients to shared block devices, Ceph separates 
data and metadata management into independent server clusters, similar 
to Lustre.  Unlike Lustre, however, metadata and storage nodes run 
entirely as user space daemons.  The storage daemon utilizes btrfs to 
store data objects, leveraging its advanced features (transactions, 
checksumming, metadata replication, etc.).  File data is striped across 
storage nodes in large chunks to distribute workload and facilitate high
throughput.  When storage nodes fail, data is re-replicated in a
distributed fashion by the storage nodes themselves (with some minimal 
coordination from the cluster monitor), making the system extremely 
efficient and scalable.
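
As a rough illustration of the striping arithmetic, the sketch below
maps a file offset to an (object number, offset within object) pair
for a fixed example chunk size.  The chunk size is arbitrary; the
real client derives the layout from the file's striping parameters,
and placing objects on OSDs is done by CRUSH, which is not shown.

/*
 * Conceptual striping sketch: which object holds a given file offset.
 * Object size is an example value, not the actual default layout.
 */
#include <stdint.h>
#include <stdio.h>

#define OBJECT_SIZE (4ULL * 1024 * 1024)	/* example: 4 MB chunks */

struct object_extent {
	uint64_t objno;	/* which object in the file */
	uint64_t off;	/* offset within that object */
};

static struct object_extent map_offset(uint64_t file_off)
{
	struct object_extent e;
	e.objno = file_off / OBJECT_SIZE;
	e.off   = file_off % OBJECT_SIZE;
	return e;
}

int main(void)
{
	struct object_extent e = map_offset(10ULL * 1024 * 1024 + 123);
	printf("object %llu, offset %llu\n",
	       (unsigned long long)e.objno, (unsigned long long)e.off);
	return 0;
}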

Metadata servers effectively form a large, consistent, distributed
in-memory cache above the storage cluster that is scalable,
dynamically redistributes metadata in response to workload changes,
and can tolerate arbitrary (well, non-Byzantine) node failures.  The
metadata server embeds inodes with only a single link inside the
directories that contain them, allowing entire directories of dentries
and inodes to be loaded into its cache with a single I/O operation.
Hard links are supported via an auxiliary table facilitating inode
lookup by number.  The contents of large directories can be fragmented
and managed by independent metadata servers, allowing scalable
concurrent access.
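
A conceptual sketch of that embedding follows; it is purely
illustrative and not the actual MDS on-disk or wire format.  The idea
is that singly-linked inodes live inline next to the dentry that
names them, so one read brings in a directory's dentries and inodes
together, while hard-linked inodes are resolved through a separate
table keyed by inode number.

/*
 * Illustrative only: directory entries with embedded inodes.
 * Hard links store just the inode number and are resolved via an
 * auxiliary inode-number table (not shown).
 */
#include <stdint.h>

struct embedded_inode {
	uint64_t ino;
	uint64_t size;
	uint32_t mode;
	/* ... remaining attributes ... */
};

struct dentry_record {
	char name[256];
	int  is_remote;				/* hard link? */
	union {
		struct embedded_inode inode;	/* nlink == 1: stored inline */
		uint64_t remote_ino;		/* nlink > 1: look up by number */
	} u;
};

/* A directory object is an array of these records, loaded in one I/O. */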

The system offers automatic data rebalancing/migration when scaling from 
a small cluster of just a few nodes to many hundreds, without requiring 
an administrator to carve the data set into static volumes or go through 
the tedious process of migrating data between servers.  When the file
system approaches capacity, new storage nodes can simply be added and
things will "just work."

A git tree containing just the client (and this patch series) is at
	git://ceph.newdream.net/linux-ceph-client.git

The corresponding user space daemons need to be built in order to test
it.  Instructions for getting a test setup running are at
        http://ceph.newdream.net/wiki/

The source for the full system is at
	git://ceph.newdream.net/ceph.git

Debian packages are available from
	http://ceph.newdream.net/debian

The Ceph home page is at
	http://ceph.newdream.net

[1] Snapshots
        http://marc.info/?l=linux-fsdevel&m=122341525709480&w=2
[2] Recursive accounting
        http://marc.info/?l=linux-fsdevel&m=121614651204667&w=2

---
 Documentation/filesystems/ceph.txt |  181 +++
 fs/Kconfig                         |    1 +
 fs/Makefile                        |    1 +
 fs/ceph/Kconfig                    |   14 +
 fs/ceph/Makefile                   |   35 +
 fs/ceph/addr.c                     | 1099 ++++++++++++++
 fs/ceph/caps.c                     | 2570 +++++++++++++++++++++++++++++++++
 fs/ceph/ceph_debug.h               |   86 ++
 fs/ceph/ceph_fs.h                  |  924 ++++++++++++
 fs/ceph/ceph_ver.h                 |    6 +
 fs/ceph/crush/crush.c              |  140 ++
 fs/ceph/crush/crush.h              |  188 +++
 fs/ceph/crush/hash.h               |   90 ++
 fs/ceph/crush/mapper.c             |  597 ++++++++
 fs/ceph/crush/mapper.h             |   19 +
 fs/ceph/debugfs.c                  |  604 ++++++++
 fs/ceph/decode.h                   |  136 ++
 fs/ceph/dir.c                      | 1129 +++++++++++++++
 fs/ceph/export.c                   |  155 ++
 fs/ceph/file.c                     |  794 +++++++++++
 fs/ceph/inode.c                    | 2357 ++++++++++++++++++++++++++++++
 fs/ceph/ioctl.c                    |   65 +
 fs/ceph/ioctl.h                    |   12 +
 fs/ceph/mds_client.c               | 2775 ++++++++++++++++++++++++++++++++++++
 fs/ceph/mds_client.h               |  353 +++++
 fs/ceph/mdsmap.c                   |  132 ++
 fs/ceph/mdsmap.h                   |   45 +
 fs/ceph/messenger.c                | 2392 +++++++++++++++++++++++++++++++
 fs/ceph/messenger.h                |  273 ++++
 fs/ceph/mon_client.c               |  454 ++++++
 fs/ceph/mon_client.h               |  135 ++
 fs/ceph/msgr.h                     |  156 ++
 fs/ceph/osd_client.c               |  983 +++++++++++++
 fs/ceph/osd_client.h               |  151 ++
 fs/ceph/osdmap.c                   |  692 +++++++++
 fs/ceph/osdmap.h                   |   83 ++
 fs/ceph/rados.h                    |  419 ++++++
 fs/ceph/snap.c                     |  890 ++++++++++++
 fs/ceph/super.c                    | 1204 ++++++++++++++++
 fs/ceph/super.h                    |  952 ++++++++++++
 fs/ceph/types.h                    |   27 +
 41 files changed, 23319 insertions(+), 0 deletions(-)

* [PATCH 00/20] ceph: Ceph distributed file system client
@ 2009-03-09 22:40 Sage Weil
  2009-03-09 22:40 ` [PATCH 01/20] ceph: documentation Sage Weil
  0 siblings, 1 reply; 27+ messages in thread
From: Sage Weil @ 2009-03-09 22:40 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel; +Cc: Sage Weil

This is a patch series for v0.7 of the Ceph distributed file system
client (against v2.6.29-rc7).

Changes since v0.6:
 * Improved (faster) truncate strategy
 * Moved proc items to sysfs
 * Bug fixes, performance improvements

Changes since v0.5:
 * Asynchronous commit of metadata operations to server

Please consider for inclusion in mm and/or staging trees.  Review
and/or comments are most welcome.

Thanks,
sage


---

Ceph is a distributed file system designed for reliability, scalability, 
and performance.  The storage system consists of some (potentially 
large) number of storage servers (bricks), a smaller set of metadata 
server daemons, and a few monitor daemons for managing cluster 
membership and state.  The storage daemons rely on btrfs for storing 
data (and take advantage of btrfs' internal transactions to keep the 
local data set in a consistent state).  This makes the storage cluster 
simple to deploy, while providing scalability not currently available 
from block-based Linux cluster file systems.

Additionally, Ceph brings a few new things to Linux.  Directory-granularity
snapshots allow users to create a read-only snapshot of any
directory (and its nested contents) with 'mkdir .snap/my_snapshot' [1]. 
Deletion is similarly trivial ('rmdir .snap/old_snapshot').  Ceph also 
maintains recursive accounting statistics on the number of nested files, 
directories, and file sizes for each directory, making it much easier 
for an administrator to manage usage [2].

Basic features include:

 * Strong data and metadata consistency between clients
 * High availability and reliability.  No single point of failure.
 * N-way replication of all data across storage nodes
 * Scalability from 1 to potentially many thousands of nodes
 * Fast recovery from node failures
 * Automatic rebalancing of data on node addition/removal
 * Easy deployment: most FS components are userspace daemons

In contrast to cluster filesystems like GFS2 and OCFS2 that rely on 
symmetric access by all clients to shared block devices, Ceph separates 
data and metadata management into independent server clusters, similar 
to Lustre.  Unlike Lustre, however, metadata and storage nodes run 
entirely as user space daemons.  The storage daemon utilizes btrfs to 
store data objects, leveraging its advanced features (transactions, 
checksumming, metadata replication, etc.).  File data is striped across 
storage nodes in large chunks to distribute workload and facilitate high
throughput.  When storage nodes fail, data is re-replicated in a
distributed fashion by the storage nodes themselves (with some minimal 
coordination from the cluster monitor), making the system extremely 
efficient and scalable.

Metadata servers effectively form a large, consistent, distributed
in-memory cache above the storage cluster that is scalable,
dynamically redistributes metadata in response to workload changes,
and can tolerate arbitrary (well, non-Byzantine) node failures.  The
metadata server embeds inodes with only a single link inside the
directories that contain them, allowing entire directories of dentries
and inodes to be loaded into its cache with a single I/O operation.
Hard links are supported via an auxiliary table facilitating inode
lookup by number.  The contents of large directories can be fragmented
and managed by independent metadata servers, allowing scalable
concurrent access.

The system offers automatic data rebalancing/migration when scaling from 
a small cluster of just a few nodes to many hundreds, without requiring 
an administrator to carve the data set into static volumes or go through 
the tedious process of migrating data between servers.  When the file
system approaches capacity, new storage nodes can simply be added and
things will "just work."

A git tree containing just the client (and this patch series) is at
	git://ceph.newdream.net/linux-ceph-client.git

A few caveats:
  * The corresponding user space daemons need to be built in order to test
    it.  Instructions for getting a test setup running are at
        http://ceph.newdream.net/wiki/
  * There is some #ifdef kernel version compatibility cruft that will
    obviously be removed down the line.

The source for the full system is at
	git://ceph.newdream.net/ceph.git

Debian packages are available from
	http://ceph.newdream.net/debian

The Ceph home page is at
	http://ceph.newdream.net

[1] Snapshots
        http://marc.info/?l=linux-fsdevel&m=122341525709480&w=2
[2] Recursive accounting
        http://marc.info/?l=linux-fsdevel&m=121614651204667&w=2

---
 Documentation/filesystems/ceph.txt |  175 +++
 fs/Kconfig                         |    1 +
 fs/Makefile                        |    1 +
 fs/ceph/Kconfig                    |   20 +
 fs/ceph/Makefile                   |   35 +
 fs/ceph/addr.c                     | 1027 ++++++++++++++++
 fs/ceph/bookkeeper.c               |  117 ++
 fs/ceph/bookkeeper.h               |   19 +
 fs/ceph/caps.c                     | 1900 ++++++++++++++++++++++++++++
 fs/ceph/ceph_debug.h               |  103 ++
 fs/ceph/ceph_fs.h                  | 1355 ++++++++++++++++++++
 fs/ceph/ceph_ver.h                 |    6 +
 fs/ceph/crush/crush.c              |  139 +++
 fs/ceph/crush/crush.h              |  179 +++
 fs/ceph/crush/hash.h               |   80 ++
 fs/ceph/crush/mapper.c             |  536 ++++++++
 fs/ceph/crush/mapper.h             |   19 +
 fs/ceph/decode.h                   |  151 +++
 fs/ceph/dir.c                      |  837 +++++++++++++
 fs/ceph/export.c                   |  143 +++
 fs/ceph/file.c                     |  432 +++++++
 fs/ceph/inode.c                    | 2090 +++++++++++++++++++++++++++++++
 fs/ceph/ioctl.c                    |   62 +
 fs/ceph/ioctl.h                    |   12 +
 fs/ceph/mds_client.c               | 2391 ++++++++++++++++++++++++++++++++++++
 fs/ceph/mds_client.h               |  314 +++++
 fs/ceph/mdsmap.c                   |  118 ++
 fs/ceph/mdsmap.h                   |   94 ++
 fs/ceph/messenger.c                | 2389 +++++++++++++++++++++++++++++++++++
 fs/ceph/messenger.h                |  267 ++++
 fs/ceph/mon_client.c               |  450 +++++++
 fs/ceph/mon_client.h               |  109 ++
 fs/ceph/osd_client.c               | 1173 ++++++++++++++++++
 fs/ceph/osd_client.h               |  142 +++
 fs/ceph/osdmap.c                   |  641 ++++++++++
 fs/ceph/osdmap.h                   |  106 ++
 fs/ceph/snap.c                     |  883 +++++++++++++
 fs/ceph/super.c                    | 1120 +++++++++++++++++
 fs/ceph/super.h                    |  813 ++++++++++++
 fs/ceph/sysfs.c                    |  465 +++++++
 fs/ceph/types.h                    |   20 +
 41 files changed, 20934 insertions(+), 0 deletions(-)


Thread overview: 27+ messages
2009-09-08 22:56 [PATCH 00/20] ceph distributed file system client Sage Weil
2009-09-08 22:56 ` [PATCH 01/20] ceph: documentation Sage Weil
2009-09-08 22:56   ` [PATCH 02/20] ceph: on-wire types Sage Weil
2009-09-08 22:56     ` [PATCH 03/20] ceph: client types Sage Weil
2009-09-08 22:56       ` [PATCH 04/20] ceph: ref counted buffer Sage Weil
2009-09-08 22:56         ` [PATCH 05/20] ceph: super.c Sage Weil
2009-09-08 22:56           ` [PATCH 06/20] ceph: inode operations Sage Weil
2009-09-08 22:56             ` [PATCH 07/20] ceph: directory operations Sage Weil
2009-09-08 22:56               ` [PATCH 08/20] ceph: file operations Sage Weil
2009-09-08 22:56                 ` [PATCH 09/20] ceph: address space operations Sage Weil
2009-09-08 22:56                   ` [PATCH 10/20] ceph: MDS client Sage Weil
2009-09-08 22:56                     ` [PATCH 11/20] ceph: OSD client Sage Weil
2009-09-08 22:56                       ` [PATCH 12/20] ceph: CRUSH mapping algorithm Sage Weil
2009-09-08 22:56                         ` [PATCH 13/20] ceph: monitor client Sage Weil
2009-09-08 22:56                           ` [PATCH 14/20] ceph: capability management Sage Weil
2009-09-08 22:56                             ` [PATCH 15/20] ceph: snapshot management Sage Weil
2009-09-08 22:56                               ` [PATCH 16/20] ceph: messenger library Sage Weil
2009-09-08 22:56                                 ` [PATCH 17/20] ceph: nfs re-export support Sage Weil
2009-09-08 22:56                                   ` [PATCH 18/20] ceph: ioctls Sage Weil
2009-09-08 22:56                                     ` [PATCH 19/20] ceph: debugfs Sage Weil
2009-09-08 22:56                                       ` [PATCH 20/20] ceph: Kconfig, Makefile Sage Weil
2009-09-08 23:05                                     ` [PATCH 18/20] ceph: ioctls Randy Dunlap
2009-09-08 23:45                                       ` Sage Weil
2009-09-08 23:10 ` [PATCH 00/20] ceph distributed file system client Daniel Walker
2009-09-08 23:47   ` Sage Weil
  -- strict thread matches above, loose matches on Subject: below --
2009-07-15 21:24 [PATCH 00/20] ceph: Ceph distributed file system client v0.10 Sage Weil
2009-07-15 21:24 ` [PATCH 01/20] ceph: documentation Sage Weil
2009-07-15 21:24   ` [PATCH 02/20] ceph: on-wire types Sage Weil
2009-07-15 21:24     ` [PATCH 03/20] ceph: client types Sage Weil
2009-03-09 22:40 [PATCH 00/20] ceph: Ceph distributed file system client Sage Weil
2009-03-09 22:40 ` [PATCH 01/20] ceph: documentation Sage Weil
2009-03-09 22:40   ` [PATCH 02/20] ceph: on-wire types Sage Weil
2009-03-09 22:40     ` [PATCH 03/20] ceph: client types Sage Weil
