From: Colin McCabe <cmccabe@alumni.cmu.edu>
To: Longguang Yue <longguang_yue@tcloudcomputing.com>
Cc: Sage Weil <sage@newdream.net>, ceph-devel@vger.kernel.org
Subject: Re: how long client will take when recover to ok,after ceph down.
Date: Mon, 14 Feb 2011 02:00:46 -0800
Message-ID: <AANLkTi=2eOzxo17HiOpAJ0aLj06oRcJHGVZ_+S7uHL2_@mail.gmail.com>
In-Reply-To: <D0C67741759ABD4BBD8D3A5CB815606E07E915@mail.cloud-valley.com.cn>

Hi Longguang,

Basically, if *all* remote servers become inaccessible, you have two
bad choices:

1) Wait for the remote servers to become accessible.
This is Ceph's current behavior. This is also NFS' behavior in its
default "hard mount" mode.

2) Throw away data and metadata, but keep running
This is NFS' behavior if you have done a "soft mount."
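
For reference, the two NFS behaviors above correspond to standard mount
options; a minimal sketch (server name and export path are placeholders,
not from this thread):

```shell
# Hard mount (the NFS default): I/O against an unreachable server
# retries forever, so processes block until the server comes back.
mount -t nfs -o hard server:/export /mnt/nfs

# Soft mount: I/O fails with an error after `retrans` retries, each
# waiting `timeo` tenths of a second. The application keeps running,
# but data in flight may be silently lost.
mount -t nfs -o soft,timeo=100,retrans=3 server:/export /mnt/nfs
```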

Neither one of these choices is really good. That's why Ceph's focus
is on keeping the filesystem running even if a few nodes go down. We
have talked about implementing "soft mount" semantics for Ceph, but I
don't think it's been done yet (Sage, correct me if I'm wrong here?)
Also, you could get soft mount semantics by using libceph to access
the filesystem rather than the kernel client. Userspace programs using
libceph can always be killed in mid-write or mid-read.
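
The practical difference is that a userspace process stuck on I/O can
simply be killed or run under a watchdog. A rough illustration of that
pattern (the mount path is hypothetical, and `sleep 60` stands in for a
file operation that would hang while the servers are unreachable):

```shell
# Wrap the potentially-hanging operation in `timeout` so it is killed
# after 2 seconds instead of blocking forever. With the kernel client
# in uninterruptible sleep, this would not work; with a userspace
# client it does.
timeout 2 sleep 60
echo "exit status: $?"   # 124 indicates timeout killed the command
```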

A lot of system administrators don't like soft mount semantics though.
Think carefully if that's really what you want.

cheers,
Colin


On Sun, Feb 13, 2011 at 5:21 PM, Longguang Yue
<longguang_yue@tcloudcomputing.com> wrote:
> When the client loses its connection to Ceph, then "ls /mnt/ceph"
> hangs forever.
> Remounting does not work.
> Thanks
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
