From: Somnath Roy <Somnath.Roy@sandisk.com>
To: "Wang, Zhiqiang" <zhiqiang.wang@intel.com>,
	Gregory Farnum <greg@gregs42.com>
Cc: ceph-devel <ceph-devel@vger.kernel.org>
Subject: RE: CephFS + Erasure coding
Date: Wed, 6 May 2015 04:17:24 +0000	[thread overview]
Message-ID: <755F6B91B3BE364F9BCA11EA3F9E0C6F2CD86F6A@SACMBXIP01.sdcorp.global.sandisk.com> (raw)
In-Reply-To: <06E7D85B3BA36C4DB207FEDE871C53489D9E71@SHSMSX101.ccr.corp.intel.com>

Thanks, Wang!
But is this supported right now, or is it coming with the object stub implementation in Infernalis?

Regards
Somnath

-----Original Message-----
From: Wang, Zhiqiang [mailto:zhiqiang.wang@intel.com] 
Sent: Tuesday, May 05, 2015 7:42 PM
To: Somnath Roy; Gregory Farnum
Cc: ceph-devel
Subject: RE: CephFS + Erasure coding

I think the readproxy mode is what you need. Reads are proxied to the base tier, and writes go to the cache tier.
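For context, switching an existing cache tier into this mode is a one-line change. This is only a sketch: the pool names are placeholders, it assumes a cache tier is already attached to the base pool, and availability of the mode depends on your release.

```shell
# Placeholder pool names; substitute your own.
# In readproxy mode, reads are proxied through to the base
# (e.g. erasure-coded) pool, while writes land in the cache pool.
ceph osd tier cache-mode cachepool readproxy
```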

-----Original Message-----
From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
Sent: Friday, May 1, 2015 5:58 AM
To: Gregory Farnum
Cc: ceph-devel
Subject: RE: CephFS + Erasure coding

Greg,
Probably not supported right now, but I wanted to confirm whether there is any way to use the Ceph cache tier for writes only, with reads forwarded to the backend erasure-coded pool.

Thanks & Regards
Somnath

-----Original Message-----
From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
Sent: Thursday, April 30, 2015 2:28 PM
To: Gregory Farnum
Cc: ceph-devel
Subject: RE: CephFS + Erasure coding

Got it, thanks!

-----Original Message-----
From: Gregory Farnum [mailto:greg@gregs42.com]
Sent: Thursday, April 30, 2015 2:21 PM
To: Somnath Roy
Cc: ceph-devel
Subject: Re: CephFS + Erasure coding

On Thu, Apr 30, 2015 at 1:55 PM, Somnath Roy <Somnath.Roy@sandisk.com> wrote:
> Hi Greg,
> Forgive my ignorance on this part if it has already been discussed in the community. Could you please let me know whether CephFS supports an erasure-coded pool in the backend?
> If not, is there any plan to support this kind of configuration in the near future?

You can set replicated cache pools, positioned as tiers over an EC pool, as data pools for CephFS. You cannot use EC pools directly for CephFS. We have no plans to add support for EC pools in the foreseeable future: since they support only a limited subset of RADOS functionality, we'd need to implement a whole log-structured file storage system or something. :(
-Greg
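To make the shape of that setup concrete, here is a hedged sketch of a replicated cache tier placed over an EC pool and used as a CephFS data pool. All pool names and PG counts are illustrative, and the exact data-pool command varies by release:

```shell
# Illustrative names and PG counts only; adjust for your cluster.
ceph osd pool create ecpool 64 64 erasure          # EC base pool
ceph osd pool create cachepool 64 64 replicated    # replicated cache pool
ceph osd tier add ecpool cachepool                 # attach cache over base
ceph osd tier cache-mode cachepool writeback       # or readproxy, if supported
ceph osd tier set-overlay ecpool cachepool         # route client I/O via cache
# Add the tiered base pool as a CephFS data pool; in releases of this era
# the command was `ceph mds add_data_pool`, later `ceph fs add_data_pool`.
ceph mds add_data_pool ecpool
```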

>
> Thanks & Regards
> Somnath
>


Thread overview: 11+ messages
2015-04-30 20:55 CephFS + Erasure coding Somnath Roy
2015-04-30 21:20 ` Gregory Farnum
2015-04-30 21:27   ` Somnath Roy
2015-04-30 21:57     ` Somnath Roy
2015-04-30 22:00       ` Gregory Farnum
2015-05-06  2:42       ` Wang, Zhiqiang
2015-05-06  4:17         ` Somnath Roy [this message]
2015-05-06  4:29           ` Wang, Zhiqiang
2015-05-06  4:32             ` Somnath Roy
2015-05-06  4:39               ` Wang, Zhiqiang
2015-05-06  5:03                 ` Somnath Roy
