From: Gregory Farnum <greg@inktank.com>
To: "Johnu George (johnugeo)" <johnugeo@cisco.com>
Cc: Milosz Tanski <milosz@adfin.com>,
	ceph-devel <ceph-devel@vger.kernel.org>
Subject: Re: ceph data locality
Date: Mon, 8 Sep 2014 16:11:45 -0700
Message-ID: <CAPYLRzgTxxOvV1fCa45K-T=GnqPRGwsPsbZZumGRp7ZhsbdAXg@mail.gmail.com>
In-Reply-To: <D033671F.70A%johnugeo@cisco.com>

It implements the getFileBlockLocations() API in the Hadoop
FileSystem interface. The upshot of this is that the Hadoop
scheduler can do exactly the same locality-aware task scheduling
with Ceph that it does with HDFS.
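In sketch form, a binding satisfies the scheduler by overriding that
method. The skeleton below is a hypothetical illustration, not the
actual cephfs-hadoop source; only the Hadoop FileSystem/BlockLocation
API is real, and lookupOsdHosts() is a made-up stand-in for the
libcephfs call that maps an object to OSD hosts via CRUSH:

  import org.apache.hadoop.fs.BlockLocation;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  import java.io.IOException;

  // Hypothetical skeleton; names other than the Hadoop API are invented.
  public abstract class CephFileSystemSketch extends FileSystem {

      // Stand-in for the real lookup that maps (file, object index)
      // to the OSD hostnames chosen by the cluster's CRUSH map.
      protected abstract String[] lookupOsdHosts(Path file, long objectIndex)
              throws IOException;

      @Override
      public BlockLocation[] getFileBlockLocations(FileStatus file,
                                                   long start, long len)
              throws IOException {
          if (file == null || len <= 0) {
              return new BlockLocation[0];
          }
          long objectSize = file.getBlockSize();  // e.g. 64MB objects
          long first = start / objectSize;
          long last = (start + len - 1) / objectSize;
          BlockLocation[] locs = new BlockLocation[(int) (last - first + 1)];
          for (long i = first; i <= last; i++) {
              String[] hosts = lookupOsdHosts(file.getPath(), i);
              // Present each object as one "block" living on those hosts;
              // the tail object may really be shorter, which is glossed
              // over here for brevity.
              locs[(int) (i - first)] =
                      new BlockLocation(hosts, hosts, i * objectSize, objectSize);
          }
          return locs;
      }
  }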
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com

On Mon, Sep 8, 2014 at 3:53 PM, Johnu George (johnugeo)
<johnugeo@cisco.com> wrote:
> Hi Greg,
>        Thanks. Can you explain more about "Ceph *does* export locations so
> the follow-up jobs can be scheduled appropriately"?
>
> Thanks,
> Johnu
>
>
> On 9/8/14, 12:51 PM, "Gregory Farnum" <greg@inktank.com> wrote:
>
>>On Thu, Sep 4, 2014 at 12:16 AM, Johnu George (johnugeo)
>><johnugeo@cisco.com> wrote:
>>> Hi All,
>>>         I was reading more on Hadoop over Ceph. I heard from Noah that
>>> tuning of Hadoop on Ceph is ongoing. I am just curious to know whether
>>> there is any reason to keep the default object size at 64MB. Is it
>>> because it becomes difficult to encode getBlockLocations if blocks are
>>> divided into objects, and to choose the best location for tasks if no
>>> node in the system has a complete block?
>>
>>We used 64MB because it's the HDFS default and in some *very* stupid
>>tests it seemed to be about the fastest. You could certainly make it
>>smaller if you wanted, and it would probably work to multiply it by
>>2-4x, but then you're using bigger objects than most people do.
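If you do want to experiment with the size, here is a minimal sketch
of setting it through the Hadoop configuration. The ceph.object.size
property name follows the cephfs-hadoop plugin's convention, but treat
it as an assumption and verify it against your plugin version:

  import org.apache.hadoop.conf.Configuration;

  public class ObjectSizeExample {
      public static void main(String[] args) {
          Configuration conf = new Configuration();
          // Assumed property name; check your cephfs-hadoop version.
          conf.setLong("ceph.object.size", 64L * 1024 * 1024);  // 64MB default
          System.out.println("object size = "
                  + conf.getLong("ceph.object.size", -1) + " bytes");
      }
  }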
>>
>>> I see that Ceph doesn't place objects considering the client's location
>>> or the distance between the client and the OSDs where data is stored
>>> (data locality), while data locality is the key idea behind HDFS block
>>> placement and retrieval for maximum throughput. So, how does Ceph plan
>>> to perform better than HDFS when Ceph relies on random placement using
>>> hashing, unlike HDFS block placement? Can someone also point out some
>>> performance results comparing Ceph's random placement vs. HDFS's
>>> locality-aware placement?
>>
>>I don't think we have any serious performance results; there hasn't
>>been enough focus on productizing it for that kind of work.
>>Anecdotally I've seen people on social media claim that it's as fast
>>or even many times faster than HDFS (I suspect if it's many times
>>faster they had a misconfiguration somewhere in HDFS, though!).
>>In any case, Ceph has two plans for being faster than HDFS:
>>1) Big users report that always writing locally is often a mistake, as
>>it tends to overfill certain nodes within your cluster. Plus, networks
>>are much faster now, so writing over the network doesn't cost as much,
>>and Ceph *does* export locations so that follow-up jobs can be
>>scheduled appropriately.
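To make that concrete, here is a small client-side sketch that uses
only the standard Hadoop API and prints the hosts backing each block.
It behaves the same whether fs.defaultFS points at HDFS or at a Ceph
binding that implements getFileBlockLocations(); the filesystem
configuration itself is assumed to be set up elsewhere:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.BlockLocation;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  import java.util.Arrays;

  public class ShowLocations {
      public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();  // picks up fs.defaultFS
          FileSystem fs = FileSystem.get(conf);
          FileStatus status = fs.getFileStatus(new Path(args[0]));
          // One BlockLocation per block/object; this is exactly the
          // information the MapReduce scheduler consumes when it tries
          // to place a task near its input split.
          for (BlockLocation loc :
                  fs.getFileBlockLocations(status, 0, status.getLen())) {
              System.out.println(loc.getOffset() + "+" + loc.getLength()
                      + " -> " + Arrays.toString(loc.getHosts()));
          }
      }
  }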
>>
>>>
>>> Also, Sage wrote about a way to specify a node to be primary for
>>> Hadoop-like environments
>>> (http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/1548). Is
>>> this done through the primary affinity configuration?
>>
>>That mechanism ("preferred" PGs) is dead. Primary affinity is a
>>completely different thing.
>>
>>
>>On Thu, Sep 4, 2014 at 8:59 AM, Milosz Tanski <milosz@adfin.com> wrote:
>>> QFS, unlike Ceph, places the erasure coding logic inside the client,
>>> so it's not an apples-to-apples comparison. But I think you get my
>>> point, and it would be possible to implement a rich Ceph
>>> (filesystem/hadoop) client like this as well.
>>>
>>> In summary, if Hadoop on Ceph is a major priority, I think it would be
>>> best to "borrow" the good ideas from QFS and implement them in the
>>> Hadoop Ceph filesystem and in Ceph itself (letting a smart client read
>>> chunks directly and write chunks directly). I don't doubt that it's a
>>> lot of work, but the results might be worth it in terms of the
>>> performance you get for the cost.
>>
>>Unfortunately, implementing CephFS on top of RADOS' EC pools is going
>>to be a major project that we haven't done anything to scope out yet,
>>so it's going to be a while before that's really an option. But it is
>>a "real" filesystem, so we still have that going for us. ;)
>>-Greg
>>Software Engineer #42 @ http://inktank.com | http://ceph.com
>>--
>>To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>>the body of a message to majordomo@vger.kernel.org
>>More majordomo info at  http://vger.kernel.org/majordomo-info.html
>
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
