From: Sage Weil <sage@newdream.net>
To: srimugunthan dhandapani <srimugunthan.dhandapani@gmail.com>
Cc: ceph-devel <ceph-devel@vger.kernel.org>
Subject: Re: workload balance
Date: Tue, 19 Jul 2011 07:39:15 -0700 (PDT)
Message-ID: <Pine.LNX.4.64.1107190735050.5656@cobra.newdream.net>
In-Reply-To: <CAMjNe_fkt1Mw489F_=dxGCqoxZes1ASZu5hGBJtiCgOMNYoeNg@mail.gmail.com>


On Sun, 17 Jul 2011, srimugunthan dhandapani wrote:
> 2011/6/30 Josh Durgin <josh.durgin@dreamhost.com>
> >
> > On 06/27/2011 05:25 PM, huang jun wrote:
> > > Thanks, Josh.
> > > By default we set two replicas for each PG, so if we use Ceph
> > > as the back-end storage for a website, some files will be read
> > > frequently. If tens of thousands of clients do this, some OSDs'
> > > workload will be very high. In that circumstance, how do we
> > > balance the whole cluster's workload?
> >
> > If the files don't change often, they can be cached by the clients. If
> > there really is one object that is being updated and read frequently,
> > there's not much you can do currently. To reduce the load on the primary
> > OSD, we could add a flag to the MDS that tells clients to read from
> > replicas based on usage.
> 
> 
> If a particular file is updated heavily and we change its inode
> number, then its objects will be remapped to new locations, which
> could balance the load.
> Would that be a good solution to implement?

I'm not sure that would help.  If the inode changes (a big if), then the 
existing data has to move too, and you probably don't win anything.
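
To make that concrete, here's a rough sketch of why (Python; the md5
mapping below is just a stand-in for the real rjenkins+CRUSH
calculation, but the shape is the same: CephFS object names are
derived from the inode number, so a new inode renames, and therefore
remaps, every object):

  import hashlib

  def object_name(ino, index):
      # CephFS-style object name: <inode in hex>.<stripe object index>
      return "%x.%08x" % (ino, index)

  def placement(name, num_pgs):
      # stand-in for Ceph's rjenkins hash + CRUSH: object name -> PG
      return int(hashlib.md5(name.encode()).hexdigest(), 16) % num_pgs

  # A new inode renames every object, so every placement changes and
  # all of the existing data has to be copied to its new home.
  for idx in range(3):
      print(placement(object_name(0x10000000000, idx), 128),
            placement(object_name(0x10000000001, idx), 128))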

The challenge with many writers in general is keeping the writes atomic 
and (logically) serialized.  That's simple enough if they all go through a 
single node.  The second problem is that, even with some clever way to 
distribute that work (some tree hierarchy aggregating writes in front of 
the final object, say), the clients have to know when to do that (vs the 
simple approach in the general case).
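
Purely to illustrate the aggregation idea (this is not something Ceph
implements), a toy sketch: with a single consumer draining a shared
queue, any number of writers still end up with one total order:

  import queue, threading

  log = []                 # the "final object": one serialization point
  agg = queue.Queue()      # the aggregator sitting in front of it

  def writer(client, n):
      for i in range(n):
          agg.put((client, i))       # many clients fan in

  def applier():
      while True:
          item = agg.get()
          if item is None:
              return
          log.append(item)           # single consumer => one total order

  t = threading.Thread(target=applier)
  t.start()
  writers = [threading.Thread(target=writer, args=(c, 100))
             for c in range(8)]
  for w in writers: w.start()
  for w in writers: w.join()
  agg.put(None)
  t.join()
  print(len(log), "writes applied in a single serial order")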

Do you really have thousands of clients writing to the same 4MB range of a 
file?  (Remember the file striping parameters can be adjusted to change 
that.)
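
For reference, the layout math is roughly this (stripe_unit,
stripe_count, and object_size are the file layout fields; the helper
name here is made up):

  def offset_to_object(off, stripe_unit, stripe_count, object_size):
      # object_size must be a multiple of stripe_unit
      su_per_object = object_size // stripe_unit
      blockno = off // stripe_unit         # which stripe unit
      stripeno = blockno // stripe_count   # which stripe
      stripepos = blockno % stripe_count   # which object in the stripe
      objectsetno = stripeno // su_per_object
      return objectsetno * stripe_count + stripepos

  # Default layout (4MB unit, stripe_count 1, 4MB objects): one hot
  # 4MB range is exactly one object on one primary OSD.
  print(offset_to_object(6 << 20, 4 << 20, 1, 4 << 20))  # object 1
  # 1MB units striped across 4 objects: the same range spreads out.
  print(offset_to_object(6 << 20, 1 << 20, 4, 4 << 20))  # object 2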

sage


Thread overview: 7+ messages
2011-06-27  9:34 workload balance huang jun
2011-06-27 21:06 ` Josh Durgin
2011-06-28  0:25   ` huang jun
2011-06-30 18:04     ` Josh Durgin
2011-07-01  0:13       ` huang jun
2011-07-17 10:31       ` srimugunthan dhandapani
2011-07-19 14:39         ` Sage Weil [this message]

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=Pine.LNX.4.64.1107190735050.5656@cobra.newdream.net \
    --to=sage@newdream.net \
    --cc=ceph-devel@vger.kernel.org \
    --cc=srimugunthan.dhandapani@gmail.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

Be sure your reply has a Subject: header at the top and a blank line
before the message body.