* workload balance
@ 2011-06-27  9:34 huang jun
  2011-06-27 21:06 ` Josh Durgin
  0 siblings, 1 reply; 7+ messages in thread
From: huang jun @ 2011-06-27  9:34 UTC (permalink / raw)
  To: ceph-devel

hi, all
Here is a problem that has confused me a lot. Some OSDs' workload becomes
very high when a client reads a file continuously, so I think the cluster
must have a strategy to balance the workload.
There are two classes, "MovingAverager" and "IATAverager", but I cannot
figure out what the IATAverager class means here.
Furthermore, can we bring tiered storage into Ceph?

thanks!

* Re: workload balance
  2011-06-27  9:34 workload balance huang jun
@ 2011-06-27 21:06 ` Josh Durgin
  2011-06-28  0:25   ` huang jun
  0 siblings, 1 reply; 7+ messages in thread
From: Josh Durgin @ 2011-06-27 21:06 UTC (permalink / raw)
  To: huang jun; +Cc: ceph-devel

On 06/27/2011 02:34 AM, huang jun wrote:
> hi, all
> Here is a problem that has confused me a lot. Some OSDs' workload becomes
> very high when a client reads a file continuously, so I think the cluster
> must have a strategy to balance the workload.
> There are two classes, "MovingAverager" and "IATAverager", but I cannot
> figure out what the IATAverager class means here.
> Furthermore, can we bring tiered storage into Ceph?
> 
> thanks!

Those classes were attempts to use some statistics to discard some
reads, but they did not work so they've been removed from master.
Our current strategy is simply striping files across objects.
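
To make that concrete, here is a rough Python sketch of the striping idea.
The layout parameters and the object-name format below are assumptions for
illustration only, not taken from the tree:

# Rough sketch of how striping spreads a file across RADOS objects.
# Parameter names and defaults are illustrative, not taken from the tree.

OBJECT_SIZE  = 4 * 1024 * 1024   # bytes per object (assumed default)
STRIPE_UNIT  = OBJECT_SIZE       # simple striping: one stripe unit per object
STRIPE_COUNT = 1                 # number of objects striped across at once

def file_offset_to_object(ino, offset,
                          su=STRIPE_UNIT, sc=STRIPE_COUNT, os_=OBJECT_SIZE):
    """Map a byte offset in a file to (object name, offset inside object)."""
    stripes_per_object = os_ // su
    blockno   = offset // su          # which stripe unit the offset falls in
    stripeno  = blockno // sc         # which stripe (row) across the objects
    stripepos = blockno % sc          # which object within the stripe
    objsetno  = stripeno // stripes_per_object
    objectno  = objsetno * sc + stripepos
    off_in_obj = (stripeno % stripes_per_object) * su + offset % su
    return "%x.%08x" % (ino, objectno), off_in_obj

# A 10 MB read from inode 0x10000000000 touches three different objects,
# and each object (with its replicas) is placed independently by CRUSH:
for off in (0, 5 * 1024 * 1024, 9 * 1024 * 1024):
    print(file_offset_to_object(0x10000000000, off))

Because each object is placed independently, reads over a large file are
naturally spread over many OSDs; the hot-spot problem only appears when the
traffic concentrates on a single object.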

Josh

* Re: workload balance
  2011-06-27 21:06 ` Josh Durgin
@ 2011-06-28  0:25   ` huang jun
  2011-06-30 18:04     ` Josh Durgin
  0 siblings, 1 reply; 7+ messages in thread
From: huang jun @ 2011-06-28  0:25 UTC (permalink / raw)
  To: Josh Durgin; +Cc: ceph-devel

Thanks, Josh.
By default, we set two replicas for each PG, so if we use Ceph as the
back-end storage of a website, some files will be read very frequently.
If tens of thousands of clients do this, some OSDs' workload will become
very high. In this circumstance, how can we balance the whole cluster's
workload?

On Jun 28, 2011 at 5:06 AM, Josh Durgin <josh.durgin@dreamhost.com> wrote:
> On 06/27/2011 02:34 AM, huang jun wrote:
>> hi, all
>> Here is a problem that has confused me a lot. Some OSDs' workload becomes
>> very high when a client reads a file continuously, so I think the cluster
>> must have a strategy to balance the workload.
>> There are two classes, "MovingAverager" and "IATAverager", but I cannot
>> figure out what the IATAverager class means here.
>> Furthermore, can we bring tiered storage into Ceph?
>>
>> thanks!
>
> Those classes were attempts to use some statistics to discard some
> reads, but they did not work so they've been removed from master.
> Our current strategy is simply striping files across objects.
>
> Josh
>

* Re: workload balance
  2011-06-28  0:25   ` huang jun
@ 2011-06-30 18:04     ` Josh Durgin
  2011-07-01  0:13       ` huang jun
  2011-07-17 10:31       ` srimugunthan dhandapani
  0 siblings, 2 replies; 7+ messages in thread
From: Josh Durgin @ 2011-06-30 18:04 UTC (permalink / raw)
  To: huang jun; +Cc: ceph-devel

On 06/27/2011 05:25 PM, huang jun wrote:
> Thanks, Josh.
> By default, we set two replicas for each PG, so if we use Ceph as the
> back-end storage of a website, some files will be read very frequently.
> If tens of thousands of clients do this, some OSDs' workload will become
> very high. In this circumstance, how can we balance the whole cluster's
> workload?

If the files don't change often, they can be cached by the clients. If
there really is one object that is being updated and read frequently,
there's not much you can do currently. To reduce the load on the primary
OSD, we could add a flag to the MDS to tell clients to read from
replicas based on the usage.
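
To sketch what that hypothetical client-side choice might look like (none of
this exists today; the function and parameter names are made up for
illustration):

# Hypothetical sketch only -- no such flag exists today.  If the MDS (or a
# per-pool policy) told clients "reads from replicas are OK", the client
# could spread reads over the PG's acting set something like this:

def pick_read_target(acting_set, client_id, balance_reads=False):
    """acting_set[0] is the primary OSD; the rest are replicas."""
    if not balance_reads or len(acting_set) == 1:
        return acting_set[0]            # default: all reads go to the primary
    # Deterministic per client, so a given client keeps hitting the same OSD
    # (friendlier to that OSD's page cache than a purely random choice).
    return acting_set[client_id % len(acting_set)]

# Example: PG mapped by CRUSH to OSDs 3 (primary) and 7 (replica).
print(pick_read_target([3, 7], client_id=41, balance_reads=True))   # -> 7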

Josh

* Re: workload balance
  2011-06-30 18:04     ` Josh Durgin
@ 2011-07-01  0:13       ` huang jun
  2011-07-17 10:31       ` srimugunthan dhandapani
  1 sibling, 0 replies; 7+ messages in thread
From: huang jun @ 2011-07-01  0:13 UTC (permalink / raw)
  To: Josh Durgin; +Cc: ceph-devel

Thank you, Josh.
Currently we can reduce the primary's workload by distributing read
requests to the replicas, but once an object becomes an extremely hot
spot, that strategy does not make much difference. So can we bring
tiered storage into Ceph, along the lines of HSM or ILM? That way
clients could get data more efficiently.

best regards!

On Jul 1, 2011 at 2:04 AM, Josh Durgin <josh.durgin@dreamhost.com> wrote:
> On 06/27/2011 05:25 PM, huang jun wrote:
>> Thanks, Josh.
>> By default, we set two replicas for each PG, so if we use Ceph as the
>> back-end storage of a website, some files will be read very frequently.
>> If tens of thousands of clients do this, some OSDs' workload will become
>> very high. In this circumstance, how can we balance the whole cluster's
>> workload?
>
> If the files don't change often, they can be cached by the clients. If
> there really is one object that is being updated and read frequently,
> there's not much you can do currently. To reduce the load on the primary
> OSD, we could add a flag to the MDS to tell clients to read from
> replicas based on the usage.
>
> Josh
>

* Re: workload balance
  2011-06-30 18:04     ` Josh Durgin
  2011-07-01  0:13       ` huang jun
@ 2011-07-17 10:31       ` srimugunthan dhandapani
  2011-07-19 14:39         ` Sage Weil
  1 sibling, 1 reply; 7+ messages in thread
From: srimugunthan dhandapani @ 2011-07-17 10:31 UTC (permalink / raw)
  To: ceph-devel

2011/6/30 Josh Durgin <josh.durgin@dreamhost.com>
>
> On 06/27/2011 05:25 PM, huang jun wrote:
> > Thanks, Josh.
> > By default, we set two replicas for each PG, so if we use Ceph as the
> > back-end storage of a website, some files will be read very frequently.
> > If tens of thousands of clients do this, some OSDs' workload will become
> > very high. In this circumstance, how can we balance the whole cluster's
> > workload?
>
> If the files don't change often, they can be cached by the clients. If
> there really is one object that is being updated and read frequently,
> there's not much you can do currently. To reduce the load on the primary
> OSD, we could add a flag to the MDS to tell clients to read from
> replicas based on the usage.


If a particular file is updated heavily and we change its inode number,
the objects will be remapped to new locations, which could result in
better balancing.
Would that be a good solution to implement?
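
A toy illustration of why placement follows the inode number -- the real
path uses Ceph's rjenkins hash and CRUSH; a plain hash and a fake placement
rule stand in for both here, and the numbers are made up:

# Toy illustration: the object name embeds the inode number, the name is
# hashed to a PG, and the PG is mapped to OSDs, so a new ino moves the data.

import hashlib

def place(ino, objectno, pg_num=128, osds_per_pg=2):
    name = "%x.%08x" % (ino, objectno)           # object name embeds the ino
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    pgid = h % pg_num                            # object name -> PG
    # stand-in for CRUSH: PG -> an ordered set of OSDs
    osds = [(pgid * 7 + i * 13) % 40 for i in range(osds_per_pg)]
    return pgid, osds

print(place(ino=0x10000000000, objectno=0))      # one placement...
print(place(ino=0x10000000001, objectno=0))      # ...a new ino (very likely)
                                                 # lands somewhere else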

>
> Josh

* Re: workload balance
  2011-07-17 10:31       ` srimugunthan dhandapani
@ 2011-07-19 14:39         ` Sage Weil
  0 siblings, 0 replies; 7+ messages in thread
From: Sage Weil @ 2011-07-19 14:39 UTC (permalink / raw)
  To: srimugunthan dhandapani; +Cc: ceph-devel

On Sun, 17 Jul 2011, srimugunthan dhandapani wrote:
> 2011/6/30 Josh Durgin <josh.durgin@dreamhost.com>
> >
> > On 06/27/2011 05:25 PM, huang jun wrote:
> > > Thanks, Josh.
> > > By default, we set two replicas for each PG, so if we use Ceph as the
> > > back-end storage of a website, some files will be read very frequently.
> > > If tens of thousands of clients do this, some OSDs' workload will become
> > > very high. In this circumstance, how can we balance the whole cluster's
> > > workload?
> >
> > If the files don't change often, they can be cached by the clients. If
> > there really is one object that is being updated and read frequently,
> > there's not much you can do currently. To reduce the load on the primary
> > OSD, we could add a flag to the MDS to tell clients to read from
> > replicas based on the usage.
> 
> 
> If a particular file is updated heavily and we change its inode number,
> the objects will be remapped to new locations, which could result in
> better balancing.
> Would that be a good solution to implement?

I'm not sure that would help.  If the inode changes (a big if), then the 
existing data has to move too, and you probably don't win anything.

The challenge with many writers in general is keeping the writes atomic 
and (logically) serialized.  That's simple enough if they all go through a 
single node.  The second problem is that, even with some clever way to 
distribute that work (some tree hierarchy aggregating writes in front of 
the final object, say), the clients have to know when to do that (vs the 
simple approach in the general case).

Do you really have thousands of clients writing to the same 4MB range of a 
file?  (Remember the file striping parameters can be adjusted to change 
that.)
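
A quick back-of-the-envelope sketch (Python; the layout numbers are
illustrative, and the range is assumed to stay within one object set) of how
the striping parameters change how many objects a single 4 MB range is
spread over:

# How many distinct objects does a contiguous range of a file touch?
# Ignores object_size and assumes the range stays inside one object set.

def objects_touched(range_bytes, stripe_unit, stripe_count):
    # Each stripe unit goes to the "next" object round-robin across
    # stripe_count objects, so a contiguous range is spread over at most
    # stripe_count objects (fewer if the range has few stripe units).
    units = -(-range_bytes // stripe_unit)       # ceil division
    return min(units, stripe_count)

FOUR_MB = 4 * 1024 * 1024
print(objects_touched(FOUR_MB, stripe_unit=4 * 1024 * 1024, stripe_count=1))  # 1
print(objects_touched(FOUR_MB, stripe_unit=64 * 1024,       stripe_count=8))  # 8

With a smaller stripe unit and a larger stripe count, the same hot range is
served by more objects, and therefore by more OSDs.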

sage

