* set_layout
@ 2012-04-27 17:25 xinren
  2012-04-30 17:43 ` set_layout Tommi Virtanen
  0 siblings, 1 reply; 3+ messages in thread
From: xinren @ 2012-04-27 17:25 UTC (permalink / raw)
  To: ceph-devel

All,

I am trying to use the cephfs tool to set various layout parameters, for example:
cephfs /cephdir set_layout -s 8388608 -c 1 -u 8388608 -o 1

In this case, the "preferred osd" will be 1. I then copy files to /cephdir. I 
found that the stripe_unit is 8 MB. However, the files are copied to different 
OSDs (I have 4 OSDs). Is there a way to copy a file to a designated OSD? Thanks 
in advance.

Regards

Xinren





* Re: set_layout
  2012-04-27 17:25 set_layout xinren
@ 2012-04-30 17:43 ` Tommi Virtanen
  2012-05-03 21:09   ` set_layout xinren
  0 siblings, 1 reply; 3+ messages in thread
From: Tommi Virtanen @ 2012-04-30 17:43 UTC (permalink / raw)
  To: xinren; +Cc: ceph-devel

On Fri, Apr 27, 2012 at 10:25, xinren <tntinfo@hotmail.com> wrote:
> All,
>
> I am trying to use the cephfs tool to set various layout parameters, for example:
>
> cephfs /cephdir set_layout -s 8388608 -c 1 -u 8388608 -o 1
>
> In this case, the "preferred osd" will be 1. I then copy files to /cephdir. I
> found that the stripe_unit is 8 MB. However, the files are copied to different
> OSDs (I have 4 OSDs). Is there a way to copy a file to a designated OSD? Thanks
> in advance.

The "local PGs" feature used by -o is deprecated, and has already been
removed in some releases.

Experience with the feature showed that it did not behave well, and we
now consider manually managing data placement on specific OSDs a bad
idea.

If you want to control how your data is placed and replicated, you are
better off using pools with custom CRUSH map rules. That approach allows
placement policies that are still flexible enough to handle OSD failures
and rebalancing.
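As a sketch, a rule added to the decompiled CRUSH map that restricts a
pool's placement to the OSDs under one host bucket might look like the
following (the bucket name "osd-host1" and the ruleset number 3 are
placeholders; adapt them to your own CRUSH hierarchy):

```
rule pin-to-host1 {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        # descend from a single host bucket in your hierarchy,
        # so all replicas land on OSDs under that host
        step take osd-host1
        step chooseleaf firstn 0 type osd
        step emit
}
```

Roughly: dump the map with "crushtool -d", add the rule, recompile it
with "crushtool -c", inject it with "ceph osd setcrushmap", and then
point a pool at the rule with "ceph osd pool set <pool> crush_ruleset 3".
Files written to a directory whose layout uses that pool will then be
placed only on the OSDs the rule selects.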


* Re: set_layout
  2012-04-30 17:43 ` set_layout Tommi Virtanen
@ 2012-05-03 21:09   ` xinren
  0 siblings, 0 replies; 3+ messages in thread
From: xinren @ 2012-05-03 21:09 UTC (permalink / raw)
  To: ceph-devel

Thanks for the info. I will look at "custom crushmap rules".




