ceph-devel.vger.kernel.org archive mirror
* Redundancy with Ceph
From: Christian Baun @ 2010-07-04 20:34 UTC
  To: ceph-devel

Hi,

Is it possible to tell Ceph to store user data redundantly across the OSDs?

Best Regards,
   Christian 



* Re: Redundancy with Ceph
From: Gregory Farnum @ 2010-07-04 20:41 UTC
  To: Christian Baun; +Cc: ceph-devel

On Sun, Jul 4, 2010 at 1:34 PM, Christian Baun <cray@unix-ag.uni-kl.de> wrote:
> Hi,
>
> Is it possible to tell Ceph to store user data redundantly across the OSDs?
Could you clarify? What do you mean by "across the OSDs"?
Obviously Ceph is aware of how many copies it makes.


* Re: Redundancy with Ceph
From: Christian Baun @ 2010-07-04 22:33 UTC
  To: Gregory Farnum, ceph-devel

Hi,

I created two servers and one client:

Server 1 => mon, mds, osd
Server 2 => osd
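
For reference, this layout corresponds to a ceph.conf roughly along these lines (only a sketch: the section names follow the daemon/log names used below, the monitor address and data paths are the ones visible below, and the default monitor port 6789 is assumed):

[mon0]
        host = ip-10-243-150-209
        mon addr = 10.243.150.209:6789
[mds0]
        host = ip-10-243-150-209
[osd1]
        host = ip-10-243-150-209
        osd data = /data/osd1
[osd2]
        host = ip-10-212-118-67
        osd data = /data/osd2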

I ran some tests with iozone, and it doesn't look like Server 2 is being used.
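
For example, a run along these lines against the mounted filesystem (illustrative invocation; the size and file name are arbitrary):

# iozone -a -g 512M -f /mnt/ceph/iozone.tmp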

Server 1:
# df | grep osd
/dev/sdc              10485760    600504   9885256   6% /data/osd1

Server 2:
# df | grep osd
/dev/sdc              10485760        32  10485728   1% /data/osd2

Client:
# df | grep osd
10.243.150.209:/      10485760    896000   9589760   9% /mnt/ceph

These stats don't suggest that the user data is stored redundantly.

Server 1:
# ls /var/log/ceph/
ip-10-243-150-209.18773  ip-10-243-150-209.20598  mds.0   osd1.3
ip-10-243-150-209.18993  ip-10-243-150-209.20822  mon0    osd1.4
ip-10-243-150-209.19277  ip-10-243-150-209.20912  osd1    osd1.5
ip-10-243-150-209.19501  ip-10-243-150-209.20940  osd1.0  osd1.6
ip-10-243-150-209.19727  ip-10-243-150-209.20990  osd1.1  osd1.7
ip-10-243-150-209.19963  mds0                     osd1.2  stat

Server 2:
# ls /var/log/ceph/
ip-10-212-118-67.19342  osd2

How can I check if Server 2 is really recognized by Server 1? 

Best Regards,
   Christian 



On Sunday, 4 July 2010, you wrote:
> On Sun, Jul 4, 2010 at 1:34 PM, Christian Baun <cray@unix-ag.uni-kl.de> wrote:
> > Hi,
> >
> > Is it possible to tell Ceph to store user data redundantly across the OSDs?
> Could you clarify? What do you mean by "across the OSDs"?
> Obviously Ceph is aware of how many copies it makes.
> 




* Re: Redundancy with Ceph
From: Thomas Mueller @ 2010-07-05  5:14 UTC
  To: ceph-devel

On Mon, 05 Jul 2010 00:33:50 +0200, Christian Baun wrote:

> Hi,
> 
> I created two servers and one client:
> 
> Server 1 => mon, mds, osd
> Server 2 => osd
> 
> I ran some tests with iozone, and it doesn't look like Server 2 is being used.


Did you read:

http://ceph.newdream.net/wiki/Adjusting_replication_level
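
(In short: the replication level is a per-pool setting. With recent ceph tool versions it can be adjusted with something like the commands below; the wiki page has the exact syntax for your version.)

ceph osd pool set data size 2
ceph osd pool set metadata size 2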



- Thomas



* Re: Redundancy with Ceph
From: Christian Baun @ 2010-07-05  7:55 UTC
  To: Thomas Mueller; +Cc: ceph-devel

Hi Thomas,

Thanks a lot for the link and your help!
Now the issue is clear.

The output of "ceph osd dump -o -" indicated that there should be replication, because max_osd is 3 and the size of all pools is 2, but osd2 was "out down".

# ceph osd dump -o -
...
max_osd 3
osd0 out down (up_from 0 up_thru 0 down_at 0 last_clean 0-0)
osd1 in weight 1 up   (up_from 2 up_thru 2 down_at 0 last_clean 0-0) 10.243.150.209:6801/20989 10.243.150.209:6802/20989
osd2 out down (up_from 0 up_thru 0 down_at 0 last_clean 0-0)
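
(The per-pool replication level can be read from the same dump, e.g. with something like the following; the exact wording of the pool lines differs between versions:)

# ceph osd dump -o - | grep -i size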

The reason: I forgot to start the Ceph daemon on Server 2 (osd2).
Now, it looks better.

# ceph osd dump -o -
...
max_osd 3
osd0 out down (up_from 0 up_thru 0 down_at 0 last_clean 0-0)
osd1 in weight 1 up   (up_from 2 up_thru 6 down_at 0 last_clean 0-0) 10.243.150.209:6801/20989 10.243.150.209:6802/20989
osd2 in weight 1 up   (up_from 5 up_thru 5 down_at 0 last_clean 0-0) 10.212.118.67:6800/20983 10.212.118.67:6801/20983
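
(For the record, all that was missing was running the init script on Server 2, i.e. roughly:)

# /etc/init.d/ceph start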

Best Regards and thanks again,
   Christian


On Monday, 5 July 2010, Thomas Mueller wrote:
> On Mon, 05 Jul 2010 00:33:50 +0200, Christian Baun wrote:
> 
> > Hi,
> > 
> > I created two servers and one client:
> > 
> > Server 1 => mon, mds, osd
> > Server 2 => osd
> > 
> > I ran some tests with iozone, and it doesn't look like Server 2 is being used.
> 
> 
> Did you read:
> 
> http://ceph.newdream.net/wiki/Adjusting_replication_level
> 
> 
> 
> - Thomas



* Re: Redundancy with Ceph
From: Thomas Mueller @ 2010-07-05  8:04 UTC
  To: Christian Baun; +Cc: ceph-devel

On 05.07.2010 09:55, Christian Baun wrote:
> Hi Thomas,
>
> Thanks a lot for the link and your help!
> Now the issue is clear.
>
> The output of "ceph osd dump -o -" indicated that there should be replication, because max_osd is 3 and the size of all pools is 2, but osd2 was "out down".
> The reason: I forgot to start the Ceph daemon on Server 2 (osd2).


You can also use the --allhosts option with /etc/init.d/ceph.

Example:

/etc/init.d/ceph --allhosts start

This starts Ceph on all of the configured hosts via ssh.

- Thomas

