* Re: Fwd: Hammer OSD memory increase when add new machine
@ 2016-11-08 14:48 zphj1987
  2016-11-09  5:52 ` Fwd: [ceph-users] " Dong Wu
  0 siblings, 1 reply; 2+ messages in thread
From: zphj1987 @ 2016-11-08 14:48 UTC (permalink / raw)
  To: Sage Weil; +Cc: ceph-users, The Sacred Order of the Squid Cybernetic



I remember CERN had a 30PB test Ceph cluster where the OSDs used more
memory than usual, and they tuned the osdmap epoch settings. If it is the
osdmaps that are making the OSDs use more memory, I think you could run a
test with fewer cached osdmap epochs and see if anything changes.

The default mon_min_osdmap_epochs is 500.
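
For reference, a minimal sketch of checking and temporarily lowering that
setting, assuming standard admin-socket and injectargs usage; mon.ID is a
placeholder monitor id and 250 is just a hypothetical test value:

    # check the current value via the monitor's admin socket
    ceph daemon mon.ID config get mon_min_osdmap_epochs

    # temporarily lower it for the test (hypothetical value; revert afterwards)
    ceph tell mon.* injectargs '--mon_min_osdmap_epochs 250'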


zphj1987

2016-11-08 22:08 GMT+08:00 Sage Weil <sage-BnTBU8nroG7k1uMJSBkQmQ@public.gmane.org>:

> > ---------- Forwarded message ----------
> > From: Dong Wu <archer.wudong-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> > Date: 2016-10-27 18:50 GMT+08:00
> > Subject: Re: [ceph-users] Hammer OSD memory increase when add new machine
> > To: huang jun <hjwsm1989-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> > Cc: ceph-users <ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org>
> >
> >
> > 2016-10-27 17:50 GMT+08:00 huang jun <hjwsm1989-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>:
> > > How do you add the new machine?
> > > Is it first added to the default ruleset, and then you add the new
> > > rule for this group?
> > > Do you have data pools that use the default rule, and do these pools
> > > contain data?
> >
> > We don't use the default ruleset. When we add a new group of machines,
> > crush_location automatically generates the root and chassis, and then
> > we add a new rule for this group.
> >
> >
> > > 2016-10-27 17:34 GMT+08:00 Dong Wu <archer.wudong-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>:
> > >> Hi all,
> > >>
> > >> We have a Ceph cluster that is only used for RBD. The cluster
> > >> contains several groups of machines, each group contains several
> > >> machines, and each machine has 12 SSDs, with each SSD used as an
> > >> OSD (journal and data together). For example:
> > >> group1: machine1~machine12
> > >> group2: machine13~machine24
> > >> ......
> > >> Each group is separated from the other groups, which means each
> > >> group has its own pools.
> > >>
> > >> We use Hammer (0.94.6) compiled with jemalloc (4.2).
> > >>
> > >> We have found that when we add a new group of machines, the memory
> > >> usage of the OSDs in the other groups increases by roughly 5%.
> > >>
> > >> Each group's data is separated from the others, so backfill happens
> > >> only within a group, not across groups.
> > >> Why does adding a group of machines cause the others' memory to
> > >> increase? Is this reasonable?
>
> It could be cached OSDmaps (they get slightly larger when you add OSDs)
> but it's hard to say.  It seems more likely that the pools and crush rules
> aren't configured right and you're adding OSDs to the wrong group.
>
> If you look at the 'ceph daemon osd.NNN perf dump' output you can see,
> among other things, how many PGs are on the OSD.  Can you capture the
> output before and after the change (and 5% memory footprint increase)?
>
> sage
>
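
A minimal sketch of the before/after capture suggested above, assuming the
PG count is exposed as the "numpg" counter in the "osd" section of the perf
dump on this release; osd.NNN is a placeholder OSD id:

    # before adding the new group of machines
    ceph daemon osd.NNN perf dump > perf_before.json

    # ... add the new group and wait for the cluster to settle ...

    # after adding the new group
    ceph daemon osd.NNN perf dump > perf_after.json

    # compare the per-OSD PG counts (keep both files for the memory comparison)
    grep '"numpg"' perf_before.json perf_after.json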



_______________________________________________
ceph-users mailing list
ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


* Re: Fwd: [ceph-users] Hammer OSD memory increase when add new machine
  2016-11-08 14:48 Fwd: Hammer OSD memory increase when add new machine zphj1987
@ 2016-11-09  5:52 ` Dong Wu
  0 siblings, 0 replies; 2+ messages in thread
From: Dong Wu @ 2016-11-09  5:52 UTC (permalink / raw)
  To: zphj1987, Sage Weil, ceph-users,
	The Sacred Order of the Squid Cybernetic

Thanks. In the CERN 30PB cluster test, the osdmap caches caused the memory
increase, so I'll test how these configs (osd_map_cache_size,
osd_map_max_advance, etc.) influence the memory usage.
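
A minimal sketch of what such a test could look like in ceph.conf, using
only the options named above; the values are purely hypothetical test
values (osd_map_max_advance is normally kept below osd_map_cache_size),
and the OSDs need to be restarted to pick them up:

    [osd]
    # hypothetical reduced values for the memory test
    osd_map_cache_size = 200
    osd_map_max_advance = 100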

2016-11-08 22:48 GMT+08:00 zphj1987 <zphj1987@gmail.com>:
> I remember CERN had a 30PB test Ceph cluster where the OSDs used more
> memory than usual, and they tuned the osdmap epoch settings. If it is
> the osdmaps that are making the OSDs use more memory, I think you could
> run a test with fewer cached osdmap epochs and see if anything changes.
>
> The default mon_min_osdmap_epochs is 500.
>
>
> zphj1987
>
> 2016-11-08 22:08 GMT+08:00 Sage Weil <sage@newdream.net>:
>>
>> > ---------- Forwarded message ----------
>> > From: Dong Wu <archer.wudong@gmail.com>
>> > Date: 2016-10-27 18:50 GMT+08:00
>> > Subject: Re: [ceph-users] Hammer OSD memory increase when add new
>> > machine
>> > To: huang jun <hjwsm1989@gmail.com>
>> > Cc: ceph-users <ceph-users@lists.ceph.com>
>> >
>> >
>> > 2016-10-27 17:50 GMT+08:00 huang jun <hjwsm1989@gmail.com>:
>> > > How do you add the new machine?
>> > > Is it first added to the default ruleset, and then you add the new
>> > > rule for this group?
>> > > Do you have data pools that use the default rule, and do these pools
>> > > contain data?
>> >
>> > We don't use the default ruleset. When we add a new group of machines,
>> > crush_location automatically generates the root and chassis, and then
>> > we add a new rule for this group.
>> >
>> >
>> > > 2016-10-27 17:34 GMT+08:00 Dong Wu <archer.wudong@gmail.com>:
>> > >> Hi all,
>> > >>
>> > >> We have a Ceph cluster that is only used for RBD. The cluster
>> > >> contains several groups of machines, each group contains several
>> > >> machines, and each machine has 12 SSDs, with each SSD used as an
>> > >> OSD (journal and data together). For example:
>> > >> group1: machine1~machine12
>> > >> group2: machine13~machine24
>> > >> ......
>> > >> Each group is separated from the other groups, which means each
>> > >> group has its own pools.
>> > >>
>> > >> We use Hammer (0.94.6) compiled with jemalloc (4.2).
>> > >>
>> > >> We have found that when we add a new group of machines, the memory
>> > >> usage of the OSDs in the other groups increases by roughly 5%.
>> > >>
>> > >> Each group's data is separated from the others, so backfill happens
>> > >> only within a group, not across groups.
>> > >> Why does adding a group of machines cause the others' memory to
>> > >> increase? Is this reasonable?
>>
>> It could be cached OSDmaps (they get slightly larger when you add OSDs)
>> but it's hard to say.  It seems more likely that the pools and crush rules
>> aren't configured right and you're adding OSDs to the wrong group.
>>
>> If you look at the 'ceph daemon osd.NNN perf dump' output you can see,
>> among other things, how many PGs are on the OSD.  Can you capture the
>> output before and after the change (and 5% memory footprint increase)?
>>
>> sage
>
>

