* stuff for v0.56.4
@ 2013-03-05 23:10 Sage Weil
  2013-03-06  8:37 ` Wido den Hollander
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Sage Weil @ 2013-03-05 23:10 UTC (permalink / raw)
  To: ceph-devel

There have been a few important bug fixes that people are hitting or 
want:

- the journal replay bug (5d54ab154ca790688a6a1a2ad5f869c17a23980a)
- the '-' vs '_' pool name cap-parsing issue that is biting OpenStack users
- ceph-disk-* changes to support latest ceph-deploy

If there are other things that we want to include in 0.56.4, let's get them 
into the bobtail branch sooner rather than later.

Possible items:

- pg log trimming (probably a conservative subset) to avoid memory bloat
- omap scrub?
- pg temp collection removal?
- buffer::cmp fix from loic?

Are there other items that we are missing?

sage



* Re: stuff for v0.56.4
  2013-03-05 23:10 stuff for v0.56.4 Sage Weil
@ 2013-03-06  8:37 ` Wido den Hollander
  2013-03-07 15:08   ` Travis Rhoden
  2013-03-07 18:24 ` Yehuda Sadeh
  2013-03-07 21:05 ` Bryan K. Wright
  2 siblings, 1 reply; 9+ messages in thread
From: Wido den Hollander @ 2013-03-06  8:37 UTC (permalink / raw)
  To: Sage Weil; +Cc: ceph-devel

On 03/06/2013 12:10 AM, Sage Weil wrote:
> There have been a few important bug fixes that people are hitting or
> want:
>
> - the journal replay bug (5d54ab154ca790688a6a1a2ad5f869c17a23980a)
> - the '-' vs '_' pool name cap-parsing issue that is biting OpenStack users
> - ceph-disk-* changes to support latest ceph-deploy
>
> If there are other things that we want to include in 0.56.4, let's get them
> into the bobtail branch sooner rather than later.
>
> Possible items:
>
> - pg log trimming (probably a conservative subset) to avoid memory bloat
> - omap scrub?
> - pg temp collection removal?
> - buffer::cmp fix from loic?
>
> Are there other items that we are missing?
>

I'm still seeing #3816 on my systems. The fix in wip-3816 did not 
resolve it for me.

Wido

> sage
>


-- 
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on


* Re: stuff for v0.56.4
  2013-03-06  8:37 ` Wido den Hollander
@ 2013-03-07 15:08   ` Travis Rhoden
  0 siblings, 0 replies; 9+ messages in thread
From: Travis Rhoden @ 2013-03-07 15:08 UTC (permalink / raw)
  To: Sage Weil; +Cc: ceph-devel

As long as the fix for...

osdc/ObjectCacher.cc: In function 'void
ObjectCacher::bh_write_commit(int64_t, sobject_t, loff_t, uint64_t,
tid_t, int)' thread 7fd316a50700 time 2013-03-07 15:03:21.641190
osdc/ObjectCacher.cc: 834: FAILED assert(ob->last_commit_tid < tid)

...is in there (which you already put on the bobtail branch, I believe),
I will be happy.  This particular bug crashes several VMs a day for me.

 - Travis

On Wed, Mar 6, 2013 at 3:37 AM, Wido den Hollander <wido@42on.com> wrote:
> On 03/06/2013 12:10 AM, Sage Weil wrote:
>>
>> There have been a few important bug fixes that people are hitting or
>> want:
>>
>> - the journal replay bug (5d54ab154ca790688a6a1a2ad5f869c17a23980a)
>> - the '-' vs '_' pool name cap-parsing issue that is biting OpenStack users
>> - ceph-disk-* changes to support latest ceph-deploy
>>
>> If there are other things that we want to include in 0.56.4, let's get them
>> into the bobtail branch sooner rather than later.
>>
>> Possible items:
>>
>> - pg log trimming (probably a conservative subset) to avoid memory bloat
>> - omap scrub?
>> - pg temp collection removal?
>> - buffer::cmp fix from loic?
>>
>> Are there other items that we are missing?
>>
>
> I'm still seeing #3816 on my systems. The fix in wip-3816 did not resolve it
> for me.
>
> Wido
>
>
>> sage
>>
>
>
> --
> Wido den Hollander
> 42on B.V.
>
> Phone: +31 (0)20 700 9902
> Skype: contact42on
>


* Re: stuff for v0.56.4
  2013-03-05 23:10 stuff for v0.56.4 Sage Weil
  2013-03-06  8:37 ` Wido den Hollander
@ 2013-03-07 18:24 ` Yehuda Sadeh
  2013-03-07 21:05 ` Bryan K. Wright
  2 siblings, 0 replies; 9+ messages in thread
From: Yehuda Sadeh @ 2013-03-07 18:24 UTC (permalink / raw)
  To: Sage Weil; +Cc: ceph-devel

On Tue, Mar 5, 2013 at 3:10 PM, Sage Weil <sage@inktank.com> wrote:
> There have been a few important bug fixes that people are hitting or
> want:
>
> - the journal replay bug (5d54ab154ca790688a6a1a2ad5f869c17a23980a)
> - the '-' vs '_' pool name cap-parsing issue that is biting OpenStack users
> - ceph-disk-* changes to support latest ceph-deploy
>
> If there are other things that we want to include in 0.56.4, let's get them
> into the bobtail branch sooner rather than later.
>
> Possible items:
>
> - pg log trimming (probably a conservative subset) to avoid memory bloat
> - omap scrub?
> - pg temp collection removal?
> - buffer::cmp fix from loic?
>
> Are there other items that we are missing?
>

wip-4247-bobtail (pending review)

Yehuda


* Re: stuff for v0.56.4
  2013-03-05 23:10 stuff for v0.56.4 Sage Weil
  2013-03-06  8:37 ` Wido den Hollander
  2013-03-07 18:24 ` Yehuda Sadeh
@ 2013-03-07 21:05 ` Bryan K. Wright
  2013-03-07 21:27   ` Sage Weil
  2 siblings, 1 reply; 9+ messages in thread
From: Bryan K. Wright @ 2013-03-07 21:05 UTC (permalink / raw)
  To: Sage Weil; +Cc: ceph-devel


sage@inktank.com said:
> - pg log trimming (probably a conservative subset) to avoid memory bloat 

Anything that reduces the size of OSD processes would be appreciated.

					Bryan
-- 
========================================================================
Bryan Wright              |"If you take cranberries and stew them like 
Physics Department        | applesauce, they taste much more like prunes
University of Virginia    | than rhubarb does."  --  Groucho 
Charlottesville, VA  22901|			
(434) 924-7218            |         bryan@virginia.edu
========================================================================




* Re: stuff for v0.56.4
  2013-03-07 21:05 ` Bryan K. Wright
@ 2013-03-07 21:27   ` Sage Weil
  2013-03-11 15:10     ` Estimating OSD memory requirements (was Re: stuff for v0.56.4) Bryan K. Wright
  0 siblings, 1 reply; 9+ messages in thread
From: Sage Weil @ 2013-03-07 21:27 UTC (permalink / raw)
  To: Bryan K. Wright; +Cc: ceph-devel

On Thu, 7 Mar 2013, Bryan K. Wright wrote:
> 
> sage@inktank.com said:
> > - pg log trimming (probably a conservative subset) to avoid memory bloat 
> 
> Anything that reduces the size of OSD processes would be appreciated.

You can probably do this with just

 log max recent = 1000

By default it's keeping 100k lines of logs in memory, which can eat a lot 
of ram (but is great when debugging issues).
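For example, a minimal ceph.conf sketch of how that could look; the 
[global] placement and the need to restart the OSDs for it to take effect 
are assumptions here, not something stated in this thread:

  [global]
          ; keep ~1000 recent log lines in memory instead of the
          ; default 100k, trading debug context for a smaller footprint
          log max recent = 1000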

s


> 
> 					Bryan
> -- 
> ========================================================================
> Bryan Wright              |"If you take cranberries and stew them like 
> Physics Department        | applesauce, they taste much more like prunes
> University of Virginia    | than rhubarb does."  --  Groucho 
> Charlottesville, VA  22901|			
> (434) 924-7218            |         bryan@virginia.edu
> ========================================================================
> 
> 
> 


* Estimating OSD memory requirements (was Re: stuff for v0.56.4)
  2013-03-07 21:27   ` Sage Weil
@ 2013-03-11 15:10     ` Bryan K. Wright
  2013-03-11 15:51       ` Greg Farnum
  2013-03-11 18:01       ` Jim Schutt
  0 siblings, 2 replies; 9+ messages in thread
From: Bryan K. Wright @ 2013-03-11 15:10 UTC (permalink / raw)
  To: Sage Weil; +Cc: ceph-devel

sage@inktank.com said:
> On Thu, 7 Mar 2013, Bryan K. Wright wrote:
> 
> sage@inktank.com said:
> > - pg log trimming (probably a conservative subset) to avoid memory bloat 
> 
> Anything that reduces the size of OSD processes would be appreciated.
> You can probably do this with just
>  log max recent = 1000
> By default it's keeping 100k lines of logs in memory, which can eat a lot  of
> ram (but is great when debugging issues).

	Thanks for the tip about "log max recent".  I've made this 
change, but it doesn't seem to significantly reduce the size of the 
OSD processes.

	In general, are there some rules of thumb for estimating the
memory requirements for OSDs?  I see processes blow up to 8 GB of 
resident memory sometimes.  If I need to allow for that much memory
per OSD process, I may have to just walk away from Ceph.

	Does the memory usage scale with the size of the disks?
I've been trying to run 12 OSDs with 12 2TB disks on a single box.
Would I be better off (memory-usage-wise) if I RAIDed the disks
together and used a single OSD process?

	Thanks for any advice.

					Bryan


-- 
========================================================================
Bryan Wright              |"If you take cranberries and stew them like 
Physics Department        | applesauce, they taste much more like prunes
University of Virginia    | than rhubarb does."  --  Groucho 
Charlottesville, VA  22901|			
(434) 924-7218            |         bryan@virginia.edu
========================================================================




* Re: Estimating OSD memory requirements (was Re: stuff for v0.56.4)
  2013-03-11 15:10     ` Estimating OSD memory requirements (was Re: stuff for v0.56.4) Bryan K. Wright
@ 2013-03-11 15:51       ` Greg Farnum
  2013-03-11 18:01       ` Jim Schutt
  1 sibling, 0 replies; 9+ messages in thread
From: Greg Farnum @ 2013-03-11 15:51 UTC (permalink / raw)
  To: bkw1a; +Cc: Sage Weil, ceph-devel

On Monday, March 11, 2013 at 8:10 AM, Bryan K. Wright wrote:

> sage@inktank.com said:
> > On Thu, 7 Mar 2013, Bryan K. Wright wrote:
> > 
> > sage@inktank.com said:
> > > - pg log trimming (probably a conservative subset) to avoid memory bloat 
> > 
> > 
> > 
> > Anything that reduces the size of OSD processes would be appreciated.
> > You can probably do this with just
> > log max recent = 1000
> > By default it's keeping 100k lines of logs in memory, which can eat a lot of
> > ram (but is great when debugging issues).
> 
> 
> 
> Thanks for the tip about "log max recent". I've made this 
> change, but it doesn't seem to significantly reduce the size of the 
> OSD processes.
> 
> In general, are there some rules of thumb for estimating the
> memory requirements for OSDs? I see processes blow up to 8 GB of 
> resident memory sometimes. If I need to allow for that much memory
> per OSD process, I may have to just walk away from Ceph.
> 
> Does the memory usage scale with the size of the disks?
> I've been trying to run 12 OSDs with 12 2TB disks on a single box.
> Would I be better off (memory-usage-wise) if I RAIDed the disks
> together and used a single OSD process?
> 


Memory use depends on several things, but the most important are how many PGs the daemon is hosting and whether it's undergoing recovery of some kind. (Absolute disk size is not involved.) If you're getting up to 8 GB per OSD, it sounds as if you may have a bit too many PGs.
You could try RAIDing some of your drives together instead, yes -- memory and CPU utilization is one of the trade-offs there, balanced against larger discrete failure units and the loss of space or reliability (depending on the RAID level chosen).
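To make the PG dependence concrete, a rough back-of-the-envelope sketch; the pool count, PG counts, and replication factor below are made-up numbers for illustration, not Bryan's actual layout:

  3 pools x 1024 PGs each    =  3072 PGs
  x 2 replicas               =  6144 PG copies cluster-wide
  / 12 OSDs                  =  ~512 PG copies hosted per OSD

Each OSD's steady-state memory grows roughly with that per-OSD PG count (plus whatever recovery state is in flight), so fewer PGs per pool, or the same PGs spread across more OSDs, shrinks the per-process footprint.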
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com





* Re: Estimating OSD memory requirements (was Re: stuff for v0.56.4)
  2013-03-11 15:10     ` Estimating OSD memory requirements (was Re: stuff for v0.56.4) Bryan K. Wright
  2013-03-11 15:51       ` Greg Farnum
@ 2013-03-11 18:01       ` Jim Schutt
  1 sibling, 0 replies; 9+ messages in thread
From: Jim Schutt @ 2013-03-11 18:01 UTC (permalink / raw)
  To: bkw1a; +Cc: Sage Weil, ceph-devel

Hi Bryan,

On 03/11/2013 09:10 AM, Bryan K. Wright wrote:
> sage@inktank.com said:
>> On Thu, 7 Mar 2013, Bryan K. Wright wrote:
>>
>> sage@inktank.com said:
>>> - pg log trimming (probably a conservative subset) to avoid memory bloat 
>>
>> Anything that reduces the size of OSD processes would be appreciated.
>> You can probably do this with just
>>  log max recent = 1000
>> By default it's keeping 100k lines of logs in memory, which can eat a lot  of
>> ram (but is great when debugging issues).
> 
> 	Thanks for the tip about "log max recent".  I've made this 
> change, but it doesn't seem to significantly reduce the size of the 
> OSD processes.
> 
> 	In general, are there some rules of thumb for estimating the
> memory requirements for OSDs?  I see processes blow up to 8 GB of 
> resident memory sometimes.  If I need to allow for that much memory
> per OSD process, I may have to just walk away from Ceph.
> 
> 	Does the memory usage scale with the size of the disks?
> I've been trying to run 12 OSDs with 12 2TB disks on a single box.
> Would I be better off (memory-usage-wise) if I RAIDed the disks
> together and used a single OSD process?
> 
> 	Thanks for any advice.

You might also try tuning "osd client message size cap"; its
current default is 500 MiB.

During periods when your aggregate applied write load is higher
than your aggregate OSD write bandwidth (taking replicas into
account), you'll be buffering up to this amount of client data.

Since it only applies to incoming client messages, to figure
total memory use I believe you need to multiply that by the
number of replicas you're using.

FWIW, for sequential writes from lots of clients, I can
maintain full write bandwidth with "osd client message size
cap" tuned to 60 MiB.

-- Jim

> 
> 					Bryan
> 
> 



