* sstate management
@ 2015-02-02 16:42 Gary Thomas
  2015-02-02 16:48 ` Christopher Larson
  0 siblings, 1 reply; 5+ messages in thread
From: Gary Thomas @ 2015-02-02 16:42 UTC (permalink / raw)
  To: Yocto Project

I'm looking into using sstate more and in particular sharing
it amongst a number of builds.  I have a couple of questions
which I didn't find much info about.

* The sstate-cache for a given build/target seems to grow
   without bounds.  I have one build which I've been reusing
   since last November whose sstate-cache has grown to 62GB.
   A very similar build which hasn't seen quite so many
   'bakes' is only 27GB.

   Is there some maintenance to be done on the sstate-cache?
   I'm thinking I want to set up a shared cache which might
   last for a long time and I would like to only keep the bits
   that are really needed.

* The second operational question I have is if I have a shared
   sstate cache and I make some sort of build, what is the best
   way (if any) to share any newly created objects so that my
   other builds can make use of them?

I've not actually tried to share any caches yet, so these
questions are just based on my rough understanding of the
use of sstate.  Please feel free to correct me if I've got
it [totally] wrong.

Thanks

-- 
------------------------------------------------------------
Gary Thomas                 |  Consulting for the
MLB Associates              |    Embedded world
------------------------------------------------------------



* Re: sstate management
  2015-02-02 16:42 sstate management Gary Thomas
@ 2015-02-02 16:48 ` Christopher Larson
  2015-02-02 17:33   ` Burton, Ross
  0 siblings, 1 reply; 5+ messages in thread
From: Christopher Larson @ 2015-02-02 16:48 UTC (permalink / raw)
  To: Gary Thomas; +Cc: Yocto Project


On Mon, Feb 2, 2015 at 9:42 AM, Gary Thomas <gary@mlbassoc.com> wrote:

> * The sstate-cache for a given build/target seems to grow
>   without bounds.  I have one build which I've been reusing
>   since last November whose sstate-cache has grown to 62GB.
>   A very similar build which hasn't seen quite so many
>   'bakes' is only 27GB.
>
>   Is there some maintenance to be done on the sstate-cache?
>   I'm thinking I want to set up a shared cache which might
>   last for a long time and I would like to only keep the bits
>   that are really needed.
>

In the past I've either used sstate-cache-management.sh or ensured that
SSTATE_DIR is on a mount with atime enabled and just periodically wiped
anything that hasn't been accessed in over a week.
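
For the script route, something along these lines has worked for me -
the path is only a placeholder and the options are from memory, so do
check the script's --help (it lives in openembedded-core/scripts/):

    # prune older/duplicated variants of each object, keeping the newest
    sstate-cache-management.sh --cache-dir=/path/to/sstate-cache \
        --remove-duplicated -y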

> * The second operational question I have is if I have a shared
>   sstate cache and I make some sort of build, what is the best
>   way (if any) to share any newly created objects so that my
>   other builds can make use of them?
>

I doubt there's a "best" on that. Personally I either use a shared
filesystem path (e.g. nfs) or hook up rsync.
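
As a rough sketch of the two approaches (the paths are only examples):

    # 1) point every build directly at the shared cache, in local.conf:
    SSTATE_DIR = "/nfs/shared/sstate-cache"

    # 2) or keep a per-build SSTATE_DIR, pull from the shared copy via a
    #    mirror, and push new objects back once the build finishes:
    SSTATE_MIRRORS ?= "file://.* file:///nfs/shared/sstate-cache/PATH"
    rsync -a sstate-cache/ /nfs/shared/sstate-cache/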
-- 
Christopher Larson
clarson at kergoth dot com
Founder - BitBake, OpenEmbedded, OpenZaurus
Maintainer - Tslib
Senior Software Engineer, Mentor Graphics



* Re: sstate management
  2015-02-02 16:48 ` Christopher Larson
@ 2015-02-02 17:33   ` Burton, Ross
  2015-02-02 17:43     ` Paul Eggleton
  0 siblings, 1 reply; 5+ messages in thread
From: Burton, Ross @ 2015-02-02 17:33 UTC (permalink / raw)
  To: Christopher Larson; +Cc: Yocto Project, Gary Thomas


On 2 February 2015 at 16:48, Christopher Larson <clarson@kergoth.com> wrote:

>>   Is there some maintenance to be done on the sstate-cache?
>>   I'm thinking I want to set up a shared cache which might
>>   last for a long time and I would like to only keep the bits
>>   that are really needed.
>>
>
> In the past i've either used sstate-cache-management.sh or ensured that
> SSTATE_DIR is on a mount with atime enabled and just periodically wiped
> anything that hasn't been accessed in over a week.
>

Seconded on this - after doing lots of builds in the last few weeks
involving five machines and new eglibc/gcc patches, my sstate was 1.2TB.
A quick find -atime -delete did the job.
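
Roughly like this, assuming a week-old cutoff and substituting wherever
your sstate-cache actually lives:

    find /path/to/sstate-cache -type f -atime +7 -delete
    # optionally clear out directories left empty afterwards
    find /path/to/sstate-cache -type d -empty -delete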

Ross



* Re: sstate management
  2015-02-02 17:33   ` Burton, Ross
@ 2015-02-02 17:43     ` Paul Eggleton
  2015-02-02 18:05       ` Rifenbark, Scott M
  0 siblings, 1 reply; 5+ messages in thread
From: Paul Eggleton @ 2015-02-02 17:43 UTC (permalink / raw)
  To: Burton, Ross, Christopher Larson, Gary Thomas; +Cc: yocto

On Monday 02 February 2015 17:33:23 Burton, Ross wrote:
> On 2 February 2015 at 16:48, Christopher Larson <clarson@kergoth.com> wrote:
> >>   Is there some maintenance to be done on the sstate-cache?
> >>   I'm thinking I want to set up a shared cache which might
> >>   last for a long time and I would like to only keep the bits
> >>   that are really needed.
> > 
> > In the past i've either used sstate-cache-management.sh or ensured that
> > SSTATE_DIR is on a mount with atime enabled and just periodically wiped
> > anything that hasn't been accessed in over a week.
> 
> Seconded on this - after doing lots of builds in the last few weeks
> involving five machines and new eglibc/gcc patches my sstate was 1.2TB.  A
> quick find -atime -delete did the job.

I don't suppose I can talk one or more of you into writing up some 
documentation for this to add to the manual? (As usual, something raw in the 
form of a wiki page that Scott can then adapt would be ideal.)

Cheers,
Paul

-- 

Paul Eggleton
Intel Open Source Technology Centre



* Re: sstate management
  2015-02-02 17:43     ` Paul Eggleton
@ 2015-02-02 18:05       ` Rifenbark, Scott M
  0 siblings, 0 replies; 5+ messages in thread
From: Rifenbark, Scott M @ 2015-02-02 18:05 UTC (permalink / raw)
  To: Paul Eggleton, Burton, Ross, Christopher Larson, Gary Thomas; +Cc: yocto

You might want to quickly look through http://www.yoctoproject.org/docs/1.8/ref-manual/ref-manual.html#shared-state-cache in the ref-manual before supplying raw documentation information.  This section is our current discourse on sstate.

Scott

>-----Original Message-----
>From: yocto-bounces@yoctoproject.org [mailto:yocto-
>bounces@yoctoproject.org] On Behalf Of Paul Eggleton
>Sent: Monday, February 02, 2015 9:43 AM
>To: Burton, Ross; Christopher Larson; Gary Thomas
>Cc: yocto@yoctoproject.org
>Subject: Re: [yocto] sstate management
>
>On Monday 02 February 2015 17:33:23 Burton, Ross wrote:
>> On 2 February 2015 at 16:48, Christopher Larson <clarson@kergoth.com>
>wrote:
>> >>   Is there some maintenance to be done on the sstate-cache?
>> >>   I'm thinking I want to set up a shared cache which might
>> >>   last for a long time and I would like to only keep the bits
>> >>   that are really needed.
>> >
>> > In the past i've either used sstate-cache-management.sh or ensured
>> > that SSTATE_DIR is on a mount with atime enabled and just
>> > periodically wiped anything that hasn't been accessed in over a week.
>>
>> Seconded on this - after doing lots of builds in the last few weeks
>> involving five machines and new eglibc/gcc patches my sstate was
>> 1.2TB.  A quick find -atime -delete did the job.
>
>I don't suppose I can talk one or more of you into writing up some
>documentation for this to add to the manual? (As usual, something raw in the
>form of a wiki page that Scott can then adapt would be ideal.)
>
>Cheers,
>Paul
>
>--
>
>Paul Eggleton
>Intel Open Source Technology Centre


