* Multi-configuration builds
From: Richard Purdie @ 2016-06-10 15:33 UTC
  To: openembedded-architecture; +Cc: bitbake-devel

A few people have asked about multi-machine builds. Firstly, to be
clear, the way to implement this is as multiple configuration builds.
This is because:

a) Bitbake has no knowledge of MACHINE, it's an OE implementation detail
b) There are other variables it would make sense to vary between
builds, such as SDKMACHINE, and possibly libc or distro too.

I have been wanting to experiment with the codebase a bit before
committing to exactly how this would work, so that we can avoid huge
invasive changes to bitbake if we can help it and ensure we get an
optimal implementation. With the work I've done over the past couple
of weeks, I do have some idea how it's likely to work though.

To enable it, there would be a line in local.conf a bit like:

BBMULTICONFIG = "configA configB configC"

(or likely from the environment in the future like MACHINE).

This would tell bitbake that before it parses the base configuration,
it should load conf/configA.conf and so on for each different
configuration. These would contain lines like:

MACHINE = "A"

or other variable settings, provided the resulting configurations can
be built in the same build directory (or with TMPDIR changed so they
don't conflict).
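
As an illustration (the TMPDIR override here is just one possible way
to keep configurations from conflicting, not a required part of the
design), a conf/configA.conf could be as simple as:

MACHINE = "A"
TMPDIR = "${TOPDIR}/tmp-configA"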

One downside I've already discovered is that if we want to inherit this
file right at the start of parsing, the only place you can put the
configurations is in "cwd", since BBPATH isn't constructed until the
layers are parsed, and therefore using it to locate a preconf file
isn't possible unless it's located there. I've decided to leave that
problem for later.

Execution of these targets would likely be in the form "bitbake
multiconfig:configA:core-image-minimal core-image-sato", which is
similar to our virtclass approach for native/nativesdk/multilib.
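
For example (the image names are just placeholders), building the same
image for two of the configurations plus another image for the default
configuration might look like:

bitbake multiconfig:configA:core-image-minimal \
        multiconfig:configB:core-image-minimal \
        core-image-sato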

Implementation-wise, the implication is that instead of tasks being
uniquely referenced with "recipename/fn:task", they now need to be
referenced as "configuration:recipename:task".

We already started using "virtual" filenames for recipes when we
implemented BBCLASSEXTEND, and my proposal is to add a new prefix to
these, "multiconfig:<configname>:", which avoids changes to a large
part of the codebase. I have an approach where databuilder has an
internal array of data stores and uses the right one depending on the
supplied virtual filename, which seems to work well.
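
In rough Python terms, that looks something like the sketch below (the
names are illustrative, not the final API):

def split_mc(virtualfn):
    # e.g. "multiconfig:configA:virtual:native:/path/to/foo.bb"
    if virtualfn.startswith("multiconfig:"):
        _, mc, fn = virtualfn.split(":", 2)
        return mc, fn
    return "", virtualfn

class DataBuilder:
    def __init__(self, mcdata):
        # one datastore per configuration; "" is the default build
        self.mcdata = mcdata

    def data_for(self, virtualfn):
        mc, _ = split_mc(virtualfn)
        return self.mcdata[mc]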

That trick allows us to use the existing parsing code including the
multithreading mostly unchanged. The next issue is the cache and here,
we can again make it use the virtual filenames without much pain.

The problems start where the recipe cache is parsed into an object
which lists providers and so on (cooker.recipecache). Here it makes
sense to change to an array of recipecaches, one for each
configuration.

The real problems then become apparent, as taskdata and runqueue can
only deal with one recipecache, not multiple entries.

My initial plan is to have completely unlinked builds: iterate over the
recipecaches, building individual taskdata objects from them, then
teach runqueue to process the multiple taskdata objects into a single
runqueue. There would be no support for dependencies between the
different configurations.
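
As a minimal sketch of that flow (the classes below are stand-ins for
bitbake's real TaskData and RunQueue, not the actual implementation):

class TaskData:
    def __init__(self, recipecache):
        # providers get resolved within a single configuration
        self.recipecache = recipecache

class RunQueue:
    def __init__(self, taskdatas):
        # one merged queue built from every configuration's taskdata
        self.taskdatas = taskdatas

    def execute(self):
        for mc, taskdata in self.taskdatas.items():
            print("running tasks for configuration %r" % mc)

# one recipecache per configuration, "" being the default build
recipecaches = {"": object(), "configA": object(), "configB": object()}
taskdatas = {mc: TaskData(rc) for mc, rc in recipecaches.items()}
RunQueue(taskdatas).execute()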

Once that works, we can look at allowing dependencies between the
different configurations. Rather than overload DEPENDS further, I'd
likely prefer to use a new task flag for these inter-configuration
dependencies.
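
To make that concrete, such a dependency might eventually be written
something like the line below; the flag name and syntax are purely
hypothetical at this point:

# hypothetical: this task waits for a task built in configB
do_image[multiconfig-depends] = "configB:some-recipe:do_populate_sysroot"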

Initially, I'm planning to implement this without sstate optimisations.
This means that if the build uses the same object twice in, say, two
different TMPDIRs, it will either load it from an existing sstate cache
at the start or build it twice. We can then in due course look at ways
in which it would only be built once and then reused. This will likely
need significant changes to the way sstate currently works to make that
possible.

I've shared a branch with the work I have so far on it:

http://git.yoctoproject.org/cgit.cgi/poky-contrib/commit/?h=rpurdie/multiconfig

It's all in the top commit, not yet split into logical commits or
anything very neat. It can:

* Parse the multiple configurations
* Load the multiple configurations from the cache
* Run tasks with -b (bitbake -b multiconfig:qemuppc:bash_4)

which is a good start, but the taskdata and runqueue changes aren't
done yet, so there is no dependency handling.

Cheers,

Richard




* Re: Multi-configuration builds
From: Trevor Woerner @ 2016-06-10 16:07 UTC
  To: Richard Purdie; +Cc: openembedded-architecture, bitbake-devel

On Fri 2016-06-10 @ 04:33:29 PM, Richard Purdie wrote:
> A few people have asked about multi-machine builds.

Do you envision each config also pointing to individual bblayer configurations
too? I.e. if I'm building for 3 different MACHINEs, with 3 different configs
(local.conf?), then there would also be 3 different bblayers.conf's?



* Re: Multi-configuration builds
From: Richard Purdie @ 2016-06-10 16:13 UTC
  To: Trevor Woerner; +Cc: openembedded-architecture, bitbake-devel

On Fri, 2016-06-10 at 12:07 -0400, Trevor Woerner wrote:
> On Fri 2016-06-10 @ 04:33:29 PM, Richard Purdie wrote:
> > A few people have asked about multi-machine builds.
> 
> Do you envision each config also pointing to individual bblayer
> configurations
> too? I.e. if I'm building for 3 different MACHINEs, with 3 different
> configs
> (local.conf?), then there would also be 3 different bblayers.conf's?


No, there is one local.conf and one bblayers.conf file and then three
different multiconfig files, each one of which sets a different
MACHINE.

Would people really want to support different bblayer files? That would
complicate things quite a lot :/.

Cheers,

Richard



* Re: [Openembedded-architecture] Multi-configuration builds
From: Otavio Salvador @ 2016-06-13 13:48 UTC
  To: Richard Purdie; +Cc: openembedded-architecture, bitbake-devel

On Fri, Jun 10, 2016 at 1:13 PM, Richard Purdie
<richard.purdie@linuxfoundation.org> wrote:
>> [...]
>
> No, there is one local.conf and one bblayers.conf file and then three
> different multiconfig files, each one of which sets a different
> MACHINE.
>
> Would people really want to support different bblayer files? That would
> complicate things quite a lot :/.

The multi-configuration support should imply the same metadata setup;
for anything different, people need to use different builddirs.

My $0.02

-- 
Otavio Salvador                             O.S. Systems
http://www.ossystems.com.br        http://code.ossystems.com.br
Mobile: +55 (53) 9981-7854            Mobile: +1 (347) 903-9750



* Re: [Openembedded-architecture] Multi-configuration builds
From: Patrick Ohly @ 2016-06-13 14:20 UTC
  To: Otavio Salvador; +Cc: bitbake-devel, openembedded-architecture

On Mon, 2016-06-13 at 10:48 -0300, Otavio Salvador wrote:
> > Would people really want to support different bblayer files? That would
> > complicate things quite a lot :/.
> 
> The multi configuration should imply same metadata setup; for
> different things people need to use different builddirs.

I agree. Using the same set of layers is sometimes a bit painful to
achieve when BSP layers assume that merely adding the layer is supposed
to modify the resulting build, regardless of which MACHINE is selected,
but that's a problem that should be fixed in those BSP layers.

-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.






* Re: Multi-configuration builds
From: Trevor Woerner @ 2016-06-20 2:46 UTC
  To: openembedded-architecture; +Cc: bitbake-devel

On Fri 2016-06-10 @ 05:13:43 PM, Richard Purdie wrote:
> [...]
>
> No, there is one local.conf and one bblayers.conf file and then three
> different multiconfig files, each one of which sets a different
> MACHINE.
> 
> Would people really want to support different bblayer files? That would
> complicate things quite a lot :/.

Personally I have a common "Downloads" directory (this is probably quite normal).

Then, I have a common "layers" directory in which I check out every
layer of which I'm aware. I also have a script that I run manually from
time to time to keep each layer up to date (although it's capable of
running any general git command on each git repository it finds one
level beneath it):
	https://github.com/twoerner/oe-misc/blob/master/scripts/gitcmd.sh

I then create separate directories for each platform I'm interested in
building for (e.g. raspi2, raspi3, minnow, dragon, etc.). In each of
those directories I have separate local.conf, bblayers.conf,
sstate-cache, and tmp directories.
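
So the layout is roughly as follows (the paths and layer names are
illustrative):

downloads/                 # shared download directory
layers/                    # every layer I'm aware of, checked out once
    openembedded-core/
    meta-raspberrypi/
    ...
raspi3/                    # one build directory per platform
    conf/local.conf
    conf/bblayers.conf
    sstate-cache/
    tmp/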

I know most will disagree with this arrangement (especially the separate
sstate-cache directories) but it's a system that has evolved over time, each
decision was made based on experience, and it works great for me!

It's been my experience that having too many layers in a build slows down the
initial parsing stage noticeably and too often layers don't play well with
each other. Also *many* build issues after an update can be fixed by blowing
away tmp *and* sstate and starting over. Often, building for a particular
board requires particular tweaks to local.conf (whether to enable a vendor
license or to enable specific hardware/features) which don't apply to other
boards and builds.

I'm happy with the speed of my builds, and I have enough disk space to
maintain the multiple sstates/tmps/etc. *Most* of my builds are
core-image-full-cmdline-type builds and I can crank one of those out from
scratch in 20 minutes (assuming the majority of sources have already been
downloaded). Although I do sometimes need chromium (which takes an hour on its
own) and I used to do qt (which is also quite painful). So I can understand
how sstate might be more useful to others, but for me, not so much.



* Re: [Openembedded-architecture] Multi-configuration builds
From: nick @ 2016-06-20 3:14 UTC
  To: Trevor Woerner, openembedded-architecture; +Cc: bitbake-devel



On 2016-06-19 10:46 PM, Trevor Woerner wrote:
> [...]
> 
> I'm happy with the speed of my builds, and I have enough disk space to
> maintain the multiple sstates/tmps/etc. *Most* of my builds are
> core-image-full-cmdline-type builds and I can crank one of those out from
> scratch in 20 minutes (assuming the majority of sources have already been
> downloaded). Although I do sometimes need chromium (which takes an hour on its
> own) and I used to do qt (which is also quite painful). So I can understand
> how sstate might be more useful to others, but for me, not so much.
I second Trevor on this: unless you're building GUI- or media-based
packages, sstate is not very useful if you have a modern system with a
4- to 8-core CPU and 8 to 16GB of RAM. However, if you're just making
small tweaks to the same board and testing, it may be of use; I use
something similar for kernel builds, called ccache. Again, as Trevor
stated, you may want to benchmark the results and see whether sstate
actually decreases your build time by a significant margin, i.e.
probably cutting it by more than half. Otherwise I would agree with
Trevor and just not worry about sstate.
Cheers,
Nick 



* Re: [Openembedded-architecture] Multi-configuration builds
From: Koen Kooi @ 2016-06-21 13:26 UTC
  To: nick; +Cc: openembedded-architecture, bitbake-devel


> On 20 Jun 2016, at 05:14, nick <xerofoify@gmail.com> wrote:
> [...]
> Again, as Trevor stated, you may want to benchmark the results and see
> whether sstate actually decreases your build time by a significant
> margin, i.e. probably cutting it by more than half. Otherwise I would
> agree with Trevor and just not worry about sstate.

My CI run that builds a basic image for about 30 machines drops from
~16 hours to about 1 hour after the first build, with most of the
remaining time spent in:

1) xz’ing the images
2) importing prserv-export.conf
3) parsing

That's with WORKDIR in tmpfs or on an NVMe SSD, SSTATEDIR and DL_DIR on
spinning-rust RAID5, and the metadata on a regular SSD.

regards,

Koen




* Re: [Openembedded-architecture] Multi-configuration builds
From: nick @ 2016-06-21 18:37 UTC
  To: Koen Kooi; +Cc: openembedded-architecture, bitbake-devel



On 2016-06-21 09:26 AM, Koen Kooi wrote:
> [...]
> 
> My CI run that builds a basic image for about 30 machines drops from ~16 hours to about 1 hour after the first build with most of the remaining time spent in:
> 
> 1) xz’ing the images
> 2) importing prserv-export.conf
> 3) parsing
> 
> That’s with WORKDIR in tmpfs or nvme ssd, SSTATEDIR and DL_DIR on spinning rust RAID5, metadata on a regular ssd.
> 
> regards,
> 
> Koen
> 
Koen,
I don't know the bitbake commands that well, but to my knowledge there
is a profile option that can tell you where your build is spending the
most time. However, why not try moving your whole project setup to the
NVMe SSD if there is enough space on it? That may improve the speed of
your builds even further.
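
For example (assuming the option I'm thinking of is bitbake's
-P/--profile flag), something like:

bitbake -P core-image-minimal

should save profiling reports you can dig through.
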
Hope this helps,
Nick



* Re: [Openembedded-architecture] Multi-configuration builds
From: Koen Kooi @ 2016-06-21 18:48 UTC
  To: nick; +Cc: openembedded-architecture, bitbake-devel



> On 21 Jun 2016, at 20:37, nick <xerofoify@gmail.com> wrote:
> [...]
> Koen,
> I don't know the bitbake commands that well, but to my knowledge there
> is a profile option that can tell you where your build is spending the
> most time. However, why not try moving your whole project setup to the
> NVMe SSD if there is enough space on it? That may improve the speed of
> your builds even further.

Nothing will be faster than tmpfs, and the RAID array is faster than
pigz/xz/bzip2 can process.




* Re: [Openembedded-architecture] Multi-configuration builds
From: nick @ 2016-06-21 19:18 UTC
  To: Koen Kooi; +Cc: openembedded-architecture, bitbake-devel



On 2016-06-21 02:48 PM, Koen Kooi wrote:
> [...]
> Nothing will be faster than tmpfs, and the RAID array is faster than
> pigz/xz/bzip2 can process.
> 
That's true, so it seems fine to me. But are you actually happy with
it, or just curious whether it's normal? It seems pretty normal to me,
but then again I haven't tested on an NVMe/tmpfs setup with a RAID. You
can always upgrade the CPU to more cores, but it probably won't build
much faster; maybe twice as fast, depending on the processor you
upgrade to.
Nick


