From: Frans Meulenbroeks <fransmeulenbroeks@gmail.com>
To: openembedded-devel@lists.openembedded.org
Subject: Re: [RFC] turning conf/machine into a set of bblayers
Date: Wed, 3 Nov 2010 09:15:01 +0100	[thread overview]
Message-ID: <AANLkTi=3GwhFb27DDyHnOcQEk1oskP1NSEUw_qaZdNyr@mail.gmail.com> (raw)
In-Reply-To: <4CD080E2.2000706@mentor.com>

2010/11/2 Tom Rini <tom_rini@mentor.com>:
> Eric Bénard wrote:
>>
>> Hi,
>>
>> Le 02/11/2010 21:46, Koen Kooi a écrit :
>>>
>>> I do fear that pulling things into separate layers too much will make it
>>> harder to propagate fixes...
>>>
>> yes, in your example, the files in conf/machine/include are common to all
>> omap boards (and even all cortexa8 for tune-cortexa8.inc) and thus when
>> fixing one BSP you have to think to fix the others (and to communicate the
>> fix to other BSP maintainers).
>> The same applies to most of the .inc in recipes-bsp/*/.
>>
>> Do you think the following setup is possible?
>>
>> - ARM overlay (containing all generic files for the ARM architecture:
>> conf/machine/include for example)
>>
>> - OMAP3 overlay (containing all generic files for the OMAP3 SoC:
>> conf/machine/include/omap* + recipes/linux, u-boot, x-load base files for
>> the omap3 architecture),
>>
>> - specific board overlay (conf/machine/themachine.conf + board specific
>> additions in recipes/linux, u-boot & x-load, with patches based on top of
>> the OMAP3 overlay).
>
> How about:
>
> - allow some form of conf/machine/include to continue to exist in the main
> layer
>
> ? There would have to be some judgment calls over when it's SOC_FAMILY and
> when it's very generic, but I don't think that should be too hard.  Basically
> the ARM overlay wouldn't be created in this case (nor the PPC nor MIPS nor
> ...).  But we must avoid duplicating tune-cortexa8.inc and similar.
>

I'd say it is definitely nice to have an arch-specific overlay (e.g.
ARM, MIPS, PPC, Nios2) which contains the recipes specific to that
architecture.
To give an example:
For nios2 the only backend is for gcc 4.1.2 and binutils
17.50.something. I can imagine that at some point in time it is
decided not to support these in the mainline/standard/common/base
system. In such a case I think the arch-specific overlay would be a
good place for them.
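As a sketch of what that could look like (the layer name meta-nios2 and
the priority value are my own assumptions, not an agreed layout), the
arch overlay would just be a normal layer whose conf/layer.conf registers
its recipes, with the nios2-only gcc/binutils recipes living under it:

```conf
# conf/layer.conf of a hypothetical meta-nios2 arch overlay.
# Make the layer's classes/conf visible to bitbake.
BBPATH .= ":${LAYERDIR}"
# Pick up all recipes and appends shipped in this layer.
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb ${LAYERDIR}/recipes-*/*/*.bbappend"
BBFILE_COLLECTIONS += "nios2"
BBFILE_PATTERN_nios2 = "^${LAYERDIR}/"
# Priority is a guess; it only needs to sit above the common base.
BBFILE_PRIORITY_nios2 = "6"
```

The nios2-specific toolchain recipes (gcc 4.1.2, the old binutils) would
then move under recipes-devtools/ in that layer once the common base
drops them.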

Whether there should be an omap3-specific overlay (or whether it should
be cortexA8, or maybe both cortexA8 and omap3) probably remains to be
seen. I would suggest initially storing these in the ARM machine
overlay. If that one becomes too crowded we can always create an
additional layer.

Khem wrote:
> in general we should try to move minimal stuff into machine layers for obvious maintenance
> burden reasons. I am afraid that this has the potential of leading us into maintenance problems
> if we hold this loosely.

I fully agree with this.
In my opinion the rule should be:
- machine-specific stuff should go into the machine overlay; a machine
  overlay could cover several closely related machines (e.g.
  beagleboard and beagleboard-XM, should these be considered different
  machines);
- arch-specific stuff (including stuff that is appropriate for multiple
  machines in the arch) should go into the arch overlay;
- non-hw-specific stuff should go into the common base.
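To make the split concrete, a build for one board would then stack the
three kinds of layer in its conf/bblayers.conf, most generic first. A
sketch (all layer names and paths here are hypothetical examples, not
agreed names):

```conf
# conf/bblayers.conf sketch; layer names and paths are invented for
# illustration.  Ordered from generic to specific:
#   common base -> arch overlay -> machine overlay.
BBLAYERS = " \
  /path/to/openembedded-core \
  /path/to/meta-arm \
  /path/to/meta-beagleboard \
  "
```

BBFILE_PRIORITY in each layer's layer.conf then decides which layer wins
when two of them provide the same recipe, so the machine overlay can
override the arch overlay without duplicating it.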

We should definitely avoid creating multiple recipes and multiple
recipe variants, as that creates a maintenance nightmare.

Coming back to the gcc/nios2 example:
I'd expect this to be in the common base, but if at some point in time
it is decided to eliminate it from there, it should move to the nios2
overlay.
Maintenance responsibility then shifts from the maintainers of the
common base to the maintainer of the nios2 layer.

Frans.




Thread overview: 30+ messages
2010-10-21  9:33 [RFC] turning conf/machine into a set of bblayers Koen Kooi
2010-10-21  9:52 ` Graeme Gregory
2010-10-21  9:59   ` Koen Kooi
2010-10-21 10:04     ` Graeme Gregory
2010-10-21 10:17       ` Frans Meulenbroeks
2010-10-21 10:20       ` Frans Meulenbroeks
2010-10-21 10:38         ` Richard Purdie
2010-10-21 12:01           ` Frans Meulenbroeks
2010-10-21 13:46             ` Maupin, Chase
2010-10-21 14:21               ` Chris Larson
2010-10-21 16:11                 ` Denys Dmytriyenko
2010-11-01 21:04         ` Tom Rini
2010-10-21 10:48     ` Richard Purdie
2010-10-21 11:22       ` Graeme Gregory
2010-10-21 14:21     ` Chris Larson
2010-10-21 10:36 ` Richard Purdie
2010-11-02  7:02 ` Frans Meulenbroeks
2010-11-02 20:46   ` Koen Kooi
2010-11-02 21:14     ` Eric Bénard
2010-11-02 21:19       ` Koen Kooi
2010-11-02 21:21       ` Tom Rini
2010-11-03  8:15         ` Frans Meulenbroeks [this message]
2010-11-03 14:59           ` Tom Rini
2010-11-03 18:59             ` Frans Meulenbroeks
2010-11-03 20:17               ` Tom Rini
2010-11-03 20:44                 ` Khem Raj
2010-11-03 21:06                   ` Frans Meulenbroeks
2010-11-03 22:13                     ` Khem Raj
2010-11-04  7:48                   ` Koen Kooi
2010-11-02 21:57     ` Khem Raj
