* RFC: Improving the developer workflow
@ 2014-08-07 9:10 Paul Eggleton
2014-08-07 10:13 ` [yocto] " Alex J Lennon
` (3 more replies)
0 siblings, 4 replies; 23+ messages in thread
From: Paul Eggleton @ 2014-08-07 9:10 UTC (permalink / raw)
To: openembedded-core, yocto
Hi folks,
As most of you know within the Yocto Project and OpenEmbedded we've been
trying to figure out how to improve the OE developer workflow. This potentially
covers a lot of different areas, but one in particular where I think we can
have some impact is helping application developers - people who are working on
some application or component of the system, rather than the OS as a whole.
Currently, what we provide is an installable SDK containing the toolchain,
libraries and headers; we also have the ADT which additionally provides some
Eclipse integration (which I'll leave aside for the moment) and has some
ability to be extended / updated using opkg only.
The pros:
* Self contained, no extra dependencies
* Relocatable, can be installed anywhere
* Runs on lots of different systems
* Mostly pre-configured for the desired target machine
The cons:
* No ability to migrate into the build environment
* No helper scripts/tools beyond the basic environment setup
* No real upgrade workflow (package feed upgrade possible in theory, but no
tools to help manage the feeds and difficult to scale with multiple releases and
targets)
As the ADT/SDK stand, they do provide an easy way to run the cross-compilation
on a separate machine; but that's about it - you're somewhat on your own as
far as telling whatever build system your application (or some third-party
library you need) uses to use that toolchain, and completely on your own as
far as getting your changes to that code into your image or getting those
changes integrated into the build system is concerned. We can do better.
Bridging the gap
================
We have a lot of power in the build system - e.g. the cross-compilation tools
and helper classes. I think it would help a lot if we could give the developer
access to some of this power, but in a manner that does not force the
developer to have to deal with the pain of actually setting up the build
system and keeping it running. I think there is a path forward where we can
integrate the build system into the SDK and wrap it in some helper scripts in
such a way that we:
* Avoid the need to configure the build system - it comes pre-configured. The
developer is not expected to need to touch the configuration files at all.
* Avoid building anything on the developer's machine that we don't need to -
lock the sstate signatures such that only components that the developer ends
up building are the ones that they've selected to work on, which are tracked
by the tools, and the rest comes from sstate - and perhaps a portion of the
sstate is already part of the downloaded SDK to avoid too much fetching during
builds, either in the form of sstate packages or already populated into the
target sysroot and other places within the internal copy of the build system.
This should reduce the likelihood of the system breaking on the developer's
machine as well as reduce the number of host dependencies.
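To make the locked-signatures idea concrete, a configuration fragment along
these lines could ship inside the SDK (variable names and signatures here are
purely illustrative, not a proposed interface):

```bitbake
# Illustrative locked-signature fragment (names and hashes are a sketch,
# not a final interface): pin task signatures so that anything not being
# worked on is pulled from sstate instead of being rebuilt.
SIGGEN_LOCKEDSIGS_TYPES = "t-core"
SIGGEN_LOCKEDSIGS_t-core = "\
    zlib:do_populate_sysroot:e3b0c44298fc1c149afbf4c8996fb924 \
    openssl:do_populate_sysroot:2c26b46b68ffc68ff99b453c1d3041 \
    "
```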
* Provide tools to add new software - in practical terms this means creating a
new recipe in an automated/guided manner (figuring out as much as we can
looking at the source tree) and then configuring the build to use the
developer's external source tree rather than SRC_URI, by making use of the
externalsrc class. This also gives a head start when it comes to integrating
the new software into the build - you already have a recipe, even if some
additional tweaking is required.
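As a sketch, the recipe such a tool generates might carry something like the
following to point the build at the developer's tree (the recipe name and
source path are hypothetical examples):

```bitbake
# Hypothetical fragment a "add new software" tool could generate:
# build from the developer's checkout rather than fetching SRC_URI.
inherit externalsrc
EXTERNALSRC = "/home/user/src/myapp"
```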
* Provide tools to allow modifying software for which a recipe already exists.
If the user has an external source tree we use that, otherwise we can fetch
the source, apply any patches and place the result in an external source tree,
possibly managed with git. (It's fair to say this is perhaps less of an
application developer type task, but still an important one and fairly simple
to add once we have the rest of the tooling.)
* Provide tools to get your changes onto the target in order to test them.
With access to the build system, rebuilding the image with changes to a target
component is fairly trivial; but we can go further - assuming a network
connection to the target is available we can provide tools to simply deploy
the files installed by the changed recipe onto the running device (using an
"sstate-like" mechanism - remove the old list of files and then install the new
ones).
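The "remove the old list of files and then install the new ones" mechanism
could be sketched as below, shown against a locally accessible root for
simplicity (in practice this would run over SSH; the manifest format and the
function name are assumptions for illustration only):

```python
import os
import shutil

def deploy_files(target_root, old_list, new_files, new_list):
    """Replace a recipe's files on a target rootfs, sstate-style.

    old_list: manifest of files installed by the previous build
    new_files: directory containing the freshly built/installed files
    new_list: path at which to record the new manifest
    (Assumes one path per line and no whitespace in paths.)
    """
    # remove everything the previous build installed
    if os.path.exists(old_list):
        with open(old_list) as f:
            for rel in f.read().split():
                path = os.path.join(target_root, rel)
                if os.path.exists(path):
                    os.remove(path)
    # copy the new files in and record the new manifest
    installed = []
    for dirpath, _, files in os.walk(new_files):
        for name in files:
            src = os.path.join(dirpath, name)
            rel = os.path.relpath(src, new_files)
            dst = os.path.join(target_root, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(src, dst)
            installed.append(rel)
    with open(new_list, "w") as f:
        f.write("\n".join(installed))
```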
* Provide tools to get your changes to the code or the metadata into a form
that you can submit somewhere.
For compilation, this would mean that we use the normal native / cross tools
instead of nativesdk; the only parts that remain as nativesdk are those that
we need to provide to isolate the SDK from differences in the host system (such
as Python / libc). We'll need to do some additional loader tricks on top of
what we currently do for nativesdk so that the native / cross tools can make
use of the nativesdk libc in the SDK, but this shouldn't be a serious barrier.
Example workflow
================
I won't give a workflow for every possible usage, but just to give a basic
example - let's assume you want to build a "new" piece of software for which
you have your own source tree on your machine. The rough set of steps
would be something like this (the command names given shouldn't be read
as final):
1. Install the SDK
2. Run a setup script to make the SDK tools available
3. Add a new recipe with "sdktool add <recipename>" - interactive process. The
tool records that <recipename> is being worked on, creates a recipe that can
be used to build the software using your external source tree, and places the
recipe where it will be used automatically by other steps.
4. Build the recipe with "sdktool build <recipename>". This probably only goes
as far as do_install or possibly do_package_qa; in any case the QA process
would be less stringent than with the standard build system, in order to
avoid putting too many barriers in the way of testing on the target.
5. Fix any failures and repeat from the previous step as necessary.
6. Deploy changes to target with "sdktool deploy-target <ip address>" assuming
SSH is available on the target. Alternatively "sdktool build-image
<imagename>" can be used to regenerate an image with the changes in it;
"sdktool runqemu" could do that (if necessary) and then run the result within
QEMU with the appropriate options set.
Construction & Updating
=======================
At some point, you need to update the installed SDK after changes on the build
system side. Our current SDK has no capability to do this - you just install a
new one and delete the old. The ADT supports opkg, but then you have another
set of feeds to maintain and we don't really provide any tools to help with
that.
If we're already talking about replacing the SDK's target sysroot and most of
the host part by using the build system + pre-built components from sstate,
then it would perhaps make sense to construct the new SDK itself from sstate
packages and add some tools around filtering and publishing the sstate cache at
the same time. (We can even look at ways to compare the contents of two sstate
packages which have different signatures to see if the output really has
changed, and simply not publish the new sstate package and preserve the locked
signature for those that have not.)
We can then have a simple update tool shipped with the SDK along with a
manifest of the components + their signatures. The update tool downloads the
new manifest from the server and removes / extracts sstate packages until the
result matches the manifest.
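The update logic described above amounts to a set difference between two
manifests; a minimal sketch, assuming each manifest simply maps component
name to task signature:

```python
def plan_update(installed, latest):
    """Return (to_remove, to_fetch) such that removing the first set of
    sstate packages and extracting the second makes the installed SDK
    match the latest manifest. Manifests are dicts: name -> signature."""
    to_remove = [name for name, sig in installed.items()
                 if latest.get(name) != sig]
    to_fetch = [name for name, sig in latest.items()
                if installed.get(name) != sig]
    return to_remove, to_fetch
```

A changed signature shows up in both lists (remove the old package, fetch the
new one), while unchanged components are left alone entirely.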
Where to from here?
===================
I'd like to get some feedback on the above. Within the Yocto Project we've
committed to doing something to improve the developer experience in the 1.7
timeframe, so I'd hope that if there are no violent objections we could at
least have enough of this working for 1.7 so that the concept can be put to
the test.
[Note: we would preserve the ability to produce the existing SDK as-is - we
wouldn't be outright replacing that, at least not just yet; it will likely
replace the ADT more immediately however.]
Cheers,
Paul
--
Paul Eggleton
Intel Open Source Technology Centre
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: RFC: Improving the developer workflow
2014-08-07 9:10 RFC: Improving the developer workflow Paul Eggleton
@ 2014-08-07 10:13 ` Alex J Lennon
2014-08-07 12:09 ` Bryan Evenson
` (2 subsequent siblings)
3 siblings, 0 replies; 23+ messages in thread
From: Alex J Lennon @ 2014-08-07 10:13 UTC (permalink / raw)
To: Paul Eggleton; +Cc: yocto, openembedded-core
On 07/08/2014 10:10, Paul Eggleton wrote:
> Hi folks,
>
> As most of you know within the Yocto Project and OpenEmbedded we've been
> trying to figure out how to improve the OE developer workflow. This potentially
> covers a lot of different areas, but one in particular where I think we can
> have some impact is helping application developers - people who are working on
> some application or component of the system, rather than the OS as a whole.
>
> Currently, what we provide is an installable SDK containing the toolchain,
> libraries and headers; we also have the ADT which additionally provides some
> Eclipse integration (which I'll leave aside for the moment) and has some
> ability to be extended / updated using opkg only.
>
> The pros:
>
> * Self contained, no extra dependencies
> * Relocatable, can be installed anywhere
> * Runs on lots of different systems
> * Mostly pre-configured for the desired target machine
>
> The cons:
>
> * No ability to migrate into the build environment
> * No helper scripts/tools beyond the basic environment setup
> * No real upgrade workflow (package feed upgrade possible in theory, but no
> tools to help manage the feeds and difficult to scale with multiple releases and
> targets)
>
Very interesting Paul.
fwiw upgrade solutions are something that is still a real need imho, as
I think we discussed at one of the FOSDEMs.
(The other real need being an on-board test framework, again imho, and
which I believe is ongoing)
Historically I, and I suspect others, have done full image updates of
the storage medium, onboard flash or whatever but these images are
getting so big now that I am trying to move away from that and into
using package feeds for updates to embedded targets.
My initial experience has been that
- as you mention it would be really helpful to have something "more"
around management of package feed releases / targets.
- some automation around deployment of package feeds to production
servers would help, or at least some documentation on best practice.
The other big issue I am seeing, which is mostly my own fault thus far,
is that I have sometimes taken the easy option of modifying the root
filesystem image in various ways within the image recipe (for example
changing a Webmin configuration, perhaps).
However, when I then come to upgrade a package in-situ, such as Webmin,
the changes are then overwritten.
I think this is probably also an issue when upgrading packages that have
had local modifications made, and I wonder whether there's a solution to
this that I'm not aware of?
I am aware of course that mainstream package management tools allow
diffing, upgrading, ignoring and such but I am unsure as to how that is
supported under Yocto at present?
As a minimum I will have to make sure my OEM recipe changes are all in
the correct .bbappends, I believe (more best practice notes needed there),
and I definitely need to understand better how configuration file
changes are handled when upgrading packages.
Cheers,
Alex
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: RFC: Improving the developer workflow
2014-08-07 9:10 RFC: Improving the developer workflow Paul Eggleton
2014-08-07 10:13 ` [yocto] " Alex J Lennon
@ 2014-08-07 12:09 ` Bryan Evenson
2014-08-08 8:04 ` Nicolas Dechesne
2014-08-08 12:56 ` Mike Looijmans
3 siblings, 0 replies; 23+ messages in thread
From: Bryan Evenson @ 2014-08-07 12:09 UTC (permalink / raw)
To: Paul Eggleton, openembedded-core, yocto
Paul,
I am using the Yocto Project tools almost purely for userspace applications. I have tried to use the ADT and SDK in the past with limited success. I try to keep my local poky/oe working copies nearly up to date, which would mean rebuilding the SDK/ADT for each poky point release. For me, I've had better success setting up an Eclipse project to point to the proper directories in the sysroot and then copying the Eclipse project for each new application.
Any of the suggestions below to make the ADT or SDK easier to use and maintain would be appreciated.
Regards,
Bryan
> -----Original Message-----
> From: yocto-bounces@yoctoproject.org [mailto:yocto-
> bounces@yoctoproject.org] On Behalf Of Paul Eggleton
> Sent: Thursday, August 07, 2014 5:11 AM
> To: openembedded-core@lists.openembedded.org;
> yocto@yoctoproject.org
> Subject: [yocto] RFC: Improving the developer workflow
>
> [full quote of original message trimmed]
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: RFC: Improving the developer workflow
2014-08-07 10:13 ` [yocto] " Alex J Lennon
@ 2014-08-07 13:05 ` Paul Eggleton
-1 siblings, 0 replies; 23+ messages in thread
From: Paul Eggleton @ 2014-08-07 13:05 UTC (permalink / raw)
To: Alex J Lennon; +Cc: yocto, openembedded-core
Hi Alex,
On Thursday 07 August 2014 11:13:02 Alex J Lennon wrote:
> On 07/08/2014 10:10, Paul Eggleton wrote:
> fwiw upgrade solutions are something that is still a real need imho, as
> I think we discussed at one of the FOSDEMs.
>
> (The other real need being an on-board test framework, again imho, and
> which I believe is ongoing)
Indeed; I think we've made some pretty good progress here in that the Yocto
Project QA team is now using the automated runtime testing to do QA tests on
real hardware. Reporting and monitoring of ptest results is also being looked
at as well as integration with LAVA.
> Historically I, and I suspect others, have done full image updates of
> the storage medium, onboard flash or whatever but these images are
> getting so big now that I am trying to move away from that and into
> using package feeds for updates to embedded targets.
Personally, given how fragile package management can end up being, I'm convinced
that full-image updates are the way to go for a lot of cases, but ideally with
some intelligence so that you only ship the changes (at a filesystem level
rather than a package or file level). This ensures that an upgraded image on
one device ends up exactly identical to any other device including a newly
deployed one. Of course it does assume that you have a read-only rootfs and
keep your configuration data / logs / other writeable data on a separate
partition or storage medium. However, beyond improvements to support for
having a read-only rootfs we haven't really achieved anything in terms of
out-of-the-box support for this, mainly due to lack of resources.
However, whilst I haven't had a chance to look at it closely, there has been
some work on this within the community:
http://sbabic.github.io/swupdate/swupdate.html
https://github.com/sbabic/swupdate
https://github.com/sbabic/meta-swupdate/
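The "ship only the changes" idea can be illustrated at the file level (a
simplification; as noted above, a filesystem- or block-level delta that also
handles deletions, symlinks and metadata would be preferable):

```python
# Sketch: find which files differ between two unpacked rootfs trees,
# so only the changed content needs shipping to the device.
import hashlib
from pathlib import Path

def tree_hashes(root):
    """Map each regular file's path (relative to root) to its content hash."""
    root = Path(root)
    return {str(p.relative_to(root)):
                hashlib.sha256(p.read_bytes()).hexdigest()
            for p in root.rglob("*") if p.is_file()}

def changed_files(old_root, new_root):
    """Files in the new tree that are absent or different in the old one."""
    old, new = tree_hashes(old_root), tree_hashes(new_root)
    return sorted(rel for rel, h in new.items() if old.get(rel) != h)
```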
> My initial experience has been that
>
> - as you mention it would be really helpful to have something "more"
> around management of package feed releases / targets.
>
> - some automation around deployment of package feeds to production
> servers would help, or at least some documentation on best practice.
So the scope of my proposal is a little bit narrower, i.e. for the SDK; and
I'm suggesting that we mostly bypass the packaging system since it doesn't
really add much benefit and sometimes gets in the way when you're an
application developer in the middle of development and the level of churn is
high (as opposed to making incremental changes after the product's release).
> The other big issue I am seeing, which is mostly my own fault thus far,
> is that I have sometimes taken the easy option of modifying the root
> filesystem image in various ways within the image recipe (for example
> changing a Webmin configuration perhaps)
>
> However when I then come to upgrade a package in-situ, such as Webmin,
> the changes are then overwritten.
>
> I think this is probably also an issue when upgrading packages that have
> had local modifications made, and I wonder whether there's a solution to
> this that I'm not aware of?
We do have CONFFILES to point to configuration files that may be modified (and
thus should not just be overwritten on upgrade). There's not much logic in the
actual build system to deal with this, we just pass it to the package manager;
but it does work, and recipes that deploy configuration files (and bbappends, if
the configuration file is being added rather than changed from there) should set
CONFFILES so that the right thing happens on upgrade if you are using a
package manager on the target.
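For example, a recipe or bbappend shipping a user-modifiable configuration
file would set something like this (the package name and path are
illustrative):

```bitbake
# Mark the file as a configuration file so the package manager preserves
# or merges local edits on upgrade instead of silently overwriting them.
CONFFILES_${PN} = "${sysconfdir}/webmin/miniserv.conf"
```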
A related issue is that for anything other than temporary changes it's often
not clear which recipe you need to change/append in order to provide your own
version of a particular config file. FYI I entered the following enhancement bug
some months ago to add a tool to help with that:
https://bugzilla.yoctoproject.org/show_bug.cgi?id=6447
> I am aware of course that mainstream package management tools allow
> diffing, upgrading, ignoring and such but I am unsure as to how that is
> supported under Yocto at present?
There isn't really any support for this at the moment, no; I think we'd want
to try to do this kind of thing at the build system end though to avoid tying
ourselves to one particular package manager.
Cheers,
Paul
--
Paul Eggleton
Intel Open Source Technology Centre
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [yocto] RFC: Improving the developer workflow
@ 2014-08-07 13:05 ` Paul Eggleton
0 siblings, 0 replies; 23+ messages in thread
From: Paul Eggleton @ 2014-08-07 13:05 UTC (permalink / raw)
To: Alex J Lennon; +Cc: yocto, openembedded-core
Hi Alex,
On Thursday 07 August 2014 11:13:02 Alex J Lennon wrote:
> On 07/08/2014 10:10, Paul Eggleton wrote:
> fwiw Upgrade solutions are something that is still a read need imho, as
> I think we discussed at one of the FOSDEMs.
>
> (The other real need being an on-board test framework, again imho, and
> which I believe is ongoing)
Indeed; I think we've made some pretty good progress here in that the Yocto
Project QA team is now using the automated runtime testing to do QA tests on
real hardware. Reporting and monitoring of ptest results is also being looked
at as well as integration with LAVA.
> Historically I, and I suspect others, have done full image updates of
> the storage medium, onboard flash or whatever but these images are
> getting so big now that I am trying to move away from that and into
> using package feeds for updates to embedded targets.
Personally with how fragile package management can end up being, I'm convinced
that full-image updates are the way to go for a lot of cases, but ideally with
some intelligence so that you only ship the changes (at a filesystem level
rather than a package or file level). This ensures that an upgraded image on
one device ends up exactly identical to any other device including a newly
deployed one. Of course it does assume that you have a read-only rootfs and
keep your configuration data / logs / other writeable data on a separate
partition or storage medium. However, beyond improvements to support for
having a read-only rootfs we haven't really achieved anything in terms of out-
of-the-box support for this, mainly due to lack of resources.
However, whilst I haven't had a chance to look at it closely, there has been
some work on this within the community:
http://sbabic.github.io/swupdate/swupdate.html
https://github.com/sbabic/swupdate
https://github.com/sbabic/meta-swupdate/
> My initial experience has been that
>
> - as you mention it would be really helpful to have something "more"
> around management of package feed releases / targets.
>
> - some automation around deployment of package feeds to production
> servers would help, or at least some documentation on best practice.
So the scope of my proposal is a little bit narrower, i.e. for the SDK; and
I'm suggesting that we mostly bypass the packaging system since it doesn't
really add much benefit and sometimes gets in the way when you're an
application developer in the middle of development and the level of churn is
high (as opposed to making incremental changes after the product's release).
> The other big issue I am seeing, which is mostly my own fault thus far,
> is that I have sometimes taken the easy option of modifying the root
> filesystem image in various ways within the image recipe (for example
> changing a Webmin configuration perhaps)
>
> However when I then come to upgrade a package in-situ, such as Webmin,
> the changes are then overwritten.
>
> I think this is probably also an issue when upgrading packages that have
> had local modifications made, and I wonder whether there's a solution to
> this that I'm not aware of?
We do have CONFFILES to point to configuration files that may be modified (and
thus should not just be overwritten on upgrade). There's not much logic in the
actual build system to deal with this, we just pass it to the package manager;
but it does work, and recipes that deploy configuration files (and bbappends, if
the configuration file is being added rather than changed from there) should set
CONFFILES so that the right thing happens on upgrade if you are using a
package manager on the target.
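As a rough sketch, a recipe installing a config file (a hypothetical "myapp" recipe
here) might mark it like this, using the override syntax current at the time:

```
do_install() {
    install -d ${D}${sysconfdir}/myapp
    install -m 0644 ${WORKDIR}/myapp.conf ${D}${sysconfdir}/myapp/myapp.conf
}

# Tell the package manager this file may be modified on the target,
# so it should be preserved rather than overwritten on upgrade
CONFFILES_${PN} = "${sysconfdir}/myapp/myapp.conf"
```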
A related issue is that for anything other than temporary changes it's often
not clear which recipe you need to change/append in order to provide your own
version of a particular config file. FYI I entered the following enhancement bug
some months ago to add a tool to help with that:
https://bugzilla.yoctoproject.org/show_bug.cgi?id=6447
> I am aware of course that mainstream package management tools allow
> diffing, upgrading, ignoring and such but I am unsure as to how that is
> supported under Yocto at present?
There isn't really any support for this at the moment, no; I think we'd want
to try to do this kind of thing at the build system end though to avoid tying
ourselves to one particular package manager.
Cheers,
Paul
--
Paul Eggleton
Intel Open Source Technology Centre
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: RFC: Improving the developer workflow
2014-08-07 13:05 ` [yocto] " Paul Eggleton
@ 2014-08-07 13:14 ` Alex J Lennon
-1 siblings, 0 replies; 23+ messages in thread
From: Alex J Lennon @ 2014-08-07 13:14 UTC (permalink / raw)
To: Paul Eggleton; +Cc: yocto, openembedded-core
On 07/08/2014 14:05, Paul Eggleton wrote:
> Hi Alex,
>
> On Thursday 07 August 2014 11:13:02 Alex J Lennon wrote:
>> On 07/08/2014 10:10, Paul Eggleton wrote:
>> fwiw Upgrade solutions are something that is still a real need imho, as
>> I think we discussed at one of the FOSDEMs.
>>
>> (The other real need being an on-board test framework, again imho, and
>> which I believe is ongoing)
> Indeed; I think we've made some pretty good progress here in that the Yocto
> Project QA team is now using the automated runtime testing to do QA tests on
> real hardware. Reporting and monitoring of ptest results is also being looked
> at as well as integration with LAVA.
>
Great news. I really want to look into this but as ever time is the
constraining factor.
>> Historically I, and I suspect others, have done full image updates of
>> the storage medium, onboard flash or whatever but these images are
>> getting so big now that I am trying to move away from that and into
>> using package feeds for updates to embedded targets.
> Personally with how fragile package management can end up being, I'm convinced
> that full-image updates are the way to go for a lot of cases, but ideally with
> some intelligence so that you only ship the changes (at a filesystem level
> rather than a package or file level). This ensures that an upgraded image on
> one device ends up exactly identical to any other device including a newly
> deployed one. Of course it does assume that you have a read-only rootfs and
> keep your configuration data / logs / other writeable data on a separate
> partition or storage medium. However, beyond improvements to support for
> having a read-only rootfs we haven't really achieved anything in terms of out-
> of-the-box support for this, mainly due to lack of resources.
Deltas. Yes, I've seen binary deltas attempted over the years, with
varying degrees of success.
I can see how what you say could work at a filesystem level if we could
separate out the writeable data, yes. Not sure I've seen any tooling
around this though?
Back in the day when I first started out with Arcom Embedded Linux in
the late '90s I had us do something similar with a read-only JFFS2
system partition and then a separate app/data partition. That seemed to
work OK. Maybe I need to revisit that.
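For reference, that kind of split can be expressed in a few fstab lines; the
device names and mount points below are invented for illustration:

```
/dev/mtdblock2  /      jffs2  ro               0  0
/dev/mtdblock3  /data  jffs2  rw,noatime       0  0
tmpfs           /tmp   tmpfs  defaults         0  0
```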
> However, whilst I haven't had a chance to look at it closely, there has been
> some work on this within the community:
>
> http://sbabic.github.io/swupdate/swupdate.html
> https://github.com/sbabic/swupdate
> https://github.com/sbabic/meta-swupdate/
I'll take a look. Thanks.
>
>> My initial experience has been that
>>
>> - as you mention it would be really helpful to have something "more"
>> around management of package feed releases / targets.
>>
>> - some automation around deployment of package feeds to production
>> servers would help, or at least some documentation on best practice.
> So the scope of my proposal is a little bit narrower, i.e. for the SDK; and
> I'm suggesting that we mostly bypass the packaging system since it doesn't
> really add much benefit and sometimes gets in the way when you're an
> application developer in the middle of development and the level of churn is
> high (as opposed to making incremental changes after the product's release).
Mmm. Yes I can understand that. Same here.
>> The other big issue I am seeing, which is mostly my own fault thus far,
>> is that I have sometimes taken the easy option of modifying the root
>> filesystem image in various ways within the image recipe (for example
>> changing a Webmin configuration perhaps)
>>
>> However when I then come to upgrade a package in-situ, such as Webmin,
>> the changes are then overwritten.
>>
>> I think this is probably also an issue when upgrading packages that have
>> had local modifications made, and I wonder whether there's a solution to
>> this that I'm not aware of?
> We do have CONFFILES to point to configuration files that may be modified (and
> thus should not just be overwritten on upgrade). There's not much logic in the
> actual build system to deal with this, we just pass it to the package manager;
> but it does work, and recipes that deploy configuration files (and bbappends, if
> the configuration file is being added rather than changed from there) should set
> CONFFILES so that the right thing happens on upgrade if you are using a
> package manager on the target.
>
> A related issue is that for anything other than temporary changes it's often
> not clear which recipe you need to change/append in order to provide your own
> version of a particular config file. FYI I entered the following enhancement bug
> some months ago to add a tool to help with that:
>
> https://bugzilla.yoctoproject.org/show_bug.cgi?id=6447
Interesting, thanks. I don't recall seeing this in recipes. I might have
missed it, or maybe not many people are using this feature in their
recipes? Of course the next issue is not knowing what you want to do
with those conf files during an unattended upgrade onto an embedded box.
>> I am aware of course that mainstream package management tools allow
>> diffing, upgrading, ignoring and such but I am unsure as to how that is
>> supported under Yocto at present?
> There isn't really any support for this at the moment, no; I think we'd want
> to try to do this kind of thing at the build system end though to avoid tying
> ourselves to one particular package manager.
>
Indeed.
Cheers!
Alex
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: RFC: Improving the developer workflow
2014-08-07 13:05 ` [yocto] " Paul Eggleton
@ 2014-08-08 7:54 ` Nicolas Dechesne
-1 siblings, 0 replies; 23+ messages in thread
From: Nicolas Dechesne @ 2014-08-08 7:54 UTC (permalink / raw)
To: Paul Eggleton
Cc: Yocto list discussion, Patches and discussions about the oe-core layer
On Thu, Aug 7, 2014 at 3:05 PM, Paul Eggleton
<paul.eggleton@linux.intel.com> wrote:
> Personally with how fragile package management can end up being, I'm convinced
> that full-image updates are the way to go for a lot of cases, but ideally with
> some intelligence so that you only ship the changes (at a filesystem level
> rather than a package or file level). This ensures that an upgraded image on
> one device ends up exactly identical to any other device including a newly
> deployed one. Of course it does assume that you have a read-only rootfs and
> keep your configuration data / logs / other writeable data on a separate
> partition or storage medium. However, beyond improvements to support for
> having a read-only rootfs we haven't really achieved anything in terms of out-
> of-the-box support for this, mainly due to lack of resources.
>
> However, whilst I haven't had a chance to look at it closely, there has been
> some work on this within the community:
>
> http://sbabic.github.io/swupdate/swupdate.html
> https://github.com/sbabic/swupdate
> https://github.com/sbabic/meta-swupdate/
>
fwiw, Ubuntu has started to do something like that for their phone images, see
https://wiki.ubuntu.com/ImageBasedUpgrades
I haven't used it nor looked into the details... I had just heard about
it, and thought it was worth mentioning here. However, the main design
idea from that wiki page is exactly what we are discussing here, i.e.
build images on the 'server' side using our regular tools, but deploy
binary differences to targets.
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [OE-core] RFC: Improving the developer workflow
2014-08-07 9:10 RFC: Improving the developer workflow Paul Eggleton
@ 2014-08-08 8:04 ` Nicolas Dechesne
2014-08-07 12:09 ` Bryan Evenson
` (2 subsequent siblings)
3 siblings, 0 replies; 23+ messages in thread
From: Nicolas Dechesne @ 2014-08-08 8:04 UTC (permalink / raw)
To: Paul Eggleton
Cc: Yocto list discussion, Patches and discussions about the oe-core layer
On Thu, Aug 7, 2014 at 11:10 AM, Paul Eggleton
<paul.eggleton@linux.intel.com> wrote:
> Example workflow
> ================
>
> I won't give a workflow for every possible usage, but just to give a basic
> example - let's assume you want to build a "new" piece of software for which
> you have your own source tree on your machine. The rough set of steps required
> would be something like this (rough, e.g. the command names given shouldn't be
> read as final):
>
> 1. Install the SDK
>
> 2. Run a setup script to make the SDK tools available
>
> 3. Add a new recipe with "sdktool add <recipename>" - interactive process. The
> tool records that <recipename> is being worked on, creates a recipe that can
> be used to build the software using your external source tree, and places the
> recipe where it will be used automatically by other steps.
>
> 4. Build the recipe with "sdktool build <recipename>". This probably only goes
> as far as do_install or possibly do_package_qa; in any case the QA process
> would be less stringent than with the standard build system however in order
> to avoid putting too many barriers in the way of testing on the target.
>
> 5. Fix any failures and repeat from the previous step as necessary.
>
> 6. Deploy changes to target with "sdktool deploy-target <ip address>" assuming
> SSH is available on the target. Alternatively "sdktool build-image
> <imagename>" can be used to regenerate an image with the changes in it;
> "sdktool runqemu" could do that (if necessary) and then run the result within
> QEMU with the appropriate options set.
Coincidentally, I was giving an OE workshop this week, and when I
explained the OE SDK, someone immediately pointed out that it is
quite limited because:
1- the SDK cannot be used to generate deployable packages, e.g. using
the SDK to create ipk/deb/rpm packages that can be delivered to
targets/clients, not just for debugging but also for production when
the production system has package management support.
2- the SDK cannot be used to regenerate updated images, e.g. Company A
delivers an SDK + board, Company B is making a product using the SDK
(adding content) and wants to be able to make new images with the new
content in order to sell/deploy it.
3- the SDK itself cannot be upgraded when the 'base image' and SDK are updated.
4- SDK users cannot add content to the SDK, e.g. I am an SDK user, I
create a library and I want that library to be in the SDK now.
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [OE-core] RFC: Improving the developer workflow
2014-08-07 9:10 RFC: Improving the developer workflow Paul Eggleton
@ 2014-08-08 12:56 ` Mike Looijmans
2014-08-07 12:09 ` Bryan Evenson
` (2 subsequent siblings)
3 siblings, 0 replies; 23+ messages in thread
From: Mike Looijmans @ 2014-08-08 12:56 UTC (permalink / raw)
To: Paul Eggleton, openembedded-core, yocto
On 08/07/2014 11:10 AM, Paul Eggleton wrote:
...
> * Provide tools to allow modifying software for which a recipe already exists.
> If the user has an external source tree we use that, otherwise we can fetch
> the source, apply any patches and place the result in an external source tree,
> possibly managed with git. (It's fair to say this is perhaps less of an
> application developer type task, but still an important one and fairly simple
> to add once we have the rest of the tooling.)
This has been the most awkward of all OE uses so far. It hasn't improved
over the years either.
In general, people (both external customers and internal colleagues) have
no objection whatsoever to cloning the OE repos and running bitbake
themselves.
I haven't been able to come up with a good solution. I always try to
test and develop my software on a regular PC, but for some things (e.g.
access to the FPGA, or optimizing inner loop code with NEON
instructions) you can't do without actual on-target deployment.
So I usually end up with a mix of manual commands and special-purpose
shell scripts that work for just one package. For small packages, I tend
to commit locally, and then re-run the package up to the "install"
phase. For big things that take several minutes to compile I'm usually
manually moving files around and struggling to keep the files in my
repository and what the build will see in sync.
Any small step forward in this area will be a giant leap for the
developers...
>
> * Provide tools to get your changes onto the target in order to test them.
> With access to the build system, rebuilding the image with changes to a target
> component is fairly trivial; but we can go further - assuming a network
> connection to the target is available we can provide tools to simply deploy
> the files installed by the changed recipe onto the running device (using an
> "sstate-like" mechanism - remove the old list of files and then install the new
> ones).
If it's possible to run the "install" target, this may be as simple as
making a tar from the installed "image" directory and unpacking it on
the target.
I tend to do this in a one-liner if I have ssh access:
tar -cf - -C tmp.../${P}/image . | ssh target tar xf - -C /
I think that covers 99% of the use cases I can come up with...
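Mike's tar-over-ssh deploy can be tried without a device by substituting a local
directory for the target's rootfs; the "image" and "rootfs" paths below are
stand-ins for illustration, not real OE paths:

```shell
# Simulate "tar from the image dir, unpack on the target" locally.
set -e
image=$(mktemp -d)    # stands in for tmp/work/.../${P}/image
rootfs=$(mktemp -d)   # stands in for the target's /
mkdir -p "$image/usr/bin"
printf '#!/bin/sh\necho hello\n' > "$image/usr/bin/myapp"
chmod 0755 "$image/usr/bin/myapp"

# Same shape as: tar -cf - -C .../image . | ssh target tar xf - -C /
tar -cf - -C "$image" . | tar -xf - -C "$rootfs"

"$rootfs/usr/bin/myapp"    # prints: hello
```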
> * Provide tools to get your changes to the code or the metadata into a form
> that you can submit somewhere.
If you have git in the "modify recipe" stage, this should be a breeze.
Just use git push/bundle/send-email/format-patch/... to send them where
they need to go.
Mike.
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: RFC: Improving the developer workflow
2014-08-07 13:05 ` [yocto] " Paul Eggleton
@ 2014-08-08 15:57 ` Alex J Lennon
-1 siblings, 0 replies; 23+ messages in thread
From: Alex J Lennon @ 2014-08-08 15:57 UTC (permalink / raw)
To: Paul Eggleton; +Cc: yocto, openembedded-core
Hi Paul,
> Personally with how fragile package management can end up being, I'm convinced
> that full-image updates are the way to go for a lot of cases, but ideally with
> some intelligence so that you only ship the changes (at a filesystem level
> rather than a package or file level). This ensures that an upgraded image on
> one device ends up exactly identical to any other device including a newly
> deployed one. Of course it does assume that you have a read-only rootfs and
> keep your configuration data / logs / other writeable data on a separate
> partition or storage medium. However, beyond improvements to support for
> having a read-only rootfs we haven't really achieved anything in terms of out-
> of-the-box support for this, mainly due to lack of resources.
>
> However, whilst I haven't had a chance to look at it closely, there has been
> some work on this within the community:
>
> http://sbabic.github.io/swupdate/swupdate.html
> https://github.com/sbabic/swupdate
> https://github.com/sbabic/meta-swupdate/
>
>
I had a quick look at this. It's interesting. If I am reading this
correctly it's based on the old:
-> Bootloader runs Partition A
-> Update Partition B, set Bootloader to run Partition B
-> On failure stay on Partition A and retry update.
-> Bootloader runs Partition B
-> Update Partition A, set Bootloader to run Partition A
-> etc.
We've done this type of thing before and it works well. Of course the
drawback is the amount of flash you need to achieve it, but it is a
good robust system.
I'd be interested to see how this could work with filesystem deltas,
say. I don't _think_ that is documented here?
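The A/B commit step can be mimicked at directory level with an atomic symlink
swap; the slot layout below is invented to illustrate the
switch-only-after-complete-update property, not how a real bootloader does it:

```shell
# A/B update sketch: stage the new release into the inactive slot,
# then atomically repoint "current" only once it is complete.
set -e
top=$(mktemp -d)
mkdir "$top/slot-a" "$top/slot-b"
echo "release 1" > "$top/slot-a/version"
ln -s slot-a "$top/current"                 # "bootloader" points at A

echo "release 2" > "$top/slot-b/version"    # stage update into B
# Commit: rename(2) is atomic, so readers see either the old slot
# or the new one, never a half-written state
ln -sfn slot-b "$top/current.tmp"
mv -T "$top/current.tmp" "$top/current"

cat "$top/current/version"                  # prints: release 2
```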
...
Thinking a little further, what would also really interest me would be
to consider using the transactionality of the underlying filesystem or
block-management layer for the update process. Given that nowadays
journalling and log-structured filesystems are already designed to roll
back when file/metadata modifications are interrupted, surely we should
be able to start a macro-transaction at the start of the partition
update, and if that update doesn't complete with a macro-commit then
the f/s layer should be able to automatically roll itself back? Perhaps
the same could be done at a block-management layer?
Cheers,
Alex
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [yocto] RFC: Improving the developer workflow
2014-08-07 13:05 ` [yocto] " Paul Eggleton
` (3 preceding siblings ...)
(?)
@ 2014-08-09 8:13 ` Mike Looijmans
2014-08-09 8:44 ` Alex J Lennon
-1 siblings, 1 reply; 23+ messages in thread
From: Mike Looijmans @ 2014-08-09 8:13 UTC (permalink / raw)
To: openembedded-core
On 08/07/2014 03:05 PM, Paul Eggleton wrote:
> On Thursday 07 August 2014 11:13:02 Alex J Lennon wrote:
>> Historically I, and I suspect others, have done full image updates of
>> the storage medium, onboard flash or whatever but these images are
>> getting so big now that I am trying to move away from that and into
>> using package feeds for updates to embedded targets.
>
> Personally with how fragile package management can end up being, I'm convinced
> that full-image updates are the way to go for a lot of cases, but ideally with
> some intelligence so that you only ship the changes (at a filesystem level
> rather than a package or file level). This ensures that an upgraded image on
> one device ends up exactly identical to any other device including a newly
> deployed one. Of course it does assume that you have a read-only rootfs and
> keep your configuration data / logs / other writeable data on a separate
> partition or storage medium. However, beyond improvements to support for
> having a read-only rootfs we haven't really achieved anything in terms of out-
> of-the-box support for this, mainly due to lack of resources.
Full-image upgrades are probably most seen in "lab" environments, where
the software is being developed.
Once deployed to customers, who will not be using a build system, the
system must rely on packages and online updates.
Embedded systems look more like desktops these days.
- End-users will make changes to the system:
- "plugins" and other applications.
- configuration data
- application data (e.g. logs, EPG data)
- There is not enough room in the flash for two full images.
- There is usually a virtually indestructible bootloader that can
recover even from fully erasing the NAND flash.
- Flash filesystems are usually NAND. NAND isn't suitable for read-only
root filesystems, you want to wear-level across the whole flash.
For the OpenPLi settop boxes we've been using "online upgrades" - which
basically just run "opkg update && opkg upgrade" - for many years, and
there's never been a real disaster. The benefits easily outweigh the
drawbacks.
When considering system upgrades, too much attention is being spent on
the "corner cases". It's not really a problem if the box is bricked when
the power fails during an upgrade, as long as there's a procedure the
end-user can use to recover the system (on most settop boxes, debricking
the system is just a matter of inserting a USB stick and flipping the
power switch).
--
Mike Looijmans
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [yocto] RFC: Improving the developer workflow
2014-08-09 8:13 ` [yocto] " Mike Looijmans
@ 2014-08-09 8:44 ` Alex J Lennon
2014-08-09 11:22 ` Mike Looijmans
0 siblings, 1 reply; 23+ messages in thread
From: Alex J Lennon @ 2014-08-09 8:44 UTC (permalink / raw)
To: Mike Looijmans; +Cc: openembedded-core
On 09/08/2014 09:13, Mike Looijmans wrote:
> On 08/07/2014 03:05 PM, Paul Eggleton wrote:
>> On Thursday 07 August 2014 11:13:02 Alex J Lennon wrote:
>>> Historically I, and I suspect others, have done full image updates of
>>> the storage medium, onboard flash or whatever but these images are
>>> getting so big now that I am trying to move away from that and into
>>> using package feeds for updates to embedded targets.
>>
>> Personally with how fragile package management can end up being, I'm
>> convinced
>> that full-image updates are the way to go for a lot of cases, but
>> ideally with
>> some intelligence so that you only ship the changes (at a filesystem
>> level
>> rather than a package or file level). This ensures that an upgraded
>> image on
>> one device ends up exactly identical to any other device including a
>> newly
>> deployed one. Of course it does assume that you have a read-only
>> rootfs and
>> keep your configuration data / logs / other writeable data on a separate
>> partition or storage medium. However, beyond improvements to support for
>> having a read-only rootfs we haven't really achieved anything in
>> terms of out-
>> of-the-box support for this, mainly due to lack of resources.
>
> Full-image upgrades are probably most seen in "lab" environments,
> where the software is being developed.
>
> Once deployed to customers, who will not be using a build system, the
> system must rely on packages and online updates.
>
> Embedded systems look more like desktops these days.
>
> - End-users will make changes to the system:
> - "plugins" and other applications.
> - configuration data
> - application data (e.g. loggings, EPG data)
> - There is not enough room in the flash for two full images.
> - There is usually a virtually indestructable bootloader that can
> recover even from fully erasing the NAND flash.
> - Flash filesystems are usually NAND. NAND isn't suitable for
> read-only root filesystems, you want to wear-level across the whole
> flash.
>
Agreeing with much of what you say, Mike, but I was under the impression
that there are block-management layers now which will wear-level across
partitions? So you could have your read-only partition but still
wear-level across the NAND?
> For the OpenPLi settop boxes we've been using "online upgrades" which
> basically just call "opkg update && opkg upgrade" for many years, and
> there's never been a real disaster. The benefits easily outweigh the
> drawbacks.
>
> When considering system upgrades, too much attention is being spent in
> the "corner cases". It's not really a problem if the box is bricked
> when the power fails during an upgrade. As long as there's a procedure
> the end-user can use to recover the system (on most settop boxes,
> debricking the system is just a matter of inserting a USB stick and
> flipping the power switch).
>
>
For us on this latest project - and indeed the past few projects - it is
a major problem (and cost) if the device is bricked. These devices are
not user-maintainable and we'd be sending engineers out around the world
to fix.
Not a good impression to make with the customers either.
Whether we're a usual use case I don't know.
Cheers,
Alex
--
Dynamic Devices Ltd <http://www.dynamicdevices.co.uk/>
Alex J Lennon / Director
1 Queensway, Liverpool L22 4RA
mobile: +44 (0)7956 668178
Linkedin <http://www.linkedin.com/in/alexjlennon> Skype
<skype:alexjlennon?add>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [yocto] RFC: Improving the developer workflow
2014-08-09 8:44 ` Alex J Lennon
@ 2014-08-09 11:22 ` Mike Looijmans
2014-08-09 11:57 ` Alex J Lennon
0 siblings, 1 reply; 23+ messages in thread
From: Mike Looijmans @ 2014-08-09 11:22 UTC (permalink / raw)
To: Alex J Lennon; +Cc: openembedded-core
On 08/09/2014 10:44 AM, Alex J Lennon wrote:
>
> On 09/08/2014 09:13, Mike Looijmans wrote:
>> On 08/07/2014 03:05 PM, Paul Eggleton wrote:
>>> On Thursday 07 August 2014 11:13:02 Alex J Lennon wrote:
>>>> Historically I, and I suspect others, have done full image updates of
>>>> the storage medium, onboard flash or whatever but these images are
>>>> getting so big now that I am trying to move away from that and into
>>>> using package feeds for updates to embedded targets.
>>>
>>> Personally with how fragile package management can end up being, I'm
>>> convinced
>>> that full-image updates are the way to go for a lot of cases, but
>>> ideally with
>>> some intelligence so that you only ship the changes (at a filesystem
>>> level
>>> rather than a package or file level). This ensures that an upgraded
>>> image on
>>> one device ends up exactly identical to any other device including a
>>> newly
>>> deployed one. Of course it does assume that you have a read-only
>>> rootfs and
>>> keep your configuration data / logs / other writeable data on a separate
>>> partition or storage medium. However, beyond improvements to support for
>>> having a read-only rootfs we haven't really achieved anything in
>>> terms of out-
>>> of-the-box support for this, mainly due to lack of resources.
>>
>> Full-image upgrades are probably most seen in "lab" environments,
>> where the software is being developed.
>>
>> Once deployed to customers, who will not be using a build system, the
>> system must rely on packages and online updates.
>>
>> Embedded systems look more like desktops these days.
>>
>> - End-users will make changes to the system:
>> - "plugins" and other applications.
>> - configuration data
>> - application data (e.g. loggings, EPG data)
>> - There is not enough room in the flash for two full images.
>> - There is usually a virtually indestructable bootloader that can
>> recover even from fully erasing the NAND flash.
>> - Flash filesystems are usually NAND. NAND isn't suitable for
>> read-only root filesystems, you want to wear-level across the whole
>> flash.
>>
>
> Agreeing with much you say Mike, I was under the impression that there
> are block management layers now which will wear level across partitions?
>
> So you could have your read only partition but still wear levelled
> across the NAND ?
Going off-topic here I guess, but I think you can use the UBI block
layer in combination with e.g. squashfs. Never tried it, but it should
be possible to create a UBI volume, write a squashfs blob into it and
mount that.
However, any system that accomplishes that is sort of cheating. It
isn't a read-only rootfs in the true meaning of the word any more: in
time, the volume will move around on the flash, so the rootfs will be
re-written.
>> For the OpenPLi settop boxes we've been using "online upgrades" which
>> basically just call "opkg update && opkg upgrade" for many years, and
>> there's never been a real disaster. The benefits easily outweigh the
>> drawbacks.
>>
>> When considering system upgrades, too much attention is being spent in
>> the "corner cases". It's not really a problem if the box is bricked
>> when the power fails during an upgrade. As long as there's a procedure
>> the end-user can use to recover the system (on most settop boxes,
>> debricking the system is just a matter of inserting a USB stick and
>> flipping the power switch).
>
> For us on this latest project - and indeed the past few projects - it is
> a major problem (and cost) if the device is bricked. These devices are
> not user-maintainable and we'd be sending engineers out around the world
> to fix.
>
> Not a good impression to make with the customers either.
>
> Whether we're a usual use case I don't know.
I think you're a very usual use case, and it's a valid one indeed. I'm
just trying to create awareness that there are projects out there that
use OE for consumer products, with millions of devices running in the
end-users' living rooms, whose owners upgrade on a whim (the feed
servers send out about 4 TB of traffic each month).
I've also done medical devices where, just as you say, bricking it just
isn't an option. These are typically inaccessible by the end-user, and
see no modification other than about 1k of configuration data (e.g. wifi
keys) during their lifespan.
--
Mike Looijmans
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [yocto] RFC: Improving the developer workflow
2014-08-09 11:22 ` Mike Looijmans
@ 2014-08-09 11:57 ` Alex J Lennon
0 siblings, 0 replies; 23+ messages in thread
From: Alex J Lennon @ 2014-08-09 11:57 UTC (permalink / raw)
To: Mike Looijmans; +Cc: openembedded-core
On 09/08/2014 12:22, Mike Looijmans wrote:
> On 08/09/2014 10:44 AM, Alex J Lennon wrote:
>
> Going off-topic here I guess, but I think you can use the UBI block
> layer in combination with e.g. squashfs. Never tried it, but it should
> be possible to create an UBI volume, write a squash blob into it and
> mount that.
>
> However, any system that accomplishes that, is sort of cheating. It
> isn't a read-only rootfs in the true meaning of the word any more. In
> time, the volume will move around on the flash, thus the rootfs will
> be re-written.
>
I guess it comes down to what risks we're trying to guard against here?
Thinking aloud...
If I believe that my UBI - or other - layer is robust (and I think it is
nowadays?) then I should be able to trust UBI to wear-level my data
across the NAND without risk of data loss due to bad sectors, power
interruption or the like (assuming enough spare blocks).
Now if that's a true statement, then the risk of my main
'read-only-but-wear-levelled' file-system becoming corrupted because of
this is very low.
I think I would accept that risk - with some testing to prove it out to
myself - given that the main file-system partition is likely to be the
largest partition, and if I am minimising the cost/size of flash then I
want to be able to wear-level using that larger area.
I've had exactly this problem before with e.g. data/logs on small
read-write data partitions, which rapidly kill the flash as there's only
a very small area being wear-levelled.
So what I think is more of a risk for us is remounting that OS
filesystem read/write and doing some kind of update to it, whether via
package feeds or some delta-based system.
I think if I could remount read-write / start a transaction / do the
update / commit the update transaction, that would be rather good. And
of course if it gets interrupted or otherwise fails we just roll back.
>>> For the OpenPLi settop boxes we've been using "online upgrades" which
>>> basically just call "opkg update && opkg upgrade" for many years, and
>>> there's never been a real disaster. The benefits easily outweigh the
>>> drawbacks.
>>>
>>> When considering system upgrades, too much attention is being spent in
>>> the "corner cases". It's not really a problem if the box is bricked
>>> when the power fails during an upgrade. As long as there's a procedure
>>> the end-user can use to recover the system (on most settop boxes,
>>> debricking the system is just a matter of inserting a USB stick and
>>> flipping the power switch).
>>
>> For us on this latest project - and indeed the past few projects - it is
>> a major problem (and cost) if the device is bricked. These devices are
>> not user-maintainable and we'd be sending engineers out around the world
>> to fix.
>>
>> Not a good impression to make with the customers either.
>>
>> Whether we're a usual use case I don't know.
>
> I think you're a very usual use case, and it's a valid one indeed. I'm
> just trying to create awareness that there are projects out there that
> use OE for consumer products, and have millions of devices running in
> the end-users' living rooms, who upgrade at a whim (feed servers
> sending out about 4TB traffic each month).
>
> I've also done medical devices where, just as you say, bricking it
> just isn't an option. These are typically inaccessible by the
> end-user, and see no modification other than about 1k of configuration
> data (e.g. wifi keys) during their lifespan.
>
That's really interesting. Do you mind me asking who pays for that
traffic? (!)
Yes, we have done some medical devices in the past. The current crop is
smart buildings, which are similarly difficult to access if something
blows up.
Then we've done some in-car telematics and train telemetry, all
similarly difficult due to inaccessibility, maintenance constraints, and
the desire to keep users' fingers out of the device.
I guess it's horses for courses, isn't it. Glad to hear I'm not too much
of an outlier ;)
Cheers,
Alex
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [OE-core] RFC: Improving the developer workflow
2014-08-08 15:57 ` [yocto] " Alex J Lennon
(?)
@ 2014-08-12 13:23 ` Tim O' Callaghan
-1 siblings, 0 replies; 23+ messages in thread
From: Tim O' Callaghan @ 2014-08-12 13:23 UTC (permalink / raw)
To: 'Alex J Lennon', Paul Eggleton; +Cc: yocto, openembedded-core
Hi,
Another approach I would like to suggest is the one that CoreOS has in place, which they call FastPatch:
https://coreos.com/using-coreos/updates/
Essentially there are two root partitions, and the bootloader switches to the newly written image while leaving the old one intact as the last known good. This avoids the complexity of an overlay, and it fits with the current Yocto filesystem-image-based approach. Forward/backward application-configuration compatibility can then be maintained with an extra bbclass, mixed into the recipes, that versions local application configurations by OS release so that they can be updated by the init scripts.
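The version-gated configuration migration Tim describes might look roughly like this in Python (the function, key names, and migration steps are invented for illustration; a real implementation would live in the proposed bbclass and be driven from the init scripts):

```python
def migrate_config(config, migrations, target_version):
    """Apply ordered per-release migration steps to a config dict.

    `migrations` maps an OS release number to a function that rewrites the
    configuration written by the previous release; an init script could run
    this on first boot after an image update.
    """
    config = dict(config)
    version = config.get('os_release', 0)
    for v in sorted(migrations):
        if version < v <= target_version:
            config = migrations[v](config)
            config['os_release'] = v
    return config

# Example migrations (hypothetical release history):
def to_v2(c):
    c = dict(c)
    c['log_dir'] = c.pop('logdir', '/var/log')  # key renamed in release 2
    return c

def to_v3(c):
    c = dict(c)
    c.setdefault('telemetry', False)  # setting introduced in release 3
    return c

MIGRATIONS = {2: to_v2, 3: to_v3}
```

Because each step only bridges one release, a device several versions behind is walked forward one release at a time, and an already-current configuration passes through unchanged.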
Tim.
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [OE-core] RFC: Improving the developer workflow
2014-08-08 8:04 ` Nicolas Dechesne
@ 2014-08-25 6:47 ` Paul Eggleton
-1 siblings, 0 replies; 23+ messages in thread
From: Paul Eggleton @ 2014-08-25 6:47 UTC (permalink / raw)
To: Nicolas Dechesne; +Cc: Yocto list discussion, openembedded-core
Hi Nicolas,
Apologies for the delayed reply.
On Friday 08 August 2014 10:04:19 Nicolas Dechesne wrote:
> On Thu, Aug 7, 2014 at 11:10 AM, Paul Eggleton
> <paul.eggleton@linux.intel.com> wrote:
> > Example workflow
> > ================
> >
> > I won't give a workflow for every possible usage, but just to give a basic
> > example - let's assume you want to build a "new" piece of software for
> > which you have your own source tree on your machine. The rough set of
> > steps required would be something like this (rough, e.g. the command
> > names given shouldn't be read as final):
> >
> > 1. Install the SDK
> >
> > 2. Run a setup script to make the SDK tools available
> >
> > 3. Add a new recipe with "sdktool add <recipename>" - interactive process.
> > The tool records that <recipename> is being worked on, creates a recipe
> > that can be used to build the software using your external source tree,
> > and places the recipe where it will be used automatically by other steps.
> >
> > 4. Build the recipe with "sdktool build <recipename>". This probably only
> > goes as far as do_install or possibly do_package_qa; in any case the QA
> > process would be less stringent than with the standard build system
> > however in order to avoid putting too many barriers in the way of testing
> > on the target.
> >
> > 5. Fix any failures and repeat from the previous step as necessary.
> >
> > 6. Deploy changes to target with "sdktool deploy-target <ip address>"
> > assuming SSH is available on the target. Alternatively "sdktool
> > build-image <imagename>" can be used to regenerate an image with the
> > changes in it; "sdktool runqemu" could do that (if necessary) and then
> > run the result within QEMU with the appropriate options set.
>
> coincidentally, i was giving an OE workshop this week, and when I
> explained about the OE SDK, someone immediately brought up that it was
> quite limited because:
>
> 1- SDK cannot be used to generate deployable packages, e.g. using
> the SDK to create ipk/deb/rpm that can be delivered to
> targets/clients, not just for debugging, but also for production when
> production system has package management support.
> 2- SDK cannot be used to regenerate updated images. e.g. Company A
> delivers a SDK + board, Company B is making a product using the SDK
> (adding content) and wants to be able to make new images with the new
> content in order to sell/deploy it.
I have included solutions for these two in my proposal. However, I am a bit
concerned about it being used in the way you describe above. The main aim is
to provide tools for developers to test their changes on the target easily; if
people start using it to produce images that they then send to other people
(particularly customers or other organisations) then the potential end-game is
that an image used in production is one produced from a single developer's
machine, at which point you may have some reproducibility issues - "where's
the source for package xyz?" "Erm, not sure, that developer left six months
ago..."
Of course there's nothing we'll really be able to do to stop people from using
the tools in this way, so it's mainly going to be an exercise in documenting
the right way to use them. It's worth thinking about these problems, though,
especially if you want to encourage best practices.
> 3- SDK itself cannot be upgraded when the 'base image' and SDK are updated
> 4- SDK users cannot add content to the SDK. e.g. I am a SDK user I
> create a library and I want that library to be in the SDK now.
We're certainly aiming to take care of these last two - indeed #4 is one of
the main drivers behind the proposed new SDK.
Cheers,
Paul
--
Paul Eggleton
Intel Open Source Technology Centre
^ permalink raw reply [flat|nested] 23+ messages in thread
end of thread, other threads:[~2014-08-25 6:47 UTC | newest]
Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-08-07 9:10 RFC: Improving the developer workflow Paul Eggleton
2014-08-07 10:13 ` Alex J Lennon
2014-08-07 10:13 ` [yocto] " Alex J Lennon
2014-08-07 13:05 ` Paul Eggleton
2014-08-07 13:05 ` [yocto] " Paul Eggleton
2014-08-07 13:14 ` Alex J Lennon
2014-08-07 13:14 ` [yocto] " Alex J Lennon
2014-08-08 7:54 ` Nicolas Dechesne
2014-08-08 7:54 ` [yocto] " Nicolas Dechesne
2014-08-08 15:57 ` Alex J Lennon
2014-08-08 15:57 ` [yocto] " Alex J Lennon
2014-08-12 13:23 ` [OE-core] " Tim O' Callaghan
2014-08-09 8:13 ` [yocto] " Mike Looijmans
2014-08-09 8:44 ` Alex J Lennon
2014-08-09 11:22 ` Mike Looijmans
2014-08-09 11:57 ` Alex J Lennon
2014-08-07 12:09 ` Bryan Evenson
2014-08-08 8:04 ` [OE-core] " Nicolas Dechesne
2014-08-08 8:04 ` Nicolas Dechesne
2014-08-25 6:47 ` [OE-core] " Paul Eggleton
2014-08-25 6:47 ` Paul Eggleton
2014-08-08 12:56 ` [OE-core] " Mike Looijmans
2014-08-08 12:56 ` Mike Looijmans