openbmc.lists.ozlabs.org archive mirror
* CI compute resources
@ 2018-08-20 17:20 Andrew Geissler
  2018-08-22  7:56 ` Jean-Marie Verdun
  0 siblings, 1 reply; 3+ messages in thread
From: Andrew Geissler @ 2018-08-20 17:20 UTC (permalink / raw)
  To: OpenBMC Maillist; +Cc: Joel Stanley

Hey Everyone,

Per this morning's community call, it was noted that with the upcoming
changes to our bitbake layers, and the use of subtree to manage them,
we're going to double our CI build requirements for openbmc/openbmc.

We could short-change ourselves and only run CI on the new subtree
commits, but Brad noted that Yocto runs CI for both the subtree and the
main merge for a variety of reasons, so we should too.

We have some other alternatives (build fewer machine configs, only
build certain ones in each CI run, or just wait longer), but it would
be best to first see if anyone can cough up a few more cloud machines
for CI.

Current server contributors for openbmc:

Rackspace: 1
Google: 2
IBM: 1

The ideal server has:

16 or more CPU threads
64GB or more memory
1TB or more of disk
Ubuntu 16.04 or newer

It needs to be on a public network that openpower.xyz can talk to.
All of the build scripts use Docker, so the system just needs to have
Docker installed.
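
For anyone sizing up a machine to offer, here's a minimal, purely
illustrative Python sketch (not part of the actual CI scripts) that
checks a Linux host against the numbers above; the thresholds are just
the figures from this email.

#!/usr/bin/env python3
"""Illustrative only: sanity-check a Linux host against the donor-machine
requirements listed above (threads, memory, disk, Docker). Not part of the
OpenBMC CI scripts."""
import os
import shutil

MIN_THREADS = 16
MIN_MEM_GB = 64
MIN_DISK_TB = 1.0


def mem_gb():
    # MemTotal in /proc/meminfo is reported in kB.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) / (1024 ** 2)
    return 0.0


def check():
    threads = os.cpu_count() or 0
    disk_tb = shutil.disk_usage("/").total / (1000 ** 4)
    docker = shutil.which("docker") is not None
    print(f"threads={threads} mem={mem_gb():.0f}GB "
          f"disk={disk_tb:.2f}TB docker={docker}")
    return (threads >= MIN_THREADS and mem_gb() >= MIN_MEM_GB
            and disk_tb >= MIN_DISK_TB and docker)


if __name__ == "__main__":
    raise SystemExit(0 if check() else 1)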

Let me and Joel know if you've got something!

Thanks,
Andrew


* Re: CI compute resources
  2018-08-20 17:20 CI compute resources Andrew Geissler
@ 2018-08-22  7:56 ` Jean-Marie Verdun
  2018-08-22 15:13   ` Andrew Geissler
  0 siblings, 1 reply; 3+ messages in thread
From: Jean-Marie Verdun @ 2018-08-22  7:56 UTC (permalink / raw)
  To: openbmc, ggiamarchi

Hi Andrew,

We can provide access to a couple of additional servers without any
issue. Here are the characteristics of the machines we can supply:

Open Compute-type servers with dual Xeon 2680v2 CPUs, 64GB of RAM (DDR3
ECC), a 3TB HDD, 1Gbps Ethernet connected to the internet, and Ubuntu
16.04 server.

We can start with one and then gradually increase that number depending
on need, if that works. I could probably allocate four of them straight
away if that makes life easier for the community. Lead time to get
access to them is a couple of days.

These machines will be located in France, in the Data4 datacenter, so
latency might be a bit higher compared to US-based hosting.

We are working on setting up CI for the linuxboot project and have also
developed a solution, based on this machine, that connects a flash
emulator straight to the PCB and is automated through an API (still
under development).

Currently we can upload a firmware image to the emulator, remotely
control the servers through hard power on/off and reset, get console
access over serial, and report back to the end user on whether the
firmware (in this case linuxboot) starts the machine properly. The
intent is to validate within the CI process that each build is able to
start a machine, and to run regression tests on live hardware.
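
Purely as a sketch of what driving such a setup from CI could look like
(the API is still under development and none of its real endpoints are
described here, so every URL, path, and marker below is an assumption
for illustration only):

# Hypothetical sketch only: upload a freshly built image to the flash
# emulator, power-cycle the node, and watch the serial console for a boot
# marker. The base URL and endpoint names are invented; the real API is
# still under development.
import time
import urllib.request

BASE = "http://emulator.example.com/api"  # assumed address of the emulator API


def post(path, data=b""):
    req = urllib.request.Request(BASE + path, data=data, method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.read()


def boot_test(firmware_path, marker=b"login:", timeout_s=600):
    # Push the image to the flash emulator wired to the board's SPI bus.
    with open(firmware_path, "rb") as f:
        post("/flash/upload", f.read())
    # Hard power-cycle the node through the emulator's control plane.
    post("/power/off")
    post("/power/on")
    # Poll the serial console until the boot marker shows up or we time out.
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        console = urllib.request.urlopen(BASE + "/console/log").read()
        if marker in console:
            return True
        time.sleep(5)
    return False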

We are also able to test whether the node boots, and to validate that
we didn't break features like NUMA.

This technology could easily be adapted to OpenBMC, since we connect to
an SPI bus, and I'd bet the flash on ASPEED chips is connected over
such a bus. If not, we could work on making that happen.

One of the advantages is that we use OCP hardware, so we can change a
lot of things on top of it.

Let me know if that works for you.

vejmarie


* Re: CI compute resources
  2018-08-22  7:56 ` Jean-Marie Verdun
@ 2018-08-22 15:13   ` Andrew Geissler
  0 siblings, 0 replies; 3+ messages in thread
From: Andrew Geissler @ 2018-08-22 15:13 UTC (permalink / raw)
  To: jean-marie.verdun; +Cc: OpenBMC Maillist, ggiamarchi, Joel Stanley

Hi Jean-Marie,

Thanks for the offer!  I didn't even know we had splitted-desktop in
our OpenBMC community; it's great to hear from you.  It sounds like you
have a mechanism to actually flash OpenBMC firmware onto your system
and then verify the server still boots.  That would be great for the
future, but currently we're just looking for access to the Ubuntu OS of
a system so we can use it to build our OpenBMC distros during our CI
process.  Building OpenBMC takes a lot of compute power, and our CI
jobs build it for multiple systems in parallel
(https://openpower.xyz/job/openbmc-build-gerrit-trigger-multi/).
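
For a rough idea of why the compute adds up: each machine gets its own
full bitbake image build, and the CI job kicks several off at once. A
sketch of that pattern follows (the machine names and the bitbake
invocation are assumptions about a typical Yocto/OpenBMC workflow, not
a copy of the actual Jenkins job):

# Rough sketch: kick off one bitbake image build per machine configuration
# in parallel, the way the multi-machine CI job does. Machine names and the
# image target are assumptions, not the actual job definition, and a
# bitbake build environment is assumed to be set up already.
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

MACHINES = ["palmetto", "romulus", "witherspoon", "zaius"]  # example configs


def build(machine):
    # Each call is a full bitbake run; several in flight will happily
    # consume 16+ threads and 64GB of RAM.
    env = dict(os.environ, MACHINE=machine)
    result = subprocess.run(["bitbake", "obmc-phosphor-image"], env=env)
    return machine, result.returncode


with ThreadPoolExecutor(max_workers=len(MACHINES)) as pool:
    for machine, rc in pool.map(build, MACHINES):
        print(f"{machine}: {'ok' if rc == 0 else 'FAILED'}")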

If you have an initial system we could use for this, then Joel and I
can work with you privately to connect it up to our Jenkins server
(SSH key exchange).

Thanks!
Andrew
