From: Luis Chamberlain <mcgrof@kernel.org>
To: Theodore Ts'o <tytso@mit.edu>
Cc: "Darrick J. Wong" <djwong@kernel.org>,
	Leah Rumancik <leah.rumancik@gmail.com>,
	Amir Goldstein <amir73il@gmail.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Chuck Lever <chuck.lever@oracle.com>,
	chandanrmail@gmail.com,
	Sweet Tea Dorminy <sweettea-kernel@dorminy.me>,
	Pankaj Raghav <pankydev8@gmail.com>,
	Zorro Lang <zlang@redhat.com>,
	linux-xfs@vger.kernel.org, fstests <fstests@vger.kernel.org>
Subject: Re: [PATCH 5.15 CANDIDATE v2 0/8] xfs stable candidate patches for 5.15.y (part 1)
Date: Sat, 25 Jun 2022 11:49:54 -0700	[thread overview]
Message-ID: <YrdY0h3z0Rcw36AJ@bombadil.infradead.org> (raw)
In-Reply-To: <YrZxPlTQ9f/BvnkJ@mit.edu>

On Fri, Jun 24, 2022 at 10:21:50PM -0400, Theodore Ts'o wrote:
> On Fri, Jun 24, 2022 at 03:54:44PM -0700, Luis Chamberlain wrote:
> > 
> > Perhaps I am not understanding what you are suggesting with a VM native
> > solution. What do you mean by that? A full KVM VM inside the cloud?
> 
> "Cloud native" is the better way to put things.  Cloud VM's
> are designed to be ephemeral, so the concept of "node bringup" really
> doesn't enter into the picture.
> 
> When I run the "build-appliance" command, this creates a test
> appliance image.  Which is to say, we create a root file system image,
> and then "freeze" it into a VM image.

So this seems to build an image from a base distro image. Is that right?
And it would seem your goal is then to store that image so it can be
re-used.

> For kvm-xfstests this is a qcow image which is run in snapshot mode,
> which means that if any changes is made to the root file system, those
> changes disappear when the VM exits.

Sure, so you build the image once and then re-use it, makes sense.

You are optimizing usage for GCE. That makes sense. The goal behind
kdevops was to use technology which can *enable* such optimizations in
a cloud agnostic way. Which APIs become public is up to each cloud
provider, and one cloud agnostic way to manage cloud solutions with
open source tools is terraform, so that is what is used today. If an API
is not yet available through terraform, kdevops could simply use whatever
cloud tooling exists for additional hooks. But having the ability to
ramp up regardless of cloud provider was extremely important to me from
the beginning.
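To make that concrete, here is a minimal sketch of a cloud-agnostic
bringup dispatch on top of terraform. The KDEVOPS_CLOUD variable and the
terraform/ directory layout are illustrative assumptions, not kdevops'
actual implementation:

```shell
# Hypothetical sketch: one entry point, one terraform module
# directory per cloud provider. Names here are made up for
# illustration; only the terraform CLI invocation is real syntax.
provider="${KDEVOPS_CLOUD:-gce}"          # e.g. gce, aws, azure
case "$provider" in
  gce|aws|azure)
    # A real run would invoke terraform against the
    # provider-specific module directory:
    cmd="terraform -chdir=terraform/$provider apply"
    ;;
  *)
    echo "unknown provider: $provider" >&2
    exit 1
    ;;
esac
echo "would run: $cmd"
```

The point is that the provider-specific details live under one
directory each, while the entry point stays the same everywhere.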

Optimizing is certainly possible, always :)

Likewise, if you are using local virtualization, we can save Vagrant
images in the Vagrant Cloud, if we wanted, which would allow pre-built
setups to be saved:

https://app.vagrantup.com/boxes/search

That could reduce bringup time for local KVM / Virtualbox guests.

In fact, since vagrant images are also just tarballs with qcow2 files,
I do wonder if they can also be leveraged for cloud deployments. Or if
the inverse is true, whether your qcow2 images can be used for vagrant
purposes as well. If you're curious:

https://github.com/linux-kdevops/kdevops/blob/master/docs/custom-vagrant-boxes.md
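To illustrate the "just a tarball" point: a vagrant-libvirt box is a
gzipped tarball carrying a metadata.json and the qcow2 disk (box.img).
A runnable sketch, using an empty file as a stand-in for the real disk:

```shell
# Build a toy vagrant-libvirt style .box to show its structure.
# The empty box.img is a stand-in for a real qcow2 disk image.
mkdir -p demo-box
printf '{"provider":"libvirt","format":"qcow2","virtual_size":10}\n' \
    > demo-box/metadata.json
: > demo-box/box.img
tar -C demo-box -czf demo.box metadata.json box.img
tar -tzf demo.box    # lists metadata.json and box.img
```

So converting between a vagrant box and a bare cloud image is largely a
matter of unpacking or repacking that tarball around the qcow2 file.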

Which approach you use is up to you. From a Linux distribution
perspective being able to do reproducible builds was important too, and
so a lot of effort was put into ensuring that cooking up a final state
from an initial distro release is supported.

> I can very quickly have over 100 test VM's running in parallel, and as
> the tests complete, they are automatically shutdown and destroyed ----
> which means that we don't store state in the VM.  Instead the state is
> stored in a Google Cloud Storage (Amazon S3) bucket, with e-mail sent
> with a summary of results.

Using cloud object storage is certainly nice if you can afford it. I
think it is valuable, but likewise it should be optional. Support for it
in kdevops is welcome should someone want to do that. So what you
describe is not impossible with kdevops, it is just not done today, but
it could be enabled.
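For what it's worth, stashing results into a bucket could be as simple
as a post-run copy step. A hedged sketch follows, where a local
directory stands in for the bucket so it is runnable anywhere; the
bucket and run names are made up, and for GCS the copy would instead be
the gsutil command shown in the comment:

```shell
# Sketch: stash fstests result artifacts after a run into object
# storage. A local directory stands in for the bucket here.
mkdir -p results bucket
printf 'generic/001 output mismatch\n' > results/generic-001.out.bad
printf 'dmesg for generic/001\n'       > results/generic-001.dmesg
# Real object storage would be something like:
#   gsutil -m cp -r results gs://my-kdevops-results/run-20220625/
cp -r results bucket/run-20220625
ls bucket/run-20220625
```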

> VM's can get started much more quickly than "make bringup", since
> we're not running puppet or ansible to configure each node.

You can just as easily use pre-built images instead of building from a
base distro release, just as you could use custom vagrant images for
local KVM guests.

The use of ansible to *build* fstests and install it can also be done
once, with that image saved, exported, etc., and then re-used. The
kernel config I maintain in kdevops has been tested to work on local KVM
virtualization setups as well as on all supported cloud providers.

So I think there is certainly value in learning from the ways you
optimize cloud usage for GCE and generalizing that for *any* cloud
provider.

The steps to *build* an image from a base distro release are glossed
over, but that alone takes effort, and it is made pretty well distro
agnostic within kdevops too.

> In contrast, I can just run "gce-xfstests ls-results" to see all of
> the results that has been saved to GCS, and I can fetch a particular
> test result to my laptop via a single command: "gce-xfstests
> get-results tytso-20220624210238".  No need to ssh to a host node, and
> then ssh to the kdevops test node, yadda, yadda, yadda --- and if you
> run "make destroy" you lose all of the test result history on that node,
> right?

Actually, all the *.bad and *.dmesg files, as well as the final xunit
results for all nodes for failed tests, are copied over locally to the
host running kdevops. Xunit files are also merged to represent a final
full set of results. So no, they are not destroyed. If you wanted to
keep all files even for tests which did not fail, we could add that as a
new Kconfig bool.
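That collection step amounts to filtering each node's results tree for
failure artifacts. A runnable sketch of the idea; the node1/ layout and
file names are made up (kdevops does the real copy via ansible):

```shell
# Gather only *.bad and *.dmesg failure artifacts from a node's
# results tree, mimicking what gets copied back to the host.
mkdir -p node1/results localhost-archive
printf 'bad output\n' > node1/results/generic-002.out.bad
printf 'oops trace\n' > node1/results/generic-002.dmesg
printf 'full log\n'   > node1/results/generic-003.full
find node1/results \( -name '*.bad' -o -name '*.dmesg' \) \
    -exec cp {} localhost-archive/ \;
ls localhost-archive   # only the failure artifacts, not the .full log
```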

Support for stashing results into object storage sure would be nice, agreed.

> See the difference?

Yes, you have optimized usage of GCE. Good stuff, lots to learn from that effort!
Thanks for sharing the details!

  Luis


Thread overview: 32+ messages
2022-06-16 18:27 [PATCH 5.15 CANDIDATE v2 0/8] xfs stable candidate patches for 5.15.y (part 1) Leah Rumancik
2022-06-16 18:27 ` [PATCH 5.15 CANDIDATE v2 1/8] xfs: use kmem_cache_free() for kmem_cache objects Leah Rumancik
2022-06-16 18:27 ` [PATCH 5.15 CANDIDATE v2 2/8] xfs: punch out data fork delalloc blocks on COW writeback failure Leah Rumancik
2022-06-16 18:27 ` [PATCH 5.15 CANDIDATE v2 3/8] xfs: Fix the free logic of state in xfs_attr_node_hasname Leah Rumancik
2022-06-16 18:27 ` [PATCH 5.15 CANDIDATE v2 4/8] xfs: remove all COW fork extents when remounting readonly Leah Rumancik
2022-06-16 18:27 ` [PATCH 5.15 CANDIDATE v2 5/8] xfs: check sb_meta_uuid for dabuf buffer recovery Leah Rumancik
2022-06-16 18:27 ` [PATCH 5.15 CANDIDATE v2 6/8] xfs: prevent UAF in xfs_log_item_in_current_chkpt Leah Rumancik
2022-06-16 18:27 ` [PATCH 5.15 CANDIDATE v2 7/8] xfs: only bother with sync_filesystem during readonly remount Leah Rumancik
2022-06-16 18:27 ` [PATCH 5.15 CANDIDATE v2 8/8] xfs: use setattr_copy to set vfs inode attributes Leah Rumancik
2022-06-17  7:27   ` Amir Goldstein
2022-06-22  0:07 ` [PATCH 5.15 CANDIDATE v2 0/8] xfs stable candidate patches for 5.15.y (part 1) Luis Chamberlain
2022-06-22 21:44   ` Theodore Ts'o
2022-06-23  5:31     ` Amir Goldstein
2022-06-23 21:39       ` Luis Chamberlain
2022-06-23 21:31     ` Luis Chamberlain
2022-06-24  5:32       ` Theodore Ts'o
2022-06-24 22:54         ` Luis Chamberlain
2022-06-25  2:21           ` Theodore Ts'o
2022-06-25 18:49             ` Luis Chamberlain [this message]
2022-06-25 21:14               ` Theodore Ts'o
2022-07-01 23:08                 ` Luis Chamberlain
2022-06-25  7:28           ` sharing fstests results (Was: [PATCH 5.15 CANDIDATE v2 0/8] xfs stable candidate patches for 5.15.y (part 1)) Amir Goldstein
2022-06-25 19:35             ` Luis Chamberlain
2022-06-25 21:50               ` Theodore Ts'o
2022-07-01 23:13                 ` Luis Chamberlain
2022-06-22 21:52   ` [PATCH 5.15 CANDIDATE v2 0/8] xfs stable candidate patches for 5.15.y (part 1) Leah Rumancik
2022-06-23 21:40     ` Luis Chamberlain
2022-06-22 16:23 ` Darrick J. Wong
2022-06-22 16:35   ` Darrick J. Wong
2022-06-22 21:29     ` Leah Rumancik
2022-06-23  4:53       ` Amir Goldstein
2022-06-23  6:28         ` Amir Goldstein
