From: Stefan Hajnoczi <stefanha@redhat.com>
To: "Lukáš Doktor" <ldoktor@redhat.com>
Cc: Charles Shih <cheshi@redhat.com>,
	Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
	QEMU Developers <qemu-devel@nongnu.org>
Subject: Re: Proposal for a regular upstream performance testing
Date: Tue, 1 Dec 2020 10:22:10 +0000	[thread overview]
Message-ID: <20201201102210.GC567514@stefanha-x1.localdomain> (raw)
In-Reply-To: <35db4764-22c4-521b-d8ee-27ec39aebd3e@redhat.com>


On Tue, Dec 01, 2020 at 09:05:49AM +0100, Lukáš Doktor wrote:
> On 30. 11. 20 at 14:25, Stefan Hajnoczi wrote:
> > On Thu, Nov 26, 2020 at 09:10:14AM +0100, Lukáš Doktor wrote:
> > What is the minimal environment needed for bare metal hosts?
> > 
> 
> Not sure what you mean by that. For provisioning I have a Beaker plugin; other plugins can be added if needed. Even without Beaker one can provide an already-installed machine and skip the provisioning step. Runperf would then only apply the profiles (including fetching the VM images from public sources) and run the tests on them. Note that certain profiles might need to reboot the machine, and in that case the tested machine cannot be the one running run-perf. Other profiles can use the current machine, but that is still not a very good idea, as the additional overhead might spoil the results.
> 
> Note that for very simple issues which do not require a special setup, I usually just run a custom VM on my laptop and use a Localhost profile on that VM, which basically results in testing that custom-setup VM's performance. It's dirty, but very fast as a first-level check.
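
For readers who have not used run-perf, a minimal sketch of the quick
Localhost-style check described above follows. It is written in Python
purely for illustration; the option names (--hosts, --distro, --profiles,
--output) and the "fio" test name are assumptions based on the workflow
described in this thread, not the tool's verified interface, so check
run-perf --help for the real arguments.

    #!/usr/bin/env python3
    # Illustrative sketch only: wraps a run-perf invocation similar to the
    # workflow described above (take an already-installed host, apply a
    # profile, fetch a guest image, run one test). All option names are
    # assumptions and should be verified against run-perf --help.
    import subprocess

    def quick_check(host, distro, profile="Localhost", output="results/"):
        """Run a single first-level check against `host` using `profile`."""
        cmd = [
            "run-perf",
            "--hosts", host,        # already-provisioned machine, no Beaker step
            "--distro", distro,     # guest image to fetch for VM-based profiles
            "--profiles", profile,  # e.g. Localhost for the quick laptop check
            "--output", output,     # where the results are written
            "fio",                  # test to execute (name is an assumption)
        ]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        # Hypothetical values; replace with a reachable host and a distro name.
        quick_check("192.168.122.10", "Fedora-33")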

I was thinking about ensuring each run starts from the same clean
state, which requires reprovisioning the machine.

Stefan


Thread overview: 22+ messages
2020-11-26  8:10 Proposal for a regular upstream performance testing Lukáš Doktor
2020-11-26  8:23 ` Jason Wang
2020-11-26  9:43 ` Daniel P. Berrangé
2020-11-26 11:29   ` Lukáš Doktor
2020-11-30 13:23   ` Stefan Hajnoczi
2020-12-01  7:51     ` Lukáš Doktor
2020-11-26 10:17 ` Peter Maydell
2020-11-26 11:16   ` Lukáš Doktor
2020-11-30 13:25 ` Stefan Hajnoczi
2020-12-01  8:05   ` Lukáš Doktor
2020-12-01 10:22     ` Stefan Hajnoczi [this message]
2020-12-01 12:06       ` Lukáš Doktor
2020-12-01 12:35         ` Stefan Hajnoczi
2020-12-02  8:58           ` Chenqun (kuhn)
2020-12-02  8:23 ` Chenqun (kuhn)
2022-03-21  8:46 ` Lukáš Doktor
2022-03-21  9:42   ` Stefan Hajnoczi
2022-03-21 10:29     ` Lukáš Doktor
2022-03-22 15:05       ` Stefan Hajnoczi
2022-03-28  6:18         ` Lukáš Doktor
2022-03-28  9:57           ` Stefan Hajnoczi
2022-03-28 11:09             ` Lukáš Doktor
