From: Anthony PERARD <anthony.perard@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: xen-devel@lists.xen.org, Wei Liu <wei.liu2@citrix.com>,
Ian Campbell <ian.campbell@citrix.com>,
Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [PATCH] libxl: Increase device model startup timeout to 1min.
Date: Fri, 3 Jul 2015 12:21:50 +0100 [thread overview]
Message-ID: <20150703112150.GI1891@perard.uk.xensource.com> (raw)
In-Reply-To: <21909.12493.239202.740226@mariner.uk.xensource.com>
On Thu, Jul 02, 2015 at 01:38:37PM +0100, Ian Jackson wrote:
> Anthony PERARD writes ("Re: [PATCH] libxl: Increase device model startup timeout to 1min."):
> > I have tested an increased timeout last night. Here are the results.
> >
> > The machine is an AMD Opteron(tm) Processor 4284, with 8G of RAM and 8 pCPUs.
> > It's running Ubuntu 14.04 with Xen 4.4. On top of that, OpenStack has been
> > deployed via devstack on a single node.
> >
> > The test is to run Tempest with --concurrency=4. There are 4 tests run
> > in parallel, but they don't necessarily start a VM. When they do, it's a PV
> > guest with 64MB of RAM and 1 vCPU, sometimes with double the amount of RAM.
> >
> > The stats:
> > Tempest runs: 22
> > Tempest run time for each run: ~3000s
> > Tempest number of tests: 1143
> > After 22 runs of Tempest:
> > QEMU starts: 3352
> > number of starts that took more than 2s: 20
> > number of starts that took more than 9s: 6
> > maximum start time: 10.973713s
> >
> > I have gathered the QEMU start times by having strace running for each of
> > them. I then looked at the time it took from the first syscall,
> > execve('qemu'), until the syscall where QEMU responds on its QMP socket
> > (libxl has acknowledged that QEMU is running at that point).
>
> Thanks for this information.
>
> So from what you say it appears that we are running at most 4 copies
> of libxl and qemu in parallel, along with at most 4 VMs ?
Some tests run 3 VMs, so at most it would be 12 VMs. I think there is still
only one copy of libxl, since everything goes through libvirt, right?
> And out of 3352 qemu starts, we have
> <= 2s 3332
> >2s <= 9s 14
> > 9s 6
> ?
>
> Do you have any information about the maximum system load in general ?
> dom0 load, vcpu overcommit, etc. ? Do you know what the guests are
> doing ?
I've got some stats collected via `dstat`.
During the tests, the load average in dom0 can go from 5 to 8.
The CPUs in dom0 can spend quite a lot of time waiting; it's not rare to
see between 35 and 60% of time spent waiting (total across all 8 CPUs). As for
the disk, dstat reveals that there are often between 50MB and 120MB per second
of total disk writes.
The guests usually do not do much besides booting. They run a small system
called CirrOS.
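For what it's worth, the iowait figure above could be cross-checked without
dstat by sampling /proc/stat directly. A minimal sketch, assuming the standard
Linux /proc/stat field order; the helper name and default interval are just
illustrative:

```shell
# iowait_pct: approximate the percentage of CPU time spent in iowait
# over an interval, by diffing two samples of the aggregate "cpu" line
# in /proc/stat (fields: user nice system idle iowait ...).
iowait_pct() {
    read -r _ u1 n1 s1 i1 w1 _ < /proc/stat
    sleep "${1:-5}"                      # sample interval, default 5s
    read -r _ u2 n2 s2 i2 w2 _ < /proc/stat
    total=$(( (u2 + n2 + s2 + i2 + w2) - (u1 + n1 + s1 + i1 + w1) ))
    echo $(( 100 * (w2 - w1) / total ))
}
```

(dstat's "wai" column reports essentially the same counter, sampled per
interval; the sketch ignores the irq/softirq/steal fields, so it slightly
overstates the percentage.)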
> I'm starting to think that this might be a real bug but that the bug
> might be "Linux's I/O subsystem sometimes produces appalling latency
> under load" (which is hardly news).
I guess the straces support this; here are a few quotes from different straces:
04:11:50.602067 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) <0.000036>
04:11:50.602166 open("/usr/lib/x86_64-linux-gnu/libidn.so.11", O_RDONLY|O_CLOEXEC) = 3 <0.000039>
04:11:50.602263 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\300+\0\0\0\0\0\0"..., 832) = 832 <0.000035>
04:11:50.602359 fstat(3, {st_mode=S_IFREG|0644, st_size=207128, ...}) = 0 <0.000033>
04:11:50.602453 mmap(NULL, 2302208, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f845b9f8000 <0.000036>
04:11:50.602546 mprotect(0x7f845ba29000, 2097152, PROT_NONE) = 0 <0.000037>
04:11:50.602639 mmap(0x7f845bc29000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x31000) = 0x7f845bc29000 <0.000038>
04:11:51.257654 close(3) = 0 <0.000042>
04:11:51.257859 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) <0.000078>
18:08:37.635570 open("/usr/lib/x86_64-linux-gnu/sasl2/libcrammd5.so", O_RDONLY|O_CLOEXEC) = 28 <0.000050>
18:08:37.635681 read(28, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260\20\0\0\0\0\0\0"..., 832) = 832 <1.011536>
18:08:38.647329 fstat(28, {st_mode=S_IFREG|0644, st_size=22592, ...}) = 0 <0.000042>
18:08:38.647499 mmap(NULL, 2117872, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 28, 0) = 0x7f6c29128000 <0.000048>
18:08:38.647613 mprotect(0x7f6c2912c000, 2097152, PROT_NONE) = 0 <0.000049>
18:08:38.647721 mmap(0x7f6c2932c000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 28, 0x4000) = 0x7f6c2932c000 <0.000043>
18:08:38.647862 close(28) = 0 <0.000035>
The first quote shows a pattern I'm seeing very often on slow dm starts, where
it takes a long time between the mmap and the next syscall. In the second
quote, read() is to blame; it took 1s.
I guess even the first quote implies there is going to be I/O after the mmap
call, doesn't it?
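To make that kind of analysis less manual, the long pauses can be pulled out of
an `strace -tt` log mechanically. A rough sketch; the 0.5s threshold and the
helper name are just illustrative, and it assumes HH:MM:SS.usec timestamps in
the first field (no day rollover):

```shell
# find_gaps: read an `strace -tt`-style log on stdin and print every
# line whose timestamp is more than 0.5s after the previous line's,
# i.e. where the preceding syscall (plus any wait) took a long time.
find_gaps() {
    awk '
    function secs(t,  a) { split(t, a, ":"); return a[1]*3600 + a[2]*60 + a[3] }
    {
        t = secs($1)
        if (NR > 1 && t - prev > 0.5)
            printf "gap of %.3fs before: %s\n", t - prev, $0
        prev = t
    }'
}
```

Run against the first quote above, this flags the ~0.65s pause between the
mmap at 04:11:50.602639 and the close(3) at 04:11:51.257654.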
--
Anthony PERARD
Thread overview: 28+ messages
2015-06-26 11:57 [PATCH] libxl: Increase device model startup timeout to 1min Anthony PERARD
2015-06-26 16:41 ` Ian Campbell
2015-06-29 14:23 ` Anthony PERARD
2015-06-29 14:51 ` Ian Campbell
2015-06-29 16:09 ` Anthony PERARD
2015-06-30 3:08 ` Dario Faggioli
2015-06-30 14:04 ` Ian Jackson
2015-06-30 15:58 ` Stefano Stabellini
2015-06-30 16:40 ` Ian Jackson
2015-07-01 15:03 ` Stefano Stabellini
2015-07-02 11:11 ` Anthony PERARD
2015-07-02 12:38 ` Ian Jackson
2015-07-03 11:21 ` Anthony PERARD [this message]
2015-07-03 11:30 ` Ian Jackson
2015-07-07 13:44 ` Ian Campbell
2015-07-07 13:47 ` Ian Jackson
2015-07-07 14:37 ` [PATCH V2] " Anthony PERARD
2015-07-07 14:41 ` Ian Jackson
2015-07-07 15:09 ` [PATCH V3] " Anthony PERARD
2015-07-07 15:14 ` Ian Jackson
2015-07-07 15:41 ` Ian Campbell
2015-07-14 6:17 ` Jan Beulich
2015-07-14 7:55 ` Ian Campbell
2015-07-14 9:25 ` Dario Faggioli
2015-07-14 9:37 ` Ian Campbell
2015-07-14 10:52 ` Dario Faggioli
2015-07-14 14:23 ` Dario Faggioli
2015-07-14 14:48 ` Ian Campbell