* Payed Xen Admin
@ 2016-11-27  8:52 Michael Schinzel
  2016-11-28 13:30 ` Neil Sikka
  2016-11-29 12:08 ` Dario Faggioli
  0 siblings, 2 replies; 9+ messages in thread
From: Michael Schinzel @ 2016-11-27  8:52 UTC (permalink / raw)
  To: xen-devel


[-- Attachment #1.1.1: Type: text/plain, Size: 5904 bytes --]

Good Morning,

we have some issues with our Xen hosts. It seems to be a Xen bug, but we cannot find the solution.

Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0 16192     4     r-----  147102.5
(null)                                       2     1     1     --p--d    1273.2
vmanager2268                                 4  1024     1     -b----   34798.8
vmanager2340                                 5  1024     1     -b----    5983.8
vmanager2619                                12   512     1     -b----    1067.0
vmanager2618                                13  1024     4     -b----    1448.7
vmanager2557                                14  1024     1     -b----    2783.5
vmanager1871                                16   512     1     -b----    3772.1
vmanager2592                                17   512     1     -b----   19744.5
vmanager2566                                18  2048     1     -b----    3068.4
vmanager2228                                19   512     1     -b----     837.6
vmanager2241                                20   512     1     -b----     997.0
vmanager2244                                21  2048     1     -b----    1457.9
vmanager2272                                22  2048     1     -b----    1924.5
vmanager2226                                23  1024     1     -b----    1454.0
vmanager2245                                24   512     1     -b----     692.5
vmanager2249                                25   512     1     -b----   22857.7
vmanager2265                                26  2048     1     -b----    1388.1
vmanager2270                                27   512     1     -b----    1250.6
vmanager2271                                28  2048     3     -b----    2060.8
vmanager2273                                29  1024     1     -b----   34089.4
vmanager2274                                30  2048     1     -b----    8585.1
vmanager2281                                31  2048     2     -b----    1848.9
vmanager2282                                32   512     1     -b----     755.1
vmanager2288                                33  1024     1     -b----     543.6
vmanager2292                                34   512     1     -b----    3004.9
vmanager2041                                35   512     1     -b----    4246.2
vmanager2216                                36  1536     1     -b----   47508.3
vmanager2295                                37   512     1     -b----    1414.9
vmanager2599                                38  1024     4     -b----    7523.0
vmanager2296                                39  1536     1     -b----    7142.0
vmanager2297                                40   512     1     -b----     536.7
vmanager2136                                42  1024     1     -b----    6162.9
vmanager2298                                43   512     1     -b----     441.7
vmanager2299                                44   512     1     -b----     368.7
(null)                                      45     4     1     --p--d    1296.3
vmanager2303                                46   512     1     -b----    1437.0
vmanager2308                                47   512     1     -b----     619.3
vmanager2318                                48   512     1     -b----     976.8
vmanager2325                                49   512     1     -b----     480.2
vmanager2620                                53   512     1     -b----     346.2
(null)                                      56     0     1     --p--d       8.8
vmanager2334                                57   512     1     -b----     255.5
vmanager2235                                58   512     1     -b----    1724.2
vmanager987                                 59   512     1     -b----     647.1
vmanager2302                                60   512     1     -b----     171.4
vmanager2335                                61   512     1     -b----      31.3
vmanager2336                                62   512     1     -b----      45.1
vmanager2338                                63   512     1     -b----      22.6
vmanager2346                                64   512     1     -b----      20.9
vmanager2349                                65  2048     1     -b----      14.4
vmanager2350                                66   512     1     -b----     324.8
vmanager2353                                67   512     1     -b----       7.6


HVM VMs sometimes change to the (null) state.

We have already upgraded Xen from 4.1.1 to 4.8 and upgraded the system kernel:

root@v8:~# uname -a
Linux v8.ip-projects.de 4.8.10-xen #2 SMP Mon Nov 21 18:56:56 CET 2016 x86_64 GNU/Linux

But none of these steps has helped us solve the issue.

We are now looking for a Xen administrator who can help us analyse and solve this issue. We would also pay for this service.

Hardware Specs of the host:

2x Intel Xeon E5-2620v4
256 GB DDR4 ECC Reg RAM
6x 3 TB WD RE
2x 512 GB Kingston KC
2x 256 GB Kingston KC
2x 600 GB SAS
LSI MegaRAID 9361-8i
MegaRAID Kit LSICVM02


The reasoning behind this setup:

6x 3 TB WD RE - RAID 10 - W/R IO Cache + CacheCade LSI - Data Storage
2x 512 GB Kingston KC400 SSDs - RAID 1 - SSD Cache for RAID 10 Array
2x 256 GB Kingston KC400 SSD - RAID 1 - SWAP Array for Para VMs
2x 600 GB SAS  - RAID 1 - Backup Array for faster Backup of the VMs to external Storage.




Kind regards

Michael Schinzel
- Managing Director -


IP-Projects GmbH & Co. KG
Am Vogelherd 14
D - 97295 Waldbrunn

Phone: 09306 - 76499-0
Fax: 09306 - 76499-15
E-mail: info@ip-projects.de

Managing Director: Michael Schinzel
Registration court Würzburg: HRA 6798
General partner: IP-Projects Verwaltungs GmbH




[-- Attachment #1.1.2: Type: text/html, Size: 26213 bytes --]

[-- Attachment #1.2: image001.png --]
[-- Type: image/png, Size: 2217 bytes --]

[-- Attachment #2: Type: text/plain, Size: 127 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: Payed Xen Admin
  2016-11-27  8:52 Payed Xen Admin Michael Schinzel
@ 2016-11-28 13:30 ` Neil Sikka
  2016-11-28 17:19   ` Michael Schinzel
  2016-11-29 12:08 ` Dario Faggioli
  1 sibling, 1 reply; 9+ messages in thread
From: Neil Sikka @ 2016-11-28 13:30 UTC (permalink / raw)
  To: Michael Schinzel; +Cc: Xen-devel


[-- Attachment #1.1.1: Type: text/plain, Size: 6794 bytes --]

Usually when I've seen (null) domains, the domain itself is no longer
running but its QEMU device model (DM) still is. You could probably remove
the (null) entries from the list by using "kill -9" on the QEMU PIDs.


[-- Attachment #1.1.2: Type: text/html, Size: 15502 bytes --]

[-- Attachment #1.2: image001.png --]
[-- Type: image/png, Size: 2217 bytes --]

[-- Attachment #2: Type: text/plain, Size: 127 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: Payed Xen Admin
  2016-11-28 13:30 ` Neil Sikka
@ 2016-11-28 17:19   ` Michael Schinzel
  2016-11-28 18:27     ` Thomas Toka
  0 siblings, 1 reply; 9+ messages in thread
From: Michael Schinzel @ 2016-11-28 17:19 UTC (permalink / raw)
  To: Neil Sikka; +Cc: Xen-devel, Thomas Toka


[-- Attachment #1.1.1: Type: text/plain, Size: 7211 bytes --]

Hello,

thank you for your response. There are no QEMU processes that we can identify with the ID of the failed guest.


Kind regards

Michael Schinzel
- Managing Director -


IP-Projects GmbH & Co. KG
Am Vogelherd 14
D - 97295 Waldbrunn

Phone: 09306 - 76499-0
Fax: 09306 - 76499-15
E-mail: info@ip-projects.de

Managing Director: Michael Schinzel
Registration court Würzburg: HRA 6798
General partner: IP-Projects Verwaltungs GmbH



From: Neil Sikka [mailto:neilsikka@gmail.com]
Sent: Monday, 28 November 2016 14:30
To: Michael Schinzel <schinzel@ip-projects.de>
Cc: Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Payed Xen Admin


Usually when I've seen (null) domains, the domain itself is no longer running but its QEMU device model (DM) still is. You could probably remove the (null) entries from the list by using "kill -9" on the QEMU PIDs.


[-- Attachment #1.1.2: Type: text/html, Size: 35914 bytes --]

[-- Attachment #1.2: image001.png --]
[-- Type: image/png, Size: 2217 bytes --]

[-- Attachment #2: Type: text/plain, Size: 127 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: Payed Xen Admin
  2016-11-28 17:19   ` Michael Schinzel
@ 2016-11-28 18:27     ` Thomas Toka
  2016-11-28 21:08       ` Neil Sikka
  0 siblings, 1 reply; 9+ messages in thread
From: Thomas Toka @ 2016-11-28 18:27 UTC (permalink / raw)
  To: Michael Schinzel, Neil Sikka; +Cc: Xen-devel


[-- Attachment #1.1.1: Type: text/plain, Size: 8253 bytes --]

Hello,

thanks for answering, Neil. I think Neil means the block devices?

Neil, can you show us how to verify whether those devices are still running for the (null) domain IDs?

I also think it may just be a timing problem; maybe they do not always shut down as they should.

We can surely give you access to such a box so you could have a look.
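
(For what it's worth, one way this could be checked from dom0 is sketched below; this is only an illustration, using domid 56 from the earlier list, and the exact xenstore paths may differ per setup:)

  # block and network backends that dom0 still holds for domid 56
  xenstore-ls /local/domain/0/backend/vbd/56
  xenstore-ls /local/domain/0/backend/vif/56

  # or ask the toolstack directly
  xl block-list 56
  xl network-list 56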

Kind regards

Thomas Toka

- Second Level Support -


IP-Projects GmbH & Co. KG
Am Vogelherd 14
D - 97295 Waldbrunn

Phone: 09306 - 76499-0
Fax: 09306 - 76499-15
E-mail: info@ip-projects.de

Managing Director: Michael Schinzel
Registration court Würzburg: HRA 6798
General partner: IP-Projects Verwaltungs GmbH



From: Michael Schinzel
Sent: Monday, 28 November 2016 18:20
To: Neil Sikka <neilsikka@gmail.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>; Thomas Toka <toka@ip-projects.de>
Subject: AW: [Xen-devel] Payed Xen Admin

Hello,

thank you for your response. There are no QEMU processes that we can identify with the ID of the failed guest.



[-- Attachment #1.1.2: Type: text/html, Size: 42001 bytes --]

[-- Attachment #1.2: image002.png --]
[-- Type: image/png, Size: 1043 bytes --]

[-- Attachment #1.3: image003.png --]
[-- Type: image/png, Size: 2217 bytes --]

[-- Attachment #2: Type: text/plain, Size: 127 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: Payed Xen Admin
  2016-11-28 18:27     ` Thomas Toka
@ 2016-11-28 21:08       ` Neil Sikka
  2016-11-29 20:01         ` Thomas Toka
  0 siblings, 1 reply; 9+ messages in thread
From: Neil Sikka @ 2016-11-28 21:08 UTC (permalink / raw)
  To: Thomas Toka; +Cc: Xen-devel, Michael Schinzel


[-- Attachment #1.1.1: Type: text/plain, Size: 9334 bytes --]

My technique has been to look through top or ps on Dom0 for the QEMU
processes and correlate those PIDs with what I see in /proc/PID. The
/proc/PID/cmdline file specifies which domid the QEMU process is doing the
device emulation for. If QEMU instances are running, try killing the QEMU
processes that are running for domains that have been destroyed.
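
A minimal sketch of that correlation (assuming a standard procfs; the exact
option that carries the domid depends on the QEMU flavour):

  # print each QEMU PID together with the command line it was started with
  for pid in $(pgrep qemu); do
      printf '%s: ' "$pid"
      tr '\0' ' ' < "/proc/$pid/cmdline"
      echo
  done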

On Mon, Nov 28, 2016 at 1:27 PM, Thomas Toka <toka@ip-projects.de> wrote:

> Hello,
>
> thanks for answering, Neil. I think Neil means the block devices?
>
> Neil, can you show us how to verify whether those devices are still
> running for the (null) domain IDs?
>
> I also think it may just be a timing problem; maybe they do not always
> shut down as they should.
>
> We can surely give you access to such a box so you could have a look.


-- 
My Blog: http://www.neilscomputerblog.blogspot.com/
Twitter: @neilsikka

[-- Attachment #1.1.2: Type: text/html, Size: 26363 bytes --]

[-- Attachment #1.2: image002.png --]
[-- Type: image/png, Size: 1043 bytes --]

[-- Attachment #1.3: image003.png --]
[-- Type: image/png, Size: 2217 bytes --]

[-- Attachment #2: Type: text/plain, Size: 127 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: Payed Xen Admin
  2016-11-27  8:52 Payed Xen Admin Michael Schinzel
  2016-11-28 13:30 ` Neil Sikka
@ 2016-11-29 12:08 ` Dario Faggioli
  2016-11-29 13:34   ` IP-Projects - Support
  1 sibling, 1 reply; 9+ messages in thread
From: Dario Faggioli @ 2016-11-29 12:08 UTC (permalink / raw)
  To: Michael Schinzel, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 1680 bytes --]

On Sun, 2016-11-27 at 08:52 +0000, Michael Schinzel wrote:
> Good Morning,
>  
Hello,

First, one thing: can you avoid sending HTML email to the list? Thanks
in advance. :-)
 
> Name                                        ID   Mem VCPUs      State   Time(s)
> Domain-0                                     0 16192     4     r-----  147102.5
> vmanager2325                                49   512     1     -b----     480.2
> vmanager2620                                53   512     1     -b----     346.2
> (null)                                      56     0     1     --p--d       8.8
> vmanager2334                                57   512     1     -b----     255.5
>  
> HVM VMs sometimes change to the (null) state.
>  
Sorry, I feel like there's something I'm missing. What do you mean by
"sometimes change to the (null) state"? What are the VMs doing when that
happens?

Is it, for instance, that they're being shutdown, and as a consequence
of that, instead of disappearing, they stay there as (null)?

Or does it happen just out of the blue, while they're running?

What do their log files in /var/log/xen/ say?
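
(For instance, roughly along these lines, assuming the default libxl log
locations; the exact file names depend on your setup:)

  ls -lt /var/log/xen/ | head
  tail -n 50 /var/log/xen/xl-<domain-name>.log
  tail -n 50 /var/log/xen/qemu-dm-<domain-name>.log   # device model log, HVM guests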

Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

[-- Attachment #2: Type: text/plain, Size: 127 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: Payed Xen Admin
  2016-11-29 12:08 ` Dario Faggioli
@ 2016-11-29 13:34   ` IP-Projects - Support
  2016-11-29 18:16     ` PV and HVM domains left as zombies with grants [was: Re: AW: Payed Xen Admin] Dario Faggioli
  0 siblings, 1 reply; 9+ messages in thread
From: IP-Projects - Support @ 2016-11-29 13:34 UTC (permalink / raw)
  To: 'Dario Faggioli'; +Cc: 'xen-devel@lists.xenproject.org'

Hello,

we see this, I think, when the VMs are stopped or restarted by customers (xl destroy of the VM and then recreating it), or I can reproduce it when I stop them all by script with a for loop running xl destroy $i.

It happens with both HVM and PV VMs.
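
(The stop script is essentially a loop of that kind; the following is a
hypothetical sketch, not the actual /root/scripts/vps_stop.sh used here:)

  #!/bin/bash
  # destroy every guest listed by xl, skipping the header line and Domain-0
  for i in $(xl list | awk 'NR > 2 {print $2}'); do
      xl destroy "$i"
  done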

Test case, all VMs started:

root@v34:/var# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  2048     2     r-----     398.0
vmanager1813                                28  1024     1     -b----       2.4
vmanager1864                                29  1024     1     -b----       2.2
vmanager1866                                30  2048     1     r-----       5.6
vmanager1867                                31  2048     1     r-----       4.4
vmanager2255                                32  2048     1     r-----       0.6
vmanager2494                                33   512     1     -b----       1.7
vmanager2593                                34   512     1     -b----       1.8

root@v34:/var# /root/scripts/vps_stop.sh
root@v34:/var# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  2048     2     r-----     420.5
(null)                                      34     0     1     --p--d       2.3

root@v34:/var# cat /etc/xen/vmanager2593.cfg

kernel = "/boot/vmlinuz-4.8.10-xen"
ramdisk = "/boot/initrd.img-4.8.10-xen"
memory = 512
name = "vmanager2593"


disk = ['phy:/dev/vm/vmanager2593-root,xvda1,w',
        'phy:/dev/vm/vmanager2593-swap,xvda2,w']
root = "/dev/xvda1 ro"
vcpus = 1
cpus = "all,^0-3"
vif = [ 'vifname=vmanager2593, rate=100Mb/s, bridge=xenbr0.165, mac=00:50:56:xx:xx:xx, ip=84.200.xx.xx' ]
vif_other_config = [ '00:50:56:xx:xx:xx', 'tbf', 'rate=100Mb/s', 'bps_read=100Mb/s', 'bps_write=100Mb/s', 'iops_read=100IOPS', 'iops_write=100IOPS' ]

root@v34:/var/log/xen# cat xl-vmanager2593.log
Waiting for domain vmanager2593 (domid 34) to die [pid 23747]
Domain 34 has been destroyed.

/var/log/xen/xen-hotplug.log does not log anything. Any hint why?
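
(The hypervisor dump below was captured via the 'q' and 'g' debug keys,
roughly like this:)

  xl debug-keys q   # dump domain and VCPU information
  xl debug-keys g   # dump grant-table usage
  xl dmesg          # read the resulting output from the hypervisor console ring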

(XEN) 'q' pressed -> dumping domain info (now=0x16B:4C7A5CC3)
(XEN) General information for domain 0:
(XEN)     refcnt=3 dying=0 pause_count=0
(XEN)     nr_pages=524288 xenheap_pages=5 shared_pages=0 paged_pages=0 dirty_cpus={0-1} max_pages=4294967295
(XEN)     handle=00000000-0000-0000-0000-000000000000 vm_assist=0000002d
(XEN) Rangesets belonging to domain 0:
(XEN)     I/O Ports  { 0-1f, 22-3f, 44-60, 62-9f, a2-407, 40c-cfb, d00-ffff }
(XEN)     log-dirty  { }
(XEN)     Interrupts { 1-35 }
(XEN)     I/O Memory { 0-fedff, fef00-ffffff }
(XEN) Memory pages belonging to domain 0:
(XEN)     DomPage list too long to display
(XEN)     XenPage 0000000000817167: caf=c000000000000002, taf=7400000000000002
(XEN)     XenPage 0000000000817166: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000817165: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 0000000000817164: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000000bc9fc: caf=c000000000000002, taf=7400000000000002
(XEN) NODE affinity for domain 0: [0]
(XEN) VCPU information and callbacks for domain 0:
(XEN)     VCPU0: CPU0 [has=F] poll=0 upcall_pend=00 upcall_mask=00 dirty_cpus={0}
(XEN)     cpu_hard_affinity={0} cpu_soft_affinity={0-7}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU1: CPU1 [has=T] poll=0 upcall_pend=00 upcall_mask=00 dirty_cpus={1}
(XEN)     cpu_hard_affinity={1} cpu_soft_affinity={0-7}
(XEN)     pause_count=0 pause_flags=0
(XEN)     No periodic timer
(XEN) General information for domain 34:
(XEN)     refcnt=1 dying=2 pause_count=2
(XEN)     nr_pages=122 xenheap_pages=0 shared_pages=0 paged_pages=0 dirty_cpus={} max_pages=131328
(XEN)     handle=2a991534-312f-465a-9dff-f9a9fb1baadd vm_assist=0000002d
(XEN) Rangesets belonging to domain 34:
(XEN)     I/O Ports  { }
(XEN)     log-dirty  { }
(XEN)     Interrupts { }
(XEN)     I/O Memory { }
(XEN) Memory pages belonging to domain 34:
(XEN)     DomPage 00000000005b9041: caf=00000001, taf=7400000000000001
(XEN)     DomPage 00000000005b9042: caf=00000001, taf=7400000000000001
(XEN)     DomPage 00000000005b9043: caf=00000001, taf=7400000000000001
(XEN)     DomPage 00000000005b9044: caf=00000001, taf=7400000000000001
(XEN)     DomPage 00000000005b9045: caf=00000001, taf=7400000000000001
(XEN)     DomPage 00000000005b9046: caf=00000001, taf=7400000000000001
(XEN)     DomPage 00000000005b9047: caf=00000001, taf=7400000000000001
(XEN)     DomPage 00000000005b9048: caf=00000001, taf=7400000000000001
(XEN)     DomPage 00000000005b9049: caf=00000001, taf=7400000000000001
(XEN)     DomPage 00000000005b904a: caf=00000001, taf=7400000000000001
(XEN)     DomPage 00000000005b904b: caf=00000001, taf=7400000000000001
(XEN)     DomPage 00000000005b904c: caf=00000001, taf=7400000000000001
(XEN)     DomPage 00000000005b904d: caf=00000001, taf=7400000000000001
(XEN)     DomPage 00000000005b904e: caf=00000001, taf=7400000000000001
(XEN)     DomPage 00000000005b904f: caf=00000001, taf=7400000000000001
(XEN)     DomPage 00000000005b9050: caf=00000001, taf=7400000000000001
(XEN) NODE affinity for domain 34: [0]
(XEN) VCPU information and callbacks for domain 34:
(XEN)     VCPU0: CPU4 [has=F] poll=0 upcall_pend=00 upcall_mask=01 dirty_cpus={}
(XEN)     cpu_hard_affinity={4-7} cpu_soft_affinity={0-7}
(XEN)     pause_count=0 pause_flags=0
(XEN)     No periodic timer
(XEN) Notifying guest 0:0 (virq 1, port 5)
(XEN) Notifying guest 0:1 (virq 1, port 12)
(XEN) Notifying guest 34:0 (virq 1, port 0)
(XEN) Shared frames 0 -- Saved frames 0

(XEN) gnttab_usage_print_all [ key 'g' pressed
(XEN)       -------- active --------       -------- shared --------
(XEN) [ref] localdom mfn      pin          localdom gmfn     flags
(XEN) grant-table for remote domain:    0 ... no active grant table entries
(XEN)       -------- active --------       -------- shared --------
(XEN) [ref] localdom mfn      pin          localdom gmfn     flags
(XEN) grant-table for remote domain:   34 (v1)
(XEN) [  8]        0 0x5b8f05 0x00000001          0 0x5b8f05 0x19
(XEN) [770]        0 0x5b90ba 0x00000001          0 0x5b90ba 0x19
(XEN) [802]        0 0x5b90b9 0x00000001          0 0x5b90b9 0x19
(XEN) [803]        0 0x5b90b8 0x00000001          0 0x5b90b8 0x19
(XEN) [804]        0 0x5b90b7 0x00000001          0 0x5b90b7 0x19
(XEN) [805]        0 0x5b90b6 0x00000001          0 0x5b90b6 0x19
(XEN) [806]        0 0x5b90b5 0x00000001          0 0x5b90b5 0x19
(XEN) [807]        0 0x5b90b4 0x00000001          0 0x5b90b4 0x19
(XEN) [808]        0 0x5b90b3 0x00000001          0 0x5b90b3 0x19
(XEN) [809]        0 0x5b90b2 0x00000001          0 0x5b90b2 0x19
(XEN) [810]        0 0x5b90b1 0x00000001          0 0x5b90b1 0x19
(XEN) [811]        0 0x5b90b0 0x00000001          0 0x5b90b0 0x19
(XEN) [812]        0 0x5b90af 0x00000001          0 0x5b90af 0x19
(XEN) [813]        0 0x5b90ae 0x00000001          0 0x5b90ae 0x19
(XEN) [814]        0 0x5b90ad 0x00000001          0 0x5b90ad 0x19
(XEN) [815]        0 0x5b90ac 0x00000001          0 0x5b90ac 0x19
(XEN) [816]        0 0x5b90ab 0x00000001          0 0x5b90ab 0x19
(XEN) [817]        0 0x5b90aa 0x00000001          0 0x5b90aa 0x19
(XEN) [818]        0 0x5b90a9 0x00000001          0 0x5b90a9 0x19
(XEN) [819]        0 0x5b90a8 0x00000001          0 0x5b90a8 0x19
(XEN) [820]        0 0x5b90a7 0x00000001          0 0x5b90a7 0x19
(XEN) [821]        0 0x5b90a6 0x00000001          0 0x5b90a6 0x19
(XEN) [822]        0 0x5b90a5 0x00000001          0 0x5b90a5 0x19
(XEN) [823]        0 0x5b90a4 0x00000001          0 0x5b90a4 0x19
(XEN) [824]        0 0x5b90a3 0x00000001          0 0x5b90a3 0x19
(XEN) [825]        0 0x5b90a2 0x00000001          0 0x5b90a2 0x19
(XEN) [826]        0 0x5b90a1 0x00000001          0 0x5b90a1 0x19
(XEN) [827]        0 0x5b90a0 0x00000001          0 0x5b90a0 0x19
(XEN) [828]        0 0x5b909f 0x00000001          0 0x5b909f 0x19
(XEN) [829]        0 0x5b909e 0x00000001          0 0x5b909e 0x19
(XEN) [830]        0 0x5b909d 0x00000001          0 0x5b909d 0x19
(XEN) [831]        0 0x5b909c 0x00000001          0 0x5b909c 0x19
(XEN) [832]        0 0x5b909b 0x00000001          0 0x5b909b 0x19
(XEN) [833]        0 0x5b909a 0x00000001          0 0x5b909a 0x19
(XEN) [834]        0 0x5b9099 0x00000001          0 0x5b9099 0x19
(XEN) [835]        0 0x5b9098 0x00000001          0 0x5b9098 0x19
(XEN) [836]        0 0x5b9097 0x00000001          0 0x5b9097 0x19
(XEN) [837]        0 0x5b9096 0x00000001          0 0x5b9096 0x19
(XEN) [838]        0 0x5b9095 0x00000001          0 0x5b9095 0x19
(XEN) [839]        0 0x5b9094 0x00000001          0 0x5b9094 0x19
(XEN) [840]        0 0x5b9093 0x00000001          0 0x5b9093 0x19
(XEN) [841]        0 0x5b9092 0x00000001          0 0x5b9092 0x19
(XEN) [842]        0 0x5b9091 0x00000001          0 0x5b9091 0x19
(XEN) [843]        0 0x5b9090 0x00000001          0 0x5b9090 0x19
(XEN) [844]        0 0x5b908f 0x00000001          0 0x5b908f 0x19
(XEN) [845]        0 0x5b908e 0x00000001          0 0x5b908e 0x19
(XEN) [846]        0 0x5b908d 0x00000001          0 0x5b908d 0x19
(XEN) [847]        0 0x5b908c 0x00000001          0 0x5b908c 0x19
(XEN) [848]        0 0x5b908b 0x00000001          0 0x5b908b 0x19
(XEN) [849]        0 0x5b908a 0x00000001          0 0x5b908a 0x19
(XEN) [850]        0 0x5b9089 0x00000001          0 0x5b9089 0x19
(XEN) [851]        0 0x5b9088 0x00000001          0 0x5b9088 0x19
(XEN) [852]        0 0x5b9087 0x00000001          0 0x5b9087 0x19
(XEN) [853]        0 0x5b9086 0x00000001          0 0x5b9086 0x19
(XEN) [854]        0 0x5b9085 0x00000001          0 0x5b9085 0x19
(XEN) [855]        0 0x5b9084 0x00000001          0 0x5b9084 0x19
(XEN) [856]        0 0x5b9083 0x00000001          0 0x5b9083 0x19
(XEN) [857]        0 0x5b9082 0x00000001          0 0x5b9082 0x19
(XEN) [858]        0 0x5b9081 0x00000001          0 0x5b9081 0x19
(XEN) [859]        0 0x5b9080 0x00000001          0 0x5b9080 0x19
(XEN) [860]        0 0x5b907f 0x00000001          0 0x5b907f 0x19
(XEN) [861]        0 0x5b907e 0x00000001          0 0x5b907e 0x19
(XEN) [862]        0 0x5b907d 0x00000001          0 0x5b907d 0x19
(XEN) [863]        0 0x5b907c 0x00000001          0 0x5b907c 0x19
(XEN) [864]        0 0x5b907b 0x00000001          0 0x5b907b 0x19
(XEN) [865]        0 0x5b907a 0x00000001          0 0x5b907a 0x19
(XEN) [866]        0 0x5b9079 0x00000001          0 0x5b9079 0x19
(XEN) [867]        0 0x5b9078 0x00000001          0 0x5b9078 0x19
(XEN) [868]        0 0x5b9077 0x00000001          0 0x5b9077 0x19
(XEN) [869]        0 0x5b9076 0x00000001          0 0x5b9076 0x19
(XEN) [870]        0 0x5b9075 0x00000001          0 0x5b9075 0x19
(XEN) [871]        0 0x5b9074 0x00000001          0 0x5b9074 0x19
(XEN) [872]        0 0x5b9073 0x00000001          0 0x5b9073 0x19
(XEN) [873]        0 0x5b9072 0x00000001          0 0x5b9072 0x19
(XEN) [874]        0 0x5b9071 0x00000001          0 0x5b9071 0x19
(XEN) [875]        0 0x5b9070 0x00000001          0 0x5b9070 0x19
(XEN) [876]        0 0x5b906f 0x00000001          0 0x5b906f 0x19
(XEN) [877]        0 0x5b906e 0x00000001          0 0x5b906e 0x19
(XEN) [878]        0 0x5b906d 0x00000001          0 0x5b906d 0x19
(XEN) [879]        0 0x5b906c 0x00000001          0 0x5b906c 0x19
(XEN) [880]        0 0x5b906b 0x00000001          0 0x5b906b 0x19
(XEN) [881]        0 0x5b906a 0x00000001          0 0x5b906a 0x19
(XEN) [882]        0 0x5b9069 0x00000001          0 0x5b9069 0x19
(XEN) [883]        0 0x5b9068 0x00000001          0 0x5b9068 0x19
(XEN) [884]        0 0x5b9067 0x00000001          0 0x5b9067 0x19
(XEN) [885]        0 0x5b9066 0x00000001          0 0x5b9066 0x19
(XEN) [886]        0 0x5b9065 0x00000001          0 0x5b9065 0x19
(XEN) [887]        0 0x5b9064 0x00000001          0 0x5b9064 0x19
(XEN) [888]        0 0x5b9063 0x00000001          0 0x5b9063 0x19
(XEN) [889]        0 0x5b9062 0x00000001          0 0x5b9062 0x19
(XEN) [890]        0 0x5b9061 0x00000001          0 0x5b9061 0x19
(XEN) [891]        0 0x5b9060 0x00000001          0 0x5b9060 0x19
(XEN) [892]        0 0x5b905f 0x00000001          0 0x5b905f 0x19
(XEN) [893]        0 0x5b905e 0x00000001          0 0x5b905e 0x19
(XEN) [894]        0 0x5b905d 0x00000001          0 0x5b905d 0x19
(XEN) [895]        0 0x5b905c 0x00000001          0 0x5b905c 0x19
(XEN) [896]        0 0x5b905b 0x00000001          0 0x5b905b 0x19
(XEN) [897]        0 0x5b905a 0x00000001          0 0x5b905a 0x19
(XEN) [898]        0 0x5b9059 0x00000001          0 0x5b9059 0x19
(XEN) [899]        0 0x5b9058 0x00000001          0 0x5b9058 0x19
(XEN) [900]        0 0x5b9057 0x00000001          0 0x5b9057 0x19
(XEN) [901]        0 0x5b9056 0x00000001          0 0x5b9056 0x19
(XEN) [902]        0 0x5b9055 0x00000001          0 0x5b9055 0x19
(XEN) [903]        0 0x5b9054 0x00000001          0 0x5b9054 0x19
(XEN) [904]        0 0x5b9053 0x00000001          0 0x5b9053 0x19
(XEN) [905]        0 0x5b9052 0x00000001          0 0x5b9052 0x19
(XEN) [906]        0 0x5b9050 0x00000001          0 0x5b9050 0x19
(XEN) [907]        0 0x5b904f 0x00000001          0 0x5b904f 0x19
(XEN) [908]        0 0x5b904e 0x00000001          0 0x5b904e 0x19
(XEN) [909]        0 0x5b904d 0x00000001          0 0x5b904d 0x19
(XEN) [910]        0 0x5b904c 0x00000001          0 0x5b904c 0x19
(XEN) [911]        0 0x5b904b 0x00000001          0 0x5b904b 0x19
(XEN) [912]        0 0x5b904a 0x00000001          0 0x5b904a 0x19
(XEN) [913]        0 0x5b9049 0x00000001          0 0x5b9049 0x19
(XEN) [914]        0 0x5b9048 0x00000001          0 0x5b9048 0x19
(XEN) [915]        0 0x5b9047 0x00000001          0 0x5b9047 0x19
(XEN) [916]        0 0x5b9046 0x00000001          0 0x5b9046 0x19
(XEN) [917]        0 0x5b9045 0x00000001          0 0x5b9045 0x19
(XEN) [918]        0 0x5b9044 0x00000001          0 0x5b9044 0x19
(XEN) [919]        0 0x5b9043 0x00000001          0 0x5b9043 0x19
(XEN) [920]        0 0x5b9042 0x00000001          0 0x5b9042 0x19
(XEN) [921]        0 0x5b9041 0x00000001          0 0x5b9041 0x19
(XEN) gnttab_usage_print_all ] done

Kind regards

Thomas Toka

- Second Level Support - 


IP-Projects GmbH & Co. KG
Am Vogelherd 14
D - 97295 Waldbrunn 
Telefon: 09306 - 76499-0
FAX: 09306 - 76499-15
E-Mail: info@ip-projects.de
Geschäftsführer: Michael Schinzel
Registergericht Würzburg: HRA 6798
Komplementär: IP-Projects Verwaltungs GmbH


-----Original Message-----
From: Dario Faggioli [mailto:dario.faggioli@citrix.com]
Sent: Tuesday, 29 November 2016 13:08
To: Michael Schinzel <schinzel@ip-projects.de>; xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] Payed Xen Admin

On Sun, 2016-11-27 at 08:52 +0000, Michael Schinzel wrote:
> Good Morning,
>  
Hello,

First, one thing, can you avoid sending HTML email to the list? Thanks in advance. :-)
 
> Name                                        ID   Mem VCPUs      State   Time(s)
> Domain-0                                     0 16192     4     r-----  147102.5
> vmanager2325                                49   512     1     -b----     480.2
> vmanager2620                                53   512     1     -b----     346.2
> (null)                                      56     0     1     --p--d       8.8
> vmanager2334                                57   512     1     -b----     255.5
>  
> HVM VMs change sometimes in the state (null).
>  
Sorry, I feel like there's something I'm missing. What do you mean by "change sometimes in the state"? What are the VMs doing when that happens?

Is it, for instance, that they're being shut down, and as a consequence of that, instead of disappearing, they stay there as (null)?

Or does it happen just out of the blue, while they're running?

What do their log files in /var/log/xen/ say?

Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 9+ messages in thread

* PV and HVM domains left as zombies with grants [was: Re: AW: Payed Xen Admin]
  2016-11-29 13:34   ` IP-Projects - Support
@ 2016-11-29 18:16     ` Dario Faggioli
  0 siblings, 0 replies; 9+ messages in thread
From: Dario Faggioli @ 2016-11-29 18:16 UTC (permalink / raw)
  To: IP-Projects - Support
  Cc: 'xen-devel@lists.xenproject.org', Wei Liu, Roger Pau Monne


[-- Attachment #1.1: Type: text/plain, Size: 6295 bytes --]

On Tue, 2016-11-29 at 13:34 +0000, IP-Projects - Support wrote:
> Hello,
> 
> we see this, I think, when the VMs are stopped or restarted by
> customers (xl destroy vm and then recreating), or I can reproduce it
> when I stop them all by script with a for loop doing xl destroy $i.
> 
Ok, that makes sense. What is happening to you is that some of the
domains, although dead, are still around as 'zombies', because they've
got outstanding pages/references/etc.

This is clearly visible in the output of the debug keys you provided.

Something similar has been discussed, e.g., here:
https://lists.xenproject.org/archives/html/xen-devel/2013-11/msg03413.html
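
For reference, this kind of dump can be reproduced from Dom0 with something
like the following (just a sketch; the debug-key output goes to the
hypervisor log, which xl dmesg reads back):

# a zombie shows up in xl list as (null), in a dying state
xl list

# dump general domain info ('q') and grant-table usage ('g') into the Xen log
xl debug-keys q
xl debug-keys g

# read the dump back from the hypervisor console ring
xl dmesg | tail -n 200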

> It happens with hvm and pvm
> 
> testcase all vms started:
> 
> root@v34:/var# xl list
> Name                                        ID   Mem VCPUs      State   Time(s)
> Domain-0                                     0  2048     2     r-----     398.0
> vmanager2593                                34   512     1     -b----       1.8
> 
> root@v34:/var# /root/scripts/vps_stop.sh
> root@v34:/var# xl list
> Name                                        ID   Mem VCPUs      State   Time(s)
> Domain-0                                     0  2048     2     r-----     420.5
> (null)                                      34     0     1     --p--d       2.3
> 
Just for the sake of completeness, can we see what's in vps_stop.sh?
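
My guess is it looks something like this (purely hypothetical until we see
the real script):

#!/bin/bash
# hypothetical vps_stop.sh: hard-destroy every guest except Domain-0
for dom in $(xl list | awk 'NR > 2 {print $1}'); do
    xl destroy "$dom"
done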

> root@v34:/var/log/xen# cat xl-vmanager2593.log
> Waiting for domain vmanager2593 (domid 34) to die [pid 23747]
> Domain 34 has been destroyed.
> 
Ok, thanks. Not much indeed. One way to increase the amount of
information would be to start the domains with:

xl -vvv create /etc/xen/vmanager2593.cfg

This will add logs coming from xl and libxl, which may not be where the
problem really is, but I think it's worth a try. Be aware that this
will make your terminal/console/whatever very busy, if you start a lot
of VMs at the same time.
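
If the terminal gets too noisy, the verbose output can simply be redirected
to a per-domain file, e.g. (file name just an example):

xl -vvv create /etc/xen/vmanager2593.cfg > /var/log/xen/xl-create-vmanager2593.log 2>&1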

From the config you posted (and that I removed) I see it's a PV guest,
so I'm not asking for any device model logs, in this case.

> /var/log/xen/xen-hotplug.log does not log anything. Any hint why?
> 
I've no idea, but I'm not even sure what kind of log that contains (I
guess stuff related to hotplug scripts).

So, here we are:

> (XEN) 'q' pressed -> dumping domain info (now=0x16B:4C7A5CC3)
> (XEN) General information for domain 34:
> (XEN)     refcnt=1 dying=2 pause_count=2
> (XEN)     nr_pages=122 xenheap_pages=0 shared_pages=0 paged_pages=0 dirty_cpus={} max_pages=131328
>
As you see, there are outstanding pages. That's what is keeping the
domain around.

> (XEN)     handle=2a991534-312f-465a-9dff-f9a9fb1baadd vm_assist=0000002d
> (XEN) Rangesets belonging to domain 34:
> (XEN)     I/O Ports  { }
> (XEN)     log-dirty  { }
> (XEN)     Interrupts { }
> (XEN)     I/O Memory { }
> (XEN) Memory pages belonging to domain 34:
> (XEN)     DomPage 00000000005b9041: caf=00000001, taf=7400000000000001
> (XEN)     DomPage 00000000005b9042: caf=00000001, taf=7400000000000001
> (XEN)     DomPage 00000000005b9043: caf=00000001, taf=7400000000000001
> (XEN)     DomPage 00000000005b9044: caf=00000001, taf=7400000000000001
> (XEN)     DomPage 00000000005b9045: caf=00000001, taf=7400000000000001
> (XEN)     DomPage 00000000005b9046: caf=00000001, taf=7400000000000001
> (XEN)     DomPage 00000000005b9047: caf=00000001, taf=7400000000000001
> (XEN)     DomPage 00000000005b9048: caf=00000001, taf=7400000000000001
> (XEN)     DomPage 00000000005b9049: caf=00000001, taf=7400000000000001
> (XEN)     DomPage 00000000005b904a: caf=00000001, taf=7400000000000001
> (XEN)     DomPage 00000000005b904b: caf=00000001, taf=7400000000000001
> (XEN)     DomPage 00000000005b904c: caf=00000001, taf=7400000000000001
> (XEN)     DomPage 00000000005b904d: caf=00000001, taf=7400000000000001
> (XEN)     DomPage 00000000005b904e: caf=00000001, taf=7400000000000001
> (XEN)     DomPage 00000000005b904f: caf=00000001, taf=7400000000000001
> (XEN)     DomPage 00000000005b9050: caf=00000001, taf=7400000000000001
> (XEN) NODE affinity for domain 34: [0]
> (XEN) VCPU information and callbacks for domain 34:
> (XEN)     VCPU0: CPU4 [has=F] poll=0 upcall_pend=00 upcall_mask=01 dirty_cpus={}
> (XEN)     cpu_hard_affinity={4-7} cpu_soft_affinity={0-7}
> (XEN)     pause_count=0 pause_flags=0
> (XEN)     No periodic timer
> (XEN) Notifying guest 0:0 (virq 1, port 5)
> (XEN) Notifying guest 0:1 (virq 1, port 12)
> (XEN) Notifying guest 34:0 (virq 1, port 0)
> (XEN) Shared frames 0 -- Saved frames 0
> 
> (XEN) gnttab_usage_print_all [ key 'g' pressed
> (XEN)       -------- active --------       -------- shared --------
> (XEN) [ref] localdom mfn      pin          localdom gmfn     flags
> (XEN) grant-table for remote domain:   34 (v1)
> (XEN) [  8]        0 0x5b8f05 0x00000001          0 0x5b8f05 0x19
> (XEN) [770]        0 0x5b90ba 0x00000001          0 0x5b90ba 0x19
> (XEN) [802]        0 0x5b90b9 0x00000001          0 0x5b90b9 0x19
> (XEN) [803]        0 0x5b90b8 0x00000001          0 0x5b90b8 0x19
> [snip]
>
And here are the grants!

I'm Cc-ing someone who knows more than me about grants... In the
meantime, can you state again exactly what it is that you are using
(a quick way to gather this is sketched below), such as:
 - what Xen version?
 - what Dom0 kernel version?
 - about DomU kernel version, I see from this in the config file:
   vmlinuz-4.8.10-xen, so it's Linux 4.8.10, is that right?
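
Something like this should collect all of it (a sketch, run from Dom0; the
DomU kernel is checked from inside the guest):

# hypervisor version
xl info | grep -E 'xen_version|xen_extra'

# Dom0 kernel
uname -r

# DomU kernel: attach to the guest console and run uname -r there, e.g.
#   xl console vmanager2593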

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

[-- Attachment #2: Type: text/plain, Size: 127 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: Payed Xen Admin
  2016-11-28 21:08       ` Neil Sikka
@ 2016-11-29 20:01         ` Thomas Toka
  0 siblings, 0 replies; 9+ messages in thread
From: Thomas Toka @ 2016-11-29 20:01 UTC (permalink / raw)
  To: 'Neil Sikka'; +Cc: Xen-devel, Michael Schinzel


[-- Attachment #1.1.1: Type: text/plain, Size: 23776 bytes --]

Hello,

so with normal OS techniques like ps, top or similar we cannot see any processes or PIDs belonging to the domain.

The debug-key output for such a domain looks like this; usually we see no memory pages belonging to such a domain.

This one shows some:

debug-keys q

(XEN) General information for domain 9:
(XEN)     refcnt=1 dying=2 pause_count=2
(XEN)     nr_pages=170 xenheap_pages=0 shared_pages=0 paged_pages=0 dirty_cpus={} max_pages=262400
(XEN)     handle=f35296d5-9522-4e81-89bf-797d3e64b466 vm_assist=0000002d
(XEN) Rangesets belonging to domain 9:
(XEN)     I/O Ports  { }
(XEN)     log-dirty  { }
(XEN)     Interrupts { }
(XEN)     I/O Memory { }
(XEN) Memory pages belonging to domain 9:
(XEN)     DomPage 000000000053ccc7: caf=00000001, taf=7400000000000001
(XEN)     DomPage 000000000053c892: caf=00000001, taf=7400000000000001
(XEN)     DomPage 000000000053c893: caf=00000001, taf=7400000000000001
(XEN)     DomPage 000000000053c894: caf=00000001, taf=7400000000000001
(XEN)     DomPage 000000000053c895: caf=00000001, taf=7400000000000001
(XEN)     DomPage 000000000053c896: caf=00000001, taf=7400000000000001
(XEN)     DomPage 000000000053c897: caf=00000001, taf=7400000000000001
(XEN)     DomPage 000000000053c898: caf=00000001, taf=7400000000000001
(XEN)     DomPage 000000000053c899: caf=00000001, taf=7400000000000001
(XEN)     DomPage 000000000053c89a: caf=00000001, taf=7400000000000001
(XEN)     DomPage 000000000053c89b: caf=00000001, taf=7400000000000001
(XEN)     DomPage 000000000053c89c: caf=00000001, taf=7400000000000001
(XEN)     DomPage 000000000053c89d: caf=00000001, taf=7400000000000001
(XEN)     DomPage 000000000053c89e: caf=00000001, taf=7400000000000001
(XEN)     DomPage 000000000053c89f: caf=00000001, taf=7400000000000001
(XEN)     DomPage 000000000053c8a0: caf=00000001, taf=7400000000000001
(XEN) NODE affinity for domain 9: [0]
(XEN) VCPU information and callbacks for domain 9:
(XEN)     VCPU0: CPU4 [has=F] poll=0 upcall_pend=00 upcall_mask=01 dirty_cpus={}
(XEN)     cpu_hard_affinity={4-7} cpu_soft_affinity={0-7}
(XEN)     pause_count=0 pause_flags=0
(XEN)     No periodic timer
(XEN) Notifying guest 0:0 (virq 1, port 5)
(XEN) Notifying guest 0:1 (virq 1, port 12)
(XEN) Notifying guest 9:0 (virq 1, port 0)
(XEN) Shared frames 0 -- Saved frames 0

debug-keys g

(XEN) gnttab_usage_print_all [ key 'g' pressed
(XEN)       -------- active --------       -------- shared --------
(XEN) [ref] localdom mfn      pin          localdom gmfn     flags
(XEN) grant-table for remote domain:    0 ... no active grant table entries
(XEN)       -------- active --------       -------- shared --------
(XEN) [ref] localdom mfn      pin          localdom gmfn     flags
(XEN) grant-table for remote domain:    9 (v1)
(XEN) [  8]        0 0x53ccc7 0x00000001          0 0x53ccc7 0x19
(XEN) [770]        0 0x53c93b 0x00000001          0 0x53c93b 0x19
(XEN) [802]        0 0x53c93a 0x00000001          0 0x53c93a 0x19
(XEN) [803]        0 0x53c939 0x00000001          0 0x53c939 0x19
(XEN) [804]        0 0x53c938 0x00000001          0 0x53c938 0x19
(XEN) [805]        0 0x53c937 0x00000001          0 0x53c937 0x19
(XEN) [806]        0 0x53c936 0x00000001          0 0x53c936 0x19
(XEN) [807]        0 0x53c935 0x00000001          0 0x53c935 0x19
(XEN) [808]        0 0x53c934 0x00000001          0 0x53c934 0x19
(XEN) [809]        0 0x53c933 0x00000001          0 0x53c933 0x19
(XEN) [810]        0 0x53c932 0x00000001          0 0x53c932 0x19
(XEN) [811]        0 0x53c931 0x00000001          0 0x53c931 0x19
(XEN) [812]        0 0x53c930 0x00000001          0 0x53c930 0x19
(XEN) [813]        0 0x53c92f 0x00000001          0 0x53c92f 0x19
(XEN) [814]        0 0x53c92e 0x00000001          0 0x53c92e 0x19
(XEN) [815]        0 0x53c92d 0x00000001          0 0x53c92d 0x19
(XEN) [816]        0 0x53c92c 0x00000001          0 0x53c92c 0x19
(XEN) [817]        0 0x53c92b 0x00000001          0 0x53c92b 0x19
(XEN) [818]        0 0x53c92a 0x00000001          0 0x53c92a 0x19
(XEN) [819]        0 0x53c929 0x00000001          0 0x53c929 0x19
(XEN) [820]        0 0x53c928 0x00000001          0 0x53c928 0x19
(XEN) [821]        0 0x53c927 0x00000001          0 0x53c927 0x19
(XEN) [822]        0 0x53c926 0x00000001          0 0x53c926 0x19
(XEN) [823]        0 0x53c925 0x00000001          0 0x53c925 0x19
(XEN) [824]        0 0x53c924 0x00000001          0 0x53c924 0x19
(XEN) [825]        0 0x53c923 0x00000001          0 0x53c923 0x19
(XEN) [826]        0 0x53c922 0x00000001          0 0x53c922 0x19
(XEN) [827]        0 0x53c921 0x00000001          0 0x53c921 0x19
(XEN) [828]        0 0x53c920 0x00000001          0 0x53c920 0x19
(XEN) [829]        0 0x53c91f 0x00000001          0 0x53c91f 0x19
(XEN) [830]        0 0x53c91e 0x00000001          0 0x53c91e 0x19
(XEN) [831]        0 0x53c91d 0x00000001          0 0x53c91d 0x19
(XEN) [832]        0 0x53c91c 0x00000001          0 0x53c91c 0x19
(XEN) [833]        0 0x53c91b 0x00000001          0 0x53c91b 0x19
(XEN) [834]        0 0x53c91a 0x00000001          0 0x53c91a 0x19
(XEN) [835]        0 0x53c919 0x00000001          0 0x53c919 0x19
(XEN) [836]        0 0x53c918 0x00000001          0 0x53c918 0x19
(XEN) [837]        0 0x53c917 0x00000001          0 0x53c917 0x19
(XEN) [838]        0 0x53c916 0x00000001          0 0x53c916 0x19
(XEN) [839]        0 0x53c915 0x00000001          0 0x53c915 0x19
(XEN) [840]        0 0x53c914 0x00000001          0 0x53c914 0x19
(XEN) [841]        0 0x53c913 0x00000001          0 0x53c913 0x19
(XEN) [842]        0 0x53c912 0x00000001          0 0x53c912 0x19
(XEN) [843]        0 0x53c911 0x00000001          0 0x53c911 0x19
(XEN) [844]        0 0x53c910 0x00000001          0 0x53c910 0x19
(XEN) [845]        0 0x53c90f 0x00000001          0 0x53c90f 0x19
(XEN) [846]        0 0x53c90e 0x00000001          0 0x53c90e 0x19
(XEN) [847]        0 0x53c90d 0x00000001          0 0x53c90d 0x19
(XEN) [848]        0 0x53c90c 0x00000001          0 0x53c90c 0x19
(XEN) [849]        0 0x53c90b 0x00000001          0 0x53c90b 0x19
(XEN) [850]        0 0x53c90a 0x00000001          0 0x53c90a 0x19
(XEN) [851]        0 0x53c909 0x00000001          0 0x53c909 0x19
(XEN) [852]        0 0x53c908 0x00000001          0 0x53c908 0x19
(XEN) [853]        0 0x53c907 0x00000001          0 0x53c907 0x19
(XEN) [854]        0 0x53c906 0x00000001          0 0x53c906 0x19
(XEN) [855]        0 0x53c905 0x00000001          0 0x53c905 0x19
(XEN) [856]        0 0x53c904 0x00000001          0 0x53c904 0x19
(XEN) [857]        0 0x53c903 0x00000001          0 0x53c903 0x19
(XEN) [858]        0 0x53c902 0x00000001          0 0x53c902 0x19
(XEN) [859]        0 0x53c901 0x00000001          0 0x53c901 0x19
(XEN) [860]        0 0x53c900 0x00000001          0 0x53c900 0x19
(XEN) [861]        0 0x53c8ff 0x00000001          0 0x53c8ff 0x19
(XEN) [862]        0 0x53c8fe 0x00000001          0 0x53c8fe 0x19
(XEN) [863]        0 0x53c8fd 0x00000001          0 0x53c8fd 0x19
(XEN) [864]        0 0x53c8fc 0x00000001          0 0x53c8fc 0x19
(XEN) [865]        0 0x53c8fb 0x00000001          0 0x53c8fb 0x19
(XEN) [866]        0 0x53c8fa 0x00000001          0 0x53c8fa 0x19
(XEN) [867]        0 0x53c8f9 0x00000001          0 0x53c8f9 0x19
(XEN) [868]        0 0x53c8f8 0x00000001          0 0x53c8f8 0x19
(XEN) [869]        0 0x53c8f7 0x00000001          0 0x53c8f7 0x19
(XEN) [870]        0 0x53c8f6 0x00000001          0 0x53c8f6 0x19
(XEN) [871]        0 0x53c8f5 0x00000001          0 0x53c8f5 0x19
(XEN) [872]        0 0x53c8f4 0x00000001          0 0x53c8f4 0x19
(XEN) [873]        0 0x53c8f3 0x00000001          0 0x53c8f3 0x19
(XEN) [874]        0 0x53c8f2 0x00000001          0 0x53c8f2 0x19
(XEN) [875]        0 0x53c8f1 0x00000001          0 0x53c8f1 0x19
(XEN) [876]        0 0x53c8f0 0x00000001          0 0x53c8f0 0x19
(XEN) [877]        0 0x53c8ef 0x00000001          0 0x53c8ef 0x19
(XEN) [878]        0 0x53c8ee 0x00000001          0 0x53c8ee 0x19
(XEN) [879]        0 0x53c8ec 0x00000001          0 0x53c8ec 0x19
(XEN) [880]        0 0x53c8eb 0x00000001          0 0x53c8eb 0x19
(XEN) [881]        0 0x53c8ea 0x00000001          0 0x53c8ea 0x19
(XEN) [882]        0 0x53c8e9 0x00000001          0 0x53c8e9 0x19
(XEN) [883]        0 0x53c8e8 0x00000001          0 0x53c8e8 0x19
(XEN) [884]        0 0x53c8e7 0x00000001          0 0x53c8e7 0x19
(XEN) [885]        0 0x53c8e6 0x00000001          0 0x53c8e6 0x19
(XEN) [886]        0 0x53c8e5 0x00000001          0 0x53c8e5 0x19
(XEN) [887]        0 0x53c8e4 0x00000001          0 0x53c8e4 0x19
(XEN) [888]        0 0x53c8e3 0x00000001          0 0x53c8e3 0x19
(XEN) [889]        0 0x53c8e2 0x00000001          0 0x53c8e2 0x19
(XEN) [890]        0 0x53c8e1 0x00000001          0 0x53c8e1 0x19
(XEN) [891]        0 0x53c8e0 0x00000001          0 0x53c8e0 0x19
(XEN) [892]        0 0x53c8df 0x00000001          0 0x53c8df 0x19
(XEN) [893]        0 0x53c8de 0x00000001          0 0x53c8de 0x19
(XEN) [894]        0 0x53c8dd 0x00000001          0 0x53c8dd 0x19
(XEN) [895]        0 0x53c8dc 0x00000001          0 0x53c8dc 0x19
(XEN) [896]        0 0x53c8db 0x00000001          0 0x53c8db 0x19
(XEN) [897]        0 0x53c8da 0x00000001          0 0x53c8da 0x19
(XEN) [898]        0 0x53c8d9 0x00000001          0 0x53c8d9 0x19
(XEN) [899]        0 0x53c8d8 0x00000001          0 0x53c8d8 0x19
(XEN) [900]        0 0x53c8d7 0x00000001          0 0x53c8d7 0x19
(XEN) [901]        0 0x53c8d6 0x00000001          0 0x53c8d6 0x19
(XEN) [902]        0 0x53c8d5 0x00000001          0 0x53c8d5 0x19
(XEN) [903]        0 0x53c8d4 0x00000001          0 0x53c8d4 0x19
(XEN) [904]        0 0x53c8d3 0x00000001          0 0x53c8d3 0x19
(XEN) [905]        0 0x53c8d2 0x00000001          0 0x53c8d2 0x19
(XEN) [906]        0 0x53c8d1 0x00000001          0 0x53c8d1 0x19
(XEN) [907]        0 0x53c8d0 0x00000001          0 0x53c8d0 0x19
(XEN) [908]        0 0x53c8cf 0x00000001          0 0x53c8cf 0x19
(XEN) [909]        0 0x53c8ce 0x00000001          0 0x53c8ce 0x19
(XEN) [910]        0 0x53c8cd 0x00000001          0 0x53c8cd 0x19
(XEN) [911]        0 0x53c8cc 0x00000001          0 0x53c8cc 0x19
(XEN) [912]        0 0x53c8cb 0x00000001          0 0x53c8cb 0x19
(XEN) [913]        0 0x53c8ca 0x00000001          0 0x53c8ca 0x19
(XEN) [914]        0 0x53c8c9 0x00000001          0 0x53c8c9 0x19
(XEN) [915]        0 0x53c8c8 0x00000001          0 0x53c8c8 0x19
(XEN) [916]        0 0x53c8c7 0x00000001          0 0x53c8c7 0x19
(XEN) [917]        0 0x53c8c6 0x00000001          0 0x53c8c6 0x19
(XEN) [918]        0 0x53c8c5 0x00000001          0 0x53c8c5 0x19
(XEN) [919]        0 0x53c8c4 0x00000001          0 0x53c8c4 0x19
(XEN) [920]        0 0x53c8c3 0x00000001          0 0x53c8c3 0x19
(XEN) [921]        0 0x53c8c2 0x00000001          0 0x53c8c2 0x19
(XEN) [922]        0 0x53c8c1 0x00000001          0 0x53c8c1 0x19
(XEN) [923]        0 0x53c8c0 0x00000001          0 0x53c8c0 0x19
(XEN) [924]        0 0x53c8bf 0x00000001          0 0x53c8bf 0x19
(XEN) [925]        0 0x53c8be 0x00000001          0 0x53c8be 0x19
(XEN) [926]        0 0x53c8bd 0x00000001          0 0x53c8bd 0x19
(XEN) [927]        0 0x53c8bc 0x00000001          0 0x53c8bc 0x19
(XEN) [928]        0 0x53c8bb 0x00000001          0 0x53c8bb 0x19
(XEN) [929]        0 0x53c8ba 0x00000001          0 0x53c8ba 0x19
(XEN) [930]        0 0x53c8b9 0x00000001          0 0x53c8b9 0x19
(XEN) [931]        0 0x53c8b8 0x00000001          0 0x53c8b8 0x19
(XEN) [932]        0 0x53c8b7 0x00000001          0 0x53c8b7 0x19
(XEN) [933]        0 0x53c8b6 0x00000001          0 0x53c8b6 0x19
(XEN) [934]        0 0x53c8b5 0x00000001          0 0x53c8b5 0x19
(XEN) [935]        0 0x53c8b4 0x00000001          0 0x53c8b4 0x19
(XEN) [936]        0 0x53c8b3 0x00000001          0 0x53c8b3 0x19
(XEN) [937]        0 0x53c8b2 0x00000001          0 0x53c8b2 0x19
(XEN) [938]        0 0x53c8b1 0x00000001          0 0x53c8b1 0x19
(XEN) [939]        0 0x53c8b0 0x00000001          0 0x53c8b0 0x19
(XEN) [940]        0 0x53c8af 0x00000001          0 0x53c8af 0x19
(XEN) [941]        0 0x53c8ae 0x00000001          0 0x53c8ae 0x19
(XEN) [942]        0 0x53c8ad 0x00000001          0 0x53c8ad 0x19
(XEN) [943]        0 0x53c8ac 0x00000001          0 0x53c8ac 0x19
(XEN) [944]        0 0x53c8ab 0x00000001          0 0x53c8ab 0x19
(XEN) [945]        0 0x53c8aa 0x00000001          0 0x53c8aa 0x19
(XEN) [946]        0 0x53c8a9 0x00000001          0 0x53c8a9 0x19
(XEN) [947]        0 0x53c8a8 0x00000001          0 0x53c8a8 0x19
(XEN) [948]        0 0x53c8a7 0x00000001          0 0x53c8a7 0x19
(XEN) [949]        0 0x53c8a6 0x00000001          0 0x53c8a6 0x19
(XEN) [950]        0 0x53c8a5 0x00000001          0 0x53c8a5 0x19
(XEN) [951]        0 0x53c8a4 0x00000001          0 0x53c8a4 0x19
(XEN) [952]        0 0x53c8a3 0x00000001          0 0x53c8a3 0x19
(XEN) [953]        0 0x53c8a2 0x00000001          0 0x53c8a2 0x19
(XEN) [954]        0 0x53c8a1 0x00000001          0 0x53c8a1 0x19
(XEN) [955]        0 0x53c8a0 0x00000001          0 0x53c8a0 0x19
(XEN) [956]        0 0x53c89f 0x00000001          0 0x53c89f 0x19
(XEN) [957]        0 0x53c89e 0x00000001          0 0x53c89e 0x19
(XEN) [958]        0 0x53c89d 0x00000001          0 0x53c89d 0x19
(XEN) [959]        0 0x53c89c 0x00000001          0 0x53c89c 0x19
(XEN) [960]        0 0x53c89b 0x00000001          0 0x53c89b 0x19
(XEN) [961]        0 0x53c89a 0x00000001          0 0x53c89a 0x19
(XEN) [962]        0 0x53c899 0x00000001          0 0x53c899 0x19
(XEN) [963]        0 0x53c898 0x00000001          0 0x53c898 0x19
(XEN) [964]        0 0x53c897 0x00000001          0 0x53c897 0x19
(XEN) [965]        0 0x53c896 0x00000001          0 0x53c896 0x19
(XEN) [966]        0 0x53c895 0x00000001          0 0x53c895 0x19
(XEN) [967]        0 0x53c894 0x00000001          0 0x53c894 0x19
(XEN) [968]        0 0x53c893 0x00000001          0 0x53c893 0x19
(XEN) [969]        0 0x53c892 0x00000001          0 0x53c892 0x19
(XEN) gnttab_usage_print_all ] done




Kind regards,

Thomas Toka

- Second Level Support -

IP-Projects GmbH & Co. KG
Am Vogelherd 14
D - 97295 Waldbrunn

Phone: 09306 - 76499-0
FAX: 09306 - 76499-15
E-Mail: info@ip-projects.de

Managing Director: Michael Schinzel
Register Court Würzburg: HRA 6798
General Partner: IP-Projects Verwaltungs GmbH



From: Neil Sikka [mailto:neilsikka@gmail.com]
Sent: Monday, 28 November 2016 22:09
To: Thomas Toka <toka@ip-projects.de>
Cc: Michael Schinzel <schinzel@ip-projects.de>; Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Payed Xen Admin

My technique has been to look through top or ps on Dom0 for the QEMU processes and correlate those PIDs with what I see in /proc/PID. The /proc/PID/cmdline file specifies which domid the QEMU process is doing device emulation for. If QEMU instances are still running, try killing the QEMU processes that belong to domains that have already been destroyed.
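
A rough sketch of that correlation, run in Dom0 (illustrative only; adjust the pgrep pattern to your setup):

# print each qemu PID together with the command line that names the domid it serves
for pid in $(pgrep -f qemu); do
    printf '%s: ' "$pid"
    tr '\0' ' ' < /proc/"$pid"/cmdline
    echo
done
# any qemu left behind for an already-destroyed domain can then be killed with: kill -9 <pid>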

On Mon, Nov 28, 2016 at 1:27 PM, Thomas Toka <toka@ip-projects.de> wrote:
Hello,

thanks for answering, Neil. I think Neil means the block devices?

Neil, can you show us how to verify whether those devices are still running for the (null) domain IDs?

I also think it is maybe just a timing problem; maybe they do not always shut down as they should.

We can certainly give you access to such a box so you could have a look.

Kind regards,

Thomas Toka

- Second Level Support -

IP-Projects GmbH & Co. KG
Am Vogelherd 14
D - 97295 Waldbrunn

Phone: 09306 - 76499-0
FAX: 09306 - 76499-15
E-Mail: info@ip-projects.de

Managing Director: Michael Schinzel
Register Court Würzburg: HRA 6798
General Partner: IP-Projects Verwaltungs GmbH



From: Michael Schinzel
Sent: Monday, 28 November 2016 18:20
To: Neil Sikka <neilsikka@gmail.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>; Thomas Toka <toka@ip-projects.de>
Subject: RE: [Xen-devel] Payed Xen Admin

Hello,

thank you for your response. There are no qemu processes that we can identify with the ID of the failed guest.


Kind regards,

Michael Schinzel
- Managing Director -

IP-Projects GmbH & Co. KG
Am Vogelherd 14
D - 97295 Waldbrunn

Phone: 09306 - 76499-0
FAX: 09306 - 76499-15
E-Mail: info@ip-projects.de

Managing Director: Michael Schinzel
Register Court Würzburg: HRA 6798
General Partner: IP-Projects Verwaltungs GmbH



From: Neil Sikka [mailto:neilsikka@gmail.com]
Sent: Monday, 28 November 2016 14:30
To: Michael Schinzel <schinzel@ip-projects.de>
Cc: Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Payed Xen Admin


Usually, I've seen that (null) domains are not running but their QEMU DMs are. You could probably remove the (null) entries from the list by using "kill -9" on the qemu PIDs.

On Nov 27, 2016 11:55 PM, "Michael Schinzel" <schinzel@ip-projects.de> wrote:
Good Morning,

we have some issues with our Xen Hosts. It seems it is a xen bug but we do not find the solution.

Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0 16192     4     r-----  147102.5
(null)                                       2     1     1     --p--d    1273.2
vmanager2268                                 4  1024     1     -b----   34798.8
vmanager2340                                 5  1024     1     -b----    5983.8
vmanager2619                                12   512     1     -b----    1067.0
vmanager2618                                13  1024     4     -b----    1448.7
vmanager2557                                14  1024     1     -b----    2783.5
vmanager1871                                16   512     1     -b----    3772.1
vmanager2592                                17   512     1     -b----   19744.5
vmanager2566                                18  2048     1     -b----    3068.4
vmanager2228                                19   512     1     -b----     837.6
vmanager2241                                20   512     1     -b----     997.0
vmanager2244                                21  2048     1     -b----    1457.9
vmanager2272                                22  2048     1     -b----    1924.5
vmanager2226                                23  1024     1     -b----    1454.0
vmanager2245                                24   512     1     -b----     692.5
vmanager2249                                25   512     1     -b----   22857.7
vmanager2265                                26  2048     1     -b----    1388.1
vmanager2270                                27   512     1     -b----    1250.6
vmanager2271                                28  2048     3     -b----    2060.8
vmanager2273                                29  1024     1     -b----   34089.4
vmanager2274                                30  2048     1     -b----    8585.1
vmanager2281                                31  2048     2     -b----    1848.9
vmanager2282                                32   512     1     -b----     755.1
vmanager2288                                33  1024     1     -b----     543.6
vmanager2292                                34   512     1     -b----    3004.9
vmanager2041                                35   512     1     -b----    4246.2
vmanager2216                                36  1536     1     -b----   47508.3
vmanager2295                                37   512     1     -b----    1414.9
vmanager2599                                38  1024     4     -b----    7523.0
vmanager2296                                39  1536     1     -b----    7142.0
vmanager2297                                40   512     1     -b----     536.7
vmanager2136                                42  1024     1     -b----    6162.9
vmanager2298                                43   512     1     -b----     441.7
vmanager2299                                44   512     1     -b----     368.7
(null)                                      45     4     1     --p--d    1296.3
vmanager2303                                46   512     1     -b----    1437.0
vmanager2308                                47   512     1     -b----     619.3
vmanager2318                                48   512     1     -b----     976.8
vmanager2325                                49   512     1     -b----     480.2
vmanager2620                                53   512     1     -b----     346.2
(null)                                      56     0     1     --p--d       8.8
vmanager2334                                57   512     1     -b----     255.5
vmanager2235                                58   512     1     -b----    1724.2
vmanager987                                 59   512     1     -b----     647.1
vmanager2302                                60   512     1     -b----     171.4
vmanager2335                                61   512     1     -b----      31.3
vmanager2336                                62   512     1     -b----      45.1
vmanager2338                                63   512     1     -b----      22.6
vmanager2346                                64   512     1     -b----      20.9
vmanager2349                                65  2048     1     -b----      14.4
vmanager2350                                66   512     1     -b----     324.8
vmanager2353                                67   512     1     -b----       7.6


HVM VMs change sometimes in the state (null).

We already upgraded Xen from 4.1.1 to 4.8 and we upgraded the system kernel:

root@v8:~# uname -a
Linux v8.ip-projects.de 4.8.10-xen #2 SMP Mon Nov 21 18:56:56 CET 2016 x86_64 GNU/Linux

But none of these steps has helped us solve the issue.

We are now looking for a Xen administrator who can help us analyse and solve this issue. We would also pay for this service.

Hardware Specs of the host:

2x Intel Xeon E5-2620v4
256 GB DDR4 ECC Reg RAM
6x 3 TB WD RE
2x 512 GB Kingston KC
2x 256 GB Kingston KC
2x 600 GB SAS
LSI MegaRAID 9361-8i
MegaRAID Kit LSICVM02


The reasoning behind this setup:

6x 3 TB WD RE – RAID 10 – W/R IO Cache + CacheCade LSI – Data Storage
2x 512 GB Kingston KC400 SSDs – RAID 1 – SSD Cache for RAID 10 Array
2x 256 GB Kingston KC400 SSD – RAID 1 – SWAP Array for Para VMs
2x 600 GB SAS  - RAID 1 – Backup Array for faster Backup of the VMs to external Storage.




Kind regards,

Michael Schinzel
- Managing Director -

IP-Projects GmbH & Co. KG
Am Vogelherd 14
D - 97295 Waldbrunn

Phone: 09306 - 76499-0
FAX: 09306 - 76499-15
E-Mail: info@ip-projects.de

Managing Director: Michael Schinzel
Register Court Würzburg: HRA 6798
General Partner: IP-Projects Verwaltungs GmbH




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel



--
My Blog: http://www.neilscomputerblog.blogspot.com/
Twitter: @neilsikka

[-- Attachment #1.1.2: Type: text/html, Size: 115235 bytes --]

[-- Attachment #1.2: image001.png --]
[-- Type: image/png, Size: 1043 bytes --]

[-- Attachment #1.3: image002.png --]
[-- Type: image/png, Size: 2217 bytes --]

[-- Attachment #2: Type: text/plain, Size: 127 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2016-11-29 20:02 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-11-27  8:52 Payed Xen Admin Michael Schinzel
2016-11-28 13:30 ` Neil Sikka
2016-11-28 17:19   ` Michael Schinzel
2016-11-28 18:27     ` Thomas Toka
2016-11-28 21:08       ` Neil Sikka
2016-11-29 20:01         ` Thomas Toka
2016-11-29 12:08 ` Dario Faggioli
2016-11-29 13:34   ` IP-Projects - Support
2016-11-29 18:16     ` PV and HVM domains left as zombies with grants [was: Re: AW: Payed Xen Admin] Dario Faggioli
