From: "Jürgen Groß" <jgross@suse.com>
To: "Anders Törnqvist" <anders.tornqvist@codiax.se>,
"Dario Faggioli" <dfaggioli@suse.com>,
"Julien Grall" <julien@xen.org>,
xen-devel@lists.xenproject.org,
"Stefano Stabellini" <sstabellini@kernel.org>
Subject: Re: Null scheduler and vwfi native problem
Date: Fri, 29 Jan 2021 09:18:21 +0100 [thread overview]
Message-ID: <bfe8b2fe-57c4-79e2-f2e7-3e1cb9b7963b@suse.com> (raw)
In-Reply-To: <18ef4619-19ae-90d2-459c-9b5282b49176@codiax.se>
On 29.01.21 09:08, Anders Törnqvist wrote:
> On 1/26/21 11:31 PM, Dario Faggioli wrote:
>> On Tue, 2021-01-26 at 18:03 +0100, Anders Törnqvist wrote:
>>> On 1/25/21 5:11 PM, Dario Faggioli wrote:
>>>> On Fri, 2021-01-22 at 14:26 +0000, Julien Grall wrote:
>>>>> Hi Anders,
>>>>>
>>>>> On 22/01/2021 08:06, Anders Törnqvist wrote:
>>>>>> On 1/22/21 12:35 AM, Dario Faggioli wrote:
>>>>>>> On Thu, 2021-01-21 at 19:40 +0000, Julien Grall wrote:
>>>>>> - booting with "sched=null vwfi=native" but not doing the IRQ
>>>>>> passthrough that you mentioned above
>>>>>> "xl destroy" gives
>>>>>> (XEN) End of domain_destroy function
>>>>>>
>>>>>> Then a "xl create" says nothing, but the domain has not started
>>>>>> correctly.
>>>>>> "xl list" looks like this for the domain:
>>>>>> mydomu             2   512     1     ------       0.0
This is odd. I would have expected ``xl create`` to fail if
something went wrong with the domain creation.
>>>>>
>>>> So, Anders, would it be possible to issue a:
>>>>
>>>> # xl debug-keys r
>>>> # xl dmesg
>>>>
>>>> And send it to us ?
>>>>
>>>> Ideally, you'd do it:
>>>> - with Julien's patch (the one he sent the other day, and that you
>>>>   have already given a try to) applied
>>>> - while you are in the state above, i.e., after having tried to
>>>>   destroy a domain and failing
>>>> - and maybe again after having tried to start a new domain
>>> Here are some logs.
>>>
>> Great, thanks a lot!
>>
>>> The system is booted as before with the patch, and the domu config
>>> does not have the IRQs.
>>>
>> Ok.
>>
>>> # xl list
>>> Name              ID   Mem VCPUs      State   Time(s)
>>> Domain-0           0  3000     5     r-----     820.1
>>> mydomu             1   511     1     r-----     157.0
>>>
>>> # xl debug-keys r
>>> (XEN) sched_smt_power_savings: disabled
>>> (XEN) NOW=191793008000
>>> (XEN) Online Cpus: 0-5
>>> (XEN) Cpupool 0:
>>> (XEN) Cpus: 0-5
>>> (XEN) Scheduler: null Scheduler (null)
>>> (XEN) cpus_free =
>>> (XEN) Domain info:
>>> (XEN) Domain: 0
>>> (XEN) 1: [0.0] pcpu=0
>>> (XEN) 2: [0.1] pcpu=1
>>> (XEN) 3: [0.2] pcpu=2
>>> (XEN) 4: [0.3] pcpu=3
>>> (XEN) 5: [0.4] pcpu=4
>>> (XEN) Domain: 1
>>> (XEN) 6: [1.0] pcpu=5
>>> (XEN) Waitqueue:
>>>
>> So far, so good. All vCPUs are running on their assigned pCPUs, and
>> there is no vCPU that wants to run but lacks a pCPU to run on.
>>
>>> (XEN) Command line: console=dtuart dtuart=/serial@5a060000
>>> dom0_mem=3000M dom0_max_vcpus=5 hmp-unsafe=true dom0_vcpus_pin
>>> sched=null vwfi=native
>>>
>> Oh, just as a side note (and most likely unrelated to the problem we're
>> discussing), you should be able to get rid of dom0_vcpus_pin.
>>
>> The NULL scheduler will do something similar to what that option does
>> anyway, with the added benefit that, if you want, you can actually
>> change which pCPUs the dom0 vCPUs are pinned to. With
>> dom0_vcpus_pin, you can't.
>>
>> So using it has only downsides (and that's true in general, if you
>> ask me, but particularly so if using NULL).
> Thanks for the feedback.
> I removed dom0_vcpus_pin. And, as you said, it seems to be unrelated to
> the problem we're discussing. The system still behaves the same.
>
> When dom0_vcpus_pin is removed, "xl vcpu-list" looks like this:
>
> Name       ID  VCPU   CPU State   Time(s) Affinity (Hard / Soft)
> Domain-0    0     0     0   r--      29.4  all / all
> Domain-0    0     1     1   r--      28.7  all / all
> Domain-0    0     2     2   r--      28.7  all / all
> Domain-0    0     3     3   r--      28.6  all / all
> Domain-0    0     4     4   r--      28.6  all / all
> mydomu      1     0     5   r--      21.6  5 / all
>
> From this listing (with "all" as hard affinity for dom0) one might
> read it as dom0 not being pinned with hard affinity to any specific
> pCPUs at all, while mydomu is pinned to pCPU 5.
> Will dom0_max_vcpus=5 in this case guarantee that dom0 will only run
> on pCPUs 0-4, so that mydomu always has pCPU 5 to itself?
No.
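If you want such a guarantee, one option is to pin dom0's vCPUs
explicitly at runtime. A sketch only, using the standard xl syntax and
the CPU numbering from your setup; see xl(1) for details:

```shell
# Restrict every vCPU of dom0 (hard affinity) to pCPUs 0-4,
# leaving pCPU 5 free for mydomu. "all" selects all of the
# domain's vCPUs in one go.
xl vcpu-pin Domain-0 all 0-4
```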
>
> What if I would like mydomu to be the only domain that uses pCPU 2?
Set up a cpupool with that pCPU assigned to it and put your domain into
that cpupool.
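For example (a sketch only; the pool name is made up here, and the exact
steps may differ between Xen versions, see the cpupool-* subcommands in
xl(1)):

```shell
# Free pCPU 2 from the default pool (no vCPU may be pinned to it)
xl cpupool-cpu-remove Pool-0 2

# Create a new pool, here with the null scheduler, and give it pCPU 2
xl cpupool-create name=\"pcpu2-pool\" sched=\"null\"
xl cpupool-cpu-add pcpu2-pool 2

# Move the domain into the new pool; it is now the only user of pCPU 2
xl cpupool-migrate mydomu pcpu2-pool
```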
Juergen
Thread overview: 31+ messages
2021-01-21 10:54 Null scheduler and vwfi native problem Anders Törnqvist
2021-01-21 18:32 ` Dario Faggioli
2021-01-21 19:40 ` Julien Grall
2021-01-21 23:35 ` Dario Faggioli
2021-01-22 8:06 ` Anders Törnqvist
2021-01-22 9:05 ` Dario Faggioli
2021-01-22 14:26 ` Julien Grall
2021-01-22 17:44 ` Anders Törnqvist
2021-01-25 15:45 ` Dario Faggioli
2021-01-25 16:11 ` Dario Faggioli
2021-01-26 17:03 ` Anders Törnqvist
2021-01-26 22:31 ` Dario Faggioli
2021-01-29 8:08 ` Anders Törnqvist
2021-01-29 8:18 ` Jürgen Groß [this message]
2021-01-29 10:16 ` Dario Faggioli
2021-02-01 6:53 ` Anders Törnqvist
2021-01-30 17:59 ` Dario Faggioli
2021-02-01 6:55 ` Anders Törnqvist
2021-02-02 7:59 ` Julien Grall
2021-02-02 15:03 ` Dario Faggioli
2021-02-02 15:23 ` Julien Grall
2021-02-03 7:31 ` Dario Faggioli
2021-02-03 9:19 ` Julien Grall
2021-02-03 11:00 ` Jürgen Groß
2021-02-03 11:20 ` Julien Grall
2021-02-03 12:02 ` Jürgen Groß
2021-02-15 7:15 ` Anders Törnqvist
2021-01-22 14:02 ` Julien Grall
2021-01-22 17:30 ` Anders Törnqvist
2021-01-22 8:07 ` Anders Törnqvist
2021-01-21 19:16 ` Julien Grall