From: "Jürgen Groß" <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2] tools/libxl: make default of max event channels dependant on vcpus
Date: Mon, 6 Apr 2020 12:47:32 +0200	[thread overview]
Message-ID: <8a6f6e41-9395-6c68-eae9-4c1aeb7d96e2@suse.com> (raw)
In-Reply-To: <26161282-7bad-5888-16c9-634647e6fde8@xen.org>

On 06.04.20 12:37, Julien Grall wrote:
> Hi Juergen,
> 
> On 06/04/2020 11:17, Jürgen Groß wrote:
>> On 06.04.20 11:24, Julien Grall wrote:
>>> Hi Jurgen,
>>>
>>> On 06/04/2020 09:27, Juergen Gross wrote:
>>>> Since Xen 4.4 the maximum number of event channels for a guest has
>>>> defaulted to 1023. For large guests with many vcpus this is not
>>>> enough: e.g. the Linux kernel uses 7 event channels per vcpu,
>>>> limiting the guest to about 140 vcpus.
>>>
>>> Large guests on which arch? Which type of guests?
>>
>> I'm pretty sure this applies to x86 only. I'm not aware of event
>> channels being used on ARM for IPIs.
> 
> How about the guest types?

PV and HVM with PV enhancements.

> 
>>
>>>
>>>> Instead of requiring the allowed number of event channels to be
>>>> specified via the "event_channels" domain config option, make the
>>>> default depend on the guest's maximum number of vcpus. This will
>>>> require using the "event_channels" domain config option in fewer
>>>> cases than before.
>>>>
>>>> As different guests will have differing needs, the calculation of
>>>> the maximum number of event channels can only be a rough estimate,
>>>> currently based on the Linux kernel's requirements.
>>>
>>> I am not overly happy to extend the default number of event channels 
>>> for everyone based on Linux behavior in a given setup. Yes, more 
>>> guests would be able to run, but at the expense of allowing a guest 
>>> to use more Xen memory.
>>
>> The resulting number would be larger than today only for guests with
>> more than 96 vcpus. So I don't think the additional amount of memory
>> is really that problematic.
> This is not a very forward-looking argument. For Arm, we limit guests 
> to 128 vCPUs at the moment, but it would be possible to support many 
> more (I think our vGIC implementation supports up to 4096 vCPUs).
> 
> So even if your change impacts only a small subset, each architecture 
> should be able to decide on the limit itself, rather than have one 
> imposed by x86 Linux PV.

Okay, what about moving the default setting of b_info->event_channels
into libxl__arch_domain_build_info_setdefault() then?

> 
>>
>>>
>>> For instance, I don't think this limit increase is necessary on Arm.
>>>
>>>> Nevertheless it is
>>>> much better than today's static upper limit, as more guests will
>>>> boot just fine with the new approach.
>>>>
>>>> In order not to regress current configs use 1023 as the minimum
>>>> default setting.
>>>>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> ---
>>>> V2:
>>>> - use max() instead of min()
>>>> - clarify commit message a little bit
>>>> ---
>>>>   tools/libxl/libxl_create.c | 2 +-
>>>
>>> The documentation should be updated.
>>
>> Oh, indeed.
>>
>>>
>>>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
>>>> index e7cb2dbc2b..c025b21894 100644
>>>> --- a/tools/libxl/libxl_create.c
>>>> +++ b/tools/libxl/libxl_create.c
>>>> @@ -226,7 +226,7 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
>>>>               b_info->iomem[i].gfn = b_info->iomem[i].start;
>>>>       if (!b_info->event_channels)
>>>> -        b_info->event_channels = 1023;
>>>> +        b_info->event_channels = max(1023, b_info->max_vcpus * 8 + 255);
>>>
>>> What is the 255 for?
>>
>> Just some headroom for e.g. pv devices.
> 
> That should really be explained in the commit message and a comment.

Okay.


Juergen



Thread overview: 17+ messages
2020-04-06  8:27 [PATCH v2] tools/libxl: make default of max event channels dependant on vcpus Juergen Gross
2020-04-06  9:24 ` Julien Grall
2020-04-06 10:17   ` Jürgen Groß
2020-04-06 10:37     ` Julien Grall
2020-04-06 10:47       ` Jürgen Groß [this message]
2020-04-06 10:52         ` Ian Jackson
2020-04-06 11:00           ` [PATCH v2] tools/libxl: make default of max event channels dependant on vcpus [and 1 more messages] Ian Jackson
2020-04-06 11:03             ` Jürgen Groß
2020-04-06 11:11             ` Jan Beulich
2020-04-06 11:54               ` Jürgen Groß
2020-04-06 12:09                 ` Jan Beulich
2020-06-02 11:06                   ` Jürgen Groß
2020-06-02 11:12                     ` Jan Beulich
2020-06-02 11:23                       ` Jürgen Groß
2020-06-02 13:21                         ` Jan Beulich
2020-04-06 10:47     ` [PATCH v2] tools/libxl: make default of max event channels dependant on vcpus Ian Jackson
2020-04-06 10:55       ` Julien Grall
