From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH v2] tools/libxl: make
 default of max event channels dependant on vcpus
To: Julien Grall, xen-devel@lists.xenproject.org
Cc: Anthony PERARD, Ian Jackson, Wei Liu
References: <20200406082704.13994-1-jgross@suse.com>
From: Jürgen Groß
Date: Mon, 6 Apr 2020 12:17:42 +0200

On 06.04.20 11:24, Julien Grall wrote:
> Hi Jurgen,
>
> On 06/04/2020 09:27, Juergen Gross wrote:
>> Since Xen 4.4 the maximum number of event channels for a guest has
>> defaulted to 1023. For large guests with lots of vcpus this is not
>> enough, as e.g. the Linux kernel uses 7 event channels per vcpu,
>> limiting the guest to about 140 vcpus.
>
> Large guests on which arch? Which type of guests?

I'm pretty sure this applies to x86 only. I'm not aware of event
channels being used on Arm for IPIs.

>> Instead of requiring the allowed number of event channels to be
>> specified via the "event_channels" domain config option, make the
>> default depend on the maximum number of vcpus of the guest. This will
>> require using the "event_channels" domain config option in fewer
>> cases than before.
>>
>> As different guests will have differing needs, the calculation of the
>> maximum number of event channels can only be a rough estimate,
>> currently based on the Linux kernel's requirements.
>
> I am not overly happy to extend the default number of event channels
> for everyone based on Linux behavior on a given setup.
> Yes, you have more guests that would be able to run, but at the
> expense of allowing a guest to use more Xen memory.

The resulting number would be larger than today only for guests with
more than 96 vcpus. So I don't think the additional amount of memory
is really that problematic.

> For instance, I don't think this limit increase is necessary on Arm.
>
>> Nevertheless it is much better than the static upper limit of today,
>> as more guests will boot just fine with the new approach.
>>
>> In order not to regress current configs, use 1023 as the minimum
>> default setting.
>>
>> Signed-off-by: Juergen Gross
>> ---
>> V2:
>> - use max() instead of min()
>> - clarify commit message a little bit
>> ---
>>   tools/libxl/libxl_create.c | 2 +-
>
> The documentation should be updated.

Oh, indeed.

>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
>> index e7cb2dbc2b..c025b21894 100644
>> --- a/tools/libxl/libxl_create.c
>> +++ b/tools/libxl/libxl_create.c
>> @@ -226,7 +226,7 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
>>               b_info->iomem[i].gfn = b_info->iomem[i].start;
>>       if (!b_info->event_channels)
>> -        b_info->event_channels = 1023;
>> +        b_info->event_channels = max(1023, b_info->max_vcpus * 8 + 255);
>
> What is the 255 for?

Just some headroom for e.g. PV devices.


Juergen