Subject: Re: [PATCH v2] tools/libxl: make default of max event channels dependant on vcpus [and 1 more messages]
From: Jürgen Groß
To: Jan Beulich
Cc: Anthony Perard, Ian Jackson, Julien Grall, Wei Liu, xen-devel@lists.xenproject.org
Date: Tue, 2 Jun 2020 13:06:38 +0200
Message-ID: <715f6143-38b3-3f70-b9e3-1ac4a240282f@suse.com>

On 06.04.20 14:09, Jan Beulich wrote:
> On 06.04.2020 13:54, Jürgen Groß wrote:
>> On 06.04.20 13:11, Jan Beulich wrote:
>>> On 06.04.2020 13:00, Ian Jackson wrote:
>>>> Julien Grall writes ("Re: [PATCH v2] tools/libxl: make default of max event channels dependant on vcpus"):
>>>>> There is no correlation between event channels and vCPUs. The
>>>>> number of event channels only depends on the number of frontends
>>>>> you have in your guest. So...
>>>>>
>>>>> Hi Ian,
>>>>>
>>>>> On 06/04/2020 11:47, Ian Jackson wrote:
>>>>>> If ARM folks want to have a different formula for the default
>>>>>> then that is of course fine, but I wonder whether this might do
>>>>>> ARM more harm than good in this case.
>>>>>
>>>>> ... 1023 event channels is going to be plenty for most use cases.
>>>>
>>>> OK, thanks for the quick reply.
>>>>
>>>> So, Jürgen, I think everyone will be happy with this:
>>>
>>> I don't think I will be - my prior comment still holds: there are no
>>> grounds to use a specific OS kernel's (and to be precise, a specific
>>> OS kernel version's) requirements for determining defaults. If there
>>> were to be such a dependency, then the OS kernel [variant] would
>>> have to be part of the inputs to such a (set of) formula(s).
>>
>> IMO this kind of striving for perfection will completely block a
>> sane heuristic that would allow large guests to boot at all.
>
> This isn't about being perfect - I'm suggesting to leave the default
> alone rather than improve the calculation, not least because I've
> been implying ...
>
>> The patch isn't about finding the tightest possible upper bound for
>> huge guests, but a sane value that allows most of them to boot.
>>
>> And how should Xen know what the OS kernel needs exactly, after all?
>
> ... the answer "It can't" to this question.
>
>> And it is not as if we were talking about megabytes of additional
>> memory. A guest with 256 vcpus will just be able to use an
>> additional 36 memory pages. The maximum non-PV domain (probably the
>> only relevant case where an OS other than Linux is used) with 128
>> vcpus would "waste" 32 kB - and only if the guest misbehaves.
>
> Any extra page counts, or else - where do you draw the line? Any
> single page may make the difference between Xen running out of
> memory or not, and hence between being able to fulfill certain other
> requests or not.
>
>> The alternative would be to do nothing and let the user experience a
>> somewhat cryptic guest crash. They could then google for a possible
>> solution, which would probably end in a rather high static limit,
>> wasting even more memory.
>
> I realize this. Otoh more people running into this will improve the
> chances of later ones finding useful suggestions. Of course there's
> also nothing wrong with trying to make the error less cryptic.

Reviving this discussion.

I strongly disagree with your reasoning. Refusing to modify the tools'
defaults so that large guests can boot at all is a bad move IMO. We
are driving more people away from Xen this way.

The fear that a misbehaving guest of that size might use a few
additional pages on a machine with at least 100 cpus is fine from an
academic point of view, but it should not be weighed higher than the
usability aspect in this case IMO. A rough sketch of the numbers
involved follows below.

Juergen
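To put rough figures on the cost argument above, here is a minimal
sketch. The scaling formula (vcpus * 16 + 255), the 32-bit FIFO event
word size, and the 32 bytes of Xen-side bookkeeping per channel are
all assumptions made for illustration - the actual patch and Xen
internals may differ, and the sketch is not meant to reproduce the
exact 36-page / 32 kB figures quoted in the thread.

/*
 * A minimal sketch, not the actual patch: one way a vcpu-dependent
 * default for max_event_channels could look, and a rough estimate of
 * the extra memory a guest could tie up by binding that many channels.
 *
 * Assumptions (for illustration only, not taken from the patch):
 *  - the scaling formula vcpus * 16 + 255,
 *  - 32-bit FIFO event words (1024 channels per 4 kB event-array page),
 *  - 32 bytes of Xen-side bookkeeping per channel.
 */
#include <stdio.h>

#define PAGE_SIZE          4096u
#define FIFO_WORD_SIZE        4u  /* assumed FIFO ABI event word size */
#define EVTCHN_STRUCT_SIZE   32u  /* assumed per-channel bookkeeping */
#define LEGACY_DEFAULT     1023u  /* the static default discussed above */

/* Assumed heuristic: keep the old default for small guests, scale
 * linearly with the vcpu count for large ones. */
static unsigned int default_event_channels(unsigned int vcpus)
{
    unsigned int scaled = vcpus * 16 + 255;

    return scaled > LEGACY_DEFAULT ? scaled : LEGACY_DEFAULT;
}

/* Pages needed for n items of size sz, rounded up to whole pages. */
static unsigned int pages_for(unsigned int n, unsigned int sz)
{
    return (n * sz + PAGE_SIZE - 1) / PAGE_SIZE;
}

int main(void)
{
    static const unsigned int vcpus[] = { 4, 32, 128, 256 };
    unsigned int i;

    for (i = 0; i < sizeof(vcpus) / sizeof(vcpus[0]); i++) {
        unsigned int def = default_event_channels(vcpus[i]);

        printf("%3u vcpus: default %5u channels, +%u event-array "
               "page(s), +%u bookkeeping page(s) vs. the static "
               "default\n",
               vcpus[i], def,
               pages_for(def, FIFO_WORD_SIZE) -
                   pages_for(LEGACY_DEFAULT, FIFO_WORD_SIZE),
               pages_for(def, EVTCHN_STRUCT_SIZE) -
                   pages_for(LEGACY_DEFAULT, EVTCHN_STRUCT_SIZE));
    }

    return 0;
}

Even under these pessimistic assumptions the per-guest overhead for a
256-vcpu guest stays well below a megabyte, which is the core of the
usability argument above.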