From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com, Stefano Stabellini,
 Julien Grall, Volodymyr Babchuk, Andrew Cooper, George Dunlap,
 Jan Beulich, Wei Liu
Subject: [PATCH v5 5/6] arm/dom0less: assign dom0less guests to cpupools
Date: Tue, 5 Apr 2022 09:57:40 +0100
Message-Id: <20220405085741.18336-6-luca.fancellu@arm.com>
In-Reply-To: <20220405085741.18336-1-luca.fancellu@arm.com>
References: <20220405085741.18336-1-luca.fancellu@arm.com>

Introduce the domain-cpupool property of a xen,domain device tree node:
it specifies the device tree handle of a xen,cpupool node that
identifies a cpupool created at boot time, to which the guest will be
assigned on creation.

Add a member to the xen_domctl_createdomain public interface and, as a
consequence, bump the XEN_DOMCTL_INTERFACE_VERSION.

Add a public function to retrieve the pool id from a device tree
cpupool node.

Update the documentation to describe the new property.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Stefano Stabellini
---
Changes in v5:
- no changes
Changes in v4:
- no changes
- add R-by
Changes in v3:
- Use an explicitly sized integer for the struct xen_domctl_createdomain
  cpupool_id member. (Stefano)
- Adjusted the code to the changes made in the previous commit
Changes in v2:
- Moved cpupool_id from the arch-specific to the common part (Juergen)
- Implemented functions to retrieve the cpupool id from the cpupool
  dtb node.
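
For reference, a minimal sketch of how the new property is meant to be
used together with a boot-time cpupool node. The xen,cpupool compatible
string and the cpupool-cpus property appear in the code below; their
placement under /chosen follows the binding introduced earlier in this
series, and the cpupool_a, cpu0, cpu1 and domU1 labels/nodes are purely
illustrative:

    chosen {
        cpupool_a: cpupool-a {
            compatible = "xen,cpupool";
            /* phandles of the cpu nodes assigned to this pool */
            cpupool-cpus = <&cpu0 &cpu1>;
        };

        domU1 {
            compatible = "xen,domain";
            /* start this guest in the pool described above */
            domain-cpupool = <&cpupool_a>;
            /* memory, cpus, kernel/ramdisk sub-nodes as already documented */
        };
    };

Xen derives the pool id from the first cpu listed in cpupool-cpus (via
pool_cpu_map), so the handle alone is enough to place the domU in the
right pool at domain creation.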
---
 docs/misc/arm/device-tree/booting.txt |  5 +++++
 xen/arch/arm/domain_build.c           | 14 +++++++++++++-
 xen/common/boot_cpupools.c            | 24 ++++++++++++++++++++++++
 xen/common/domain.c                   |  2 +-
 xen/include/public/domctl.h           |  4 +++-
 xen/include/xen/sched.h               |  9 +++++++++
 6 files changed, 55 insertions(+), 3 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index a94125394e35..7b4a29a2c293 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -188,6 +188,11 @@ with the following properties:
     An empty property to request the memory of the domain to be
     direct-map (guest physical address == physical address).
 
+- domain-cpupool
+
+    Optional. Handle to a xen,cpupool device tree node that identifies the
+    cpupool where the guest will be started at boot.
+
 Under the "xen,domain" compatible node, one or more sub-nodes are present
 for the DomU kernel and ramdisk.

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 8be01678de05..9c67a483d4a4 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -3172,7 +3172,8 @@ static int __init construct_domU(struct domain *d,
 void __init create_domUs(void)
 {
     struct dt_device_node *node;
-    const struct dt_device_node *chosen = dt_find_node_by_path("/chosen");
+    const struct dt_device_node *cpupool_node,
+                                *chosen = dt_find_node_by_path("/chosen");
 
     BUG_ON(chosen == NULL);
     dt_for_each_child_node(chosen, node)
@@ -3241,6 +3242,17 @@ void __init create_domUs(void)
                                          vpl011_virq - 32 + 1);
         }
 
+        /* Get the optional property domain-cpupool */
+        cpupool_node = dt_parse_phandle(node, "domain-cpupool", 0);
+        if ( cpupool_node )
+        {
+            int pool_id = btcpupools_get_domain_pool_id(cpupool_node);
+            if ( pool_id < 0 )
+                panic("Error getting cpupool id from domain-cpupool (%d)\n",
+                      pool_id);
+            d_cfg.cpupool_id = pool_id;
+        }
+
         /*
          * The variable max_init_domid is initialized with zero, so here it's
          * very important to use the pre-increment operator to call

diff --git a/xen/common/boot_cpupools.c b/xen/common/boot_cpupools.c
index 97c321386879..1f940330a62d 100644
--- a/xen/common/boot_cpupools.c
+++ b/xen/common/boot_cpupools.c
@@ -21,6 +21,8 @@ static unsigned int __initdata next_pool_id;
 
 #define BTCPUPOOLS_DT_NODE_NO_REG     (-1)
 #define BTCPUPOOLS_DT_NODE_NO_LOG_CPU (-2)
+#define BTCPUPOOLS_DT_WRONG_NODE      (-3)
+#define BTCPUPOOLS_DT_CORRUPTED_NODE  (-4)
 
 static int __init get_logical_cpu_from_hw_id(unsigned int hwid)
 {
@@ -55,6 +57,28 @@ get_logical_cpu_from_cpu_node(const struct dt_device_node *cpu_node)
     return cpu_num;
 }
 
+int __init btcpupools_get_domain_pool_id(const struct dt_device_node *node)
+{
+    const struct dt_device_node *phandle_node;
+    int cpu_num;
+
+    if ( !dt_device_is_compatible(node, "xen,cpupool") )
+        return BTCPUPOOLS_DT_WRONG_NODE;
+    /*
+     * Get the first cpu listed in the cpupool; from its reg it's possible
+     * to retrieve the cpupool id.
+     */
+    phandle_node = dt_parse_phandle(node, "cpupool-cpus", 0);
+    if ( !phandle_node )
+        return BTCPUPOOLS_DT_CORRUPTED_NODE;
+
+    cpu_num = get_logical_cpu_from_cpu_node(phandle_node);
+    if ( cpu_num < 0 )
+        return cpu_num;
+
+    return pool_cpu_map[cpu_num];
+}
+
 static int __init check_and_get_sched_id(const char* scheduler_name)
 {
     int sched_id = sched_get_id_by_name(scheduler_name);

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 351029f8b239..0827400f4f49 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -698,7 +698,7 @@ struct domain *domain_create(domid_t domid,
     if ( !d->pbuf )
         goto fail;
 
-    if ( (err = sched_init_domain(d, 0)) != 0 )
+    if ( (err = sched_init_domain(d, config->cpupool_id)) != 0 )
         goto fail;
 
     if ( (err = late_hwdom_init(d)) != 0 )

diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index b85e6170b0aa..2f4cf56f438d 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -38,7 +38,7 @@
 #include "hvm/save.h"
 #include "memory.h"
 
-#define XEN_DOMCTL_INTERFACE_VERSION 0x00000014
+#define XEN_DOMCTL_INTERFACE_VERSION 0x00000015
 
 /*
  * NB. xen_domctl.domain is an IN/OUT parameter for this operation.
@@ -106,6 +106,8 @@ struct xen_domctl_createdomain {
     /* Per-vCPU buffer size in bytes. 0 to disable. */
     uint32_t vmtrace_size;
 
+    uint32_t cpupool_id;
+
     struct xen_arch_domainconfig arch;
 };

diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 453e98f1cba8..b62315ad5e5d 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1182,6 +1182,7 @@ void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
 void btcpupools_allocate_pools(void);
 unsigned int btcpupools_get_cpupool_id(unsigned int cpu);
 void btcpupools_dtb_parse(void);
+int btcpupools_get_domain_pool_id(const struct dt_device_node *node);
 
 #else /* !CONFIG_BOOT_TIME_CPUPOOLS */
 
 static inline void btcpupools_allocate_pools(void) {}
@@ -1190,6 +1191,14 @@ static inline unsigned int btcpupools_get_cpupool_id(unsigned int cpu)
 {
     return 0;
 }
 
+#ifdef CONFIG_HAS_DEVICE_TREE
+static inline int
+btcpupools_get_domain_pool_id(const struct dt_device_node *node)
+{
+    return 0;
+}
+#endif
+
 #endif
 
 #endif /* __SCHED_H__ */
-- 
2.17.1