From: Julien Grall <julien.grall@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xen.org
Cc: edgar.iglesias@xilinx.com,
	Stefano Stabellini <stefanos@xilinx.com>,
	saeed.nowshadi@xilinx.com
Subject: Re: [PATCH v5 5/7] xen/arm: zynqmp: eemi access control
Date: Tue, 11 Dec 2018 15:37:51 +0000	[thread overview]
Message-ID: <cb0ea7d3-f121-7226-896e-ea77459aea23@arm.com> (raw)
In-Reply-To: <1543871021-315-5-git-send-email-sstabellini@kernel.org>

Hi,

On 03/12/2018 21:03, Stefano Stabellini wrote:
> From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
> 
> From: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
> 
> Introduce data structs to implement basic access controls.
> Introduce the following three functions:
> 
> domain_has_node_access: check access to the node
> domain_has_reset_access: check access to the reset line
> domain_has_mmio_access: check access to the register
> 
> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
> 
> ---
> Statically defines:
> 
> - pm_node_access
> It encodes the relationship between a node id and the start of the MMIO
> region of a device in the corresponding power domain. It is used for
> permission checking. Although the MMIO region start address is available
> on device tree and could be derived from there (we plan to improve that
> in the future), the relationship between a node id and corresponding
> devices is not described and needs to be hardcoded.
> 
> - pm_reset_access
> Same as pm_node_access for reset lines.
> 
> ---
> Changes in v5:
> - improve in-code comments
> - use mfn_t in struct pm_access
> - remove mmio_access table
> 
> Changes in v4:
> - add #include as needed
> - add #if 0 for bisectability
> - use mfn_t in pm_check_access
> - add wrap-around ASSERT in domain_has_mmio_access
> - use GENMASK in domain_has_mmio_access
> - proper bound checks (== ARRAY_SIZE is out of bound)
> ---
>   xen/arch/arm/platforms/xilinx-zynqmp-eemi.c | 348 ++++++++++++++++++++++++++++
>   1 file changed, 348 insertions(+)
> 
> diff --git a/xen/arch/arm/platforms/xilinx-zynqmp-eemi.c b/xen/arch/arm/platforms/xilinx-zynqmp-eemi.c
> index 369bb3f..92a02df 100644
> --- a/xen/arch/arm/platforms/xilinx-zynqmp-eemi.c
> +++ b/xen/arch/arm/platforms/xilinx-zynqmp-eemi.c
> @@ -16,9 +16,357 @@
>    * GNU General Public License for more details.
>    */
>   
> +/*
> + *  EEMI Power Management API access
> + *
> + * Refs:
> + * https://www.xilinx.com/support/documentation/user_guides/ug1200-eemi-api.pdf
> + *
> + * Background:
> + * The ZynqMP has a subsystem named the PMU with a CPU and special devices
> + * dedicated to running Power Management Firmware. Other masters in the
> + * system need to send requests to the PMU in order to for example:
> + * * Manage power state
> + * * Configure clocks
> + * * Program bitstreams for the programmable logic
> + * * etc
> + *
> + * Although the details of the setup are configurable, in the common case
> + * the PMU lives in the Secure world. NS World cannot directly communicate
> + * with it and must use proxy services from ARM Trusted Firmware to reach
> + * the PMU.
> + *
> + * Power Management on the ZynqMP is implemented in a layered manner.
> + * The PMU knows about various masters and will enforce access controls
> + * based on a pre-configured partitioning. This configuration dictates
> + * which devices are owned by the various masters and the PMU FW makes sure
> + * that a given master cannot turn off a device that it does not own or that
> + * is in use by other masters.
> + *
> + * The PMU is not aware of multiple execution states in masters.
> + * For example, it treats the ARMv8 cores as single units and does not
> + * distinguish between Secure vs NS OS's nor does it know about Hypervisors
> + * and multiple guests. It is up to software on the ARMv8 cores to present
> + * a unified view of its power requirements.
> + *
> + * To implement this unified view, ARM Trusted Firmware at EL3 provides
> + * access to the PM API via SMC calls. ARM Trusted Firmware is responsible
> + * for mediating between the Secure and the NS world, rejecting SMC calls
> + * that request changes that are not allowed.
> + *
> + * Xen running above ATF owns the NS world and is responsible for presenting
> + * unified PM requests taking all guests and the hypervisor into account.
> + *
> + * Implementation:
> + * The PM API contains different classes of calls.
> + * Certain calls are harmless to expose to any guest.
> + * These include calls to get the PM API Version, or to read out the version
> + * of the chip we're running on.
> + *
> + * In order to correctly virtualize these calls, we need to know if
> + * guests issuing these calls have ownership of the given device.
> + * The approach taken here is to map PM API Nodes identifying
> + * a device into base addresses for registers that belong to that
> + * same device.
> + *
> + * If the guest has access to devices registers, we give the guest
> + * access to PM API calls that affect that device. This is implemented
> + * by pm_node_access and domain_has_node_access().
> + */
> +
> +#include <xen/iocap.h>
> +#include <xen/sched.h>
>   #include <asm/regs.h>
>   #include <asm/platforms/xilinx-zynqmp-eemi.h>
>   
> +#if 0
> +struct pm_access
> +{
> +    mfn_t mfn;
> +    bool hwdom_access;    /* HW domain gets access regardless.  */
> +};
> +
> +/*
> + * This table maps a node into a memory address.

Some of the nodes below don't have a memory address, so this comment needs to be 
updated.

> + * If a guest has access to the address, it has enough control
> + * over the node to grant it access to EEMI calls for that node.
> + */
> +static const struct pm_access pm_node_access[] = {
> +    /* MM_RPU grants access to all RPU Nodes.  */
> +    [NODE_RPU] = { mfn_init(MM_RPU) },
> +    [NODE_RPU_0] = { mfn_init(MM_RPU) },
> +    [NODE_RPU_1] = { mfn_init(MM_RPU) },
> +    [NODE_IPI_RPU_0] = { mfn_init(MM_RPU) },
> +
> +    /* GPU nodes.  */
> +    [NODE_GPU] = { mfn_init(MM_GPU) },
> +    [NODE_GPU_PP_0] = { mfn_init(MM_GPU) },
> +    [NODE_GPU_PP_1] = { mfn_init(MM_GPU) },
> +
> +    [NODE_USB_0] = { mfn_init(MM_USB3_0_XHCI) },
> +    [NODE_USB_1] = { mfn_init(MM_USB3_1_XHCI) },
> +    [NODE_TTC_0] = { mfn_init(MM_TTC0) },
> +    [NODE_TTC_1] = { mfn_init(MM_TTC1) },
> +    [NODE_TTC_2] = { mfn_init(MM_TTC2) },
> +    [NODE_TTC_3] = { mfn_init(MM_TTC3) },
> +    [NODE_SATA] = { mfn_init(MM_SATA_AHCI_HBA) },
> +    [NODE_ETH_0] = { mfn_init(MM_GEM0) },
> +    [NODE_ETH_1] = { mfn_init(MM_GEM1) },
> +    [NODE_ETH_2] = { mfn_init(MM_GEM2) },
> +    [NODE_ETH_3] = { mfn_init(MM_GEM3) },
> +    [NODE_UART_0] = { mfn_init(MM_UART0) },
> +    [NODE_UART_1] = { mfn_init(MM_UART1) },
> +    [NODE_SPI_0] = { mfn_init(MM_SPI0) },
> +    [NODE_SPI_1] = { mfn_init(MM_SPI1) },
> +    [NODE_I2C_0] = { mfn_init(MM_I2C0) },
> +    [NODE_I2C_1] = { mfn_init(MM_I2C1) },
> +    [NODE_SD_0] = { mfn_init(MM_SD0) },
> +    [NODE_SD_1] = { mfn_init(MM_SD1) },
> +    [NODE_DP] = { mfn_init(MM_DP) },
> +
> +    /* Guest with GDMA Channel 0 gets PM access. Other guests don't.  */
> +    [NODE_GDMA] = { mfn_init(MM_GDMA_CH0) },
> +    /* Guest with ADMA Channel 0 gets PM access. Other guests don't.  */
> +    [NODE_ADMA] = { mfn_init(MM_ADMA_CH0) },
> +
> +    [NODE_NAND] = { mfn_init(MM_NAND) },
> +    [NODE_QSPI] = { mfn_init(MM_QSPI) },
> +    [NODE_GPIO] = { mfn_init(MM_GPIO) },
> +    [NODE_CAN_0] = { mfn_init(MM_CAN0) },
> +    [NODE_CAN_1] = { mfn_init(MM_CAN1) },
> +
> +    /* Only for the hardware domain.  */
> +    [NODE_AFI] = { .hwdom_access = true },
> +    [NODE_APLL] = { .hwdom_access = true },
> +    [NODE_VPLL] = { .hwdom_access = true },
> +    [NODE_DPLL] = { .hwdom_access = true },
> +    [NODE_RPLL] = { .hwdom_access = true },
> +    [NODE_IOPLL] = { .hwdom_access = true },
> +    [NODE_DDR] = { .hwdom_access = true },
> +    [NODE_IPI_APU] = { .hwdom_access = true },
> +    [NODE_PCAP] = { .hwdom_access = true },
> +
> +    [NODE_PCIE] = { mfn_init(MM_PCIE_ATTRIB) },
> +    [NODE_RTC] = { mfn_init(MM_RTC) },
> +};
> +
> +/*
> + * This table maps reset line IDs into a memory address.

Same here.

> + * If a guest has access to the address, it has enough control
> + * over the affected node to grant it access to EEMI calls for
> + * resetting that node.
> + */
> +#define XILPM_RESET_IDX(n) (n - XILPM_RESET_PCIE_CFG)
> +static const struct pm_access pm_reset_access[] = {
> +    [XILPM_RESET_IDX(XILPM_RESET_PCIE_CFG)] = { mfn_init(MM_AXIPCIE_MAIN) },
> +    [XILPM_RESET_IDX(XILPM_RESET_PCIE_BRIDGE)] = { mfn_init(MM_PCIE_ATTRIB) },
> +    [XILPM_RESET_IDX(XILPM_RESET_PCIE_CTRL)] = { mfn_init(MM_PCIE_ATTRIB) },
> +
> +    [XILPM_RESET_IDX(XILPM_RESET_DP)] = { mfn_init(MM_DP) },
> +    [XILPM_RESET_IDX(XILPM_RESET_SWDT_CRF)] = { mfn_init(MM_SWDT) },
> +    [XILPM_RESET_IDX(XILPM_RESET_AFI_FM5)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_AFI_FM4)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_AFI_FM3)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_AFI_FM2)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_AFI_FM1)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_AFI_FM0)] = { .hwdom_access = true },
> +
> +    /* Channel 0 grants PM access.  */
> +    [XILPM_RESET_IDX(XILPM_RESET_GDMA)] = { mfn_init(MM_GDMA_CH0) },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPU_PP1)] = { mfn_init(MM_GPU) },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPU_PP0)] = { mfn_init(MM_GPU) },
> +    [XILPM_RESET_IDX(XILPM_RESET_GT)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_SATA)] = { mfn_init(MM_SATA_AHCI_HBA) },
> +
> +    [XILPM_RESET_IDX(XILPM_RESET_APM_FPD)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_SOFT)] = { .hwdom_access = true },
> +
> +    [XILPM_RESET_IDX(XILPM_RESET_GEM0)] = { mfn_init(MM_GEM0) },
> +    [XILPM_RESET_IDX(XILPM_RESET_GEM1)] = { mfn_init(MM_GEM1) },
> +    [XILPM_RESET_IDX(XILPM_RESET_GEM2)] = { mfn_init(MM_GEM2) },
> +    [XILPM_RESET_IDX(XILPM_RESET_GEM3)] = { mfn_init(MM_GEM3) },
> +
> +    [XILPM_RESET_IDX(XILPM_RESET_QSPI)] = { mfn_init(MM_QSPI) },
> +    [XILPM_RESET_IDX(XILPM_RESET_UART0)] = { mfn_init(MM_UART0) },
> +    [XILPM_RESET_IDX(XILPM_RESET_UART1)] = { mfn_init(MM_UART1) },
> +    [XILPM_RESET_IDX(XILPM_RESET_SPI0)] = { mfn_init(MM_SPI0) },
> +    [XILPM_RESET_IDX(XILPM_RESET_SPI1)] = { mfn_init(MM_SPI1) },
> +    [XILPM_RESET_IDX(XILPM_RESET_SDIO0)] = { mfn_init(MM_SD0) },
> +    [XILPM_RESET_IDX(XILPM_RESET_SDIO1)] = { mfn_init(MM_SD1) },
> +    [XILPM_RESET_IDX(XILPM_RESET_CAN0)] = { mfn_init(MM_CAN0) },
> +    [XILPM_RESET_IDX(XILPM_RESET_CAN1)] = { mfn_init(MM_CAN1) },
> +    [XILPM_RESET_IDX(XILPM_RESET_I2C0)] = { mfn_init(MM_I2C0) },
> +    [XILPM_RESET_IDX(XILPM_RESET_I2C1)] = { mfn_init(MM_I2C1) },
> +    [XILPM_RESET_IDX(XILPM_RESET_TTC0)] = { mfn_init(MM_TTC0) },
> +    [XILPM_RESET_IDX(XILPM_RESET_TTC1)] = { mfn_init(MM_TTC1) },
> +    [XILPM_RESET_IDX(XILPM_RESET_TTC2)] = { mfn_init(MM_TTC2) },
> +    [XILPM_RESET_IDX(XILPM_RESET_TTC3)] = { mfn_init(MM_TTC3) },
> +    [XILPM_RESET_IDX(XILPM_RESET_SWDT_CRL)] = { mfn_init(MM_SWDT) },
> +    [XILPM_RESET_IDX(XILPM_RESET_NAND)] = { mfn_init(MM_NAND) },
> +    [XILPM_RESET_IDX(XILPM_RESET_ADMA)] = { mfn_init(MM_ADMA_CH0) },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPIO)] = { mfn_init(MM_GPIO) },
> +    [XILPM_RESET_IDX(XILPM_RESET_IOU_CC)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_TIMESTAMP)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_RPU_R50)] = { mfn_init(MM_RPU) },
> +    [XILPM_RESET_IDX(XILPM_RESET_RPU_R51)] = { mfn_init(MM_RPU) },
> +    [XILPM_RESET_IDX(XILPM_RESET_RPU_AMBA)] = { mfn_init(MM_RPU) },
> +    [XILPM_RESET_IDX(XILPM_RESET_OCM)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_RPU_PGE)] = { mfn_init(MM_RPU) },
> +
> +    [XILPM_RESET_IDX(XILPM_RESET_USB0_CORERESET)] = { mfn_init(MM_USB3_0_XHCI) },
> +    [XILPM_RESET_IDX(XILPM_RESET_USB0_HIBERRESET)] = { mfn_init(MM_USB3_0_XHCI) },
> +    [XILPM_RESET_IDX(XILPM_RESET_USB0_APB)] = { mfn_init(MM_USB3_0_XHCI) },
> +
> +    [XILPM_RESET_IDX(XILPM_RESET_USB1_CORERESET)] = { mfn_init(MM_USB3_1_XHCI) },
> +    [XILPM_RESET_IDX(XILPM_RESET_USB1_HIBERRESET)] = { mfn_init(MM_USB3_1_XHCI) },
> +    [XILPM_RESET_IDX(XILPM_RESET_USB1_APB)] = { mfn_init(MM_USB3_1_XHCI) },
> +
> +    [XILPM_RESET_IDX(XILPM_RESET_IPI)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_APM_LPD)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_RTC)] = { mfn_init(MM_RTC) },
> +    [XILPM_RESET_IDX(XILPM_RESET_SYSMON)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_AFI_FM6)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_LPD_SWDT)] = { mfn_init(MM_SWDT) },
> +    [XILPM_RESET_IDX(XILPM_RESET_FPD)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_RPU_DBG1)] = { mfn_init(MM_RPU) },
> +    [XILPM_RESET_IDX(XILPM_RESET_RPU_DBG0)] = { mfn_init(MM_RPU) },
> +    [XILPM_RESET_IDX(XILPM_RESET_DBG_LPD)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_DBG_FPD)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_APLL)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_DPLL)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_VPLL)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_IOPLL)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_RPLL)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_0)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_1)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_2)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_3)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_4)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_5)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_6)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_7)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_8)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_9)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_10)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_11)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_12)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_13)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_14)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_15)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_16)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_17)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_18)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_19)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_20)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_21)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_22)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_23)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_24)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_25)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_26)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_27)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_28)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_29)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_30)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_31)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_RPU_LS)] = { mfn_init(MM_RPU) },
> +    [XILPM_RESET_IDX(XILPM_RESET_PS_ONLY)] = { .hwdom_access = true },
> +    [XILPM_RESET_IDX(XILPM_RESET_PL)] = { .hwdom_access = true },
> +};
> +
> +static bool pm_check_access(const struct pm_access *acl, struct domain *d,
> +                            uint32_t idx)
> +{
> +    if ( acl[idx].hwdom_access && is_hardware_domain(d) )
> +        return true;
> +
> +    if ( !mfn_x(acl[idx].mfn) )

Technically 0 is a valid mfn. If you want to encode an invalid value, then 
INVALID_MFN is safer.

But what are you trying to prevent? Are the node IDs not allocated contiguously?
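
To illustrate the pitfall, here is a small standalone sketch. The types are 
simplified stand-ins for Xen's mfn_t and INVALID_MFN, not the real definitions; 
the point is only that a zero-initialised table entry is indistinguishable from 
a real frame 0, whereas an explicit INVALID_MFN sentinel is not:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for Xen's typesafe mfn_t and INVALID_MFN. */
typedef uint64_t mfn_t;
#define INVALID_MFN ((mfn_t)~0ULL)

struct pm_access {
    mfn_t mfn;
    bool hwdom_access;
};

/*
 * If "unpopulated" is encoded as mfn == 0, a genuine frame 0 cannot be
 * expressed.  Checking against INVALID_MFN avoids that ambiguity, but it
 * requires the table to be explicitly filled (e.g. at init time), since
 * static zero-initialisation no longer means "no entry".
 */
static bool entry_is_populated(const struct pm_access *e)
{
    return e->mfn != INVALID_MFN;
}
```

With zero-initialisation, an unlisted node still looks "populated" under this 
check, which is exactly why the sentinel choice matters.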

> +        return false;
> +
> +    return iomem_access_permitted(d, mfn_x(acl[idx].mfn), mfn_x(acl[idx].mfn));
> +}
> +
> +/* Check if a domain has access to a node.  */
> +static bool domain_has_node_access(struct domain *d, uint32_t nodeid)
> +{
> +    if ( nodeid >= ARRAY_SIZE(pm_node_access) )
> +        return false;
> +
> +    return pm_check_access(pm_node_access, d, nodeid);
> +}
> +
> +/* Check if a domain has access to a reset line.  */
> +static bool domain_has_reset_access(struct domain *d, uint32_t rst)
> +{
> +    if ( rst < XILPM_RESET_PCIE_CFG )
> +        return false;
> +
> +    rst -= XILPM_RESET_PCIE_CFG;
> +
> +    if ( rst >= ARRAY_SIZE(pm_reset_access) )
> +        return false;

I think you can make this code not rely on XILPM_RESET_PCIE_CFG at all by using 
XILPM_RESET_IDX.

if ( XILPM_RESET_IDX(rst) >= ARRAY_SIZE(pm_reset_access) )
    return false;

rst = XILPM_RESET_IDX(rst);

We rely on unsigned underflow to catch values below XILPM_RESET_PCIE_CFG, and it 
makes the code less error-prone if the array changes without this code being updated.

> +
> +    return pm_check_access(pm_reset_access, d, rst);
> +}
> +
> +/*
> + * Check if a given domain has access to perform an indirect
> + * MMIO access.
> + *
> + * If the provided mask is invalid, it will be fixed up.
> + */
> +static bool domain_has_mmio_access(struct domain *d,
> +                                   bool write, paddr_t addr,
> +                                   uint32_t *mask)
> +{
> +    unsigned int i;
> +    bool ret = false;
> +    uint32_t prot_mask = 0;
> +
> +    /*
> +     * The hardware domain gets read access to everything.
> +     * Lower layers will do further filtering.
> +     */
> +    if ( !write && is_hardware_domain(d) )
> +        return true;
> +
> +    /* Scan the ACL.  */
> +    for ( i = 0; i < ARRAY_SIZE(pm_mmio_access); i++ )
> +    {
> +        ASSERT(pm_mmio_access[i].start + pm_mmio_access[i].size >=
> +               pm_mmio_access[i].start);
> +
> +        if ( addr < pm_mmio_access[i].start )
> +            return false;
> +        if ( addr >= pm_mmio_access[i].start + pm_mmio_access[i].size )
> +            continue;
> +
> +        if ( write && pm_mmio_access[i].readonly )
> +            return false;
> +        if ( pm_mmio_access[i].hwdom_access && !is_hardware_domain(d) )
> +            return false;
> +        if ( !domain_has_node_access(d, pm_mmio_access[i].node) )
> +            return false;
> +
> +        /* We've got access to this reg (or parts of it).  */
> +        ret = true;
> +
> +        /* Permit write access to selected bits.  */
> +        prot_mask |= pm_mmio_access[i].mask ?: GENMASK(31, 0);

The field mask does not seem to exist.

> +        break;
> +    }
> +
> +    /*
> +     * Masking only applies to writes: values are safe to read, but not
> +     * all bits are writeable.
> +     */
> +    if ( write )
> +        *mask &= prot_mask;
> +
> +    return ret;
> +}
> +#endif
> +
>   bool zynqmp_eemi(struct cpu_user_regs *regs)
>   {
>       return false;
> 

Cheers,


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
