From: Duc Dang <dhdang@apm.com> To: Bjorn Helgaas <bhelgaas@google.com>, Grant Likely <grant.likely@linaro.org>, Liviu Dudau <Liviu.Dudau@arm.com> Cc: linux-pci@vger.kernel.org, linux-arm-kernel@lists.infradead.org, Tanmay Inamdar <tinamdar@apm.com>, Loc Ho <lho@apm.com>, Feng Kan <fkan@apm.com>, Duc Dang <dhdang@apm.com> Subject: [PATCH 1/4] PCI: X-Gene: Add the APM X-Gene v1 PCIe MSI/MSIX termination driver Date: Tue, 6 Jan 2015 08:15:41 -0800 [thread overview] Message-ID: <94f9823d9a8c9c7ef819173f0a5ab06fb8fff408.1420499393.git.dhdang@apm.com> (raw) In-Reply-To: <cover.1420499393.git.dhdang@apm.com> The X-Gene v1 SoC supports a total of 2688 MSI/MSI-X vectors, coalesced into 16 HW IRQ lines. Signed-off-by: Tanmay Inamdar <tinamdar@apm.com> Signed-off-by: Duc Dang <dhdang@apm.com> --- drivers/pci/host/Kconfig | 4 + drivers/pci/host/Makefile | 1 + drivers/pci/host/pci-xgene-msi.c | 370 +++++++++++++++++++++++++++++++++++++++ 3 files changed, 375 insertions(+) create mode 100644 drivers/pci/host/pci-xgene-msi.c diff --git a/drivers/pci/host/Kconfig b/drivers/pci/host/Kconfig index c4b6568..650fd1d 100644 --- a/drivers/pci/host/Kconfig +++ b/drivers/pci/host/Kconfig @@ -84,11 +84,15 @@ config PCIE_XILINX Say 'Y' here if you want kernel to support the Xilinx AXI PCIe Host Bridge driver. +config PCI_XGENE_MSI + bool + config PCI_XGENE bool "X-Gene PCIe controller" depends on ARCH_XGENE depends on OF select PCIEPORTBUS + select PCI_XGENE_MSI if PCI_MSI help Say Y here if you want internal PCI support on APM X-Gene SoC. There are 5 internal PCIe ports available. 
Each port is GEN3 capable diff --git a/drivers/pci/host/Makefile b/drivers/pci/host/Makefile index 44c2699..c261cf7 100644 --- a/drivers/pci/host/Makefile +++ b/drivers/pci/host/Makefile @@ -11,4 +11,5 @@ obj-$(CONFIG_PCIE_SPEAR13XX) += pcie-spear13xx.o obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone-dw.o pci-keystone.o obj-$(CONFIG_PCIE_XILINX) += pcie-xilinx.o obj-$(CONFIG_PCI_XGENE) += pci-xgene.o +obj-$(CONFIG_PCI_XGENE_MSI) += pci-xgene-msi.o obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o diff --git a/drivers/pci/host/pci-xgene-msi.c b/drivers/pci/host/pci-xgene-msi.c new file mode 100644 index 0000000..1d1e1aa --- /dev/null +++ b/drivers/pci/host/pci-xgene-msi.c @@ -0,0 +1,370 @@ +/* + * APM X-Gene MSI Driver + * + * Copyright (c) 2014, Applied Micro Circuits Corporation + * Author: Tanmay Inamdar <tinamdar@apm.com> + * Duc Dang <dhdang@apm.com> + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License as published by the + * Free Software Foundation; either version 2 of the License, or (at your + * option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ */ +#include <linux/interrupt.h> +#include <linux/module.h> +#include <linux/msi.h> +#include <linux/of_irq.h> +#include <linux/pci.h> +#include <linux/platform_device.h> + +#define MSI_INDEX0 0x000000 +#define MSI_INT0 0x800000 + +struct xgene_msi_settings { + u32 index_per_group; + u32 irqs_per_index; + u32 nr_msi_vec; + u32 nr_hw_irqs; +}; + +struct xgene_msi { + struct irq_domain *irqhost; + struct xgene_msi_settings *settings; + u32 msi_addr_lo; + u32 msi_addr_hi; + void __iomem *msi_regs; + unsigned long *bitmap; + struct mutex bitmap_lock; + int *msi_virqs; +}; + +static struct xgene_msi_settings storm_msi_settings = { + .index_per_group = 8, + .irqs_per_index = 21, + .nr_msi_vec = 2688, + .nr_hw_irqs = 16, +}; + +typedef int (*xgene_msi_initcall_t)(struct xgene_msi *); +static struct xgene_msi xgene_msi_data; + +static inline irq_hw_number_t virq_to_hw(unsigned int virq) +{ + struct irq_data *irq_data = irq_get_irq_data(virq); + + return WARN_ON(!irq_data) ? 0 : irq_data->hwirq; +} + +static int xgene_msi_init_storm_settings(struct xgene_msi *xgene_msi) +{ + xgene_msi->settings = &storm_msi_settings; + return 0; +} + +static struct irq_chip xgene_msi_chip = { + .name = "xgene-msi", + .irq_enable = unmask_msi_irq, + .irq_disable = mask_msi_irq, + .irq_mask = mask_msi_irq, + .irq_unmask = unmask_msi_irq, +}; + +static int xgene_msi_host_map(struct irq_domain *h, unsigned int virq, + irq_hw_number_t hw) +{ + irq_set_chip_and_handler(virq, &xgene_msi_chip, handle_simple_irq); + irq_set_chip_data(virq, h->host_data); + set_irq_flags(virq, IRQF_VALID); + + return 0; +} + +static const struct irq_domain_ops xgene_msi_host_ops = { + .map = xgene_msi_host_map, +}; + +static int xgene_msi_alloc(struct xgene_msi *xgene_msi) +{ + u32 msi_irq_count = xgene_msi->settings->nr_msi_vec; + int msi; + + mutex_lock(&xgene_msi->bitmap_lock); + + msi = find_first_zero_bit(xgene_msi->bitmap, msi_irq_count); + if (msi < msi_irq_count) + set_bit(msi, xgene_msi->bitmap); + else + msi = -ENOSPC; 
+ + mutex_unlock(&xgene_msi->bitmap_lock); + + return msi; +} + +static void xgene_msi_free(struct xgene_msi *xgene_msi, unsigned long irq) +{ + mutex_lock(&xgene_msi->bitmap_lock); + + if (!test_bit(irq, xgene_msi->bitmap)) + pr_err("trying to free unused MSI#%lu\n", irq); + else + clear_bit(irq, xgene_msi->bitmap); + + mutex_unlock(&xgene_msi->bitmap_lock); +} + +static int xgene_msi_init_allocator(struct xgene_msi *xgene_msi) +{ + u32 msi_irq_count = xgene_msi->settings->nr_msi_vec; + u32 hw_irq_count = xgene_msi->settings->nr_hw_irqs; + int size = BITS_TO_LONGS(msi_irq_count) * sizeof(long); + + xgene_msi->bitmap = kzalloc(size, GFP_KERNEL); + if (!xgene_msi->bitmap) + return -ENOMEM; + mutex_init(&xgene_msi->bitmap_lock); + + xgene_msi->msi_virqs = kcalloc(hw_irq_count, sizeof(int), GFP_KERNEL); + if (!xgene_msi->msi_virqs) + return -ENOMEM; + return 0; +} + +void arch_teardown_msi_irqs(struct pci_dev *dev) +{ + struct msi_desc *entry; + struct xgene_msi *xgene_msi; + + list_for_each_entry(entry, &dev->msi_list, list) { + if (entry->irq == 0) + continue; + xgene_msi = irq_get_chip_data(entry->irq); + irq_set_msi_desc(entry->irq, NULL); + xgene_msi_free(xgene_msi, virq_to_hw(entry->irq)); + } +} + +static void xgene_compose_msi_msg(struct pci_dev *dev, int hwirq, + struct msi_msg *msg, + struct xgene_msi *xgene_msi) +{ + u32 nr_hw_irqs = xgene_msi->settings->nr_hw_irqs; + u32 irqs_per_index = xgene_msi->settings->irqs_per_index; + u32 reg_set = hwirq / (nr_hw_irqs * irqs_per_index); + u32 group = hwirq % nr_hw_irqs; + + msg->address_hi = xgene_msi->msi_addr_hi; + msg->address_lo = xgene_msi->msi_addr_lo + + (((8 * group) + reg_set) << 16); + msg->data = (hwirq / nr_hw_irqs) % irqs_per_index; +} + +int arch_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type) +{ + struct xgene_msi *xgene_msi = &xgene_msi_data; + struct msi_desc *entry; + struct msi_msg msg; + unsigned long virq, gic_irq; + int hwirq; + + list_for_each_entry(entry, &pdev->msi_list, list) { + 
hwirq = xgene_msi_alloc(xgene_msi); + if (hwirq < 0) { + dev_err(&pdev->dev, "failed to allocate MSI\n"); + return -ENOSPC; + } + + virq = irq_create_mapping(xgene_msi->irqhost, hwirq); + if (virq == 0) { + dev_err(&pdev->dev, "failed to map hwirq %d\n", hwirq); + return -ENOSPC; + } + + gic_irq = xgene_msi->msi_virqs[hwirq % + xgene_msi->settings->nr_hw_irqs]; + pr_debug("Map hwirq %d on GIC IRQ %lu to virq %lu\n", + hwirq, gic_irq, virq); + irq_set_msi_desc(virq, entry); + xgene_compose_msi_msg(pdev, hwirq, &msg, xgene_msi); + irq_set_handler_data(virq, (void *)gic_irq); + write_msi_msg(virq, &msg); + } + + return 0; +} + +static irqreturn_t xgene_msi_isr(int irq, void *data) +{ + struct xgene_msi *xgene_msi = (struct xgene_msi *) data; + unsigned int virq; + int msir_index, msir_reg, msir_val, hw_irq; + u32 intr_index, grp_select, msi_grp, processed = 0; + u32 nr_hw_irqs, irqs_per_index, index_per_group; + + msi_grp = irq - xgene_msi->msi_virqs[0]; + if (msi_grp >= xgene_msi->settings->nr_hw_irqs) { + pr_err("invalid msi received\n"); + return IRQ_NONE; + } + + nr_hw_irqs = xgene_msi->settings->nr_hw_irqs; + irqs_per_index = xgene_msi->settings->irqs_per_index; + index_per_group = xgene_msi->settings->index_per_group; + + grp_select = readl(xgene_msi->msi_regs + MSI_INT0 + (msi_grp << 16)); + while (grp_select) { + msir_index = ffs(grp_select) - 1; + msir_reg = (msi_grp << 19) + (msir_index << 16); + msir_val = readl(xgene_msi->msi_regs + MSI_INDEX0 + msir_reg); + while (msir_val) { + intr_index = ffs(msir_val) - 1; + hw_irq = (((msir_index * irqs_per_index) + intr_index) * + nr_hw_irqs) + msi_grp; + virq = irq_find_mapping(xgene_msi->irqhost, hw_irq); + if (virq != 0) + generic_handle_irq(virq); + msir_val &= ~(1 << intr_index); + processed++; + } + grp_select &= ~(1 << msir_index); + } + + return processed > 0 ? 
IRQ_HANDLED : IRQ_NONE; +} + +static int xgene_msi_remove(struct platform_device *pdev) +{ + int virq, i; + struct xgene_msi *msi = platform_get_drvdata(pdev); + u32 nr_hw_irqs = msi->settings->nr_hw_irqs; + + for (i = 0; i < nr_hw_irqs; i++) { + virq = msi->msi_virqs[i]; + if (virq != 0) + free_irq(virq, msi); + } + + kfree(msi->bitmap); + msi->bitmap = NULL; + + return 0; +} + +static int xgene_msi_setup_hwirq(struct xgene_msi *msi, + struct platform_device *pdev, + int irq_index) +{ + int virt_msir; + cpumask_var_t mask; + int err; + + virt_msir = platform_get_irq(pdev, irq_index); + if (virt_msir < 0) { + dev_err(&pdev->dev, "Cannot translate IRQ index %d\n", + irq_index); + return -EINVAL; + } + + err = request_irq(virt_msir, xgene_msi_isr, 0, "xgene-msi", msi); + if (err) { + dev_err(&pdev->dev, "request irq failed\n"); + return err; + } + + if (alloc_cpumask_var(&mask, GFP_KERNEL)) { + cpumask_setall(mask); + irq_set_affinity(virt_msir, mask); + free_cpumask_var(mask); + } + + msi->msi_virqs[irq_index] = virt_msir; + + return 0; +} + +static const struct of_device_id xgene_msi_match_table[] = { + {.compatible = "apm,xgene-storm-pcie-msi", + .data = xgene_msi_init_storm_settings}, + {}, +}; + +static int xgene_msi_probe(struct platform_device *pdev) +{ + struct resource *res; + int rc, irq_index; + struct device_node *np; + const struct of_device_id *matched_np; + struct xgene_msi *xgene_msi = &xgene_msi_data; + xgene_msi_initcall_t init_fn; + u32 nr_hw_irqs, nr_msi_vecs; + + np = of_find_matching_node_and_match(NULL, + xgene_msi_match_table, &matched_np); + if (!np) + return -ENODEV; + + init_fn = (xgene_msi_initcall_t) matched_np->data; + rc = init_fn(xgene_msi); + if (rc) + return rc; + + nr_msi_vecs = xgene_msi->settings->nr_msi_vec; + xgene_msi->irqhost = irq_domain_add_linear(pdev->dev.of_node, + nr_msi_vecs, &xgene_msi_host_ops, xgene_msi); + if (!xgene_msi->irqhost) { + dev_err(&pdev->dev, "No memory for MSI irqhost\n"); + rc = -ENOMEM; + goto error; 
+ } + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + xgene_msi->msi_regs = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR(xgene_msi->msi_regs)) { + dev_err(&pdev->dev, "no reg space\n"); + rc = -EINVAL; + goto error; + } + + xgene_msi->msi_addr_hi = upper_32_bits(res->start); + xgene_msi->msi_addr_lo = lower_32_bits(res->start); + + rc = xgene_msi_init_allocator(xgene_msi); + if (rc) { + dev_err(&pdev->dev, "Error allocating MSI bitmap\n"); + goto error; + } + + nr_hw_irqs = xgene_msi->settings->nr_hw_irqs; + for (irq_index = 0; irq_index < nr_hw_irqs; irq_index++) { + rc = xgene_msi_setup_hwirq(xgene_msi, pdev, irq_index); + if (rc) + goto error; + } + + dev_info(&pdev->dev, "APM X-Gene PCIe MSI driver loaded\n"); + + return 0; +error: + xgene_msi_remove(pdev); + return rc; +} + +static struct platform_driver xgene_msi_driver = { + .driver = { + .name = "xgene-msi", + .owner = THIS_MODULE, + .of_match_table = xgene_msi_match_table, + }, + .probe = xgene_msi_probe, + .remove = xgene_msi_remove, +}; +module_platform_driver(xgene_msi_driver); + +MODULE_AUTHOR("Duc Dang <dhdang@apm.com>"); +MODULE_DESCRIPTION("APM X-Gene PCIe MSI driver"); +MODULE_LICENSE("GPL v2"); -- 1.9.1
next prev parent reply other threads:[~2015-01-06 16:22 UTC|newest] Thread overview: 230+ messages / expand[flat|nested] mbox.gz Atom feed top 2015-01-06 16:15 [PATCH 0/4] PCI: X-Gene: Add APM X-Gene v1 MSI/MSIX termination driver Duc Dang 2015-01-06 16:15 ` Duc Dang 2015-01-06 16:15 ` Duc Dang [this message] 2015-01-06 16:15 ` [PATCH 1/4] PCI: X-Gene: Add the APM X-Gene v1 PCIe " Duc Dang 2015-01-06 19:33 ` Arnd Bergmann 2015-01-06 19:33 ` Arnd Bergmann 2015-01-12 18:53 ` Duc Dang 2015-01-12 18:53 ` Duc Dang 2015-01-12 19:44 ` Arnd Bergmann 2015-01-12 19:44 ` Arnd Bergmann 2015-03-04 19:39 ` [PATCH v2 0/4] PCI: X-Gene: Add APM X-Gene v1 " Duc Dang 2015-03-04 19:39 ` Duc Dang 2015-03-18 17:43 ` Duc Dang 2015-03-18 17:43 ` Duc Dang 2015-03-19 20:49 ` Bjorn Helgaas 2015-03-19 20:49 ` Bjorn Helgaas 2015-03-19 20:59 ` Duc Dang 2015-03-19 20:59 ` Duc Dang 2015-03-19 21:08 ` Bjorn Helgaas 2015-03-19 21:08 ` Bjorn Helgaas 2015-03-04 19:39 ` [PATCH v2 1/4] PCI: X-Gene: Add the APM X-Gene v1 PCIe " Duc Dang 2015-03-04 19:39 ` Duc Dang 2015-03-18 18:05 ` Marc Zyngier 2015-03-18 18:05 ` Marc Zyngier 2015-03-18 18:29 ` Duc Dang 2015-03-18 18:29 ` Duc Dang 2015-03-18 18:52 ` Marc Zyngier 2015-03-18 18:52 ` Marc Zyngier 2015-04-07 19:56 ` Duc Dang 2015-04-07 19:56 ` Duc Dang 2015-04-08 7:44 ` Marc Zyngier 2015-04-08 7:44 ` Marc Zyngier 2015-04-09 17:05 ` [PATCH v3 0/4] PCI: X-Gene: Add APM X-Gene v1 " Duc Dang 2015-04-09 17:05 ` Duc Dang 2015-04-09 17:05 ` [PATCH v3 1/4] PCI: X-Gene: Add the APM X-Gene v1 PCIe " Duc Dang 2015-04-09 17:05 ` Duc Dang 2015-04-09 20:11 ` Bjorn Helgaas 2015-04-09 20:11 ` Bjorn Helgaas 2015-04-09 21:52 ` Duc Dang 2015-04-09 21:52 ` Duc Dang 2015-04-09 22:39 ` Bjorn Helgaas 2015-04-09 22:39 ` Bjorn Helgaas 2015-04-09 23:26 ` Duc Dang 2015-04-09 23:26 ` Duc Dang 2015-04-10 17:20 ` Marc Zyngier 2015-04-10 17:20 ` Marc Zyngier 2015-04-10 17:20 ` Marc Zyngier 2015-04-10 23:42 ` Duc Dang 2015-04-10 23:42 ` Duc Dang 2015-04-10 23:42 ` Duc Dang 2015-04-11 
12:06 ` Marc Zyngier 2015-04-11 12:06 ` Marc Zyngier 2015-04-14 18:20 ` Duc Dang 2015-04-14 18:20 ` Duc Dang 2015-04-15 8:16 ` Marc Zyngier 2015-04-15 8:16 ` Marc Zyngier 2015-04-17 9:50 ` [PATCH v4 0/4] PCI: X-Gene: Add APM X-Gene v1 " Duc Dang 2015-04-17 9:50 ` Duc Dang 2015-04-17 9:50 ` [PATCH v4 1/4] PCI: X-Gene: Add the APM X-Gene v1 PCIe " Duc Dang 2015-04-17 9:50 ` Duc Dang 2015-04-17 14:10 ` Arnd Bergmann 2015-04-17 14:10 ` Arnd Bergmann 2015-04-19 18:40 ` Duc Dang 2015-04-19 18:40 ` Duc Dang 2015-04-19 19:55 ` Arnd Bergmann 2015-04-19 19:55 ` Arnd Bergmann 2015-04-20 18:49 ` Feng Kan 2015-04-20 18:49 ` Feng Kan 2015-04-20 18:49 ` Feng Kan 2015-04-21 7:16 ` Arnd Bergmann 2015-04-21 7:16 ` Arnd Bergmann 2015-04-17 9:50 ` [PATCH v4 2/4] arm64: dts: Add the device tree entry for the APM X-Gene PCIe MSI node Duc Dang 2015-04-17 9:50 ` Duc Dang 2015-04-17 9:50 ` [PATCH v4 3/4] documentation: dts: Add the device tree binding for APM X-Gene v1 PCIe MSI device tree node Duc Dang 2015-04-17 9:50 ` Duc Dang 2015-04-17 9:50 ` [PATCH v4 4/4] PCI: X-Gene: Add the MAINTAINERS entry for APM X-Gene v1 PCIe MSI driver Duc Dang 2015-04-17 9:50 ` Duc Dang 2015-04-17 10:00 ` [PATCH v3 1/4] PCI: X-Gene: Add the APM X-Gene v1 PCIe MSI/MSIX termination driver Duc Dang 2015-04-17 10:00 ` Duc Dang 2015-04-17 10:17 ` Marc Zyngier 2015-04-17 10:17 ` Marc Zyngier 2015-04-17 12:37 ` Duc Dang 2015-04-17 12:37 ` Duc Dang 2015-04-17 12:45 ` Marc Zyngier 2015-04-17 12:45 ` Marc Zyngier 2015-04-20 18:51 ` Feng Kan 2015-04-20 18:51 ` Feng Kan 2015-04-21 8:32 ` Marc Zyngier 2015-04-21 8:32 ` Marc Zyngier 2015-04-21 4:04 ` [PATCH v5 0/4]PCI: X-Gene: Add APM X-Gene v1 " Duc Dang 2015-04-21 4:04 ` Duc Dang 2015-04-22 3:02 ` Jon Masters 2015-04-22 3:02 ` Jon Masters 2015-04-22 3:02 ` Jon Masters 2015-04-22 3:02 ` Jon Masters 2015-04-21 4:04 ` [PATCH v5 1/4] PCI: X-Gene: Add the APM X-Gene v1 PCIe " Duc Dang 2015-04-21 4:04 ` Duc Dang 2015-04-21 15:08 ` Marc Zyngier 2015-04-21 15:08 ` Marc Zyngier 
2015-04-21 15:08 ` Marc Zyngier 2015-04-22 6:15 ` [PATCH v6 0/4]PCI: X-Gene: Add APM X-Gene v1 " Duc Dang 2015-04-22 6:15 ` Duc Dang 2015-04-22 6:15 ` [PATCH v6 1/4] documentation: dts: Add the device tree binding for APM X-Gene v1 PCIe MSI device tree node Duc Dang 2015-04-22 6:15 ` Duc Dang 2015-04-22 6:15 ` [PATCH v6 2/4] PCI: X-Gene: Add the APM X-Gene v1 PCIe MSI/MSIX termination driver Duc Dang 2015-04-22 6:15 ` Duc Dang 2015-04-22 12:50 ` Marc Zyngier 2015-04-22 12:50 ` Marc Zyngier 2015-04-22 12:50 ` Marc Zyngier 2015-05-18 9:55 ` [PATCH v7 0/4] PCI: X-Gene: Add APM X-Gene v1 " Duc Dang 2015-05-18 9:55 ` Duc Dang 2015-05-18 9:55 ` [PATCH v7 1/4] documentation: dts: Add the device tree binding for APM X-Gene v1 PCIe MSI device tree node Duc Dang 2015-05-18 9:55 ` Duc Dang 2015-05-18 9:55 ` [PATCH v7 2/4] PCI: X-Gene: Add the APM X-Gene v1 PCIe MSI/MSIX termination driver Duc Dang 2015-05-18 9:55 ` Duc Dang 2015-05-20 9:16 ` Marc Zyngier 2015-05-20 9:16 ` Marc Zyngier 2015-05-20 9:16 ` Marc Zyngier 2015-05-22 18:41 ` [PATCH v8 0/4] PCI: X-Gene: Add APM X-Gene v1 " Duc Dang 2015-05-22 18:41 ` Duc Dang 2015-05-22 18:41 ` [PATCH v8 1/4] documentation: dts: Add the device tree binding for APM X-Gene v1 PCIe MSI device tree node Duc Dang 2015-05-22 18:41 ` Duc Dang 2015-05-22 18:41 ` [PATCH v8 2/4] PCI: X-Gene: Add the APM X-Gene v1 PCIe MSI/MSIX termination driver Duc Dang 2015-05-22 18:41 ` Duc Dang 2015-05-25 11:52 ` Marc Zyngier 2015-05-25 11:52 ` Marc Zyngier 2015-05-25 11:52 ` Marc Zyngier 2015-05-27 18:27 ` [PATCH v9 0/4]PCI: X-Gene: Add APM X-Gene v1 " Duc Dang 2015-05-27 18:27 ` Duc Dang 2015-05-27 18:27 ` [PATCH v9 1/4] documentation: dts: Add the device tree binding for APM X-Gene v1 PCIe MSI device tree node Duc Dang 2015-05-27 18:27 ` Duc Dang 2015-05-27 18:27 ` [PATCH v9 2/4] PCI: X-Gene: Add the APM X-Gene v1 PCIe MSI/MSIX termination driver Duc Dang 2015-05-27 18:27 ` Duc Dang 2015-05-28 8:05 ` Marc Zyngier 2015-05-28 8:05 ` Marc Zyngier 2015-05-28 
8:05 ` Marc Zyngier 2015-05-28 17:16 ` Duc Dang 2015-05-28 17:16 ` Duc Dang 2015-05-28 17:16 ` Duc Dang 2015-05-29 18:24 ` [PATCH v10 0/4] PCI: X-Gene: Add APM X-Gene v1 " Duc Dang 2015-05-29 18:24 ` Duc Dang 2015-06-05 21:05 ` Bjorn Helgaas 2015-06-05 21:05 ` Bjorn Helgaas 2015-06-05 21:11 ` Duc Dang 2015-06-05 21:11 ` Duc Dang 2015-05-29 18:24 ` [PATCH v10 1/4] documentation: dts: Add the device tree binding for APM X-Gene v1 PCIe MSI device tree node Duc Dang 2015-05-29 18:24 ` Duc Dang 2015-05-29 18:24 ` [PATCH v10 2/4] PCI: X-Gene: Add the APM X-Gene v1 PCIe MSI/MSIX termination driver Duc Dang 2015-05-29 18:24 ` Duc Dang 2015-05-29 18:24 ` [PATCH v10 3/4] arm64: dts: Add the device tree entry for the APM X-Gene PCIe MSI node Duc Dang 2015-05-29 18:24 ` Duc Dang 2015-05-29 18:24 ` [PATCH v10 4/4] PCI: X-Gene: Add the MAINTAINERS entry for APM X-Gene v1 PCIe MSI driver Duc Dang 2015-05-29 18:24 ` Duc Dang 2015-05-27 18:27 ` [PATCH v9 3/4] arm64: dts: Add the device tree entry for the APM X-Gene PCIe MSI node Duc Dang 2015-05-27 18:27 ` Duc Dang 2015-05-27 18:27 ` [PATCH v9 4/4] PCI: X-Gene: Add the MAINTAINERS entry for APM X-Gene v1 PCIe MSI driver Duc Dang 2015-05-27 18:27 ` Duc Dang 2015-05-27 18:31 ` [PATCH v8 2/4] PCI: X-Gene: Add the APM X-Gene v1 PCIe MSI/MSIX termination driver Duc Dang 2015-05-27 18:31 ` Duc Dang 2015-05-27 18:31 ` Duc Dang 2015-05-22 18:41 ` [PATCH v8 3/4] arm64: dts: Add the device tree entry for the APM X-Gene PCIe MSI node Duc Dang 2015-05-22 18:41 ` Duc Dang 2015-05-22 18:41 ` [PATCH v8 4/4] PCI: X-Gene: Add the MAINTAINERS entry for APM X-Gene v1 PCIe MSI driver Duc Dang 2015-05-22 18:41 ` Duc Dang 2015-05-22 18:43 ` [PATCH v7 2/4] PCI: X-Gene: Add the APM X-Gene v1 PCIe MSI/MSIX termination driver Duc Dang 2015-05-22 18:43 ` Duc Dang 2015-05-22 18:43 ` Duc Dang 2015-05-18 9:55 ` [PATCH v7 3/4] arm64: dts: Add the device tree entry for the APM X-Gene PCIe MSI node Duc Dang 2015-05-18 9:55 ` Duc Dang 2015-05-18 9:55 ` [PATCH v7 
Thread overview (duplicate cross-list copies collapsed):

  … 4/4] PCI: X-Gene: Add the MAINTAINERS entry for APM X-Gene v1 PCIe MSI driver (Duc Dang, 2015-05-18 9:55)
  [PATCH v6 2/4] PCI: X-Gene: Add the APM X-Gene v1 PCIe MSI/MSIX termination driver (Duc Dang, 2015-05-18 10:12)
  [PATCH v6 3/4] arm64: dts: Add the device tree entry for the APM X-Gene PCIe MSI node (Duc Dang, 2015-04-22 6:15)
  [PATCH v6 4/4] PCI: X-Gene: Add the MAINTAINERS entry for APM X-Gene v1 PCIe MSI driver (Duc Dang, 2015-04-22 6:15)
  [PATCH v5 2/4] arm64: dts: Add the device tree entry for the APM X-Gene PCIe MSI node (Duc Dang, 2015-04-21 4:04)
    ` reply (Marc Zyngier, 2015-04-21 15:19)
      ` reply (Duc Dang, 2015-04-21 18:01)
  [PATCH v5 3/4] documentation: dts: Add the device tree binding for APM X-Gene v1 PCIe MSI device tree node (Duc Dang, 2015-04-21 4:04)
    ` reply (Mark Rutland, 2015-04-21 15:42)
      ` reply (Duc Dang, 2015-04-21 17:37)
  [PATCH v5 4/4] PCI: X-Gene: Add the MAINTAINERS entry for APM X-Gene v1 PCIe MSI driver (Duc Dang, 2015-04-21 4:04)
  [PATCH v3 1/4] PCI: X-Gene: Add the APM X-Gene v1 PCIe MSI/MSIX termination driver (Feng Kan, 2015-04-11 0:16)
    ` reply (Marc Zyngier, 2015-04-11 12:18)
    ` reply (Arnd Bergmann, 2015-04-11 14:50)
    ` reply (Paul Bolle, 2015-04-10 18:13)
      ` reply (Duc Dang, 2015-04-10 23:55)
  [PATCH v3 2/4] arm64: dts: Add the device tree entry for the APM X-Gene PCIe MSI node (Duc Dang, 2015-04-09 17:05)
  [PATCH v3 3/4] documentation: dts: Add the device tree binding for APM X-Gene v1 PCIe MSI device tree node (Duc Dang, 2015-04-09 17:05)
  [PATCH v3 4/4] PCI: X-Gene: Add the MAINTAINERS entry for APM X-Gene v1 PCIe MSI driver (Duc Dang, 2015-04-09 17:05)
  [PATCH v2 1/4] PCI: X-Gene: Add the APM X-Gene v1 PCIe MSI/MSIX termination driver (Duc Dang, 2015-04-09 17:20)
  [PATCH v2 2/4] arm64: dts: Add the device tree entry for the APM X-Gene PCIe MSI node (Duc Dang, 2015-03-04 19:39)
  [PATCH v2 3/4] documentation: dts: Add the device tree binding for APM X-Gene v1 PCIe MSI device tree node (Duc Dang, 2015-03-04 19:39)
  [PATCH v2 4/4] PCI: X-Gene: Add the MAINTAINERS entry for APM X-Gene v1 PCIe MSI driver (Duc Dang, 2015-03-04 19:40)
  [PATCH 2/4] arm64: dts: Add the device tree entry for the APM X-Gene PCIe MSI node (Duc Dang, 2015-01-06 16:15)
  [PATCH 3/4] documentation: dts: Add the device tree binding for APM X-Gene v1 PCIe MSI device tree node (Duc Dang, 2015-01-06 16:15)
    ` reply (Arnd Bergmann, 2015-01-06 19:34)
  [PATCH 4/4] PCI: X-Gene: Add the MAINTAINERS entry for APM X-Gene v1 PCIe MSI driver (Duc Dang, 2015-01-06 16:15)