* [PATCH v4 00/20] irqchip/gic-v4: GICv4.1 architecture support
@ 2020-02-14 14:57 Marc Zyngier
  2020-02-14 14:57 ` [PATCH v4 01/20] irqchip/gic-v4.1: Skip absent CPUs while iterating over redistributors Marc Zyngier
                   ` (19 more replies)
  0 siblings, 20 replies; 47+ messages in thread
From: Marc Zyngier @ 2020-02-14 14:57 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Zenghui Yu, Thomas Gleixner,
	Julien Thierry

This (now shorter) series expands the existing GICv4 support to deal
with the new GICv4.1 architecture, which comes with a set of major
improvements compared to v4.0:

- One architectural doorbell per vcpu, instead of one doorbell per VLPI

- Doorbell entirely managed by the HW, with an "at most once" delivery
  guarantee per non-residency phase and only when requested by the
  hypervisor

- A shared memory scheme between ITSs and redistributors, allowing for an
  optimised residency sequence (the use of VMOVP becomes less frequent)

- Support for direct virtual SGI delivery (the injection path still involves
  the hypervisor), at the cost of losing the active state on SGIs. It
  shouldn't be a big deal, but some guest operating systems might notice
  (Linux definitely won't care).

On the other hand, public documentation is not available yet, so that's a
bit annoying...

The series is roughly organised in 3 parts:

(1) v4.1 doorbell management
(2) Virtual SGI support
(3) Plumbing of virtual SGIs in KVM

Notes:

  - The whole thing is tested on an FVP model, which can be obtained
    free of charge from ARM's developer website. It requires you to
    create an account, unfortunately... You'll need a fix for the
    devicetree that is in the kernel tree (it should be merged before
    the 5.6 release).

  - This series has uncovered a behaviour that looks like a HW bug on
    the Cavium ThunderX (aka TX1) platform. I'd very much welcome some
    clarification from the Marvell/Cavium folks on Cc, as well as an
    official erratum number if this happens to be an actual bug.

    [v3 update]
    People have ignored this for two months now, and it is fairly obvious
    that support for this machine is slowly bit-rotting. Maybe I'll
    drop the patch and instead start the process of removing all TX1
    support from the kernel (we'd certainly be better off without it).

    [v4 update]
    TX1 is now broken in mainline, and nobody cares. Make of this what
    you want.

* From v3 [3]:
  - Rebased on v5.6-rc1
  - Considerably smaller thanks to the initial patches being merged
  - Small bug fix after the v5.6 merge window

* From v2 [2]:
  - Another bunch of fixes thanks to Zenghui Yu's very careful review
  - HW-accelerated SGIs are now optional thanks to new architected
    discovery/selection bits exposed by KVM and used by the guest kernel
  - Rebased on v5.5-rc2

* From v1 [1]:
  - A bunch of minor reworks after Zenghui Yu's review
  - A workaround for what looks like a new and unexpected TX1 bug
  - A subtle reorder of the series so that some patches can go in early

[1] https://lore.kernel.org/lkml/20190923182606.32100-1-maz@kernel.org/
[2] https://lore.kernel.org/lkml/20191027144234.8395-1-maz@kernel.org/
[3] https://lore.kernel.org/r/20191224111055.11836-1-maz@kernel.org/

Marc Zyngier (20):
  irqchip/gic-v4.1: Skip absent CPUs while iterating over redistributors
  irqchip/gic-v3: Use SGIs without active state if offered
  irqchip/gic-v4.1: Advertise support v4.1 to KVM
  irqchip/gic-v4.1: Map the ITS SGIR register page
  irqchip/gic-v4.1: Plumb skeletal VSGI irqchip
  irqchip/gic-v4.1: Add initial SGI configuration
  irqchip/gic-v4.1: Plumb mask/unmask SGI callbacks
  irqchip/gic-v4.1: Plumb get/set_irqchip_state SGI callbacks
  irqchip/gic-v4.1: Plumb set_vcpu_affinity SGI callbacks
  irqchip/gic-v4.1: Move doorbell management to the GICv4 abstraction
    layer
  irqchip/gic-v4.1: Add VSGI allocation/teardown
  irqchip/gic-v4.1: Add VSGI property setup
  irqchip/gic-v4.1: Eagerly vmap vPEs
  KVM: arm64: GICv4.1: Let doorbells be auto-enabled
  KVM: arm64: GICv4.1: Add direct injection capability to SGI registers
  KVM: arm64: GICv4.1: Allow SGIs to switch between HW and SW interrupts
  KVM: arm64: GICv4.1: Plumb SGI implementation selection in the
    distributor
  KVM: arm64: GICv4.1: Reload VLPI configuration on distributor
    enable/disable
  KVM: arm64: GICv4.1: Allow non-trapping WFI when using HW SGIs
  KVM: arm64: GICv4.1: Expose HW-based SGIs in debugfs

 arch/arm/include/asm/kvm_host.h        |   1 +
 arch/arm64/include/asm/kvm_emulate.h   |   3 +-
 arch/arm64/include/asm/kvm_host.h      |   1 +
 drivers/irqchip/irq-gic-v3-its.c       | 301 ++++++++++++++++++++++++-
 drivers/irqchip/irq-gic-v3.c           |  12 +-
 drivers/irqchip/irq-gic-v4.c           | 134 ++++++++++-
 include/kvm/arm_vgic.h                 |   4 +
 include/linux/irqchip/arm-gic-common.h |   2 +
 include/linux/irqchip/arm-gic-v3.h     |  19 +-
 include/linux/irqchip/arm-gic-v4.h     |  20 +-
 virt/kvm/arm/arm.c                     |   8 +
 virt/kvm/arm/vgic/vgic-debug.c         |  14 +-
 virt/kvm/arm/vgic/vgic-mmio-v3.c       |  68 +++++-
 virt/kvm/arm/vgic/vgic-mmio.c          |  88 +++++++-
 virt/kvm/arm/vgic/vgic-v3.c            |   6 +-
 virt/kvm/arm/vgic/vgic-v4.c            | 139 ++++++++++--
 virt/kvm/arm/vgic/vgic.h               |   1 +
 17 files changed, 763 insertions(+), 58 deletions(-)

-- 
2.20.1



* [PATCH v4 01/20] irqchip/gic-v4.1: Skip absent CPUs while iterating over redistributors
  2020-02-14 14:57 [PATCH v4 00/20] irqchip/gic-v4: GICv4.1 architecture support Marc Zyngier
@ 2020-02-14 14:57 ` Marc Zyngier
  2020-02-17  9:11   ` Zenghui Yu
  2020-02-14 14:57 ` [PATCH v4 02/20] irqchip/gic-v3: Use SGIs without active state if offered Marc Zyngier
                   ` (18 subsequent siblings)
  19 siblings, 1 reply; 47+ messages in thread
From: Marc Zyngier @ 2020-02-14 14:57 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Zenghui Yu, Thomas Gleixner,
	Julien Thierry

In a system that is only sparsely populated with CPUs, we can end up with
redistributor structures that are not initialized. Let's make sure we
don't try to access those when iterating over them (in this case when
checking whether we have an L2 VPE table).

Fixes: 4e6437f12d6e ("irqchip/gic-v4.1: Ensure L2 vPE table is allocated at RD level")
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 drivers/irqchip/irq-gic-v3-its.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 83b1186ffcad..da883a691028 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -2452,6 +2452,10 @@ static bool allocate_vpe_l2_table(int cpu, u32 id)
 	if (!gic_rdists->has_rvpeid)
 		return true;
 
+	/* Skip non-present CPUs */
+	if (!base)
+		return true;
+
 	val  = gicr_read_vpropbaser(base + SZ_128K + GICR_VPROPBASER);
 
 	esz  = FIELD_GET(GICR_VPROPBASER_4_1_ENTRY_SIZE, val) + 1;
-- 
2.20.1



* [PATCH v4 02/20] irqchip/gic-v3: Use SGIs without active state if offered
  2020-02-14 14:57 [PATCH v4 00/20] irqchip/gic-v4: GICv4.1 architecture support Marc Zyngier
  2020-02-14 14:57 ` [PATCH v4 01/20] irqchip/gic-v4.1: Skip absent CPUs while iterating over redistributors Marc Zyngier
@ 2020-02-14 14:57 ` Marc Zyngier
  2020-02-17  9:18   ` Zenghui Yu
  2020-02-14 14:57 ` [PATCH v4 03/20] irqchip/gic-v4.1: Advertise support v4.1 to KVM Marc Zyngier
                   ` (17 subsequent siblings)
  19 siblings, 1 reply; 47+ messages in thread
From: Marc Zyngier @ 2020-02-14 14:57 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Zenghui Yu, Thomas Gleixner,
	Julien Thierry

To allow the direct injection of SGIs into a guest, the GICv4.1
architecture has to sacrifice the Active state so that SGIs look
a lot like LPIs (they are injected by the same mechanism).

In order not to break existing software, the architecture offers
guest OSs the choice: SGIs with or without an active state. It is
the hypervisor's duty to honor the guest's choice.

For this, the architecture offers a discovery bit indicating whether
the GIC supports GICv4.1 SGIs (GICD_TYPER2.nASSGIcap), and another
bit indicating whether the guest wants Active-less SGIs or not
(controlled by GICD_CTLR.nASSGIreq).

A hypervisor not supporting GICv4.1 SGIs would leave nASSGIcap
clear, and a guest not knowing about GICv4.1 SGIs (or definitely
wanting an Active state) would leave nASSGIreq clear (both being
thankfully backward compatible with older revisions of the GIC).

Since Linux is perfectly happy without an active state on SGIs,
inform the hypervisor that we'll use that if offered.
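
For reference, the guest-side negotiation boils down to something like
this (a condensed sketch of the gic_dist_init() hunk below; the local
variables are only illustrative):

    void __iomem *base = gic_data.dist_base;
    u32 val = GICD_CTLR_ARE_NS | GICD_CTLR_ENABLE_G1A | GICD_CTLR_ENABLE_G1;

    if (readl_relaxed(base + GICD_TYPER2) & GICD_TYPER2_nASSGIcap)
            val |= GICD_CTLR_nASSGIreq;     /* happy to lose the active state */

    writel_relaxed(val, base + GICD_CTLR);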

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 drivers/irqchip/irq-gic-v3.c       | 10 ++++++++--
 include/linux/irqchip/arm-gic-v3.h |  2 ++
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
index cd76435c4a31..73e87e176d76 100644
--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -724,6 +724,7 @@ static void __init gic_dist_init(void)
 	unsigned int i;
 	u64 affinity;
 	void __iomem *base = gic_data.dist_base;
+	u32 val;
 
 	/* Disable the distributor */
 	writel_relaxed(0, base + GICD_CTLR);
@@ -756,9 +757,14 @@ static void __init gic_dist_init(void)
 	/* Now do the common stuff, and wait for the distributor to drain */
 	gic_dist_config(base, GIC_LINE_NR, gic_dist_wait_for_rwp);
 
+	val = GICD_CTLR_ARE_NS | GICD_CTLR_ENABLE_G1A | GICD_CTLR_ENABLE_G1;
+	if (gic_data.rdists.gicd_typer2 & GICD_TYPER2_nASSGIcap) {
+		pr_info("Enabling SGIs without active state\n");
+		val |= GICD_CTLR_nASSGIreq;
+	}
+
 	/* Enable distributor with ARE, Group1 */
-	writel_relaxed(GICD_CTLR_ARE_NS | GICD_CTLR_ENABLE_G1A | GICD_CTLR_ENABLE_G1,
-		       base + GICD_CTLR);
+	writel_relaxed(val, base + GICD_CTLR);
 
 	/*
 	 * Set all global interrupts to the boot CPU only. ARE must be
diff --git a/include/linux/irqchip/arm-gic-v3.h b/include/linux/irqchip/arm-gic-v3.h
index 83439bfb6c5b..c29a02678a6f 100644
--- a/include/linux/irqchip/arm-gic-v3.h
+++ b/include/linux/irqchip/arm-gic-v3.h
@@ -57,6 +57,7 @@
 #define GICD_SPENDSGIR			0x0F20
 
 #define GICD_CTLR_RWP			(1U << 31)
+#define GICD_CTLR_nASSGIreq		(1U << 8)
 #define GICD_CTLR_DS			(1U << 6)
 #define GICD_CTLR_ARE_NS		(1U << 4)
 #define GICD_CTLR_ENABLE_G1A		(1U << 1)
@@ -90,6 +91,7 @@
 #define GICD_TYPER_ESPIS(typer)						\
 	(((typer) & GICD_TYPER_ESPI) ? GICD_TYPER_SPIS((typer) >> 27) : 0)
 
+#define GICD_TYPER2_nASSGIcap		(1U << 8)
 #define GICD_TYPER2_VIL			(1U << 7)
 #define GICD_TYPER2_VID			GENMASK(4, 0)
 
-- 
2.20.1



* [PATCH v4 03/20] irqchip/gic-v4.1: Advertise support v4.1 to KVM
  2020-02-14 14:57 [PATCH v4 00/20] irqchip/gic-v4: GICv4.1 architecture support Marc Zyngier
  2020-02-14 14:57 ` [PATCH v4 01/20] irqchip/gic-v4.1: Skip absent CPUs while iterating over redistributors Marc Zyngier
  2020-02-14 14:57 ` [PATCH v4 02/20] irqchip/gic-v3: Use SGIs without active state if offered Marc Zyngier
@ 2020-02-14 14:57 ` Marc Zyngier
  2020-02-17  9:09   ` Zenghui Yu
  2020-02-14 14:57 ` [PATCH v4 04/20] irqchip/gic-v4.1: Map the ITS SGIR register page Marc Zyngier
                   ` (16 subsequent siblings)
  19 siblings, 1 reply; 47+ messages in thread
From: Marc Zyngier @ 2020-02-14 14:57 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Zenghui Yu, Thomas Gleixner,
	Julien Thierry

Tell KVM that we support v4.1. Nothing uses this information so far.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 drivers/irqchip/irq-gic-v3-its.c       | 9 ++++++++-
 drivers/irqchip/irq-gic-v3.c           | 2 ++
 include/linux/irqchip/arm-gic-common.h | 2 ++
 3 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index da883a691028..9a7854416b89 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -4822,6 +4822,7 @@ int __init its_init(struct fwnode_handle *handle, struct rdists *rdists,
 	struct device_node *of_node;
 	struct its_node *its;
 	bool has_v4 = false;
+	bool has_v4_1 = false;
 	int err;
 
 	gic_rdists = rdists;
@@ -4842,8 +4843,14 @@ int __init its_init(struct fwnode_handle *handle, struct rdists *rdists,
 	if (err)
 		return err;
 
-	list_for_each_entry(its, &its_nodes, entry)
+	list_for_each_entry(its, &its_nodes, entry) {
 		has_v4 |= is_v4(its);
+		has_v4_1 |= is_v4_1(its);
+	}
+
+	/* Don't bother with inconsistent systems */
+	if (WARN_ON(!has_v4_1 && rdists->has_rvpeid))
+		rdists->has_rvpeid = false;
 
 	if (has_v4 & rdists->has_vlpis) {
 		if (its_init_vpe_domain() ||
diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
index 73e87e176d76..5b192c0b6b57 100644
--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -1784,6 +1784,7 @@ static void __init gic_of_setup_kvm_info(struct device_node *node)
 		gic_v3_kvm_info.vcpu = r;
 
 	gic_v3_kvm_info.has_v4 = gic_data.rdists.has_vlpis;
+	gic_v3_kvm_info.has_v4_1 = gic_data.rdists.has_rvpeid;
 	gic_set_kvm_info(&gic_v3_kvm_info);
 }
 
@@ -2099,6 +2100,7 @@ static void __init gic_acpi_setup_kvm_info(void)
 	}
 
 	gic_v3_kvm_info.has_v4 = gic_data.rdists.has_vlpis;
+	gic_v3_kvm_info.has_v4_1 = gic_data.rdists.has_rvpeid;
 	gic_set_kvm_info(&gic_v3_kvm_info);
 }
 
diff --git a/include/linux/irqchip/arm-gic-common.h b/include/linux/irqchip/arm-gic-common.h
index b9850f5f1906..fa8c0455c352 100644
--- a/include/linux/irqchip/arm-gic-common.h
+++ b/include/linux/irqchip/arm-gic-common.h
@@ -32,6 +32,8 @@ struct gic_kvm_info {
 	struct resource vctrl;
 	/* vlpi support */
 	bool		has_v4;
+	/* rvpeid support */
+	bool		has_v4_1;
 };
 
 const struct gic_kvm_info *gic_get_kvm_info(void);
-- 
2.20.1



* [PATCH v4 04/20] irqchip/gic-v4.1: Map the ITS SGIR register page
  2020-02-14 14:57 [PATCH v4 00/20] irqchip/gic-v4: GICv4.1 architecture support Marc Zyngier
                   ` (2 preceding siblings ...)
  2020-02-14 14:57 ` [PATCH v4 03/20] irqchip/gic-v4.1: Advertise support v4.1 to KVM Marc Zyngier
@ 2020-02-14 14:57 ` Marc Zyngier
  2020-02-20  3:17   ` Zenghui Yu
  2020-02-14 14:57 ` [PATCH v4 05/20] irqchip/gic-v4.1: Plumb skeletal VSGI irqchip Marc Zyngier
                   ` (15 subsequent siblings)
  19 siblings, 1 reply; 47+ messages in thread
From: Marc Zyngier @ 2020-02-14 14:57 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Zenghui Yu, Thomas Gleixner,
	Julien Thierry

One of the new features of GICv4.1 is to allow virtual SGIs to be
directly signaled to a VPE. For that, the ITS has grown a new
64kB page containing only a single register that is used to
signal an SGI to a given VPE.

Add a second mapping covering this new 64kB range, and take this
opportunity to limit the original mapping to 64kB, which is enough
to cover the span of the ITS registers.
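
For reference, the resulting register frame layout is roughly the
following (offsets inferred from the constants used in this patch and
in the later SGIR accessors):

    res->start + 0x00000: ITS control registers (64kB, its->base)
    res->start + 0x10000: GITS_TRANSLATER frame (written by devices, not the CPU)
    res->start + 0x20000: GITS_SGIR frame (64kB, its->sgir_base)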

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 drivers/irqchip/irq-gic-v3-its.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 9a7854416b89..d6201443b406 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -96,6 +96,7 @@ struct its_node {
 	struct mutex		dev_alloc_lock;
 	struct list_head	entry;
 	void __iomem		*base;
+	void __iomem		*sgir_base;
 	phys_addr_t		phys_base;
 	struct its_cmd_block	*cmd_base;
 	struct its_cmd_block	*cmd_write;
@@ -4408,7 +4409,7 @@ static int __init its_probe_one(struct resource *res,
 	struct page *page;
 	int err;
 
-	its_base = ioremap(res->start, resource_size(res));
+	its_base = ioremap(res->start, SZ_64K);
 	if (!its_base) {
 		pr_warn("ITS@%pa: Unable to map ITS registers\n", &res->start);
 		return -ENOMEM;
@@ -4459,6 +4460,13 @@ static int __init its_probe_one(struct resource *res,
 
 		if (is_v4_1(its)) {
 			u32 svpet = FIELD_GET(GITS_TYPER_SVPET, typer);
+
+			its->sgir_base = ioremap(res->start + SZ_128K, SZ_64K);
+			if (!its->sgir_base) {
+				err = -ENOMEM;
+				goto out_free_its;
+			}
+
 			its->mpidr = readl_relaxed(its_base + GITS_MPIDR);
 
 			pr_info("ITS@%pa: Using GICv4.1 mode %08x %08x\n",
@@ -4472,7 +4480,7 @@ static int __init its_probe_one(struct resource *res,
 				get_order(ITS_CMD_QUEUE_SZ));
 	if (!page) {
 		err = -ENOMEM;
-		goto out_free_its;
+		goto out_unmap_sgir;
 	}
 	its->cmd_base = (void *)page_address(page);
 	its->cmd_write = its->cmd_base;
@@ -4539,6 +4547,9 @@ static int __init its_probe_one(struct resource *res,
 	its_free_tables(its);
 out_free_cmd:
 	free_pages((unsigned long)its->cmd_base, get_order(ITS_CMD_QUEUE_SZ));
+out_unmap_sgir:
+	if (its->sgir_base)
+		iounmap(its->sgir_base);
 out_free_its:
 	kfree(its);
 out_unmap:
-- 
2.20.1



* [PATCH v4 05/20] irqchip/gic-v4.1: Plumb skeletal VSGI irqchip
  2020-02-14 14:57 [PATCH v4 00/20] irqchip/gic-v4: GICv4.1 architecture support Marc Zyngier
                   ` (3 preceding siblings ...)
  2020-02-14 14:57 ` [PATCH v4 04/20] irqchip/gic-v4.1: Map the ITS SGIR register page Marc Zyngier
@ 2020-02-14 14:57 ` Marc Zyngier
  2020-02-20  3:21   ` Zenghui Yu
  2020-02-14 14:57 ` [PATCH v4 06/20] irqchip/gic-v4.1: Add initial SGI configuration Marc Zyngier
                   ` (14 subsequent siblings)
  19 siblings, 1 reply; 47+ messages in thread
From: Marc Zyngier @ 2020-02-14 14:57 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Zenghui Yu, Thomas Gleixner,
	Julien Thierry

Since GICv4.1 has the capability to inject 16 SGIs into each VPE,
and since I'm keen not to invent too many specific interfaces to
manipulate these interrupts, let's pretend that each of these SGIs
is an actual Linux interrupt.

To that end, let's introduce a minimal irqchip and irqdomain
setup that will get fleshed out in the following patches.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 drivers/irqchip/irq-gic-v3-its.c   | 68 +++++++++++++++++++++++++++++-
 drivers/irqchip/irq-gic-v4.c       |  8 +++-
 include/linux/irqchip/arm-gic-v4.h |  9 +++-
 3 files changed, 81 insertions(+), 4 deletions(-)

diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index d6201443b406..6121c8f2a8ce 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -3823,6 +3823,67 @@ static struct irq_chip its_vpe_4_1_irq_chip = {
 	.irq_set_vcpu_affinity	= its_vpe_4_1_set_vcpu_affinity,
 };
 
+static int its_sgi_set_affinity(struct irq_data *d,
+				const struct cpumask *mask_val,
+				bool force)
+{
+	return -EINVAL;
+}
+
+static struct irq_chip its_sgi_irq_chip = {
+	.name			= "GICv4.1-sgi",
+	.irq_set_affinity	= its_sgi_set_affinity,
+};
+
+static int its_sgi_irq_domain_alloc(struct irq_domain *domain,
+				    unsigned int virq, unsigned int nr_irqs,
+				    void *args)
+{
+	struct its_vpe *vpe = args;
+	int i;
+
+	/* Yes, we do want 16 SGIs */
+	WARN_ON(nr_irqs != 16);
+
+	for (i = 0; i < 16; i++) {
+		vpe->sgi_config[i].priority = 0;
+		vpe->sgi_config[i].enabled = false;
+		vpe->sgi_config[i].group = false;
+
+		irq_domain_set_hwirq_and_chip(domain, virq + i, i,
+					      &its_sgi_irq_chip, vpe);
+		irq_set_status_flags(virq + i, IRQ_DISABLE_UNLAZY);
+	}
+
+	return 0;
+}
+
+static void its_sgi_irq_domain_free(struct irq_domain *domain,
+				    unsigned int virq,
+				    unsigned int nr_irqs)
+{
+	/* Nothing to do */
+}
+
+static int its_sgi_irq_domain_activate(struct irq_domain *domain,
+				       struct irq_data *d, bool reserve)
+{
+	return 0;
+}
+
+static void its_sgi_irq_domain_deactivate(struct irq_domain *domain,
+					  struct irq_data *d)
+{
+	/* Nothing to do */
+}
+
+static struct irq_domain_ops its_sgi_domain_ops = {
+	.alloc		= its_sgi_irq_domain_alloc,
+	.free		= its_sgi_irq_domain_free,
+	.activate	= its_sgi_irq_domain_activate,
+	.deactivate	= its_sgi_irq_domain_deactivate,
+};
+
 static int its_vpe_id_alloc(void)
 {
 	return ida_simple_get(&its_vpeid_ida, 0, ITS_MAX_VPEID, GFP_KERNEL);
@@ -4864,8 +4925,13 @@ int __init its_init(struct fwnode_handle *handle, struct rdists *rdists,
 		rdists->has_rvpeid = false;
 
 	if (has_v4 & rdists->has_vlpis) {
+		struct irq_domain_ops *sgi_ops = NULL;
+
+		if (has_v4_1)
+			sgi_ops = &its_sgi_domain_ops;
+
 		if (its_init_vpe_domain() ||
-		    its_init_v4(parent_domain, &its_vpe_domain_ops)) {
+		    its_init_v4(parent_domain, &its_vpe_domain_ops, sgi_ops)) {
 			rdists->has_vlpis = false;
 			pr_err("ITS: Disabling GICv4 support\n");
 		}
diff --git a/drivers/irqchip/irq-gic-v4.c b/drivers/irqchip/irq-gic-v4.c
index 45969927cc81..c01910d53f9e 100644
--- a/drivers/irqchip/irq-gic-v4.c
+++ b/drivers/irqchip/irq-gic-v4.c
@@ -85,6 +85,7 @@
 
 static struct irq_domain *gic_domain;
 static const struct irq_domain_ops *vpe_domain_ops;
+static const struct irq_domain_ops *sgi_domain_ops;
 
 int its_alloc_vcpu_irqs(struct its_vm *vm)
 {
@@ -216,12 +217,15 @@ int its_prop_update_vlpi(int irq, u8 config, bool inv)
 	return irq_set_vcpu_affinity(irq, &info);
 }
 
-int its_init_v4(struct irq_domain *domain, const struct irq_domain_ops *ops)
+int its_init_v4(struct irq_domain *domain,
+		const struct irq_domain_ops *vpe_ops,
+		const struct irq_domain_ops *sgi_ops)
 {
 	if (domain) {
 		pr_info("ITS: Enabling GICv4 support\n");
 		gic_domain = domain;
-		vpe_domain_ops = ops;
+		vpe_domain_ops = vpe_ops;
+		sgi_domain_ops = sgi_ops;
 		return 0;
 	}
 
diff --git a/include/linux/irqchip/arm-gic-v4.h b/include/linux/irqchip/arm-gic-v4.h
index d9c34968467a..30b4855bf766 100644
--- a/include/linux/irqchip/arm-gic-v4.h
+++ b/include/linux/irqchip/arm-gic-v4.h
@@ -49,6 +49,11 @@ struct its_vpe {
 		};
 		/* GICv4.1 implementations */
 		struct {
+			struct {
+				u8	priority;
+				bool	enabled;
+				bool	group;
+			}			sgi_config[16];
 			atomic_t vmapp_count;
 		};
 	};
@@ -118,6 +123,8 @@ int its_unmap_vlpi(int irq);
 int its_prop_update_vlpi(int irq, u8 config, bool inv);
 
 struct irq_domain_ops;
-int its_init_v4(struct irq_domain *domain, const struct irq_domain_ops *ops);
+int its_init_v4(struct irq_domain *domain,
+		const struct irq_domain_ops *vpe_ops,
+		const struct irq_domain_ops *sgi_ops);
 
 #endif
-- 
2.20.1



* [PATCH v4 06/20] irqchip/gic-v4.1: Add initial SGI configuration
  2020-02-14 14:57 [PATCH v4 00/20] irqchip/gic-v4: GICv4.1 architecture support Marc Zyngier
                   ` (4 preceding siblings ...)
  2020-02-14 14:57 ` [PATCH v4 05/20] irqchip/gic-v4.1: Plumb skeletal VSGI irqchip Marc Zyngier
@ 2020-02-14 14:57 ` Marc Zyngier
  2020-02-18  7:25   ` Zenghui Yu
  2020-02-14 14:57 ` [PATCH v4 07/20] irqchip/gic-v4.1: Plumb mask/unmask SGI callbacks Marc Zyngier
                   ` (13 subsequent siblings)
  19 siblings, 1 reply; 47+ messages in thread
From: Marc Zyngier @ 2020-02-14 14:57 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Zenghui Yu, Thomas Gleixner,
	Julien Thierry

The GICv4.1 ITS has yet another new command (VSGI) which allows
a VPE-targeted SGI to be configured (or have its pending state
cleared). Add support for this command and plumb it into the
activate irqdomain callback so that it is ready to be used.
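
For reference, the field layout used below is (fields taken from the
its_encode_sgi_*() helpers; the vPEID goes through the usual
its_encode_vpeid() path, as for the other GICv4 VPE commands):

    DW0[7:0]    GITS_CMD_VSGI
    DW0[8]      Enable
    DW0[9]      Clear
    DW0[10]     Group
    DW0[23:20]  Priority[7:4]
    DW0[35:32]  vINTID (SGI number, 0-15)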

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 drivers/irqchip/irq-gic-v3-its.c   | 76 +++++++++++++++++++++++++++++-
 include/linux/irqchip/arm-gic-v3.h |  3 +-
 2 files changed, 77 insertions(+), 2 deletions(-)

diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 6121c8f2a8ce..229e4ae9c59b 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -354,6 +354,15 @@ struct its_cmd_desc {
 		struct {
 			struct its_vpe *vpe;
 		} its_invdb_cmd;
+
+		struct {
+			struct its_vpe *vpe;
+			u8 sgi;
+			u8 priority;
+			bool enable;
+			bool group;
+			bool clear;
+		} its_vsgi_cmd;
 	};
 };
 
@@ -502,6 +511,31 @@ static void its_encode_db(struct its_cmd_block *cmd, bool db)
 	its_mask_encode(&cmd->raw_cmd[2], db, 63, 63);
 }
 
+static void its_encode_sgi_intid(struct its_cmd_block *cmd, u8 sgi)
+{
+	its_mask_encode(&cmd->raw_cmd[0], sgi, 35, 32);
+}
+
+static void its_encode_sgi_priority(struct its_cmd_block *cmd, u8 prio)
+{
+	its_mask_encode(&cmd->raw_cmd[0], prio >> 4, 23, 20);
+}
+
+static void its_encode_sgi_group(struct its_cmd_block *cmd, bool grp)
+{
+	its_mask_encode(&cmd->raw_cmd[0], grp, 10, 10);
+}
+
+static void its_encode_sgi_clear(struct its_cmd_block *cmd, bool clr)
+{
+	its_mask_encode(&cmd->raw_cmd[0], clr, 9, 9);
+}
+
+static void its_encode_sgi_enable(struct its_cmd_block *cmd, bool en)
+{
+	its_mask_encode(&cmd->raw_cmd[0], en, 8, 8);
+}
+
 static inline void its_fixup_cmd(struct its_cmd_block *cmd)
 {
 	/* Let's fixup BE commands */
@@ -867,6 +901,26 @@ static struct its_vpe *its_build_invdb_cmd(struct its_node *its,
 	return valid_vpe(its, desc->its_invdb_cmd.vpe);
 }
 
+static struct its_vpe *its_build_vsgi_cmd(struct its_node *its,
+					  struct its_cmd_block *cmd,
+					  struct its_cmd_desc *desc)
+{
+	if (WARN_ON(!is_v4_1(its)))
+		return NULL;
+
+	its_encode_cmd(cmd, GITS_CMD_VSGI);
+	its_encode_vpeid(cmd, desc->its_vsgi_cmd.vpe->vpe_id);
+	its_encode_sgi_intid(cmd, desc->its_vsgi_cmd.sgi);
+	its_encode_sgi_priority(cmd, desc->its_vsgi_cmd.priority);
+	its_encode_sgi_group(cmd, desc->its_vsgi_cmd.group);
+	its_encode_sgi_clear(cmd, desc->its_vsgi_cmd.clear);
+	its_encode_sgi_enable(cmd, desc->its_vsgi_cmd.enable);
+
+	its_fixup_cmd(cmd);
+
+	return valid_vpe(its, desc->its_vsgi_cmd.vpe);
+}
+
 static u64 its_cmd_ptr_to_offset(struct its_node *its,
 				 struct its_cmd_block *ptr)
 {
@@ -3823,6 +3877,21 @@ static struct irq_chip its_vpe_4_1_irq_chip = {
 	.irq_set_vcpu_affinity	= its_vpe_4_1_set_vcpu_affinity,
 };
 
+static void its_configure_sgi(struct irq_data *d, bool clear)
+{
+	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
+	struct its_cmd_desc desc;
+
+	desc.its_vsgi_cmd.vpe = vpe;
+	desc.its_vsgi_cmd.sgi = d->hwirq;
+	desc.its_vsgi_cmd.priority = vpe->sgi_config[d->hwirq].priority;
+	desc.its_vsgi_cmd.enable = vpe->sgi_config[d->hwirq].enabled;
+	desc.its_vsgi_cmd.group = vpe->sgi_config[d->hwirq].group;
+	desc.its_vsgi_cmd.clear = clear;
+
+	its_send_single_vcommand(find_4_1_its(), its_build_vsgi_cmd, &desc);
+}
+
 static int its_sgi_set_affinity(struct irq_data *d,
 				const struct cpumask *mask_val,
 				bool force)
@@ -3868,13 +3937,18 @@ static void its_sgi_irq_domain_free(struct irq_domain *domain,
 static int its_sgi_irq_domain_activate(struct irq_domain *domain,
 				       struct irq_data *d, bool reserve)
 {
+	/* Write out the initial SGI configuration */
+	its_configure_sgi(d, false);
 	return 0;
 }
 
 static void its_sgi_irq_domain_deactivate(struct irq_domain *domain,
 					  struct irq_data *d)
 {
-	/* Nothing to do */
+	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
+
+	vpe->sgi_config[d->hwirq].enabled = false;
+	its_configure_sgi(d, true);
 }
 
 static struct irq_domain_ops its_sgi_domain_ops = {
diff --git a/include/linux/irqchip/arm-gic-v3.h b/include/linux/irqchip/arm-gic-v3.h
index c29a02678a6f..a89578884263 100644
--- a/include/linux/irqchip/arm-gic-v3.h
+++ b/include/linux/irqchip/arm-gic-v3.h
@@ -502,8 +502,9 @@
 #define GITS_CMD_VMAPTI			GITS_CMD_GICv4(GITS_CMD_MAPTI)
 #define GITS_CMD_VMOVI			GITS_CMD_GICv4(GITS_CMD_MOVI)
 #define GITS_CMD_VSYNC			GITS_CMD_GICv4(GITS_CMD_SYNC)
-/* VMOVP and INVDB are the odd ones, as they dont have a physical counterpart */
+/* VMOVP, VSGI and INVDB are the odd ones, as they dont have a physical counterpart */
 #define GITS_CMD_VMOVP			GITS_CMD_GICv4(2)
+#define GITS_CMD_VSGI			GITS_CMD_GICv4(3)
 #define GITS_CMD_INVDB			GITS_CMD_GICv4(0xe)
 
 /*
-- 
2.20.1



* [PATCH v4 07/20] irqchip/gic-v4.1: Plumb mask/unmask SGI callbacks
  2020-02-14 14:57 [PATCH v4 00/20] irqchip/gic-v4: GICv4.1 architecture support Marc Zyngier
                   ` (5 preceding siblings ...)
  2020-02-14 14:57 ` [PATCH v4 06/20] irqchip/gic-v4.1: Add initial SGI configuration Marc Zyngier
@ 2020-02-14 14:57 ` Marc Zyngier
  2020-02-20  3:32   ` Zenghui Yu
  2020-02-14 14:57 ` [PATCH v4 08/20] irqchip/gic-v4.1: Plumb get/set_irqchip_state " Marc Zyngier
                   ` (12 subsequent siblings)
  19 siblings, 1 reply; 47+ messages in thread
From: Marc Zyngier @ 2020-02-14 14:57 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Zenghui Yu, Thomas Gleixner,
	Julien Thierry

Implement mask/unmask for virtual SGIs by calling into the
configuration helper.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 drivers/irqchip/irq-gic-v3-its.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 229e4ae9c59b..1e448d9a16ea 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -3892,6 +3892,22 @@ static void its_configure_sgi(struct irq_data *d, bool clear)
 	its_send_single_vcommand(find_4_1_its(), its_build_vsgi_cmd, &desc);
 }
 
+static void its_sgi_mask_irq(struct irq_data *d)
+{
+	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
+
+	vpe->sgi_config[d->hwirq].enabled = false;
+	its_configure_sgi(d, false);
+}
+
+static void its_sgi_unmask_irq(struct irq_data *d)
+{
+	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
+
+	vpe->sgi_config[d->hwirq].enabled = true;
+	its_configure_sgi(d, false);
+}
+
 static int its_sgi_set_affinity(struct irq_data *d,
 				const struct cpumask *mask_val,
 				bool force)
@@ -3901,6 +3917,8 @@ static int its_sgi_set_affinity(struct irq_data *d,
 
 static struct irq_chip its_sgi_irq_chip = {
 	.name			= "GICv4.1-sgi",
+	.irq_mask		= its_sgi_mask_irq,
+	.irq_unmask		= its_sgi_unmask_irq,
 	.irq_set_affinity	= its_sgi_set_affinity,
 };
 
-- 
2.20.1



* [PATCH v4 08/20] irqchip/gic-v4.1: Plumb get/set_irqchip_state SGI callbacks
  2020-02-14 14:57 [PATCH v4 00/20] irqchip/gic-v4: GICv4.1 architecture support Marc Zyngier
                   ` (6 preceding siblings ...)
  2020-02-14 14:57 ` [PATCH v4 07/20] irqchip/gic-v4.1: Plumb mask/unmask SGI callbacks Marc Zyngier
@ 2020-02-14 14:57 ` Marc Zyngier
  2020-02-18  7:00   ` Zenghui Yu
  2020-02-14 14:57 ` [PATCH v4 09/20] irqchip/gic-v4.1: Plumb set_vcpu_affinity " Marc Zyngier
                   ` (11 subsequent siblings)
  19 siblings, 1 reply; 47+ messages in thread
From: Marc Zyngier @ 2020-02-14 14:57 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Zenghui Yu, Thomas Gleixner,
	Julien Thierry

To implement the get/set_irqchip_state callbacks (limited to the
PENDING state), we have to use a particular set of hacks:

- Reading the pending state is done by using a pair of new redistributor
  registers (GICR_VSGIR, GICR_VSGIPENDR), which allow the state of the
  16 interrupts to be retrieved.
- Setting the pending state is done by generating it as we'd otherwise do
  for a guest (writing to GITS_SGIR).
- Clearing the pending state is done by emitting a VSGI command with the
  "clear" bit set.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 drivers/irqchip/irq-gic-v3-its.c   | 56 ++++++++++++++++++++++++++++++
 include/linux/irqchip/arm-gic-v3.h | 14 ++++++++
 2 files changed, 70 insertions(+)

diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 1e448d9a16ea..a9753435c4ff 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -3915,11 +3915,67 @@ static int its_sgi_set_affinity(struct irq_data *d,
 	return -EINVAL;
 }
 
+static int its_sgi_set_irqchip_state(struct irq_data *d,
+				     enum irqchip_irq_state which,
+				     bool state)
+{
+	if (which != IRQCHIP_STATE_PENDING)
+		return -EINVAL;
+
+	if (state) {
+		struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
+		struct its_node *its = find_4_1_its();
+		u64 val;
+
+		val  = FIELD_PREP(GITS_SGIR_VPEID, vpe->vpe_id);
+		val |= FIELD_PREP(GITS_SGIR_VINTID, d->hwirq);
+		writeq_relaxed(val, its->sgir_base + GITS_SGIR - SZ_128K);
+	} else {
+		its_configure_sgi(d, true);
+	}
+
+	return 0;
+}
+
+static int its_sgi_get_irqchip_state(struct irq_data *d,
+				     enum irqchip_irq_state which, bool *val)
+{
+	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
+	void __iomem *base = gic_data_rdist_cpu(vpe->col_idx)->rd_base + SZ_128K;
+	u32 count = 1000000;	/* 1s! */
+	u32 status;
+
+	if (which != IRQCHIP_STATE_PENDING)
+		return -EINVAL;
+
+	writel_relaxed(vpe->vpe_id, base + GICR_VSGIR);
+	do {
+		status = readl_relaxed(base + GICR_VSGIPENDR);
+		if (!(status & GICR_VSGIPENDR_BUSY))
+			goto out;
+
+		count--;
+		if (!count) {
+			pr_err_ratelimited("Unable to get SGI status\n");
+			goto out;
+		}
+		cpu_relax();
+		udelay(1);
+	} while(count);
+
+out:
+	*val = !!(status & (1 << d->hwirq));
+
+	return 0;
+}
+
 static struct irq_chip its_sgi_irq_chip = {
 	.name			= "GICv4.1-sgi",
 	.irq_mask		= its_sgi_mask_irq,
 	.irq_unmask		= its_sgi_unmask_irq,
 	.irq_set_affinity	= its_sgi_set_affinity,
+	.irq_set_irqchip_state	= its_sgi_set_irqchip_state,
+	.irq_get_irqchip_state	= its_sgi_get_irqchip_state,
 };
 
 static int its_sgi_irq_domain_alloc(struct irq_domain *domain,
diff --git a/include/linux/irqchip/arm-gic-v3.h b/include/linux/irqchip/arm-gic-v3.h
index a89578884263..64da945486ac 100644
--- a/include/linux/irqchip/arm-gic-v3.h
+++ b/include/linux/irqchip/arm-gic-v3.h
@@ -345,6 +345,15 @@
 #define GICR_VPENDBASER_4_1_VGRP1EN	(1ULL << 58)
 #define GICR_VPENDBASER_4_1_VPEID	GENMASK_ULL(15, 0)
 
+#define GICR_VSGIR			0x0080
+
+#define GICR_VSGIR_VPEID		GENMASK(15, 0)
+
+#define GICR_VSGIPENDR			0x0088
+
+#define GICR_VSGIPENDR_BUSY		(1U << 31)
+#define GICR_VSGIPENDR_PENDING		GENMASK(15, 0)
+
 /*
  * ITS registers, offsets from ITS_base
  */
@@ -368,6 +377,11 @@
 
 #define GITS_TRANSLATER			0x10040
 
+#define GITS_SGIR			0x20020
+
+#define GITS_SGIR_VPEID			GENMASK_ULL(47, 32)
+#define GITS_SGIR_VINTID		GENMASK_ULL(7, 0)
+
 #define GITS_CTLR_ENABLE		(1U << 0)
 #define GITS_CTLR_ImDe			(1U << 1)
 #define	GITS_CTLR_ITS_NUMBER_SHIFT	4
-- 
2.20.1



* [PATCH v4 09/20] irqchip/gic-v4.1: Plumb set_vcpu_affinity SGI callbacks
  2020-02-14 14:57 [PATCH v4 00/20] irqchip/gic-v4: GICv4.1 architecture support Marc Zyngier
                   ` (7 preceding siblings ...)
  2020-02-14 14:57 ` [PATCH v4 08/20] irqchip/gic-v4.1: Plumb get/set_irqchip_state " Marc Zyngier
@ 2020-02-14 14:57 ` Marc Zyngier
  2020-02-20  3:37   ` Zenghui Yu
  2020-02-14 14:57 ` [PATCH v4 10/20] irqchip/gic-v4.1: Move doorbell management to the GICv4 abstraction layer Marc Zyngier
                   ` (10 subsequent siblings)
  19 siblings, 1 reply; 47+ messages in thread
From: Marc Zyngier @ 2020-02-14 14:57 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Zenghui Yu, Thomas Gleixner,
	Julien Thierry

As for VLPIs, there are a number of configuration bits that cannot
be directly communicated through the normal irqchip API, so we
have to use our good old friend set_vcpu_affinity.

This is used to configure group and priority for a given vSGI.
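
A minimal usage sketch (assuming "irq" is the Linux interrupt backing
the vSGI and that we're in a context where irq_set_vcpu_affinity() is
allowed); this is what the its_prop_update_vsgi() helper added later in
the series boils down to:

    struct its_cmd_info info = {
            .cmd_type       = PROP_UPDATE_SGI,
            {
                    .priority       = 0xa0,         /* arbitrary example */
                    .group          = true,         /* Group1 */
            },
    };

    return irq_set_vcpu_affinity(irq, &info);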

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 drivers/irqchip/irq-gic-v3-its.c   | 18 ++++++++++++++++++
 include/linux/irqchip/arm-gic-v4.h |  5 +++++
 2 files changed, 23 insertions(+)

diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index a9753435c4ff..a2e824eae43f 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -3969,6 +3969,23 @@ static int its_sgi_get_irqchip_state(struct irq_data *d,
 	return 0;
 }
 
+static int its_sgi_set_vcpu_affinity(struct irq_data *d, void *vcpu_info)
+{
+	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
+	struct its_cmd_info *info = vcpu_info;
+
+	switch (info->cmd_type) {
+	case PROP_UPDATE_SGI:
+		vpe->sgi_config[d->hwirq].priority = info->priority;
+		vpe->sgi_config[d->hwirq].group = info->group;
+		its_configure_sgi(d, false);
+		return 0;
+
+	default:
+		return -EINVAL;
+	}
+}
+
 static struct irq_chip its_sgi_irq_chip = {
 	.name			= "GICv4.1-sgi",
 	.irq_mask		= its_sgi_mask_irq,
@@ -3976,6 +3993,7 @@ static struct irq_chip its_sgi_irq_chip = {
 	.irq_set_affinity	= its_sgi_set_affinity,
 	.irq_set_irqchip_state	= its_sgi_set_irqchip_state,
 	.irq_get_irqchip_state	= its_sgi_get_irqchip_state,
+	.irq_set_vcpu_affinity	= its_sgi_set_vcpu_affinity,
 };
 
 static int its_sgi_irq_domain_alloc(struct irq_domain *domain,
diff --git a/include/linux/irqchip/arm-gic-v4.h b/include/linux/irqchip/arm-gic-v4.h
index 30b4855bf766..a1a9d40266f5 100644
--- a/include/linux/irqchip/arm-gic-v4.h
+++ b/include/linux/irqchip/arm-gic-v4.h
@@ -98,6 +98,7 @@ enum its_vcpu_info_cmd_type {
 	SCHEDULE_VPE,
 	DESCHEDULE_VPE,
 	INVALL_VPE,
+	PROP_UPDATE_SGI,
 };
 
 struct its_cmd_info {
@@ -110,6 +111,10 @@ struct its_cmd_info {
 			bool		g0en;
 			bool		g1en;
 		};
+		struct {
+			u8		priority;
+			bool		group;
+		};
 	};
 };
 
-- 
2.20.1



* [PATCH v4 10/20] irqchip/gic-v4.1: Move doorbell management to the GICv4 abstraction layer
  2020-02-14 14:57 [PATCH v4 00/20] irqchip/gic-v4: GICv4.1 architecture support Marc Zyngier
                   ` (8 preceding siblings ...)
  2020-02-14 14:57 ` [PATCH v4 09/20] irqchip/gic-v4.1: Plumb set_vcpu_affinity " Marc Zyngier
@ 2020-02-14 14:57 ` Marc Zyngier
  2020-02-14 14:57 ` [PATCH v4 11/20] irqchip/gic-v4.1: Add VSGI allocation/teardown Marc Zyngier
                   ` (9 subsequent siblings)
  19 siblings, 0 replies; 47+ messages in thread
From: Marc Zyngier @ 2020-02-14 14:57 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Zenghui Yu, Thomas Gleixner,
	Julien Thierry

In order to hide some of the differences between v4.0 and v4.1, move
the doorbell management out of the KVM code, and into the GICv4-specific
layer. This allows the calling code to ask for the doorbell when blocking,
and otherwise to leave the doorbell permanently disabled.

This matches the v4.1 code perfectly, and only results in a minor
refactoring of the v4.0 code.
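
From the caller's perspective, the contract becomes the following (a
sketch only, with g0en/g1en standing for the guest's Group0/Group1
enable state and need_db for whether the vcpu is actually blocking;
the real KVM hooks are updated in the diff below):

    /* Entering the guest, from a non-preemptible context */
    its_make_vpe_resident(vpe, g0en, g1en);

    /* Leaving the guest; only ask for a doorbell if the vcpu blocks */
    its_make_vpe_non_resident(vpe, need_db);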

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 drivers/irqchip/irq-gic-v4.c       | 45 +++++++++++++++++++++++++++---
 include/kvm/arm_vgic.h             |  1 +
 include/linux/irqchip/arm-gic-v4.h |  3 +-
 virt/kvm/arm/vgic/vgic-v3.c        |  4 ++-
 virt/kvm/arm/vgic/vgic-v4.c        | 34 ++++++++++------------
 5 files changed, 61 insertions(+), 26 deletions(-)

diff --git a/drivers/irqchip/irq-gic-v4.c b/drivers/irqchip/irq-gic-v4.c
index c01910d53f9e..117ba6db023d 100644
--- a/drivers/irqchip/irq-gic-v4.c
+++ b/drivers/irqchip/irq-gic-v4.c
@@ -87,6 +87,11 @@ static struct irq_domain *gic_domain;
 static const struct irq_domain_ops *vpe_domain_ops;
 static const struct irq_domain_ops *sgi_domain_ops;
 
+static bool has_v4_1(void)
+{
+	return !!sgi_domain_ops;
+}
+
 int its_alloc_vcpu_irqs(struct its_vm *vm)
 {
 	int vpe_base_irq, i;
@@ -139,18 +144,50 @@ static int its_send_vpe_cmd(struct its_vpe *vpe, struct its_cmd_info *info)
 	return irq_set_vcpu_affinity(vpe->irq, info);
 }
 
-int its_schedule_vpe(struct its_vpe *vpe, bool on)
+int its_make_vpe_non_resident(struct its_vpe *vpe, bool db)
 {
-	struct its_cmd_info info;
+	struct irq_desc *desc = irq_to_desc(vpe->irq);
+	struct its_cmd_info info = { };
 	int ret;
 
 	WARN_ON(preemptible());
 
-	info.cmd_type = on ? SCHEDULE_VPE : DESCHEDULE_VPE;
+	info.cmd_type = DESCHEDULE_VPE;
+	if (has_v4_1()) {
+		/* GICv4.1 can directly deal with doorbells */
+		info.req_db = db;
+	} else {
+		/* Undo the nested disable_irq() calls... */
+		while (db && irqd_irq_disabled(&desc->irq_data))
+			enable_irq(vpe->irq);
+	}
+
+	ret = its_send_vpe_cmd(vpe, &info);
+	if (!ret)
+		vpe->resident = false;
+
+	return ret;
+}
+
+int its_make_vpe_resident(struct its_vpe *vpe, bool g0en, bool g1en)
+{
+	struct its_cmd_info info = { };
+	int ret;
+
+	WARN_ON(preemptible());
+
+	info.cmd_type = SCHEDULE_VPE;
+	if (has_v4_1()) {
+		info.g0en = g0en;
+		info.g1en = g1en;
+	} else {
+		/* Disabled the doorbell, as we're about to enter the guest */
+		disable_irq_nosync(vpe->irq);
+	}
 
 	ret = its_send_vpe_cmd(vpe, &info);
 	if (!ret)
-		vpe->resident = on;
+		vpe->resident = true;
 
 	return ret;
 }
diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index 9d53f545a3d5..63457908c9c4 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -70,6 +70,7 @@ struct vgic_global {
 
 	/* Hardware has GICv4? */
 	bool			has_gicv4;
+	bool			has_gicv4_1;
 
 	/* GIC system register CPU interface */
 	struct static_key_false gicv3_cpuif;
diff --git a/include/linux/irqchip/arm-gic-v4.h b/include/linux/irqchip/arm-gic-v4.h
index a1a9d40266f5..cca4198fa1d5 100644
--- a/include/linux/irqchip/arm-gic-v4.h
+++ b/include/linux/irqchip/arm-gic-v4.h
@@ -120,7 +120,8 @@ struct its_cmd_info {
 
 int its_alloc_vcpu_irqs(struct its_vm *vm);
 void its_free_vcpu_irqs(struct its_vm *vm);
-int its_schedule_vpe(struct its_vpe *vpe, bool on);
+int its_make_vpe_resident(struct its_vpe *vpe, bool g0en, bool g1en);
+int its_make_vpe_non_resident(struct its_vpe *vpe, bool db);
 int its_invall_vpe(struct its_vpe *vpe);
 int its_map_vlpi(int irq, struct its_vlpi_map *map);
 int its_get_vlpi(int irq, struct its_vlpi_map *map);
diff --git a/virt/kvm/arm/vgic/vgic-v3.c b/virt/kvm/arm/vgic/vgic-v3.c
index f45635a6f0ec..1bc09b523486 100644
--- a/virt/kvm/arm/vgic/vgic-v3.c
+++ b/virt/kvm/arm/vgic/vgic-v3.c
@@ -595,7 +595,9 @@ int vgic_v3_probe(const struct gic_kvm_info *info)
 	/* GICv4 support? */
 	if (info->has_v4) {
 		kvm_vgic_global_state.has_gicv4 = gicv4_enable;
-		kvm_info("GICv4 support %sabled\n",
+		kvm_vgic_global_state.has_gicv4_1 = info->has_v4_1 && gicv4_enable;
+		kvm_info("GICv4%s support %sabled\n",
+			 kvm_vgic_global_state.has_gicv4_1 ? ".1" : "",
 			 gicv4_enable ? "en" : "dis");
 	}
 
diff --git a/virt/kvm/arm/vgic/vgic-v4.c b/virt/kvm/arm/vgic/vgic-v4.c
index 46f875589c47..1eb0f8c76219 100644
--- a/virt/kvm/arm/vgic/vgic-v4.c
+++ b/virt/kvm/arm/vgic/vgic-v4.c
@@ -67,10 +67,10 @@
  * it. And if we've migrated our vcpu from one CPU to another, we must
  * tell the ITS (so that the messages reach the right redistributor).
  * This is done in two steps: first issue a irq_set_affinity() on the
- * irq corresponding to the vcpu, then call its_schedule_vpe(). You
- * must be in a non-preemptible context. On exit, another call to
- * its_schedule_vpe() tells the redistributor that we're done with the
- * vcpu.
+ * irq corresponding to the vcpu, then call its_make_vpe_resident().
+ * You must be in a non-preemptible context. On exit, a call to
+ * its_make_vpe_non_resident() tells the redistributor that we're done
+ * with the vcpu.
  *
  * Finally, the doorbell handling: Each vcpu is allocated an interrupt
  * which will fire each time a VLPI is made pending whilst the vcpu is
@@ -86,7 +86,8 @@ static irqreturn_t vgic_v4_doorbell_handler(int irq, void *info)
 	struct kvm_vcpu *vcpu = info;
 
 	/* We got the message, no need to fire again */
-	if (!irqd_irq_disabled(&irq_to_desc(irq)->irq_data))
+	if (!kvm_vgic_global_state.has_gicv4_1 &&
+	    !irqd_irq_disabled(&irq_to_desc(irq)->irq_data))
 		disable_irq_nosync(irq);
 
 	vcpu->arch.vgic_cpu.vgic_v3.its_vpe.pending_last = true;
@@ -199,19 +200,11 @@ void vgic_v4_teardown(struct kvm *kvm)
 int vgic_v4_put(struct kvm_vcpu *vcpu, bool need_db)
 {
 	struct its_vpe *vpe = &vcpu->arch.vgic_cpu.vgic_v3.its_vpe;
-	struct irq_desc *desc = irq_to_desc(vpe->irq);
 
 	if (!vgic_supports_direct_msis(vcpu->kvm) || !vpe->resident)
 		return 0;
 
-	/*
-	 * If blocking, a doorbell is required. Undo the nested
-	 * disable_irq() calls...
-	 */
-	while (need_db && irqd_irq_disabled(&desc->irq_data))
-		enable_irq(vpe->irq);
-
-	return its_schedule_vpe(vpe, false);
+	return its_make_vpe_non_resident(vpe, need_db);
 }
 
 int vgic_v4_load(struct kvm_vcpu *vcpu)
@@ -232,18 +225,19 @@ int vgic_v4_load(struct kvm_vcpu *vcpu)
 	if (err)
 		return err;
 
-	/* Disabled the doorbell, as we're about to enter the guest */
-	disable_irq_nosync(vpe->irq);
-
-	err = its_schedule_vpe(vpe, true);
+	err = its_make_vpe_resident(vpe, false, vcpu->kvm->arch.vgic.enabled);
 	if (err)
 		return err;
 
 	/*
 	 * Now that the VPE is resident, let's get rid of a potential
-	 * doorbell interrupt that would still be pending.
+	 * doorbell interrupt that would still be pending. This is a
+	 * GICv4.0 only "feature"...
 	 */
-	return irq_set_irqchip_state(vpe->irq, IRQCHIP_STATE_PENDING, false);
+	if (!kvm_vgic_global_state.has_gicv4_1)
+		err = irq_set_irqchip_state(vpe->irq, IRQCHIP_STATE_PENDING, false);
+
+	return err;
 }
 
 static struct vgic_its *vgic_get_its(struct kvm *kvm,
-- 
2.20.1



* [PATCH v4 11/20] irqchip/gic-v4.1: Add VSGI allocation/teardown
  2020-02-14 14:57 [PATCH v4 00/20] irqchip/gic-v4: GICv4.1 architecture support Marc Zyngier
                   ` (9 preceding siblings ...)
  2020-02-14 14:57 ` [PATCH v4 10/20] irqchip/gic-v4.1: Move doorbell management to the GICv4 abstraction layer Marc Zyngier
@ 2020-02-14 14:57 ` Marc Zyngier
  2020-02-14 14:57 ` [PATCH v4 12/20] irqchip/gic-v4.1: Add VSGI property setup Marc Zyngier
                   ` (8 subsequent siblings)
  19 siblings, 0 replies; 47+ messages in thread
From: Marc Zyngier @ 2020-02-14 14:57 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Zenghui Yu, Thomas Gleixner,
	Julien Thierry

Allocate per-VPE SGIs when initializing the GIC-specific part of the
VPE data structure.
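
Once this has run, each vSGI is a regular Linux interrupt and can be
driven through the usual kernel APIs. For instance (illustrative only,
with "vpe" being an already initialised its_vpe):

    unsigned int virq = irq_find_mapping(vpe->sgi_domain, 8); /* vSGI #8 */

    if (virq) {
            enable_irq(virq);       /* ends up in its_sgi_unmask_irq() */
            irq_set_irqchip_state(virq, IRQCHIP_STATE_PENDING, true);
    }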

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 drivers/irqchip/irq-gic-v4.c       | 68 +++++++++++++++++++++++++++++-
 include/linux/irqchip/arm-gic-v4.h |  2 +
 2 files changed, 69 insertions(+), 1 deletion(-)

diff --git a/drivers/irqchip/irq-gic-v4.c b/drivers/irqchip/irq-gic-v4.c
index 117ba6db023d..99b33f60ac63 100644
--- a/drivers/irqchip/irq-gic-v4.c
+++ b/drivers/irqchip/irq-gic-v4.c
@@ -92,6 +92,47 @@ static bool has_v4_1(void)
 	return !!sgi_domain_ops;
 }
 
+static int its_alloc_vcpu_sgis(struct its_vpe *vpe, int idx)
+{
+	char *name;
+	int sgi_base;
+
+	if (!has_v4_1())
+		return 0;
+
+	name = kasprintf(GFP_KERNEL, "GICv4-sgi-%d", task_pid_nr(current));
+	if (!name)
+		goto err;
+
+	vpe->fwnode = irq_domain_alloc_named_id_fwnode(name, idx);
+	if (!vpe->fwnode)
+		goto err;
+
+	kfree(name);
+	name = NULL;
+
+	vpe->sgi_domain = irq_domain_create_linear(vpe->fwnode, 16,
+						   sgi_domain_ops, vpe);
+	if (!vpe->sgi_domain)
+		goto err;
+
+	sgi_base = __irq_domain_alloc_irqs(vpe->sgi_domain, -1, 16,
+					       NUMA_NO_NODE, vpe,
+					       false, NULL);
+	if (sgi_base <= 0)
+		goto err;
+
+	return 0;
+
+err:
+	if (vpe->sgi_domain)
+		irq_domain_remove(vpe->sgi_domain);
+	if (vpe->fwnode)
+		irq_domain_free_fwnode(vpe->fwnode);
+	kfree(name);
+	return -ENOMEM;
+}
+
 int its_alloc_vcpu_irqs(struct its_vm *vm)
 {
 	int vpe_base_irq, i;
@@ -118,8 +159,13 @@ int its_alloc_vcpu_irqs(struct its_vm *vm)
 	if (vpe_base_irq <= 0)
 		goto err;
 
-	for (i = 0; i < vm->nr_vpes; i++)
+	for (i = 0; i < vm->nr_vpes; i++) {
+		int ret;
 		vm->vpes[i]->irq = vpe_base_irq + i;
+		ret = its_alloc_vcpu_sgis(vm->vpes[i], i);
+		if (ret)
+			goto err;
+	}
 
 	return 0;
 
@@ -132,8 +178,28 @@ int its_alloc_vcpu_irqs(struct its_vm *vm)
 	return -ENOMEM;
 }
 
+static void its_free_sgi_irqs(struct its_vm *vm)
+{
+	int i;
+
+	if (!has_v4_1())
+		return;
+
+	for (i = 0; i < vm->nr_vpes; i++) {
+		unsigned int irq = irq_find_mapping(vm->vpes[i]->sgi_domain, 0);
+
+		if (WARN_ON(!irq))
+			continue;
+
+		irq_domain_free_irqs(irq, 16);
+		irq_domain_remove(vm->vpes[i]->sgi_domain);
+		irq_domain_free_fwnode(vm->vpes[i]->fwnode);
+	}
+}
+
 void its_free_vcpu_irqs(struct its_vm *vm)
 {
+	its_free_sgi_irqs(vm);
 	irq_domain_free_irqs(vm->vpes[0]->irq, vm->nr_vpes);
 	irq_domain_remove(vm->domain);
 	irq_domain_free_fwnode(vm->fwnode);
diff --git a/include/linux/irqchip/arm-gic-v4.h b/include/linux/irqchip/arm-gic-v4.h
index cca4198fa1d5..9fbd0418f569 100644
--- a/include/linux/irqchip/arm-gic-v4.h
+++ b/include/linux/irqchip/arm-gic-v4.h
@@ -49,6 +49,8 @@ struct its_vpe {
 		};
 		/* GICv4.1 implementations */
 		struct {
+			struct fwnode_handle	*fwnode;
+			struct irq_domain	*sgi_domain;
 			struct {
 				u8	priority;
 				bool	enabled;
-- 
2.20.1



* [PATCH v4 12/20] irqchip/gic-v4.1: Add VSGI property setup
  2020-02-14 14:57 [PATCH v4 00/20] irqchip/gic-v4: GICv4.1 architecture support Marc Zyngier
                   ` (10 preceding siblings ...)
  2020-02-14 14:57 ` [PATCH v4 11/20] irqchip/gic-v4.1: Add VSGI allocation/teardown Marc Zyngier
@ 2020-02-14 14:57 ` Marc Zyngier
  2020-02-14 14:57 ` [PATCH v4 13/20] irqchip/gic-v4.1: Eagerly vmap vPEs Marc Zyngier
                   ` (7 subsequent siblings)
  19 siblings, 0 replies; 47+ messages in thread
From: Marc Zyngier @ 2020-02-14 14:57 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Zenghui Yu, Thomas Gleixner,
	Julien Thierry

Add the SGI configuration entry point for KVM to use.
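
A usage sketch (the actual caller is added by a later KVM patch in this
series; "irq", "priority" and "group1" are assumed to come from the
emulated distributor state):

    /* Propagate the guest-visible priority/group of a vSGI to the HW */
    ret = its_prop_update_vsgi(irq, priority, group1);
    if (ret)
            return ret;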

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 drivers/irqchip/irq-gic-v4.c       | 13 +++++++++++++
 include/linux/irqchip/arm-gic-v4.h |  1 +
 2 files changed, 14 insertions(+)

diff --git a/drivers/irqchip/irq-gic-v4.c b/drivers/irqchip/irq-gic-v4.c
index 99b33f60ac63..f3f06c5c7e54 100644
--- a/drivers/irqchip/irq-gic-v4.c
+++ b/drivers/irqchip/irq-gic-v4.c
@@ -320,6 +320,19 @@ int its_prop_update_vlpi(int irq, u8 config, bool inv)
 	return irq_set_vcpu_affinity(irq, &info);
 }
 
+int its_prop_update_vsgi(int irq, u8 priority, bool group)
+{
+	struct its_cmd_info info = {
+		.cmd_type = PROP_UPDATE_SGI,
+		{
+			.priority	= priority,
+			.group		= group,
+		},
+	};
+
+	return irq_set_vcpu_affinity(irq, &info);
+}
+
 int its_init_v4(struct irq_domain *domain,
 		const struct irq_domain_ops *vpe_ops,
 		const struct irq_domain_ops *sgi_ops)
diff --git a/include/linux/irqchip/arm-gic-v4.h b/include/linux/irqchip/arm-gic-v4.h
index 9fbd0418f569..46c167a6349f 100644
--- a/include/linux/irqchip/arm-gic-v4.h
+++ b/include/linux/irqchip/arm-gic-v4.h
@@ -129,6 +129,7 @@ int its_map_vlpi(int irq, struct its_vlpi_map *map);
 int its_get_vlpi(int irq, struct its_vlpi_map *map);
 int its_unmap_vlpi(int irq);
 int its_prop_update_vlpi(int irq, u8 config, bool inv);
+int its_prop_update_vsgi(int irq, u8 priority, bool group);
 
 struct irq_domain_ops;
 int its_init_v4(struct irq_domain *domain,
-- 
2.20.1



* [PATCH v4 13/20] irqchip/gic-v4.1: Eagerly vmap vPEs
  2020-02-14 14:57 [PATCH v4 00/20] irqchip/gic-v4: GICv4.1 architecture support Marc Zyngier
                   ` (11 preceding siblings ...)
  2020-02-14 14:57 ` [PATCH v4 12/20] irqchip/gic-v4.1: Add VSGI property setup Marc Zyngier
@ 2020-02-14 14:57 ` Marc Zyngier
  2020-02-14 14:57 ` [PATCH v4 14/20] KVM: arm64: GICv4.1: Let doorbells be auto-enabled Marc Zyngier
                   ` (6 subsequent siblings)
  19 siblings, 0 replies; 47+ messages in thread
From: Marc Zyngier @ 2020-02-14 14:57 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Zenghui Yu, Thomas Gleixner,
	Julien Thierry

Now that we have HW-accelerated SGIs being delivered to VPEs, it
becomes necessary to map the VPEs on all ITSs, instead of relying
on the lazy approach that we would otherwise use with the ITS-list
mechanism.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 drivers/irqchip/irq-gic-v3-its.c | 39 +++++++++++++++++++++++++-------
 1 file changed, 31 insertions(+), 8 deletions(-)

diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index a2e824eae43f..7656b353a95f 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -1554,12 +1554,31 @@ static int its_irq_set_irqchip_state(struct irq_data *d,
 	return 0;
 }
 
+/*
+ * Two favourable cases:
+ *
+ * (a) Either we have a GICv4.1, and all vPEs have to be mapped at all times
+ *     for vSGI delivery
+ *
+ * (b) Or the ITSs do not use a list map, meaning that VMOVP is cheap enough
+ *     and we're better off mapping all VPEs always
+ *
+ * If neither (a) nor (b) is true, then we map vPEs on demand.
+ *
+ */
+static bool gic_requires_eager_mapping(void)
+{
+	if (!its_list_map || gic_rdists->has_rvpeid)
+		return true;
+
+	return false;
+}
+
 static void its_map_vm(struct its_node *its, struct its_vm *vm)
 {
 	unsigned long flags;
 
-	/* Not using the ITS list? Everything is always mapped. */
-	if (!its_list_map)
+	if (gic_requires_eager_mapping())
 		return;
 
 	raw_spin_lock_irqsave(&vmovp_lock, flags);
@@ -1593,7 +1612,7 @@ static void its_unmap_vm(struct its_node *its, struct its_vm *vm)
 	unsigned long flags;
 
 	/* Not using the ITS list? Everything is always mapped. */
-	if (!its_list_map)
+	if (gic_requires_eager_mapping())
 		return;
 
 	raw_spin_lock_irqsave(&vmovp_lock, flags);
@@ -4192,8 +4211,12 @@ static int its_vpe_irq_domain_activate(struct irq_domain *domain,
 	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
 	struct its_node *its;
 
-	/* If we use the list map, we issue VMAPP on demand... */
-	if (its_list_map)
+	/*
+	 * If we use the list map, we issue VMAPP on demand... Unless
+	 * we're on a GICv4.1 and we eagerly map the VPE on all ITSs
+	 * so that VSGIs can work.
+	 */
+	if (!gic_requires_eager_mapping())
 		return 0;
 
 	/* Map the VPE to the first possible CPU */
@@ -4219,10 +4242,10 @@ static void its_vpe_irq_domain_deactivate(struct irq_domain *domain,
 	struct its_node *its;
 
 	/*
-	 * If we use the list map, we unmap the VPE once no VLPIs are
-	 * associated with the VM.
+	 * If we use the list map on GICv4.0, we unmap the VPE once no
+	 * VLPIs are associated with the VM.
 	 */
-	if (its_list_map)
+	if (!gic_requires_eager_mapping())
 		return;
 
 	list_for_each_entry(its, &its_nodes, entry) {
-- 
2.20.1



* [PATCH v4 14/20] KVM: arm64: GICv4.1: Let doorbells be auto-enabled
  2020-02-14 14:57 [PATCH v4 00/20] irqchip/gic-v4: GICv4.1 architecture support Marc Zyngier
                   ` (12 preceding siblings ...)
  2020-02-14 14:57 ` [PATCH v4 13/20] irqchip/gic-v4.1: Eagerly vmap vPEs Marc Zyngier
@ 2020-02-14 14:57 ` Marc Zyngier
  2020-02-14 14:57 ` [PATCH v4 15/20] KVM: arm64: GICv4.1: Add direct injection capability to SGI registers Marc Zyngier
                   ` (5 subsequent siblings)
  19 siblings, 0 replies; 47+ messages in thread
From: Marc Zyngier @ 2020-02-14 14:57 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Zenghui Yu, Thomas Gleixner,
	Julien Thierry

As GICv4.1 understands the life cycle of doorbells (instead of
just randomly firing them at the most inconvenient time), just
enable them at request_irq() time, and be done with it.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 virt/kvm/arm/vgic/vgic-v4.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/arm/vgic/vgic-v4.c b/virt/kvm/arm/vgic/vgic-v4.c
index 1eb0f8c76219..c2fcde104ea2 100644
--- a/virt/kvm/arm/vgic/vgic-v4.c
+++ b/virt/kvm/arm/vgic/vgic-v4.c
@@ -141,6 +141,7 @@ int vgic_v4_init(struct kvm *kvm)
 
 	kvm_for_each_vcpu(i, vcpu, kvm) {
 		int irq = dist->its_vm.vpes[i]->irq;
+		unsigned long irq_flags = DB_IRQ_FLAGS;
 
 		/*
 		 * Don't automatically enable the doorbell, as we're
@@ -148,8 +149,14 @@ int vgic_v4_init(struct kvm *kvm)
 		 * blocked. Also disable the lazy disabling, as the
 		 * doorbell could kick us out of the guest too
 		 * early...
+		 *
+		 * On GICv4.1, the doorbell is managed in HW and must
+		 * be left enabled.
 		 */
-		irq_set_status_flags(irq, DB_IRQ_FLAGS);
+		if (kvm_vgic_global_state.has_gicv4_1)
+			irq_flags &= ~IRQ_NOAUTOEN;
+		irq_set_status_flags(irq, irq_flags);
+
 		ret = request_irq(irq, vgic_v4_doorbell_handler,
 				  0, "vcpu", vcpu);
 		if (ret) {
-- 
2.20.1



* [PATCH v4 15/20] KVM: arm64: GICv4.1: Add direct injection capability to SGI registers
  2020-02-14 14:57 [PATCH v4 00/20] irqchip/gic-v4: GICv4.1 architecture support Marc Zyngier
                   ` (13 preceding siblings ...)
  2020-02-14 14:57 ` [PATCH v4 14/20] KVM: arm64: GICv4.1: Let doorbells be auto-enabled Marc Zyngier
@ 2020-02-14 14:57 ` Marc Zyngier
  2020-02-18  8:46   ` Zenghui Yu
  2020-02-14 14:57 ` [PATCH v4 16/20] KVM: arm64: GICv4.1: Allow SGIs to switch between HW and SW interrupts Marc Zyngier
                   ` (4 subsequent siblings)
  19 siblings, 1 reply; 47+ messages in thread
From: Marc Zyngier @ 2020-02-14 14:57 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Zenghui Yu, Thomas Gleixner,
	Julien Thierry

Most of the GICv3 emulation code that deals with SGIs now has to be
aware of the v4.1 capabilities in order to benefit from them.

Add such support, keyed on the interrupt having the hw flag set and
being an SGI.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 virt/kvm/arm/vgic/vgic-mmio-v3.c | 15 +++++-
 virt/kvm/arm/vgic/vgic-mmio.c    | 88 ++++++++++++++++++++++++++++++--
 2 files changed, 96 insertions(+), 7 deletions(-)

diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c b/virt/kvm/arm/vgic/vgic-mmio-v3.c
index ebc218840fc2..de89da76a379 100644
--- a/virt/kvm/arm/vgic/vgic-mmio-v3.c
+++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c
@@ -6,6 +6,7 @@
 #include <linux/irqchip/arm-gic-v3.h>
 #include <linux/kvm.h>
 #include <linux/kvm_host.h>
+#include <linux/interrupt.h>
 #include <kvm/iodev.h>
 #include <kvm/arm_vgic.h>
 
@@ -942,8 +943,18 @@ void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg, bool allow_group1)
 		 * generate interrupts of either group.
 		 */
 		if (!irq->group || allow_group1) {
-			irq->pending_latch = true;
-			vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
+			if (!irq->hw) {
+				irq->pending_latch = true;
+				vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
+			} else {
+				/* HW SGI? Ask the GIC to inject it */
+				int err;
+				err = irq_set_irqchip_state(irq->host_irq,
+							    IRQCHIP_STATE_PENDING,
+							    true);
+				WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
+				raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+			}
 		} else {
 			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
 		}
diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
index d656ebd5f9d4..0a1fb61e5b89 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.c
+++ b/virt/kvm/arm/vgic/vgic-mmio.c
@@ -5,6 +5,8 @@
 
 #include <linux/bitops.h>
 #include <linux/bsearch.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
 #include <linux/kvm.h>
 #include <linux/kvm_host.h>
 #include <kvm/iodev.h>
@@ -59,6 +61,11 @@ unsigned long vgic_mmio_read_group(struct kvm_vcpu *vcpu,
 	return value;
 }
 
+static void vgic_update_vsgi(struct vgic_irq *irq)
+{
+	WARN_ON(its_prop_update_vsgi(irq->host_irq, irq->priority, irq->group));
+}
+
 void vgic_mmio_write_group(struct kvm_vcpu *vcpu, gpa_t addr,
 			   unsigned int len, unsigned long val)
 {
@@ -71,7 +78,12 @@ void vgic_mmio_write_group(struct kvm_vcpu *vcpu, gpa_t addr,
 
 		raw_spin_lock_irqsave(&irq->irq_lock, flags);
 		irq->group = !!(val & BIT(i));
-		vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
+		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
+			vgic_update_vsgi(irq);
+			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+		} else {
+			vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
+		}
 
 		vgic_put_irq(vcpu->kvm, irq);
 	}
@@ -113,7 +125,21 @@ void vgic_mmio_write_senable(struct kvm_vcpu *vcpu,
 		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
 
 		raw_spin_lock_irqsave(&irq->irq_lock, flags);
-		if (vgic_irq_is_mapped_level(irq)) {
+		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
+			if (!irq->enabled) {
+				struct irq_data *data;
+
+				irq->enabled = true;
+				data = &irq_to_desc(irq->host_irq)->irq_data;
+				while (irqd_irq_disabled(data))
+					enable_irq(irq->host_irq);
+			}
+
+			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+			vgic_put_irq(vcpu->kvm, irq);
+
+			continue;
+		} else if (vgic_irq_is_mapped_level(irq)) {
 			bool was_high = irq->line_level;
 
 			/*
@@ -148,6 +174,8 @@ void vgic_mmio_write_cenable(struct kvm_vcpu *vcpu,
 		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
 
 		raw_spin_lock_irqsave(&irq->irq_lock, flags);
+		if (irq->hw && vgic_irq_is_sgi(irq->intid) && irq->enabled)
+			disable_irq_nosync(irq->host_irq);
 
 		irq->enabled = false;
 
@@ -167,10 +195,22 @@ unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
 	for (i = 0; i < len * 8; i++) {
 		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
 		unsigned long flags;
+		bool val;
 
 		raw_spin_lock_irqsave(&irq->irq_lock, flags);
-		if (irq_is_pending(irq))
-			value |= (1U << i);
+		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
+			int err;
+
+			val = false;
+			err = irq_get_irqchip_state(irq->host_irq,
+						    IRQCHIP_STATE_PENDING,
+						    &val);
+			WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
+		} else {
+			val = irq_is_pending(irq);
+		}
+
+		value |= ((u32)val << i);
 		raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
 
 		vgic_put_irq(vcpu->kvm, irq);
@@ -227,6 +267,21 @@ void vgic_mmio_write_spending(struct kvm_vcpu *vcpu,
 		}
 
 		raw_spin_lock_irqsave(&irq->irq_lock, flags);
+
+		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
+			/* HW SGI? Ask the GIC to inject it */
+			int err;
+			err = irq_set_irqchip_state(irq->host_irq,
+						    IRQCHIP_STATE_PENDING,
+						    true);
+			WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
+
+			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+			vgic_put_irq(vcpu->kvm, irq);
+
+			continue;
+		}
+
 		if (irq->hw)
 			vgic_hw_irq_spending(vcpu, irq, is_uaccess);
 		else
@@ -281,6 +336,20 @@ void vgic_mmio_write_cpending(struct kvm_vcpu *vcpu,
 
 		raw_spin_lock_irqsave(&irq->irq_lock, flags);
 
+		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
+			/* HW SGI? Ask the GIC to inject it */
+			int err;
+			err = irq_set_irqchip_state(irq->host_irq,
+						    IRQCHIP_STATE_PENDING,
+						    false);
+			WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
+
+			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+			vgic_put_irq(vcpu->kvm, irq);
+
+			continue;
+		}
+
 		if (irq->hw)
 			vgic_hw_irq_cpending(vcpu, irq, is_uaccess);
 		else
@@ -330,8 +399,15 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
 
 	raw_spin_lock_irqsave(&irq->irq_lock, flags);
 
-	if (irq->hw) {
+	if (irq->hw && !vgic_irq_is_sgi(irq->intid)) {
 		vgic_hw_irq_change_active(vcpu, irq, active, !requester_vcpu);
+	} else if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
+		/*
+		 * GICv4.1 VSGI feature doesn't track an active state,
+		 * so let's not kid ourselves, there is nothing we can
+		 * do here.
+		 */
+		irq->active = false;
 	} else {
 		u32 model = vcpu->kvm->arch.vgic.vgic_model;
 		u8 active_source;
@@ -505,6 +581,8 @@ void vgic_mmio_write_priority(struct kvm_vcpu *vcpu,
 		raw_spin_lock_irqsave(&irq->irq_lock, flags);
 		/* Narrow the priority range to what we actually support */
 		irq->priority = (val >> (i * 8)) & GENMASK(7, 8 - VGIC_PRI_BITS);
+		if (irq->hw && vgic_irq_is_sgi(irq->intid))
+			vgic_update_vsgi(irq);
 		raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
 
 		vgic_put_irq(vcpu->kvm, irq);
-- 
2.20.1



* [PATCH v4 16/20] KVM: arm64: GICv4.1: Allow SGIs to switch between HW and SW interrupts
  2020-02-14 14:57 [PATCH v4 00/20] irqchip/gic-v4: GICv4.1 architecture support Marc Zyngier
                   ` (14 preceding siblings ...)
  2020-02-14 14:57 ` [PATCH v4 15/20] KVM: arm64: GICv4.1: Add direct injection capability to SGI registers Marc Zyngier
@ 2020-02-14 14:57 ` Marc Zyngier
  2020-02-20  3:55   ` Zenghui Yu
  2020-02-14 14:57 ` [PATCH v4 17/20] KVM: arm64: GICv4.1: Plumb SGI implementation selection in the distributor Marc Zyngier
                   ` (3 subsequent siblings)
  19 siblings, 1 reply; 47+ messages in thread
From: Marc Zyngier @ 2020-02-14 14:57 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Zenghui Yu, Thomas Gleixner,
	Julien Thierry

In order to let a guest buy into the new, active-less SGIs, we
need to be able to switch between the two modes.

Handle this by stopping all guest activity, transferring the state
from one mode to the other, and resuming the guest.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 include/kvm/arm_vgic.h      |  3 ++
 virt/kvm/arm/vgic/vgic-v3.c |  2 +
 virt/kvm/arm/vgic/vgic-v4.c | 96 +++++++++++++++++++++++++++++++++++++
 virt/kvm/arm/vgic/vgic.h    |  1 +
 4 files changed, 102 insertions(+)

diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index 63457908c9c4..69f4164d6477 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -231,6 +231,9 @@ struct vgic_dist {
 	/* distributor enabled */
 	bool			enabled;
 
+	/* Wants SGIs without active state */
+	bool			nassgireq;
+
 	struct vgic_irq		*spis;
 
 	struct vgic_io_device	dist_iodev;
diff --git a/virt/kvm/arm/vgic/vgic-v3.c b/virt/kvm/arm/vgic/vgic-v3.c
index 1bc09b523486..2c9fc13e2c59 100644
--- a/virt/kvm/arm/vgic/vgic-v3.c
+++ b/virt/kvm/arm/vgic/vgic-v3.c
@@ -540,6 +540,8 @@ int vgic_v3_map_resources(struct kvm *kvm)
 		goto out;
 	}
 
+	if (kvm_vgic_global_state.has_gicv4_1)
+		vgic_v4_configure_vsgis(kvm);
 	dist->ready = true;
 
 out:
diff --git a/virt/kvm/arm/vgic/vgic-v4.c b/virt/kvm/arm/vgic/vgic-v4.c
index c2fcde104ea2..063785fd2dc7 100644
--- a/virt/kvm/arm/vgic/vgic-v4.c
+++ b/virt/kvm/arm/vgic/vgic-v4.c
@@ -97,6 +97,102 @@ static irqreturn_t vgic_v4_doorbell_handler(int irq, void *info)
 	return IRQ_HANDLED;
 }
 
+static void vgic_v4_sync_sgi_config(struct its_vpe *vpe, struct vgic_irq *irq)
+{
+	vpe->sgi_config[irq->intid].enabled	= irq->enabled;
+	vpe->sgi_config[irq->intid].group 	= irq->group;
+	vpe->sgi_config[irq->intid].priority	= irq->priority;
+}
+
+static void vgic_v4_enable_vsgis(struct kvm_vcpu *vcpu)
+{
+	struct its_vpe *vpe = &vcpu->arch.vgic_cpu.vgic_v3.its_vpe;
+	int i;
+
+	/*
+	 * With GICv4.1, every virtual SGI can be directly injected. So
+	 * let's pretend that they are HW interrupts, tied to a host
+	 * IRQ. The SGI code will do its magic.
+	 */
+	for (i = 0; i < VGIC_NR_SGIS; i++) {
+		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, i);
+		struct irq_desc *desc;
+		int ret;
+
+		if (irq->hw) {
+			vgic_put_irq(vcpu->kvm, irq);
+			continue;
+		}
+
+		irq->hw = true;
+		irq->host_irq = irq_find_mapping(vpe->sgi_domain, i);
+		vgic_v4_sync_sgi_config(vpe, irq);
+		/*
+		 * SGIs are initialised as disabled. Enable them if
+		 * required by the rest of the VGIC init code.
+		 */
+		desc = irq_to_desc(irq->host_irq);
+		ret = irq_domain_activate_irq(irq_desc_get_irq_data(desc),
+					      false);
+		if (!WARN_ON(ret)) {
+			/* Transfer pending state */
+			ret = irq_set_irqchip_state(irq->host_irq,
+						    IRQCHIP_STATE_PENDING,
+						    irq->pending_latch);
+			WARN_ON(ret);
+			irq->pending_latch = false;
+		}
+
+		vgic_put_irq(vcpu->kvm, irq);
+	}
+}
+
+static void vgic_v4_disable_vsgis(struct kvm_vcpu *vcpu)
+{
+	int i;
+
+	for (i = 0; i < VGIC_NR_SGIS; i++) {
+		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, i);
+		struct irq_desc *desc;
+		int ret;
+
+		if (!irq->hw) {
+			vgic_put_irq(vcpu->kvm, irq);
+			continue;
+		}
+
+		irq->hw = false;
+		ret = irq_get_irqchip_state(irq->host_irq,
+					    IRQCHIP_STATE_PENDING,
+					    &irq->pending_latch);
+		WARN_ON(ret);
+
+		desc = irq_to_desc(irq->host_irq);
+		irq_domain_deactivate_irq(irq_desc_get_irq_data(desc));
+
+		vgic_put_irq(vcpu->kvm, irq);
+	}
+}
+
+/* Must be called with the kvm lock held */
+void vgic_v4_configure_vsgis(struct kvm *kvm)
+{
+	struct vgic_dist *dist = &kvm->arch.vgic;
+	struct kvm_vcpu *vcpu;
+	int i;
+
+	kvm_arm_halt_guest(kvm);
+
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		if (dist->nassgireq)
+			vgic_v4_enable_vsgis(vcpu);
+		else
+			vgic_v4_disable_vsgis(vcpu);
+	}
+
+	kvm_arm_resume_guest(kvm);
+}
+
 /**
  * vgic_v4_init - Initialize the GICv4 data structures
  * @kvm:	Pointer to the VM being initialized
diff --git a/virt/kvm/arm/vgic/vgic.h b/virt/kvm/arm/vgic/vgic.h
index c7fefd6b1c80..769e4802645e 100644
--- a/virt/kvm/arm/vgic/vgic.h
+++ b/virt/kvm/arm/vgic/vgic.h
@@ -316,5 +316,6 @@ void vgic_its_invalidate_cache(struct kvm *kvm);
 bool vgic_supports_direct_msis(struct kvm *kvm);
 int vgic_v4_init(struct kvm *kvm);
 void vgic_v4_teardown(struct kvm *kvm);
+void vgic_v4_configure_vsgis(struct kvm *kvm);
 
 #endif
-- 
2.20.1



* [PATCH v4 17/20] KVM: arm64: GICv4.1: Plumb SGI implementation selection in the distributor
  2020-02-14 14:57 [PATCH v4 00/20] irqchip/gic-v4: GICv4.1 architecture support Marc Zyngier
                   ` (15 preceding siblings ...)
  2020-02-14 14:57 ` [PATCH v4 16/20] KVM: arm64: GICv4.1: Allow SGIs to switch between HW and SW interrupts Marc Zyngier
@ 2020-02-14 14:57 ` Marc Zyngier
  2020-02-14 14:57 ` [PATCH v4 18/20] KVM: arm64: GICv4.1: Reload VLPI configuration on distributor enable/disable Marc Zyngier
                   ` (2 subsequent siblings)
  19 siblings, 0 replies; 47+ messages in thread
From: Marc Zyngier @ 2020-02-14 14:57 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Zenghui Yu, Thomas Gleixner,
	Julien Thierry

The GICv4.1 architecture gives the hypervisor the option to let
the guest choose whether it wants the good old SGIs with an
active state, or the new, HW-based ones that do not have one.

For this, plumb the configuration of SGIs into the GICv3 MMIO
handling, present the GICD_TYPER2.nASSGIcap bit to the guest,
and handle the GICD_CTLR.nASSGIreq setting.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 virt/kvm/arm/vgic/vgic-mmio-v3.c | 48 ++++++++++++++++++++++++++++++--
 1 file changed, 46 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c b/virt/kvm/arm/vgic/vgic-mmio-v3.c
index de89da76a379..442f3b8c2559 100644
--- a/virt/kvm/arm/vgic/vgic-mmio-v3.c
+++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c
@@ -3,6 +3,7 @@
  * VGICv3 MMIO handling functions
  */
 
+#include <linux/bitfield.h>
 #include <linux/irqchip/arm-gic-v3.h>
 #include <linux/kvm.h>
 #include <linux/kvm_host.h>
@@ -70,6 +71,8 @@ static unsigned long vgic_mmio_read_v3_misc(struct kvm_vcpu *vcpu,
 		if (vgic->enabled)
 			value |= GICD_CTLR_ENABLE_SS_G1;
 		value |= GICD_CTLR_ARE_NS | GICD_CTLR_DS;
+		if (kvm_vgic_global_state.has_gicv4_1 && vgic->nassgireq)
+			value |= GICD_CTLR_nASSGIreq;
 		break;
 	case GICD_TYPER:
 		value = vgic->nr_spis + VGIC_NR_PRIVATE_IRQS;
@@ -81,6 +84,10 @@ static unsigned long vgic_mmio_read_v3_misc(struct kvm_vcpu *vcpu,
 			value |= (INTERRUPT_ID_BITS_SPIS - 1) << 19;
 		}
 		break;
+	case GICD_TYPER2:
+		if (kvm_vgic_global_state.has_gicv4_1)
+			value = GICD_TYPER2_nASSGIcap;
+		break;
 	case GICD_IIDR:
 		value = (PRODUCT_ID_KVM << GICD_IIDR_PRODUCT_ID_SHIFT) |
 			(vgic->implementation_rev << GICD_IIDR_REVISION_SHIFT) |
@@ -98,17 +105,43 @@ static void vgic_mmio_write_v3_misc(struct kvm_vcpu *vcpu,
 				    unsigned long val)
 {
 	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
-	bool was_enabled = dist->enabled;
 
 	switch (addr & 0x0c) {
-	case GICD_CTLR:
+	case GICD_CTLR: {
+		bool was_enabled, is_hwsgi;
+
+		mutex_lock(&vcpu->kvm->lock);
+
+		was_enabled = dist->enabled;
+		is_hwsgi = dist->nassgireq;
+
 		dist->enabled = val & GICD_CTLR_ENABLE_SS_G1;
 
+		/* Not a GICv4.1? No HW SGIs */
+		if (!kvm_vgic_global_state.has_gicv4_1)
+			val &= ~GICD_CTLR_nASSGIreq;
+
+		/* Dist stays enabled? nASSGIreq is RO */
+		if (was_enabled && dist->enabled) {
+			val &= ~GICD_CTLR_nASSGIreq;
+			val |= FIELD_PREP(GICD_CTLR_nASSGIreq, is_hwsgi);
+		}
+
+		/* Switching HW SGIs? */
+		dist->nassgireq = val & GICD_CTLR_nASSGIreq;
+		if (is_hwsgi != dist->nassgireq)
+			vgic_v4_configure_vsgis(vcpu->kvm);
+
 		if (!was_enabled && dist->enabled)
 			vgic_kick_vcpus(vcpu->kvm);
+
+		mutex_unlock(&vcpu->kvm->lock);
 		break;
+	}
 	case GICD_TYPER:
+	case GICD_TYPER2:
 	case GICD_IIDR:
+		/* This is at best for documentation purposes... */
 		return;
 	}
 }
@@ -117,10 +150,21 @@ static int vgic_mmio_uaccess_write_v3_misc(struct kvm_vcpu *vcpu,
 					   gpa_t addr, unsigned int len,
 					   unsigned long val)
 {
+	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+
 	switch (addr & 0x0c) {
 	case GICD_IIDR:
 		if (val != vgic_mmio_read_v3_misc(vcpu, addr, len))
 			return -EINVAL;
+		return 0;
+	case GICD_CTLR:
+		/* Not a GICv4.1? No HW SGIs */
+		if (!kvm_vgic_global_state.has_gicv4_1)
+			val &= ~GICD_CTLR_nASSGIreq;
+
+		dist->enabled = val & GICD_CTLR_ENABLE_SS_G1;
+		dist->nassgireq = val & GICD_CTLR_nASSGIreq;
+		return 0;
 	}
 
 	vgic_mmio_write_v3_misc(vcpu, addr, len, val);
-- 
2.20.1



* [PATCH v4 18/20] KVM: arm64: GICv4.1: Reload VLPI configuration on distributor enable/disable
  2020-02-14 14:57 [PATCH v4 00/20] irqchip/gic-v4: GICv4.1 architecture support Marc Zyngier
                   ` (16 preceding siblings ...)
  2020-02-14 14:57 ` [PATCH v4 17/20] KVM: arm64: GICv4.1: Plumb SGI implementation selection in the distributor Marc Zyngier
@ 2020-02-14 14:57 ` Marc Zyngier
  2020-02-14 14:57 ` [PATCH v4 19/20] KVM: arm64: GICv4.1: Allow non-trapping WFI when using HW SGIs Marc Zyngier
  2020-02-14 14:57 ` [PATCH v4 20/20] KVM: arm64: GICv4.1: Expose HW-based SGIs in debugfs Marc Zyngier
  19 siblings, 0 replies; 47+ messages in thread
From: Marc Zyngier @ 2020-02-14 14:57 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Zenghui Yu, Thomas Gleixner,
	Julien Thierry

Each time a Group-enable bit gets flipped, the state of these bits
needs to be forwarded to the hardware. This is a pretty heavy-handed
operation, requiring all vcpus to reload their GICv4
configuration. It is thus implemented as a new request type.

Of course, we only support Group-1 for now...

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm/include/asm/kvm_host.h   | 1 +
 arch/arm64/include/asm/kvm_host.h | 1 +
 virt/kvm/arm/arm.c                | 8 ++++++++
 virt/kvm/arm/vgic/vgic-mmio-v3.c  | 5 ++++-
 4 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index c3314b286a61..7104babef15a 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -39,6 +39,7 @@
 #define KVM_REQ_IRQ_PENDING	KVM_ARCH_REQ(1)
 #define KVM_REQ_VCPU_RESET	KVM_ARCH_REQ(2)
 #define KVM_REQ_RECORD_STEAL	KVM_ARCH_REQ(3)
+#define KVM_REQ_RELOAD_GICv4	KVM_ARCH_REQ(4)
 
 DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index d87aa609d2b6..90b86f349ceb 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -44,6 +44,7 @@
 #define KVM_REQ_IRQ_PENDING	KVM_ARCH_REQ(1)
 #define KVM_REQ_VCPU_RESET	KVM_ARCH_REQ(2)
 #define KVM_REQ_RECORD_STEAL	KVM_ARCH_REQ(3)
+#define KVM_REQ_RELOAD_GICv4	KVM_ARCH_REQ(4)
 
 DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
 
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index d65a0faa46d8..d21e17b7af45 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -625,6 +625,14 @@ static void check_vcpu_requests(struct kvm_vcpu *vcpu)
 
 		if (kvm_check_request(KVM_REQ_RECORD_STEAL, vcpu))
 			kvm_update_stolen_time(vcpu);
+
+		if (kvm_check_request(KVM_REQ_RELOAD_GICv4, vcpu)) {
+			/* The distributor enable bits were changed */
+			preempt_disable();
+			vgic_v4_put(vcpu, false);
+			vgic_v4_load(vcpu);
+			preempt_enable();
+		}
 	}
 }
 
diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c b/virt/kvm/arm/vgic/vgic-mmio-v3.c
index 442f3b8c2559..48fd9fc229a2 100644
--- a/virt/kvm/arm/vgic/vgic-mmio-v3.c
+++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c
@@ -132,7 +132,10 @@ static void vgic_mmio_write_v3_misc(struct kvm_vcpu *vcpu,
 		if (is_hwsgi != dist->nassgireq)
 			vgic_v4_configure_vsgis(vcpu->kvm);
 
-		if (!was_enabled && dist->enabled)
+		if (kvm_vgic_global_state.has_gicv4_1 &&
+		    was_enabled != dist->enabled)
+			kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_RELOAD_GICv4);
+		else if (!was_enabled && dist->enabled)
 			vgic_kick_vcpus(vcpu->kvm);
 
 		mutex_unlock(&vcpu->kvm->lock);
-- 
2.20.1



* [PATCH v4 19/20] KVM: arm64: GICv4.1: Allow non-trapping WFI when using HW SGIs
  2020-02-14 14:57 [PATCH v4 00/20] irqchip/gic-v4: GICv4.1 architecture support Marc Zyngier
                   ` (17 preceding siblings ...)
  2020-02-14 14:57 ` [PATCH v4 18/20] KVM: arm64: GICv4.1: Reload VLPI configuration on distributor enable/disable Marc Zyngier
@ 2020-02-14 14:57 ` Marc Zyngier
  2020-02-14 14:57 ` [PATCH v4 20/20] KVM: arm64: GICv4.1: Expose HW-based SGIs in debugfs Marc Zyngier
  19 siblings, 0 replies; 47+ messages in thread
From: Marc Zyngier @ 2020-02-14 14:57 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Zenghui Yu, Thomas Gleixner,
	Julien Thierry

Just like for VLPIs, it is beneficial to avoid trapping on WFI when the
vcpu is using the GICv4.1 SGIs.

Add such a check to vcpu_clear_wfx_traps().

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_emulate.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 688c63412cc2..755654c839e2 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -89,7 +89,8 @@ static inline unsigned long *vcpu_hcr(struct kvm_vcpu *vcpu)
 static inline void vcpu_clear_wfx_traps(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.hcr_el2 &= ~HCR_TWE;
-	if (atomic_read(&vcpu->arch.vgic_cpu.vgic_v3.its_vpe.vlpi_count))
+	if (atomic_read(&vcpu->arch.vgic_cpu.vgic_v3.its_vpe.vlpi_count) ||
+	    vcpu->kvm->arch.vgic.nassgireq)
 		vcpu->arch.hcr_el2 &= ~HCR_TWI;
 	else
 		vcpu->arch.hcr_el2 |= HCR_TWI;
-- 
2.20.1



* [PATCH v4 20/20] KVM: arm64: GICv4.1: Expose HW-based SGIs in debugfs
  2020-02-14 14:57 [PATCH v4 00/20] irqchip/gic-v4: GICv4.1 architecture support Marc Zyngier
                   ` (18 preceding siblings ...)
  2020-02-14 14:57 ` [PATCH v4 19/20] KVM: arm64: GICv4.1: Allow non-trapping WFI when using HW SGIs Marc Zyngier
@ 2020-02-14 14:57 ` Marc Zyngier
  19 siblings, 0 replies; 47+ messages in thread
From: Marc Zyngier @ 2020-02-14 14:57 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Zenghui Yu, Thomas Gleixner,
	Julien Thierry

The vgic-state debugfs file could do with showing the pending state
of the HW-backed SGIs. Plug it into the low-level code.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 virt/kvm/arm/vgic/vgic-debug.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/arm/vgic/vgic-debug.c b/virt/kvm/arm/vgic/vgic-debug.c
index cc12fe9b2df3..b13a9e3f99dd 100644
--- a/virt/kvm/arm/vgic/vgic-debug.c
+++ b/virt/kvm/arm/vgic/vgic-debug.c
@@ -178,6 +178,8 @@ static void print_irq_state(struct seq_file *s, struct vgic_irq *irq,
 			    struct kvm_vcpu *vcpu)
 {
 	char *type;
+	bool pending;
+
 	if (irq->intid < VGIC_NR_SGIS)
 		type = "SGI";
 	else if (irq->intid < VGIC_NR_PRIVATE_IRQS)
@@ -190,6 +192,16 @@ static void print_irq_state(struct seq_file *s, struct vgic_irq *irq,
 	if (irq->intid ==0 || irq->intid == VGIC_NR_PRIVATE_IRQS)
 		print_header(s, irq, vcpu);
 
+	pending = irq->pending_latch;
+	if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
+		int err;
+
+		err = irq_get_irqchip_state(irq->host_irq,
+					    IRQCHIP_STATE_PENDING,
+					    &pending);
+		WARN_ON_ONCE(err);
+	}
+
 	seq_printf(s, "       %s %4d "
 		      "    %2d "
 		      "%d%d%d%d%d%d%d "
@@ -201,7 +213,7 @@ static void print_irq_state(struct seq_file *s, struct vgic_irq *irq,
 		      "\n",
 			type, irq->intid,
 			(irq->target_vcpu) ? irq->target_vcpu->vcpu_id : -1,
-			irq->pending_latch,
+			pending,
 			irq->line_level,
 			irq->active,
 			irq->enabled,
-- 
2.20.1



* Re: [PATCH v4 03/20] irqchip/gic-v4.1: Advertise support v4.1 to KVM
  2020-02-14 14:57 ` [PATCH v4 03/20] irqchip/gic-v4.1: Advertise support v4.1 to KVM Marc Zyngier
@ 2020-02-17  9:09   ` Zenghui Yu
  0 siblings, 0 replies; 47+ messages in thread
From: Zenghui Yu @ 2020-02-17  9:09 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Thomas Gleixner, Julien Thierry

On 2020/2/14 22:57, Marc Zyngier wrote:
> Tell KVM that we support v4.1. Nothing uses this information so far.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>



* Re: [PATCH v4 01/20] irqchip/gic-v4.1: Skip absent CPUs while iterating over redistributors
  2020-02-14 14:57 ` [PATCH v4 01/20] irqchip/gic-v4.1: Skip absent CPUs while iterating over redistributors Marc Zyngier
@ 2020-02-17  9:11   ` Zenghui Yu
  0 siblings, 0 replies; 47+ messages in thread
From: Zenghui Yu @ 2020-02-17  9:11 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Thomas Gleixner, Julien Thierry

Hi Marc,

On 2020/2/14 22:57, Marc Zyngier wrote:
> In a system that is only sparsely populated with CPUs, we can end up with
> redistributors structures that are not initialized. Let's make sure we
> don't try and access those when iterating over them (in this case when
> checking we have a L2 VPE table).
> 
> Fixes: 4e6437f12d6e ("irqchip/gic-v4.1: Ensure L2 vPE table is allocated at RD level")
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>   drivers/irqchip/irq-gic-v3-its.c | 4 ++++
>   1 file changed, 4 insertions(+)
> 
> diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
> index 83b1186ffcad..da883a691028 100644
> --- a/drivers/irqchip/irq-gic-v3-its.c
> +++ b/drivers/irqchip/irq-gic-v3-its.c
> @@ -2452,6 +2452,10 @@ static bool allocate_vpe_l2_table(int cpu, u32 id)
>   	if (!gic_rdists->has_rvpeid)
>   		return true;
>   
> +	/* Skip non-present CPUs */
> +	if (!base)
> +		return true;
> +

Thanks for fixing this,

Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>

>   	val  = gicr_read_vpropbaser(base + SZ_128K + GICR_VPROPBASER);
>   
>   	esz  = FIELD_GET(GICR_VPROPBASER_4_1_ENTRY_SIZE, val) + 1;
> 



* Re: [PATCH v4 02/20] irqchip/gic-v3: Use SGIs without active state if offered
  2020-02-14 14:57 ` [PATCH v4 02/20] irqchip/gic-v3: Use SGIs without active state if offered Marc Zyngier
@ 2020-02-17  9:18   ` Zenghui Yu
  0 siblings, 0 replies; 47+ messages in thread
From: Zenghui Yu @ 2020-02-17  9:18 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Thomas Gleixner, Julien Thierry

Hi Marc,

On 2020/2/14 22:57, Marc Zyngier wrote:
> To allow the direct injection of SGIs into a guest, the GICv4.1
> architecture has to sacrifice the Active state so that SGIs look
> a lot like LPIs (they are injected by the same mechanism).
> 
> In order not to break existing software, the architecture offers
> guest OSs the choice: SGIs with or without an active
> state. It is the hypervisor's duty to honor the guest's choice.
> 
> For this, the architecture offers a discovery bit indicating whether
> the GIC supports GICv4.1 SGIs (GICD_TYPER2.nASSGIcap), and another
> bit indicating whether the guest wants Active-less SGIs or not
> (controlled by GICD_CTLR.nASSGIreq).
> 
> A hypervisor not supporting GICv4.1 SGIs would leave nASSGIcap
> clear, and a guest not knowing about GICv4.1 SGIs (or definitely
> wanting an Active state) would leave nASSGIreq clear (both being
> thankfully backward compatible with oler revisions of the GIC).
older?

> 
> Since Linux is perfectly happy without an active state on SGIs,
> inform the hypervisor that we'll use that if offered.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Per the description of these two bits in the commit message,

Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>


Thanks



* Re: [PATCH v4 08/20] irqchip/gic-v4.1: Plumb get/set_irqchip_state SGI callbacks
  2020-02-14 14:57 ` [PATCH v4 08/20] irqchip/gic-v4.1: Plumb get/set_irqchip_state " Marc Zyngier
@ 2020-02-18  7:00   ` Zenghui Yu
  2020-02-18  9:27     ` Marc Zyngier
  0 siblings, 1 reply; 47+ messages in thread
From: Zenghui Yu @ 2020-02-18  7:00 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Thomas Gleixner, Julien Thierry

Hi Marc,

On 2020/2/14 22:57, Marc Zyngier wrote:
> To implement the get/set_irqchip_state callbacks (limited to the
> PENDING state), we have to use a particular set of hacks:
> 
> - Reading the pending state is done by using a pair of new redistributor
>    registers (GICR_VSGIR, GICR_VSGIPENDR), which allow the 16 interrupts
>    state to be retrieved.
> - Setting the pending state is done by generating it as we'd otherwise do
>    for a guest (writing to GITS_SGIR)
> - Clearing the pending state is done by emitting a VSGI command with the
>    "clear" bit set.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>   drivers/irqchip/irq-gic-v3-its.c   | 56 ++++++++++++++++++++++++++++++
>   include/linux/irqchip/arm-gic-v3.h | 14 ++++++++
>   2 files changed, 70 insertions(+)
> 
> diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
> index 1e448d9a16ea..a9753435c4ff 100644
> --- a/drivers/irqchip/irq-gic-v3-its.c
> +++ b/drivers/irqchip/irq-gic-v3-its.c
> @@ -3915,11 +3915,67 @@ static int its_sgi_set_affinity(struct irq_data *d,
>   	return -EINVAL;
>   }
>   
> +static int its_sgi_set_irqchip_state(struct irq_data *d,
> +				     enum irqchip_irq_state which,
> +				     bool state)
> +{
> +	if (which != IRQCHIP_STATE_PENDING)
> +		return -EINVAL;
> +
> +	if (state) {
> +		struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
> +		struct its_node *its = find_4_1_its();
> +		u64 val;
> +
> +		val  = FIELD_PREP(GITS_SGIR_VPEID, vpe->vpe_id);
> +		val |= FIELD_PREP(GITS_SGIR_VINTID, d->hwirq);
> +		writeq_relaxed(val, its->sgir_base + GITS_SGIR - SZ_128K);
> +	} else {
> +		its_configure_sgi(d, true);
> +	}
> +
> +	return 0;
> +}
> +
> +static int its_sgi_get_irqchip_state(struct irq_data *d,
> +				     enum irqchip_irq_state which, bool *val)
> +{
> +	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
> +	void __iomem *base = gic_data_rdist_cpu(vpe->col_idx)->rd_base + SZ_128K;

There might be a race on reading the 'vpe->col_idx' against a concurrent
vPE schedule (col_idx will be modified in its_vpe_set_affinity)? Will we
end up accessing the GICR_VSGI* registers of the old redistributor,
while the vPE is now resident on the new one? Or is it harmful?

The same question for direct_lpi_inv(), where 'vpe->col_idx' will be
used in irq_to_cpuid().

> +	u32 count = 1000000;	/* 1s! */
> +	u32 status;
> +
> +	if (which != IRQCHIP_STATE_PENDING)
> +		return -EINVAL;
> +
> +	writel_relaxed(vpe->vpe_id, base + GICR_VSGIR);
> +	do {
> +		status = readl_relaxed(base + GICR_VSGIPENDR);
> +		if (!(status & GICR_VSGIPENDR_BUSY))
> +			goto out;
> +
> +		count--;
> +		if (!count) {
> +			pr_err_ratelimited("Unable to get SGI status\n");
> +			goto out;
> +		}
> +		cpu_relax();
> +		udelay(1);
> +	} while(count);
> +
> +out:
> +	*val = !!(status & (1 << d->hwirq));
> +
> +	return 0;
> +}
> +
>   static struct irq_chip its_sgi_irq_chip = {
>   	.name			= "GICv4.1-sgi",
>   	.irq_mask		= its_sgi_mask_irq,
>   	.irq_unmask		= its_sgi_unmask_irq,
>   	.irq_set_affinity	= its_sgi_set_affinity,
> +	.irq_set_irqchip_state	= its_sgi_set_irqchip_state,
> +	.irq_get_irqchip_state	= its_sgi_get_irqchip_state,
>   };
>   
>   static int its_sgi_irq_domain_alloc(struct irq_domain *domain,
> diff --git a/include/linux/irqchip/arm-gic-v3.h b/include/linux/irqchip/arm-gic-v3.h
> index a89578884263..64da945486ac 100644
> --- a/include/linux/irqchip/arm-gic-v3.h
> +++ b/include/linux/irqchip/arm-gic-v3.h
> @@ -345,6 +345,15 @@
>   #define GICR_VPENDBASER_4_1_VGRP1EN	(1ULL << 58)
>   #define GICR_VPENDBASER_4_1_VPEID	GENMASK_ULL(15, 0)
>   
> +#define GICR_VSGIR			0x0080
> +
> +#define GICR_VSGIR_VPEID		GENMASK(15, 0)
> +
> +#define GICR_VSGIPENDR			0x0088
> +
> +#define GICR_VSGIPENDR_BUSY		(1U << 31)
> +#define GICR_VSGIPENDR_PENDING		GENMASK(15, 0)
> +
>   /*
>    * ITS registers, offsets from ITS_base
>    */
> @@ -368,6 +377,11 @@
>   
>   #define GITS_TRANSLATER			0x10040
>   
> +#define GITS_SGIR			0x20020
> +
> +#define GITS_SGIR_VPEID			GENMASK_ULL(47, 32)
> +#define GITS_SGIR_VINTID		GENMASK_ULL(7, 0)

The spec says vINTID field is [3:0] of the GITS_SGIR.
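i.e., if I'm reading the spec right, the define would become something
like:

#define GITS_SGIR_VINTID		GENMASK_ULL(3, 0)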


Thanks,
Zenghui

> +
>   #define GITS_CTLR_ENABLE		(1U << 0)
>   #define GITS_CTLR_ImDe			(1U << 1)
>   #define	GITS_CTLR_ITS_NUMBER_SHIFT	4
> 



* Re: [PATCH v4 06/20] irqchip/gic-v4.1: Add initial SGI configuration
  2020-02-14 14:57 ` [PATCH v4 06/20] irqchip/gic-v4.1: Add initial SGI configuration Marc Zyngier
@ 2020-02-18  7:25   ` Zenghui Yu
  2020-02-18  9:46     ` Marc Zyngier
  0 siblings, 1 reply; 47+ messages in thread
From: Zenghui Yu @ 2020-02-18  7:25 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Thomas Gleixner, Julien Thierry

Hi Marc,

On 2020/2/14 22:57, Marc Zyngier wrote:
> The GICv4.1 ITS has yet another new command (VSGI) which allows
> a VPE-targeted SGI to be configured (or have its pending state
> cleared). Add support for this command and plumb it into the
> activate irqdomain callback so that it is ready to be used.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>   drivers/irqchip/irq-gic-v3-its.c   | 76 +++++++++++++++++++++++++++++-
>   include/linux/irqchip/arm-gic-v3.h |  3 +-
>   2 files changed, 77 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
> index 6121c8f2a8ce..229e4ae9c59b 100644
> --- a/drivers/irqchip/irq-gic-v3-its.c
> +++ b/drivers/irqchip/irq-gic-v3-its.c
> @@ -354,6 +354,15 @@ struct its_cmd_desc {
>   		struct {
>   			struct its_vpe *vpe;
>   		} its_invdb_cmd;
> +
> +		struct {
> +			struct its_vpe *vpe;
> +			u8 sgi;
> +			u8 priority;
> +			bool enable;
> +			bool group;
> +			bool clear;
> +		} its_vsgi_cmd;
>   	};
>   };
>   
> @@ -502,6 +511,31 @@ static void its_encode_db(struct its_cmd_block *cmd, bool db)
>   	its_mask_encode(&cmd->raw_cmd[2], db, 63, 63);
>   }
>   
> +static void its_encode_sgi_intid(struct its_cmd_block *cmd, u8 sgi)
> +{
> +	its_mask_encode(&cmd->raw_cmd[0], sgi, 35, 32);
> +}
> +
> +static void its_encode_sgi_priority(struct its_cmd_block *cmd, u8 prio)
> +{
> +	its_mask_encode(&cmd->raw_cmd[0], prio >> 4, 23, 20);
> +}
> +
> +static void its_encode_sgi_group(struct its_cmd_block *cmd, bool grp)
> +{
> +	its_mask_encode(&cmd->raw_cmd[0], grp, 10, 10);
> +}
> +
> +static void its_encode_sgi_clear(struct its_cmd_block *cmd, bool clr)
> +{
> +	its_mask_encode(&cmd->raw_cmd[0], clr, 9, 9);
> +}
> +
> +static void its_encode_sgi_enable(struct its_cmd_block *cmd, bool en)
> +{
> +	its_mask_encode(&cmd->raw_cmd[0], en, 8, 8);
> +}
> +
>   static inline void its_fixup_cmd(struct its_cmd_block *cmd)
>   {
>   	/* Let's fixup BE commands */
> @@ -867,6 +901,26 @@ static struct its_vpe *its_build_invdb_cmd(struct its_node *its,
>   	return valid_vpe(its, desc->its_invdb_cmd.vpe);
>   }
>   
> +static struct its_vpe *its_build_vsgi_cmd(struct its_node *its,
> +					  struct its_cmd_block *cmd,
> +					  struct its_cmd_desc *desc)
> +{
> +	if (WARN_ON(!is_v4_1(its)))
> +		return NULL;
> +
> +	its_encode_cmd(cmd, GITS_CMD_VSGI);
> +	its_encode_vpeid(cmd, desc->its_vsgi_cmd.vpe->vpe_id);
> +	its_encode_sgi_intid(cmd, desc->its_vsgi_cmd.sgi);
> +	its_encode_sgi_priority(cmd, desc->its_vsgi_cmd.priority);
> +	its_encode_sgi_group(cmd, desc->its_vsgi_cmd.group);
> +	its_encode_sgi_clear(cmd, desc->its_vsgi_cmd.clear);
> +	its_encode_sgi_enable(cmd, desc->its_vsgi_cmd.enable);
> +
> +	its_fixup_cmd(cmd);
> +
> +	return valid_vpe(its, desc->its_vsgi_cmd.vpe);
> +}
> +
>   static u64 its_cmd_ptr_to_offset(struct its_node *its,
>   				 struct its_cmd_block *ptr)
>   {
> @@ -3823,6 +3877,21 @@ static struct irq_chip its_vpe_4_1_irq_chip = {
>   	.irq_set_vcpu_affinity	= its_vpe_4_1_set_vcpu_affinity,
>   };
>   
> +static void its_configure_sgi(struct irq_data *d, bool clear)
> +{
> +	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
> +	struct its_cmd_desc desc;
> +
> +	desc.its_vsgi_cmd.vpe = vpe;
> +	desc.its_vsgi_cmd.sgi = d->hwirq;
> +	desc.its_vsgi_cmd.priority = vpe->sgi_config[d->hwirq].priority;
> +	desc.its_vsgi_cmd.enable = vpe->sgi_config[d->hwirq].enabled;
> +	desc.its_vsgi_cmd.group = vpe->sgi_config[d->hwirq].group;
> +	desc.its_vsgi_cmd.clear = clear;
> +
> +	its_send_single_vcommand(find_4_1_its(), its_build_vsgi_cmd, &desc);
> +}
> +
>   static int its_sgi_set_affinity(struct irq_data *d,
>   				const struct cpumask *mask_val,
>   				bool force)
> @@ -3868,13 +3937,18 @@ static void its_sgi_irq_domain_free(struct irq_domain *domain,
>   static int its_sgi_irq_domain_activate(struct irq_domain *domain,
>   				       struct irq_data *d, bool reserve)
>   {
> +	/* Write out the initial SGI configuration */
> +	its_configure_sgi(d, false);
>   	return 0;
>   }
>   
>   static void its_sgi_irq_domain_deactivate(struct irq_domain *domain,
>   					  struct irq_data *d)
>   {
> -	/* Nothing to do */
> +	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
> +
> +	vpe->sgi_config[d->hwirq].enabled = false;
> +	its_configure_sgi(d, true);

The spec says, when C==1, VSGI clears the pending state of the vSGI,
leaving the configuration unchanged.  So should we first clear the
pending state and then disable vSGI (let E==0)?
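Something like the below is what I have in mind (only a sketch, reusing
the its_configure_sgi() helper introduced by this patch):

	/* Clear the pending state first; C==1 leaves the config untouched */
	its_configure_sgi(d, true);

	/* ... and only then write the configuration back with E==0 */
	vpe->sgi_config[d->hwirq].enabled = false;
	its_configure_sgi(d, false);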


Thanks,
Zenghui

>   }
>   
>   static struct irq_domain_ops its_sgi_domain_ops = {
> diff --git a/include/linux/irqchip/arm-gic-v3.h b/include/linux/irqchip/arm-gic-v3.h
> index c29a02678a6f..a89578884263 100644
> --- a/include/linux/irqchip/arm-gic-v3.h
> +++ b/include/linux/irqchip/arm-gic-v3.h
> @@ -502,8 +502,9 @@
>   #define GITS_CMD_VMAPTI			GITS_CMD_GICv4(GITS_CMD_MAPTI)
>   #define GITS_CMD_VMOVI			GITS_CMD_GICv4(GITS_CMD_MOVI)
>   #define GITS_CMD_VSYNC			GITS_CMD_GICv4(GITS_CMD_SYNC)
> -/* VMOVP and INVDB are the odd ones, as they dont have a physical counterpart */
> +/* VMOVP, VSGI and INVDB are the odd ones, as they dont have a physical counterpart */
>   #define GITS_CMD_VMOVP			GITS_CMD_GICv4(2)
> +#define GITS_CMD_VSGI			GITS_CMD_GICv4(3)
>   #define GITS_CMD_INVDB			GITS_CMD_GICv4(0xe)
>   
>   /*
> 



* Re: [PATCH v4 15/20] KVM: arm64: GICv4.1: Add direct injection capability to SGI registers
  2020-02-14 14:57 ` [PATCH v4 15/20] KVM: arm64: GICv4.1: Add direct injection capability to SGI registers Marc Zyngier
@ 2020-02-18  8:46   ` Zenghui Yu
  2020-02-18  9:41     ` Marc Zyngier
  0 siblings, 1 reply; 47+ messages in thread
From: Zenghui Yu @ 2020-02-18  8:46 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Thomas Gleixner, Julien Thierry

Hi Marc,

On 2020/2/14 22:57, Marc Zyngier wrote:
> Most of the GICv3 emulation code that deals with SGIs now has to be
> aware of the v4.1 capabilities in order to benefit from it.
> 
> Add such support, keyed on the interrupt having the hw flag set and
> being a SGI.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>   virt/kvm/arm/vgic/vgic-mmio-v3.c | 15 +++++-
>   virt/kvm/arm/vgic/vgic-mmio.c    | 88 ++++++++++++++++++++++++++++++--
>   2 files changed, 96 insertions(+), 7 deletions(-)
> 
> diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c b/virt/kvm/arm/vgic/vgic-mmio-v3.c
> index ebc218840fc2..de89da76a379 100644
> --- a/virt/kvm/arm/vgic/vgic-mmio-v3.c
> +++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c
> @@ -6,6 +6,7 @@
>   #include <linux/irqchip/arm-gic-v3.h>
>   #include <linux/kvm.h>
>   #include <linux/kvm_host.h>
> +#include <linux/interrupt.h>
>   #include <kvm/iodev.h>
>   #include <kvm/arm_vgic.h>
>   
> @@ -942,8 +943,18 @@ void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg, bool allow_group1)
>   		 * generate interrupts of either group.
>   		 */
>   		if (!irq->group || allow_group1) {
> -			irq->pending_latch = true;
> -			vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
> +			if (!irq->hw) {
> +				irq->pending_latch = true;
> +				vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
> +			} else {
> +				/* HW SGI? Ask the GIC to inject it */
> +				int err;
> +				err = irq_set_irqchip_state(irq->host_irq,
> +							    IRQCHIP_STATE_PENDING,
> +							    true);
> +				WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
> +				raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
> +			}
>   		} else {
>   			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
>   		}
> diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
> index d656ebd5f9d4..0a1fb61e5b89 100644
> --- a/virt/kvm/arm/vgic/vgic-mmio.c
> +++ b/virt/kvm/arm/vgic/vgic-mmio.c
> @@ -5,6 +5,8 @@
>   
>   #include <linux/bitops.h>
>   #include <linux/bsearch.h>
> +#include <linux/interrupt.h>
> +#include <linux/irq.h>
>   #include <linux/kvm.h>
>   #include <linux/kvm_host.h>
>   #include <kvm/iodev.h>
> @@ -59,6 +61,11 @@ unsigned long vgic_mmio_read_group(struct kvm_vcpu *vcpu,
>   	return value;
>   }
>   
> +static void vgic_update_vsgi(struct vgic_irq *irq)
> +{
> +	WARN_ON(its_prop_update_vsgi(irq->host_irq, irq->priority, irq->group));
> +}
> +
>   void vgic_mmio_write_group(struct kvm_vcpu *vcpu, gpa_t addr,
>   			   unsigned int len, unsigned long val)
>   {
> @@ -71,7 +78,12 @@ void vgic_mmio_write_group(struct kvm_vcpu *vcpu, gpa_t addr,
>   
>   		raw_spin_lock_irqsave(&irq->irq_lock, flags);
>   		irq->group = !!(val & BIT(i));
> -		vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
> +		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
> +			vgic_update_vsgi(irq);
> +			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
> +		} else {
> +			vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
> +		}
>   
>   		vgic_put_irq(vcpu->kvm, irq);
>   	}
> @@ -113,7 +125,21 @@ void vgic_mmio_write_senable(struct kvm_vcpu *vcpu,
>   		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
>   
>   		raw_spin_lock_irqsave(&irq->irq_lock, flags);
> -		if (vgic_irq_is_mapped_level(irq)) {
> +		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
> +			if (!irq->enabled) {
> +				struct irq_data *data;
> +
> +				irq->enabled = true;
> +				data = &irq_to_desc(irq->host_irq)->irq_data;
> +				while (irqd_irq_disabled(data))
> +					enable_irq(irq->host_irq);
> +			}
> +
> +			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
> +			vgic_put_irq(vcpu->kvm, irq);
> +
> +			continue;
> +		} else if (vgic_irq_is_mapped_level(irq)) {
>   			bool was_high = irq->line_level;
>   
>   			/*
> @@ -148,6 +174,8 @@ void vgic_mmio_write_cenable(struct kvm_vcpu *vcpu,
>   		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
>   
>   		raw_spin_lock_irqsave(&irq->irq_lock, flags);
> +		if (irq->hw && vgic_irq_is_sgi(irq->intid) && irq->enabled)
> +			disable_irq_nosync(irq->host_irq);
>   
>   		irq->enabled = false;
>   
> @@ -167,10 +195,22 @@ unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
>   	for (i = 0; i < len * 8; i++) {
>   		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
>   		unsigned long flags;
> +		bool val;
>   
>   		raw_spin_lock_irqsave(&irq->irq_lock, flags);
> -		if (irq_is_pending(irq))
> -			value |= (1U << i);
> +		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
> +			int err;
> +
> +			val = false;
> +			err = irq_get_irqchip_state(irq->host_irq,
> +						    IRQCHIP_STATE_PENDING,
> +						    &val);
> +			WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
> +		} else {
> +			val = irq_is_pending(irq);
> +		}
> +
> +		value |= ((u32)val << i);
>   		raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
>   
>   		vgic_put_irq(vcpu->kvm, irq);
> @@ -227,6 +267,21 @@ void vgic_mmio_write_spending(struct kvm_vcpu *vcpu,
>   		}
>   
>   		raw_spin_lock_irqsave(&irq->irq_lock, flags);
> +
> +		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
> +			/* HW SGI? Ask the GIC to inject it */
> +			int err;
> +			err = irq_set_irqchip_state(irq->host_irq,
> +						    IRQCHIP_STATE_PENDING,
> +						    true);
> +			WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
> +
> +			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
> +			vgic_put_irq(vcpu->kvm, irq);
> +
> +			continue;
> +		}
> +
>   		if (irq->hw)
>   			vgic_hw_irq_spending(vcpu, irq, is_uaccess);
>   		else

Should we consider taking the GICv4.1 support into uaccess_{read/write}
callbacks for GICR_ISPENDR0 so that userspace can properly save/restore
the pending state of GICv4.1 vSGIs?

I *think* we can do it because on restoration, GICD_CTLR(.nASSGIreq) is
restored before GICR_ISPENDR0.  So we know whether we're restoring
pending for vSGIs, and we can restore it to the HW level if v4.1 is
supported by GIC. Otherwise restore it by the normal way.

And saving is easy with the get_irqchip_state callback, right?
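For the save side, something like the below inside the usual per-bit
loop of a uaccess read handler should do (only a sketch; 'value' and
'i' come from that loop):

	if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
		bool pending = false;

		/* The pending state lives in the GIC for HW-backed vSGIs */
		WARN_ON(irq_get_irqchip_state(irq->host_irq,
					      IRQCHIP_STATE_PENDING,
					      &pending));
		if (pending)
			value |= (1U << i);
	} else if (irq->pending_latch) {
		value |= (1U << i);
	}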

> @@ -281,6 +336,20 @@ void vgic_mmio_write_cpending(struct kvm_vcpu *vcpu,
>   
>   		raw_spin_lock_irqsave(&irq->irq_lock, flags);
>   
> +		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
> +			/* HW SGI? Ask the GIC to inject it */

"Ask the GIC to clear its pending state" :-)


Thanks,
Zenghui

> +			int err;
> +			err = irq_set_irqchip_state(irq->host_irq,
> +						    IRQCHIP_STATE_PENDING,
> +						    false);
> +			WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
> +
> +			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
> +			vgic_put_irq(vcpu->kvm, irq);
> +
> +			continue;
> +		}
> +
>   		if (irq->hw)
>   			vgic_hw_irq_cpending(vcpu, irq, is_uaccess);
>   		else
> @@ -330,8 +399,15 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
>   
>   	raw_spin_lock_irqsave(&irq->irq_lock, flags);
>   
> -	if (irq->hw) {
> +	if (irq->hw && !vgic_irq_is_sgi(irq->intid)) {
>   		vgic_hw_irq_change_active(vcpu, irq, active, !requester_vcpu);
> +	} else if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
> +		/*
> +		 * GICv4.1 VSGI feature doesn't track an active state,
> +		 * so let's not kid ourselves, there is nothing we can
> +		 * do here.
> +		 */
> +		irq->active = false;
>   	} else {
>   		u32 model = vcpu->kvm->arch.vgic.vgic_model;
>   		u8 active_source;
> @@ -505,6 +581,8 @@ void vgic_mmio_write_priority(struct kvm_vcpu *vcpu,
>   		raw_spin_lock_irqsave(&irq->irq_lock, flags);
>   		/* Narrow the priority range to what we actually support */
>   		irq->priority = (val >> (i * 8)) & GENMASK(7, 8 - VGIC_PRI_BITS);
> +		if (irq->hw && vgic_irq_is_sgi(irq->intid))
> +			vgic_update_vsgi(irq);
>   		raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
>   
>   		vgic_put_irq(vcpu->kvm, irq);
> 



* Re: [PATCH v4 08/20] irqchip/gic-v4.1: Plumb get/set_irqchip_state SGI callbacks
  2020-02-18  7:00   ` Zenghui Yu
@ 2020-02-18  9:27     ` Marc Zyngier
  2020-02-18 15:31       ` Marc Zyngier
  0 siblings, 1 reply; 47+ messages in thread
From: Marc Zyngier @ 2020-02-18  9:27 UTC (permalink / raw)
  To: Zenghui Yu
  Cc: Lorenzo Pieralisi, Jason Cooper, kvm, Suzuki K Poulose,
	linux-kernel, Eric Auger, Robert Richter, James Morse,
	Julien Thierry, Thomas Gleixner, kvmarm, linux-arm-kernel

Hi Zenghui,

On 2020-02-18 07:00, Zenghui Yu wrote:
> Hi Marc,
> 
> On 2020/2/14 22:57, Marc Zyngier wrote:
>> To implement the get/set_irqchip_state callbacks (limited to the
>> PENDING state), we have to use a particular set of hacks:
>> 
>> - Reading the pending state is done by using a pair of new 
>> redistributor
>>    registers (GICR_VSGIR, GICR_VSGIPENDR), which allow the 16 
>> interrupts
>>    state to be retrieved.
>> - Setting the pending state is done by generating it as we'd otherwise 
>> do
>>    for a guest (writing to GITS_SGIR)
>> - Clearing the pending state is done by emitting a VSGI command with 
>> the
>>    "clear" bit set.
>> 
>> Signed-off-by: Marc Zyngier <maz@kernel.org>
>> ---
>>   drivers/irqchip/irq-gic-v3-its.c   | 56 
>> ++++++++++++++++++++++++++++++
>>   include/linux/irqchip/arm-gic-v3.h | 14 ++++++++
>>   2 files changed, 70 insertions(+)
>> 
>> diff --git a/drivers/irqchip/irq-gic-v3-its.c 
>> b/drivers/irqchip/irq-gic-v3-its.c
>> index 1e448d9a16ea..a9753435c4ff 100644
>> --- a/drivers/irqchip/irq-gic-v3-its.c
>> +++ b/drivers/irqchip/irq-gic-v3-its.c
>> @@ -3915,11 +3915,67 @@ static int its_sgi_set_affinity(struct 
>> irq_data *d,
>>   	return -EINVAL;
>>   }
>>   +static int its_sgi_set_irqchip_state(struct irq_data *d,
>> +				     enum irqchip_irq_state which,
>> +				     bool state)
>> +{
>> +	if (which != IRQCHIP_STATE_PENDING)
>> +		return -EINVAL;
>> +
>> +	if (state) {
>> +		struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
>> +		struct its_node *its = find_4_1_its();
>> +		u64 val;
>> +
>> +		val  = FIELD_PREP(GITS_SGIR_VPEID, vpe->vpe_id);
>> +		val |= FIELD_PREP(GITS_SGIR_VINTID, d->hwirq);
>> +		writeq_relaxed(val, its->sgir_base + GITS_SGIR - SZ_128K);
>> +	} else {
>> +		its_configure_sgi(d, true);
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +static int its_sgi_get_irqchip_state(struct irq_data *d,
>> +				     enum irqchip_irq_state which, bool *val)
>> +{
>> +	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
>> +	void __iomem *base = gic_data_rdist_cpu(vpe->col_idx)->rd_base + 
>> SZ_128K;
> 
> There might be a race on reading the 'vpe->col_idx' against a 
> concurrent
> vPE schedule (col_idx will be modified in its_vpe_set_affinity)? Will 
> we
> end up accessing the GICR_VSGI* registers of the old redistributor,
> while the vPE is now resident on the new one? Or is it harmful?

Very well spotted. There is a potential problem if old and new RDs are not
part of the same CommonLPIAff group.

> The same question for direct_lpi_inv(), where 'vpe->col_idx' will be
> used in irq_to_cpuid().

Same problem indeed. We need to ensure that no VMOVP operation can occur
whilst we use col_idx to access a redistributor. This means a vPE lock of
some sort that will protect the affinity.

But I think there is a slightly more general problem here, which we failed
to see initially: the same issue exists for physical LPIs, as col_map[] can
be updated (its_set_affinity()) in parallel with a direct invalidate.

The good old invalidation through the ITS does guarantee that the two
operations don't overlap, but direct invalidation breaks it.

Let me have a think about it.

> 
>> +	u32 count = 1000000;	/* 1s! */
>> +	u32 status;
>> +
>> +	if (which != IRQCHIP_STATE_PENDING)
>> +		return -EINVAL;
>> +
>> +	writel_relaxed(vpe->vpe_id, base + GICR_VSGIR);
>> +	do {
>> +		status = readl_relaxed(base + GICR_VSGIPENDR);
>> +		if (!(status & GICR_VSGIPENDR_BUSY))
>> +			goto out;
>> +
>> +		count--;
>> +		if (!count) {
>> +			pr_err_ratelimited("Unable to get SGI status\n");
>> +			goto out;
>> +		}
>> +		cpu_relax();
>> +		udelay(1);
>> +	} while(count);
>> +
>> +out:
>> +	*val = !!(status & (1 << d->hwirq));
>> +
>> +	return 0;
>> +}
>> +
>>   static struct irq_chip its_sgi_irq_chip = {
>>   	.name			= "GICv4.1-sgi",
>>   	.irq_mask		= its_sgi_mask_irq,
>>   	.irq_unmask		= its_sgi_unmask_irq,
>>   	.irq_set_affinity	= its_sgi_set_affinity,
>> +	.irq_set_irqchip_state	= its_sgi_set_irqchip_state,
>> +	.irq_get_irqchip_state	= its_sgi_get_irqchip_state,
>>   };
>>     static int its_sgi_irq_domain_alloc(struct irq_domain *domain,
>> diff --git a/include/linux/irqchip/arm-gic-v3.h 
>> b/include/linux/irqchip/arm-gic-v3.h
>> index a89578884263..64da945486ac 100644
>> --- a/include/linux/irqchip/arm-gic-v3.h
>> +++ b/include/linux/irqchip/arm-gic-v3.h
>> @@ -345,6 +345,15 @@
>>   #define GICR_VPENDBASER_4_1_VGRP1EN	(1ULL << 58)
>>   #define GICR_VPENDBASER_4_1_VPEID	GENMASK_ULL(15, 0)
>>   +#define GICR_VSGIR			0x0080
>> +
>> +#define GICR_VSGIR_VPEID		GENMASK(15, 0)
>> +
>> +#define GICR_VSGIPENDR			0x0088
>> +
>> +#define GICR_VSGIPENDR_BUSY		(1U << 31)
>> +#define GICR_VSGIPENDR_PENDING		GENMASK(15, 0)
>> +
>>   /*
>>    * ITS registers, offsets from ITS_base
>>    */
>> @@ -368,6 +377,11 @@
>>     #define GITS_TRANSLATER			0x10040
>>   +#define GITS_SGIR			0x20020
>> +
>> +#define GITS_SGIR_VPEID			GENMASK_ULL(47, 32)
>> +#define GITS_SGIR_VINTID		GENMASK_ULL(7, 0)
> 
> The spec says vINTID field is [3:0] of the GITS_SGIR.

Indeed, well spotted again!

Thanks,

          M.
-- 
Jazz is not dead. It just smells funny...


* Re: [PATCH v4 15/20] KVM: arm64: GICv4.1: Add direct injection capability to SGI registers
  2020-02-18  8:46   ` Zenghui Yu
@ 2020-02-18  9:41     ` Marc Zyngier
  0 siblings, 0 replies; 47+ messages in thread
From: Marc Zyngier @ 2020-02-18  9:41 UTC (permalink / raw)
  To: Zenghui Yu
  Cc: Lorenzo Pieralisi, Jason Cooper, kvm, Suzuki K Poulose,
	linux-kernel, Eric Auger, Robert Richter, James Morse,
	Julien Thierry, Thomas Gleixner, kvmarm, linux-arm-kernel

On 2020-02-18 08:46, Zenghui Yu wrote:
> Hi Marc,
> 
> On 2020/2/14 22:57, Marc Zyngier wrote:
>> Most of the GICv3 emulation code that deals with SGIs now has to be
>> aware of the v4.1 capabilities in order to benefit from it.
>> 
>> Add such support, keyed on the interrupt having the hw flag set and
>> being a SGI.
>> 
>> Signed-off-by: Marc Zyngier <maz@kernel.org>
>> ---
>>   virt/kvm/arm/vgic/vgic-mmio-v3.c | 15 +++++-
>>   virt/kvm/arm/vgic/vgic-mmio.c    | 88 
>> ++++++++++++++++++++++++++++++--
>>   2 files changed, 96 insertions(+), 7 deletions(-)
>> 
>> diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c 
>> b/virt/kvm/arm/vgic/vgic-mmio-v3.c
>> index ebc218840fc2..de89da76a379 100644
>> --- a/virt/kvm/arm/vgic/vgic-mmio-v3.c
>> +++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c
>> @@ -6,6 +6,7 @@
>>   #include <linux/irqchip/arm-gic-v3.h>
>>   #include <linux/kvm.h>
>>   #include <linux/kvm_host.h>
>> +#include <linux/interrupt.h>
>>   #include <kvm/iodev.h>
>>   #include <kvm/arm_vgic.h>
>>   @@ -942,8 +943,18 @@ void vgic_v3_dispatch_sgi(struct kvm_vcpu 
>> *vcpu, u64 reg, bool allow_group1)
>>   		 * generate interrupts of either group.
>>   		 */
>>   		if (!irq->group || allow_group1) {
>> -			irq->pending_latch = true;
>> -			vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
>> +			if (!irq->hw) {
>> +				irq->pending_latch = true;
>> +				vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
>> +			} else {
>> +				/* HW SGI? Ask the GIC to inject it */
>> +				int err;
>> +				err = irq_set_irqchip_state(irq->host_irq,
>> +							    IRQCHIP_STATE_PENDING,
>> +							    true);
>> +				WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
>> +				raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
>> +			}
>>   		} else {
>>   			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
>>   		}
>> diff --git a/virt/kvm/arm/vgic/vgic-mmio.c 
>> b/virt/kvm/arm/vgic/vgic-mmio.c
>> index d656ebd5f9d4..0a1fb61e5b89 100644
>> --- a/virt/kvm/arm/vgic/vgic-mmio.c
>> +++ b/virt/kvm/arm/vgic/vgic-mmio.c
>> @@ -5,6 +5,8 @@
>>     #include <linux/bitops.h>
>>   #include <linux/bsearch.h>
>> +#include <linux/interrupt.h>
>> +#include <linux/irq.h>
>>   #include <linux/kvm.h>
>>   #include <linux/kvm_host.h>
>>   #include <kvm/iodev.h>
>> @@ -59,6 +61,11 @@ unsigned long vgic_mmio_read_group(struct kvm_vcpu 
>> *vcpu,
>>   	return value;
>>   }
>>   +static void vgic_update_vsgi(struct vgic_irq *irq)
>> +{
>> +	WARN_ON(its_prop_update_vsgi(irq->host_irq, irq->priority, 
>> irq->group));
>> +}
>> +
>>   void vgic_mmio_write_group(struct kvm_vcpu *vcpu, gpa_t addr,
>>   			   unsigned int len, unsigned long val)
>>   {
>> @@ -71,7 +78,12 @@ void vgic_mmio_write_group(struct kvm_vcpu *vcpu, 
>> gpa_t addr,
>>     		raw_spin_lock_irqsave(&irq->irq_lock, flags);
>>   		irq->group = !!(val & BIT(i));
>> -		vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
>> +		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
>> +			vgic_update_vsgi(irq);
>> +			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
>> +		} else {
>> +			vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
>> +		}
>>     		vgic_put_irq(vcpu->kvm, irq);
>>   	}
>> @@ -113,7 +125,21 @@ void vgic_mmio_write_senable(struct kvm_vcpu 
>> *vcpu,
>>   		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
>>     		raw_spin_lock_irqsave(&irq->irq_lock, flags);
>> -		if (vgic_irq_is_mapped_level(irq)) {
>> +		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
>> +			if (!irq->enabled) {
>> +				struct irq_data *data;
>> +
>> +				irq->enabled = true;
>> +				data = &irq_to_desc(irq->host_irq)->irq_data;
>> +				while (irqd_irq_disabled(data))
>> +					enable_irq(irq->host_irq);
>> +			}
>> +
>> +			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
>> +			vgic_put_irq(vcpu->kvm, irq);
>> +
>> +			continue;
>> +		} else if (vgic_irq_is_mapped_level(irq)) {
>>   			bool was_high = irq->line_level;
>>     			/*
>> @@ -148,6 +174,8 @@ void vgic_mmio_write_cenable(struct kvm_vcpu 
>> *vcpu,
>>   		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
>>     		raw_spin_lock_irqsave(&irq->irq_lock, flags);
>> +		if (irq->hw && vgic_irq_is_sgi(irq->intid) && irq->enabled)
>> +			disable_irq_nosync(irq->host_irq);
>>     		irq->enabled = false;
>>   @@ -167,10 +195,22 @@ unsigned long vgic_mmio_read_pending(struct 
>> kvm_vcpu *vcpu,
>>   	for (i = 0; i < len * 8; i++) {
>>   		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
>>   		unsigned long flags;
>> +		bool val;
>>     		raw_spin_lock_irqsave(&irq->irq_lock, flags);
>> -		if (irq_is_pending(irq))
>> -			value |= (1U << i);
>> +		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
>> +			int err;
>> +
>> +			val = false;
>> +			err = irq_get_irqchip_state(irq->host_irq,
>> +						    IRQCHIP_STATE_PENDING,
>> +						    &val);
>> +			WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
>> +		} else {
>> +			val = irq_is_pending(irq);
>> +		}
>> +
>> +		value |= ((u32)val << i);
>>   		raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
>>     		vgic_put_irq(vcpu->kvm, irq);
>> @@ -227,6 +267,21 @@ void vgic_mmio_write_spending(struct kvm_vcpu 
>> *vcpu,
>>   		}
>>     		raw_spin_lock_irqsave(&irq->irq_lock, flags);
>> +
>> +		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
>> +			/* HW SGI? Ask the GIC to inject it */
>> +			int err;
>> +			err = irq_set_irqchip_state(irq->host_irq,
>> +						    IRQCHIP_STATE_PENDING,
>> +						    true);
>> +			WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
>> +
>> +			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
>> +			vgic_put_irq(vcpu->kvm, irq);
>> +
>> +			continue;
>> +		}
>> +
>>   		if (irq->hw)
>>   			vgic_hw_irq_spending(vcpu, irq, is_uaccess);
>>   		else
> 
> Should we consider taking the GICv4.1 support into uaccess_{read/write}
> callbacks for GICR_ISPENDR0 so that userspace can properly save/restore
> the pending state of GICv4.1 vSGIs?
> 
> I *think* we can do it because on restoration, GICD_CTLR(.nASSGIreq) is
> restored before GICR_ISPENDR0.  So we know whether we're restoring
> pending for vSGIs, and we can restore it to the HW level if v4.1 is
> supported by GIC. Otherwise restore it by the normal way.
> 
> And saving is easy with the get_irqchip_state callback, right?

Yes, this should be pretty easy to do, but I haven't completely worked out
the ordering dependencies (you're way ahead of me here!).

There is still a chance that userspace will play with us trying to set and
reset nASSGIreq, so we need to define what is acceptable...
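
For the save side at least, I'd expect something along these lines to do
the trick (only a sketch: none of this exists yet, and the helper name is
made up):

/* Sketch: GICR_ISPENDR0 uaccess read, folding in the vSGI HW state */
static unsigned long vgic_v3_uaccess_read_pending(struct kvm_vcpu *vcpu,
						  gpa_t addr, unsigned int len)
{
	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
	u32 value = 0;
	int i;

	for (i = 0; i < len * 8; i++) {
		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
		bool state = irq->pending_latch;

		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
			int err;

			/* vSGI: the pending state lives in the GIC */
			err = irq_get_irqchip_state(irq->host_irq,
						    IRQCHIP_STATE_PENDING,
						    &state);
			WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
		}

		if (state)
			value |= (1U << i);

		vgic_put_irq(vcpu->kvm, irq);
	}

	return value;
}

The restore side is where the ordering against GICD_CTLR.nASSGIreq needs
to be nailed down first.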

> 
>> @@ -281,6 +336,20 @@ void vgic_mmio_write_cpending(struct kvm_vcpu 
>> *vcpu,
>>     		raw_spin_lock_irqsave(&irq->irq_lock, flags);
>>   +		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
>> +			/* HW SGI? Ask the GIC to inject it */
> 
> "Ask the GIC to clear its pending state" :-)

One day, I'll ban copy/paste from my editor... ;-)

         M.
-- 
Jazz is not dead. It just smells funny...


* Re: [PATCH v4 06/20] irqchip/gic-v4.1: Add initial SGI configuration
  2020-02-18  7:25   ` Zenghui Yu
@ 2020-02-18  9:46     ` Marc Zyngier
  2020-02-20  3:25       ` Zenghui Yu
  0 siblings, 1 reply; 47+ messages in thread
From: Marc Zyngier @ 2020-02-18  9:46 UTC (permalink / raw)
  To: Zenghui Yu
  Cc: Lorenzo Pieralisi, Jason Cooper, kvm, Suzuki K Poulose,
	linux-kernel, Eric Auger, Robert Richter, James Morse,
	Julien Thierry, Thomas Gleixner, kvmarm, linux-arm-kernel

Hi Zenghui,

On 2020-02-18 07:25, Zenghui Yu wrote:
> Hi Marc,

[...]

>>     static void its_sgi_irq_domain_deactivate(struct irq_domain 
>> *domain,
>>   					  struct irq_data *d)
>>   {
>> -	/* Nothing to do */
>> +	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
>> +
>> +	vpe->sgi_config[d->hwirq].enabled = false;
>> +	its_configure_sgi(d, true);
> 
> The spec says, when C==1, VSGI clears the pending state of the vSGI,
> leaving the configuration unchanged.  So should we first clear the
> pending state and then disable vSGI (let E==0)?

Right you are again. We need two commands, not just one (the pseudocode is
pretty explicit).
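
Something like this, I suppose (untested sketch, just to spell out the
two-command sequence):

static void its_sgi_irq_domain_deactivate(struct irq_domain *domain,
					  struct irq_data *d)
{
	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);

	/*
	 * The VSGI command is a bit awkward:
	 *
	 * - To change the configuration, CLEAR must be 0, which leaves
	 *   the pending state unchanged.
	 * - To clear the pending state, CLEAR must be 1, which leaves
	 *   the configuration unchanged.
	 *
	 * You can't do both at once, hence the two commands below:
	 * first clear the pending state, then disable the vSGI.
	 */
	vpe->sgi_config[d->hwirq].enabled = false;
	its_configure_sgi(d, true);
	its_configure_sgi(d, false);
}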

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...


* Re: [PATCH v4 08/20] irqchip/gic-v4.1: Plumb get/set_irqchip_state SGI callbacks
  2020-02-18  9:27     ` Marc Zyngier
@ 2020-02-18 15:31       ` Marc Zyngier
  2020-02-19 11:50         ` Zenghui Yu
  2020-02-20  3:11         ` Zenghui Yu
  0 siblings, 2 replies; 47+ messages in thread
From: Marc Zyngier @ 2020-02-18 15:31 UTC (permalink / raw)
  To: Zenghui Yu
  Cc: Lorenzo Pieralisi, Jason Cooper, kvm, Suzuki K Poulose,
	linux-kernel, Eric Auger, Robert Richter, James Morse,
	Julien Thierry, Thomas Gleixner, kvmarm, linux-arm-kernel

Hi Zenghui,

On 2020-02-18 09:27, Marc Zyngier wrote:
> Hi Zenghui,
> 
> On 2020-02-18 07:00, Zenghui Yu wrote:
>> Hi Marc,

[...]

>> There might be a race on reading the 'vpe->col_idx' against a 
>> concurrent
>> vPE schedule (col_idx will be modified in its_vpe_set_affinity)? Will 
>> we
>> end up accessing the GICR_VSGI* registers of the old redistributor,
>> while the vPE is now resident on the new one? Or is it harmful?
> 
> Very well spotted. There is a potential problem if old and new RDs are 
> not part
> of the same CommonLPIAff group.
> 
>> The same question for direct_lpi_inv(), where 'vpe->col_idx' will be
>> used in irq_to_cpuid().
> 
> Same problem indeed. We need to ensure that no VMOVP operation can 
> occur whilst
> we use col_idx to access a redistributor. This means a vPE lock of some 
> sort
> that will protect the affinity.
> 
> But I think there is a slightly more general problem here, which we 
> failed to
> see initially: the same issue exists for physical LPIs, as col_map[] 
> can be
> updated (its_set_affinity()) in parallel with a direct invalidate.
> 
> The good old invalidation through the ITS does guarantee that the two 
> operation
> don't overlap, but direct invalidation breaks it.
> 
> Let me have a think about it.

So I've thought about it and written a patch, and I don't really like the
look of it. This is pretty invasive, and we end up serializing a lot more
than we used to (the repurposing of vlpi_lock to a general "lpi mapping
lock" is probably too coarse).

It of course needs splitting over at least three patches, but it'd be good
if you could have a look (applies on top of the whole series).

Thanks,

         M.

diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 7656b353a95f..0ed286dba827 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -144,7 +144,7 @@ struct event_lpi_map {
  	u16			*col_map;
  	irq_hw_number_t		lpi_base;
  	int			nr_lpis;
-	raw_spinlock_t		vlpi_lock;
+	raw_spinlock_t		map_lock;
  	struct its_vm		*vm;
  	struct its_vlpi_map	*vlpi_maps;
  	int			nr_vlpis;
@@ -240,15 +240,33 @@ static struct its_vlpi_map *get_vlpi_map(struct irq_data *d)
  	return NULL;
  }

-static int irq_to_cpuid(struct irq_data *d)
+static int irq_to_cpuid_lock(struct irq_data *d, unsigned long *flags)
  {
-	struct its_device *its_dev = irq_data_get_irq_chip_data(d);
  	struct its_vlpi_map *map = get_vlpi_map(d);
+	int cpu;

-	if (map)
-		return map->vpe->col_idx;
+	if (map) {
+		raw_spin_lock_irqsave(&map->vpe->vpe_lock, *flags);
+		cpu = map->vpe->col_idx;
+	} else {
+		struct its_device *its_dev = irq_data_get_irq_chip_data(d);
+		raw_spin_lock_irqsave(&its_dev->event_map.map_lock, *flags);
+		cpu = its_dev->event_map.col_map[its_get_event_id(d)];
+	}

-	return its_dev->event_map.col_map[its_get_event_id(d)];
+	return cpu;
+}
+
+static void irq_to_cpuid_unlock(struct irq_data *d, unsigned long flags)
+{
+	struct its_vlpi_map *map = get_vlpi_map(d);
+
+	if (map) {
+		raw_spin_unlock_irqrestore(&map->vpe->vpe_lock, flags);
+	} else {
+		struct its_device *its_dev = irq_data_get_irq_chip_data(d);
+		raw_spin_unlock_irqrestore(&its_dev->event_map.map_lock, flags);
+	}
  }

  static struct its_collection *valid_col(struct its_collection *col)
@@ -1384,6 +1402,8 @@ static void direct_lpi_inv(struct irq_data *d)
  {
  	struct its_vlpi_map *map = get_vlpi_map(d);
  	void __iomem *rdbase;
+	unsigned long flags;
+	int cpu;
  	u64 val;

  	if (map) {
@@ -1399,10 +1419,12 @@ static void direct_lpi_inv(struct irq_data *d)
  	}

  	/* Target the redistributor this LPI is currently routed to */
-	rdbase = per_cpu_ptr(gic_rdists->rdist, irq_to_cpuid(d))->rd_base;
+	cpu = irq_to_cpuid_lock(d, &flags);
+	rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base;
  	gic_write_lpir(val, rdbase + GICR_INVLPIR);

  	wait_for_syncr(rdbase);
+	irq_to_cpuid_unlock(d, flags);
  }

  static void lpi_update_config(struct irq_data *d, u8 clr, u8 set)
@@ -1471,11 +1493,11 @@ static void its_unmask_irq(struct irq_data *d)
  static int its_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
  			    bool force)
  {
-	unsigned int cpu;
  	const struct cpumask *cpu_mask = cpu_online_mask;
  	struct its_device *its_dev = irq_data_get_irq_chip_data(d);
  	struct its_collection *target_col;
-	u32 id = its_get_event_id(d);
+	unsigned int from, cpu;
+	unsigned long flags;

  	/* A forwarded interrupt should use irq_set_vcpu_affinity */
  	if (irqd_is_forwarded_to_vcpu(d))
@@ -1496,12 +1518,16 @@ static int its_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
  		return -EINVAL;

  	/* don't set the affinity when the target cpu is same as current one */
-	if (cpu != its_dev->event_map.col_map[id]) {
+	from = irq_to_cpuid_lock(d, &flags);
+	if (cpu != from) {
+		u32 id = its_get_event_id(d);
+
  		target_col = &its_dev->its->collections[cpu];
  		its_send_movi(its_dev, target_col, id);
  		its_dev->event_map.col_map[id] = cpu;
  		irq_data_update_effective_affinity(d, cpumask_of(cpu));
  	}
+	irq_to_cpuid_unlock(d, flags);

  	return IRQ_SET_MASK_OK_DONE;
  }
@@ -1636,7 +1662,7 @@ static int its_vlpi_map(struct irq_data *d, struct its_cmd_info *info)
  	if (!info->map)
  		return -EINVAL;

-	raw_spin_lock(&its_dev->event_map.vlpi_lock);
+	raw_spin_lock(&its_dev->event_map.map_lock);

  	if (!its_dev->event_map.vm) {
  		struct its_vlpi_map *maps;
@@ -1685,7 +1711,7 @@ static int its_vlpi_map(struct irq_data *d, struct its_cmd_info *info)
  	}

  out:
-	raw_spin_unlock(&its_dev->event_map.vlpi_lock);
+	raw_spin_unlock(&its_dev->event_map.map_lock);
  	return ret;
  }

@@ -1695,7 +1721,7 @@ static int its_vlpi_get(struct irq_data *d, struct its_cmd_info *info)
  	struct its_vlpi_map *map;
  	int ret = 0;

-	raw_spin_lock(&its_dev->event_map.vlpi_lock);
+	raw_spin_lock(&its_dev->event_map.map_lock);

  	map = get_vlpi_map(d);

@@ -1708,7 +1734,7 @@ static int its_vlpi_get(struct irq_data *d, struct its_cmd_info *info)
  	*info->map = *map;

  out:
-	raw_spin_unlock(&its_dev->event_map.vlpi_lock);
+	raw_spin_unlock(&its_dev->event_map.map_lock);
  	return ret;
  }

@@ -1718,7 +1744,7 @@ static int its_vlpi_unmap(struct irq_data *d)
  	u32 event = its_get_event_id(d);
  	int ret = 0;

-	raw_spin_lock(&its_dev->event_map.vlpi_lock);
+	raw_spin_lock(&its_dev->event_map.map_lock);

  	if (!its_dev->event_map.vm || !irqd_is_forwarded_to_vcpu(d)) {
  		ret = -EINVAL;
@@ -1748,7 +1774,7 @@ static int its_vlpi_unmap(struct irq_data *d)
  	}

  out:
-	raw_spin_unlock(&its_dev->event_map.vlpi_lock);
+	raw_spin_unlock(&its_dev->event_map.map_lock);
  	return ret;
  }

@@ -3193,7 +3219,7 @@ static struct its_device *its_create_device(struct its_node *its, u32 dev_id,
  	dev->event_map.col_map = col_map;
  	dev->event_map.lpi_base = lpi_base;
  	dev->event_map.nr_lpis = nr_lpis;
-	raw_spin_lock_init(&dev->event_map.vlpi_lock);
+	raw_spin_lock_init(&dev->event_map.map_lock);
  	dev->device_id = dev_id;
  	INIT_LIST_HEAD(&dev->entry);

@@ -3560,6 +3586,7 @@ static int its_vpe_set_affinity(struct irq_data *d,
  {
  	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
  	int from, cpu = cpumask_first(mask_val);
+	unsigned long flags;

  	/*
  	 * Changing affinity is mega expensive, so let's be as lazy as
@@ -3567,6 +3594,7 @@ static int its_vpe_set_affinity(struct irq_data *d,
  	 * into the proxy device, we need to move the doorbell
  	 * interrupt to its new location.
  	 */
+	raw_spin_lock_irqsave(&vpe->vpe_lock, flags);
  	if (vpe->col_idx == cpu)
  		goto out;

@@ -3586,6 +3614,7 @@ static int its_vpe_set_affinity(struct irq_data *d,

  out:
  	irq_data_update_effective_affinity(d, cpumask_of(cpu));
+	raw_spin_unlock_irqrestore(&vpe->vpe_lock, flags);

  	return IRQ_SET_MASK_OK_DONE;
  }
@@ -3695,11 +3724,15 @@ static void its_vpe_send_inv(struct irq_data *d)

  	if (gic_rdists->has_direct_lpi) {
  		void __iomem *rdbase;
+		unsigned long flags;
+		int cpu;

  		/* Target the redistributor this VPE is currently known on */
-		rdbase = per_cpu_ptr(gic_rdists->rdist, vpe->col_idx)->rd_base;
+		cpu = irq_to_cpuid_lock(d, &flags);
+		rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base;
  		gic_write_lpir(d->parent_data->hwirq, rdbase + GICR_INVLPIR);
  		wait_for_syncr(rdbase);
+		irq_to_cpuid_unlock(d, flags);
  	} else {
  		its_vpe_send_cmd(vpe, its_send_inv);
  	}
@@ -3735,14 +3768,18 @@ static int its_vpe_set_irqchip_state(struct irq_data *d,

  	if (gic_rdists->has_direct_lpi) {
  		void __iomem *rdbase;
+		unsigned long flags;
+		int cpu;

-		rdbase = per_cpu_ptr(gic_rdists->rdist, vpe->col_idx)->rd_base;
+		cpu = irq_to_cpuid_lock(d, &flags);
+		rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base;
  		if (state) {
  			gic_write_lpir(vpe->vpe_db_lpi, rdbase + GICR_SETLPIR);
  		} else {
  			gic_write_lpir(vpe->vpe_db_lpi, rdbase + GICR_CLRLPIR);
  			wait_for_syncr(rdbase);
  		}
+		irq_to_cpuid_unlock(d, flags);
  	} else {
  		if (state)
  			its_vpe_send_cmd(vpe, its_send_int);
@@ -3854,14 +3891,17 @@ static void its_vpe_4_1_deschedule(struct its_vpe *vpe,
  static void its_vpe_4_1_invall(struct its_vpe *vpe)
  {
  	void __iomem *rdbase;
+	unsigned long flags;
  	u64 val;

  	val  = GICR_INVALLR_V;
  	val |= FIELD_PREP(GICR_INVALLR_VPEID, vpe->vpe_id);

  	/* Target the redistributor this vPE is currently known on */
+	raw_spin_lock_irqsave(&vpe->vpe_lock, flags);
  	rdbase = per_cpu_ptr(gic_rdists->rdist, vpe->col_idx)->rd_base;
  	gic_write_lpir(val, rdbase + GICR_INVALLR);
+	raw_spin_unlock_irqrestore(&vpe->vpe_lock, flags);
  }

  static int its_vpe_4_1_set_vcpu_affinity(struct irq_data *d, void *vcpu_info)
@@ -3960,13 +4000,17 @@ static int its_sgi_get_irqchip_state(struct irq_data *d,
  				     enum irqchip_irq_state which, bool *val)
  {
  	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
-	void __iomem *base = gic_data_rdist_cpu(vpe->col_idx)->rd_base + SZ_128K;
+	void __iomem *base;
+	unsigned long flags;
  	u32 count = 1000000;	/* 1s! */
  	u32 status;
+	int cpu;

  	if (which != IRQCHIP_STATE_PENDING)
  		return -EINVAL;

+	cpu = irq_to_cpuid_lock(d, &flags);
+	base = gic_data_rdist_cpu(cpu)->rd_base + SZ_128K;
  	writel_relaxed(vpe->vpe_id, base + GICR_VSGIR);
  	do {
  		status = readl_relaxed(base + GICR_VSGIPENDR);
@@ -3983,6 +4027,7 @@ static int its_sgi_get_irqchip_state(struct irq_data *d,
  	} while(count);

  out:
+	irq_to_cpuid_unlock(d, flags);
  	*val = !!(status & (1 << d->hwirq));

  	return 0;
@@ -4102,6 +4147,7 @@ static int its_vpe_init(struct its_vpe *vpe)
  		return -ENOMEM;
  	}

+	raw_spin_lock_init(&vpe->vpe_lock);
  	vpe->vpe_id = vpe_id;
  	vpe->vpt_page = vpt_page;
  	if (gic_rdists->has_rvpeid)
diff --git a/include/linux/irqchip/arm-gic-v4.h b/include/linux/irqchip/arm-gic-v4.h
index 46c167a6349f..fc43a63875a3 100644
--- a/include/linux/irqchip/arm-gic-v4.h
+++ b/include/linux/irqchip/arm-gic-v4.h
@@ -60,6 +60,7 @@ struct its_vpe {
  		};
  	};

+	raw_spinlock_t		vpe_lock;
  	/*
  	 * This collection ID is used to indirect the target
  	 * redistributor for this VPE. The ID itself isn't involved in

-- 
Jazz is not dead. It just smells funny...


* Re: [PATCH v4 08/20] irqchip/gic-v4.1: Plumb get/set_irqchip_state SGI callbacks
  2020-02-18 15:31       ` Marc Zyngier
@ 2020-02-19 11:50         ` Zenghui Yu
  2020-02-19 15:18           ` Zenghui Yu
  2020-02-20  3:11         ` Zenghui Yu
  1 sibling, 1 reply; 47+ messages in thread
From: Zenghui Yu @ 2020-02-19 11:50 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Lorenzo Pieralisi, Jason Cooper, kvm, Suzuki K Poulose,
	linux-kernel, Eric Auger, Robert Richter, James Morse,
	Julien Thierry, Thomas Gleixner, kvmarm, linux-arm-kernel

Hi Marc,

On 2020/2/18 23:31, Marc Zyngier wrote:
> Hi Zenghui,
> 
> On 2020-02-18 09:27, Marc Zyngier wrote:
>> Hi Zenghui,
>>
>> On 2020-02-18 07:00, Zenghui Yu wrote:
>>> Hi Marc,
> 
> [...]
> 
>>> There might be a race on reading the 'vpe->col_idx' against a concurrent
>>> vPE schedule (col_idx will be modified in its_vpe_set_affinity)? Will we
>>> end up accessing the GICR_VSGI* registers of the old redistributor,
>>> while the vPE is now resident on the new one? Or is it harmful?
>>
>> Very well spotted. There is a potential problem if old and new RDs are 
>> not part
>> of the same CommonLPIAff group.
>>
>>> The same question for direct_lpi_inv(), where 'vpe->col_idx' will be
>>> used in irq_to_cpuid().
>>
>> Same problem indeed. We need to ensure that no VMOVP operation can 
>> occur whilst
>> we use col_idx to access a redistributor. This means a vPE lock of 
>> some sort
>> that will protect the affinity.

Yeah, I had the same view here, a vPE level lock might help.

>> But I think there is a slightly more general problem here, which we 
>> failed to
>> see initially: the same issue exists for physical LPIs, as col_map[] 
>> can be
>> updated (its_set_affinity()) in parallel with a direct invalidate.
>>
>> The good old invalidation through the ITS does guarantee that the two 
>> operation
>> don't overlap, but direct invalidation breaks it.

Agreed!

>> Let me have a think about it.
> 
> So I've thought about it, wrote a patch, and I don't really like the 
> look of it.
> This is pretty invasive, and we end-up serializing a lot more than we 
> used to
> (the repurposing of vlpi_lock to a general "lpi mapping lock" is 
> probably too
> coarse).
> 
> It of course needs splitting over at least three patches, but it'd be 
> good if
> you could have a look (applies on top of the whole series).

So the first thing is that

1. there are races on choosing the RD against a concurrent LPI/vPE
affinity change.

And sure, I will have a look at the following patch! But I'd first
like to talk about some other issues I've seen today...

2. Another potential race is on accessing the same RD from different
CPUs, which gets more obvious with the introduction of GICv4.1.
We can take at least two registers as examples:

  - GICR_VSGIR:
    Let's assume that vPE0 has just been descheduled from CPU0 and vPE1
    is then scheduled on it. CPU0 is writing its GICR_VSGIR with vpeid1
    to serve vPE1's GICR_ISPENDR0 read trap, whilst userspace is getting
    the vSGI pending state of vPE0 (e.g., by a debugfs read), so another
    CPU will try to write the same GICR_VSGIR with vpeid0... without
    waiting for GICR_VSGIPENDR.Busy to read as 0.
    This is CONSTRAINED UNPREDICTABLE behavior per the spec, and at
    least one of the queries will fail.
  - GICR_INV{LPI,ALL}R:
    Multiple LPIs can be targeted to the same RD, thus multiple writes to
    the same GICR_INVLPIR (with different INTID, even with different V)
    can happen concurrently...

The above comes from the fact that the same redistributor can be accessed
(concurrently) by multiple CPUs, but we don't have a mechanism to ensure
some extent of serialization. I also had a look at how KVM will handle
this kind of access, but

3. it looks like KVM makes the assumption that the per-RD MMIO region
will only be accessed by the associated VCPU?  But I think this is not
restricted by the architecture; we can do it better.  Or I've just
missed some important points here.


I will look at the following patch asap, but I may need some time to
think about all of the above, and do some fixes if possible :-)
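
For (2), my first thought is that some per-redistributor serialization
might be needed. Only a rough sketch to show what I mean (the 'rd_lock'
below is an assumed new raw spinlock in the per-CPU rdist data, it does
not exist in the series, and the helper name is made up):

/* Sketch: serialize direct accesses to one RD's GICR_VSGI* registers */
static bool vsgi_peek_pending(struct its_vpe *vpe, int cpu,
			      irq_hw_number_t hwirq)
{
	void __iomem *base = gic_data_rdist_cpu(cpu)->rd_base + SZ_128K;
	u32 status;

	raw_spin_lock(&gic_data_rdist_cpu(cpu)->rd_lock);

	writel_relaxed(vpe->vpe_id, base + GICR_VSGIR);
	do {
		status = readl_relaxed(base + GICR_VSGIPENDR);
		cpu_relax();
	} while (status & GICR_VSGIPENDR_BUSY);

	raw_spin_unlock(&gic_data_rdist_cpu(cpu)->rd_lock);

	return !!(status & (1 << hwirq));
}

The same lock would have to be taken around the GICR_INVLPIR/GICR_INVALLR
writes and their GICR_SYNCR polling as well.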

> diff --git a/drivers/irqchip/irq-gic-v3-its.c 
> b/drivers/irqchip/irq-gic-v3-its.c
> index 7656b353a95f..0ed286dba827 100644
> --- a/drivers/irqchip/irq-gic-v3-its.c
> +++ b/drivers/irqchip/irq-gic-v3-its.c

[...]


Thanks,
Zenghui



* Re: [PATCH v4 08/20] irqchip/gic-v4.1: Plumb get/set_irqchip_state SGI callbacks
  2020-02-19 11:50         ` Zenghui Yu
@ 2020-02-19 15:18           ` Zenghui Yu
  0 siblings, 0 replies; 47+ messages in thread
From: Zenghui Yu @ 2020-02-19 15:18 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Lorenzo Pieralisi, Jason Cooper, kvm, Suzuki K Poulose,
	linux-kernel, Eric Auger, Robert Richter, James Morse,
	Julien Thierry, Thomas Gleixner, kvmarm, linux-arm-kernel

On 2020/2/19 19:50, Zenghui Yu wrote:
> 3. it looks like KVM makes the assumption that the per-RD MMIO region
> will only be accessed by the associated VCPU?  But I think this's not
> restricted by the architecture, we can do it better.  Or I've just
> missed some important points here.

(After some basic tests, KVM actually does the right thing!)
So I must have some misunderstanding on this point, please
ignore it.  I'll dig into it further myself, sorry for the noise.


Thanks,
Zenghui



* Re: [PATCH v4 08/20] irqchip/gic-v4.1: Plumb get/set_irqchip_state SGI callbacks
  2020-02-18 15:31       ` Marc Zyngier
  2020-02-19 11:50         ` Zenghui Yu
@ 2020-02-20  3:11         ` Zenghui Yu
  2020-02-28 19:37           ` Marc Zyngier
  1 sibling, 1 reply; 47+ messages in thread
From: Zenghui Yu @ 2020-02-20  3:11 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Lorenzo Pieralisi, Jason Cooper, kvm, Suzuki K Poulose,
	linux-kernel, Eric Auger, Robert Richter, James Morse,
	Julien Thierry, Thomas Gleixner, kvmarm, linux-arm-kernel

Hi Marc,

On 2020/2/18 23:31, Marc Zyngier wrote:
> diff --git a/drivers/irqchip/irq-gic-v3-its.c 
> b/drivers/irqchip/irq-gic-v3-its.c
> index 7656b353a95f..0ed286dba827 100644
> --- a/drivers/irqchip/irq-gic-v3-its.c
> +++ b/drivers/irqchip/irq-gic-v3-its.c
> @@ -144,7 +144,7 @@ struct event_lpi_map {
>       u16            *col_map;
>       irq_hw_number_t        lpi_base;
>       int            nr_lpis;
> -    raw_spinlock_t        vlpi_lock;
> +    raw_spinlock_t        map_lock;

So we use map_lock to protect the mapping affinity of a device (for both
LPIs and VLPIs), and use vpe_lock to protect the vPE's affinity, OK.

>       struct its_vm        *vm;
>       struct its_vlpi_map    *vlpi_maps;
>       int            nr_vlpis;
> @@ -240,15 +240,33 @@ static struct its_vlpi_map *get_vlpi_map(struct 
> irq_data *d)
>       return NULL;
>   }
> 
> -static int irq_to_cpuid(struct irq_data *d)
> +static int irq_to_cpuid_lock(struct irq_data *d, unsigned long *flags)
>   {
> -    struct its_device *its_dev = irq_data_get_irq_chip_data(d);
>       struct its_vlpi_map *map = get_vlpi_map(d);
> +    int cpu;
> 
> -    if (map)
> -        return map->vpe->col_idx;
> +    if (map) {
> +        raw_spin_lock_irqsave(&map->vpe->vpe_lock, *flags);
> +        cpu = map->vpe->col_idx;
> +    } else {
> +        struct its_device *its_dev = irq_data_get_irq_chip_data(d);
> +        raw_spin_lock_irqsave(&its_dev->event_map.map_lock, *flags);
> +        cpu = its_dev->event_map.col_map[its_get_event_id(d)];
> +    }
> 
> -    return its_dev->event_map.col_map[its_get_event_id(d)];
> +    return cpu;
> +}

This helper is correct for normal LPIs and VLPIs, but wrong for per-vPE
IRQ (doorbell) and vSGIs. irq_data_get_irq_chip_data() gets confused by
both of them.
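
Just to illustrate one possible way out (sketch only): the vPE-targeted
users (doorbell, vSGIs) could resolve the vPE themselves and go through a
pair of dedicated helpers, instead of irq_data_get_irq_chip_data():

/* Sketch: lock the vPE affinity and return the CPU it is resident on */
static int vpe_to_cpuid_lock(struct its_vpe *vpe, unsigned long *flags)
{
	raw_spin_lock_irqsave(&vpe->vpe_lock, *flags);
	return vpe->col_idx;
}

static void vpe_to_cpuid_unlock(struct its_vpe *vpe, unsigned long flags)
{
	raw_spin_unlock_irqrestore(&vpe->vpe_lock, flags);
}

irq_to_cpuid_lock()/unlock() could then be reserved for the cases where
the irq_data really is a device event.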

> +
> +static void irq_to_cpuid_unlock(struct irq_data *d, unsigned long flags)
> +{
> +    struct its_vlpi_map *map = get_vlpi_map(d);
> +
> +    if (map) {
> +        raw_spin_unlock_irqrestore(&map->vpe->vpe_lock, flags);
> +    } else {
> +        struct its_device *its_dev = irq_data_get_irq_chip_data(d);
> +        raw_spin_unlock_irqrestore(&its_dev->event_map.map_lock, flags);
> +    }
>   }

The same problem for this helper.

> 
>   static struct its_collection *valid_col(struct its_collection *col)
> @@ -1384,6 +1402,8 @@ static void direct_lpi_inv(struct irq_data *d)
>   {
>       struct its_vlpi_map *map = get_vlpi_map(d);
>       void __iomem *rdbase;
> +    unsigned long flags;
> +    int cpu;
>       u64 val;
> 
>       if (map) {
> @@ -1399,10 +1419,12 @@ static void direct_lpi_inv(struct irq_data *d)
>       }
> 
>       /* Target the redistributor this LPI is currently routed to */
> -    rdbase = per_cpu_ptr(gic_rdists->rdist, irq_to_cpuid(d))->rd_base;
> +    cpu = irq_to_cpuid_lock(d, &flags);
> +    rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base;
>       gic_write_lpir(val, rdbase + GICR_INVLPIR);
> 
>       wait_for_syncr(rdbase);
> +    irq_to_cpuid_unlock(d, flags);
>   }
> 
>   static void lpi_update_config(struct irq_data *d, u8 clr, u8 set)
> @@ -1471,11 +1493,11 @@ static void its_unmask_irq(struct irq_data *d)
>   static int its_set_affinity(struct irq_data *d, const struct cpumask 
> *mask_val,
>                   bool force)
>   {
> -    unsigned int cpu;
>       const struct cpumask *cpu_mask = cpu_online_mask;
>       struct its_device *its_dev = irq_data_get_irq_chip_data(d);
>       struct its_collection *target_col;
> -    u32 id = its_get_event_id(d);
> +    unsigned int from, cpu;
> +    unsigned long flags;
> 
>       /* A forwarded interrupt should use irq_set_vcpu_affinity */
>       if (irqd_is_forwarded_to_vcpu(d))
> @@ -1496,12 +1518,16 @@ static int its_set_affinity(struct irq_data *d, 
> const struct cpumask *mask_val,
>           return -EINVAL;
> 
>       /* don't set the affinity when the target cpu is same as current 
> one */
> -    if (cpu != its_dev->event_map.col_map[id]) {
> +    from = irq_to_cpuid_lock(d, &flags);
> +    if (cpu != from) {
> +        u32 id = its_get_event_id(d);
> +
>           target_col = &its_dev->its->collections[cpu];
>           its_send_movi(its_dev, target_col, id);
>           its_dev->event_map.col_map[id] = cpu;
>           irq_data_update_effective_affinity(d, cpumask_of(cpu));
>       }
> +    irq_to_cpuid_unlock(d, flags);
> 
>       return IRQ_SET_MASK_OK_DONE;
>   }
> @@ -1636,7 +1662,7 @@ static int its_vlpi_map(struct irq_data *d, struct 
> its_cmd_info *info)
>       if (!info->map)
>           return -EINVAL;
> 
> -    raw_spin_lock(&its_dev->event_map.vlpi_lock);
> +    raw_spin_lock(&its_dev->event_map.map_lock);
> 
>       if (!its_dev->event_map.vm) {
>           struct its_vlpi_map *maps;
> @@ -1685,7 +1711,7 @@ static int its_vlpi_map(struct irq_data *d, struct 
> its_cmd_info *info)
>       }
> 
>   out:
> -    raw_spin_unlock(&its_dev->event_map.vlpi_lock);
> +    raw_spin_unlock(&its_dev->event_map.map_lock);
>       return ret;
>   }
> 
> @@ -1695,7 +1721,7 @@ static int its_vlpi_get(struct irq_data *d, struct 
> its_cmd_info *info)
>       struct its_vlpi_map *map;
>       int ret = 0;
> 
> -    raw_spin_lock(&its_dev->event_map.vlpi_lock);
> +    raw_spin_lock(&its_dev->event_map.map_lock);
> 
>       map = get_vlpi_map(d);
> 
> @@ -1708,7 +1734,7 @@ static int its_vlpi_get(struct irq_data *d, struct 
> its_cmd_info *info)
>       *info->map = *map;
> 
>   out:
> -    raw_spin_unlock(&its_dev->event_map.vlpi_lock);
> +    raw_spin_unlock(&its_dev->event_map.map_lock);
>       return ret;
>   }
> 
> @@ -1718,7 +1744,7 @@ static int its_vlpi_unmap(struct irq_data *d)
>       u32 event = its_get_event_id(d);
>       int ret = 0;
> 
> -    raw_spin_lock(&its_dev->event_map.vlpi_lock);
> +    raw_spin_lock(&its_dev->event_map.map_lock);
> 
>       if (!its_dev->event_map.vm || !irqd_is_forwarded_to_vcpu(d)) {
>           ret = -EINVAL;
> @@ -1748,7 +1774,7 @@ static int its_vlpi_unmap(struct irq_data *d)
>       }
> 
>   out:
> -    raw_spin_unlock(&its_dev->event_map.vlpi_lock);
> +    raw_spin_unlock(&its_dev->event_map.map_lock);
>       return ret;
>   }
> 
> @@ -3193,7 +3219,7 @@ static struct its_device *its_create_device(struct 
> its_node *its, u32 dev_id,
>       dev->event_map.col_map = col_map;
>       dev->event_map.lpi_base = lpi_base;
>       dev->event_map.nr_lpis = nr_lpis;
> -    raw_spin_lock_init(&dev->event_map.vlpi_lock);
> +    raw_spin_lock_init(&dev->event_map.map_lock);
>       dev->device_id = dev_id;
>       INIT_LIST_HEAD(&dev->entry);
> 
> @@ -3560,6 +3586,7 @@ static int its_vpe_set_affinity(struct irq_data *d,
>   {
>       struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
>       int from, cpu = cpumask_first(mask_val);
> +    unsigned long flags;
> 
>       /*
>        * Changing affinity is mega expensive, so let's be as lazy as
> @@ -3567,6 +3594,7 @@ static int its_vpe_set_affinity(struct irq_data *d,
>        * into the proxy device, we need to move the doorbell
>        * interrupt to its new location.
>        */
> +    raw_spin_lock_irqsave(&vpe->vpe_lock, flags);
>       if (vpe->col_idx == cpu)
>           goto out;
> 
> @@ -3586,6 +3614,7 @@ static int its_vpe_set_affinity(struct irq_data *d,
> 
>   out:
>       irq_data_update_effective_affinity(d, cpumask_of(cpu));
> +    raw_spin_unlock_irqrestore(&vpe->vpe_lock, flags);
> 
>       return IRQ_SET_MASK_OK_DONE;
>   }
> @@ -3695,11 +3724,15 @@ static void its_vpe_send_inv(struct irq_data *d)
> 
>       if (gic_rdists->has_direct_lpi) {
>           void __iomem *rdbase;
> +        unsigned long flags;
> +        int cpu;
> 
>           /* Target the redistributor this VPE is currently known on */
> -        rdbase = per_cpu_ptr(gic_rdists->rdist, vpe->col_idx)->rd_base;
> +        cpu = irq_to_cpuid_lock(d, &flags);
> +        rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base;
>           gic_write_lpir(d->parent_data->hwirq, rdbase + GICR_INVLPIR);
>           wait_for_syncr(rdbase);
> +        irq_to_cpuid_unlock(d, flags);
>       } else {
>           its_vpe_send_cmd(vpe, its_send_inv);
>       }

Do we really need to grab the vpe_lock for those callbacks which belong
to the same irqchip as its_vpe_set_affinity()? The IRQ core code should
already ensure mutual exclusion among them, no?

> @@ -3735,14 +3768,18 @@ static int its_vpe_set_irqchip_state(struct 
> irq_data *d,
> 
>       if (gic_rdists->has_direct_lpi) {
>           void __iomem *rdbase;
> +        unsigned long flags;
> +        int cpu;
> 
> -        rdbase = per_cpu_ptr(gic_rdists->rdist, vpe->col_idx)->rd_base;
> +        cpu = irq_to_cpuid_lock(d, &flags);
> +        rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base;
>           if (state) {
>               gic_write_lpir(vpe->vpe_db_lpi, rdbase + GICR_SETLPIR);
>           } else {
>               gic_write_lpir(vpe->vpe_db_lpi, rdbase + GICR_CLRLPIR);
>               wait_for_syncr(rdbase);
>           }
> +        irq_to_cpuid_unlock(d, flags);
>       } else {
>           if (state)
>               its_vpe_send_cmd(vpe, its_send_int);
> @@ -3854,14 +3891,17 @@ static void its_vpe_4_1_deschedule(struct 
> its_vpe *vpe,
>   static void its_vpe_4_1_invall(struct its_vpe *vpe)
>   {
>       void __iomem *rdbase;
> +    unsigned long flags;
>       u64 val;
> 
>       val  = GICR_INVALLR_V;
>       val |= FIELD_PREP(GICR_INVALLR_VPEID, vpe->vpe_id);
> 
>       /* Target the redistributor this vPE is currently known on */
> +    raw_spin_lock_irqsave(&vpe->vpe_lock, flags);
>       rdbase = per_cpu_ptr(gic_rdists->rdist, vpe->col_idx)->rd_base;
>       gic_write_lpir(val, rdbase + GICR_INVALLR);
> +    raw_spin_unlock_irqrestore(&vpe->vpe_lock, flags);
>   }
> 
>   static int its_vpe_4_1_set_vcpu_affinity(struct irq_data *d, void 
> *vcpu_info)
> @@ -3960,13 +4000,17 @@ static int its_sgi_get_irqchip_state(struct 
> irq_data *d,
>                        enum irqchip_irq_state which, bool *val)
>   {
>       struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
> -    void __iomem *base = gic_data_rdist_cpu(vpe->col_idx)->rd_base + 
> SZ_128K;
> +    void __iomem *base;
> +    unsigned long flags;
>       u32 count = 1000000;    /* 1s! */
>       u32 status;
> +    int cpu;
> 
>       if (which != IRQCHIP_STATE_PENDING)
>           return -EINVAL;
> 
> +    cpu = irq_to_cpuid_lock(d, &flags);
> +    base = gic_data_rdist_cpu(cpu)->rd_base + SZ_128K;
>       writel_relaxed(vpe->vpe_id, base + GICR_VSGIR);
>       do {
>           status = readl_relaxed(base + GICR_VSGIPENDR);
> @@ -3983,6 +4027,7 @@ static int its_sgi_get_irqchip_state(struct 
> irq_data *d,
>       } while(count);
> 
>   out:
> +    irq_to_cpuid_unlock(d, flags);
>       *val = !!(status & (1 << d->hwirq));
> 
>       return 0;
> @@ -4102,6 +4147,7 @@ static int its_vpe_init(struct its_vpe *vpe)
>           return -ENOMEM;
>       }
> 
> +    raw_spin_lock_init(&vpe->vpe_lock);
>       vpe->vpe_id = vpe_id;
>       vpe->vpt_page = vpt_page;
>       if (gic_rdists->has_rvpeid)
> diff --git a/include/linux/irqchip/arm-gic-v4.h 
> b/include/linux/irqchip/arm-gic-v4.h
> index 46c167a6349f..fc43a63875a3 100644
> --- a/include/linux/irqchip/arm-gic-v4.h
> +++ b/include/linux/irqchip/arm-gic-v4.h
> @@ -60,6 +60,7 @@ struct its_vpe {
>           };
>       };
> 
> +    raw_spinlock_t        vpe_lock;
>       /*
>        * This collection ID is used to indirect the target
>        * redistributor for this VPE. The ID itself isn't involved in

I'm not sure if it's good enough; it may get much clearer after
splitting.


Thanks,
Zenghui



* Re: [PATCH v4 04/20] irqchip/gic-v4.1: Map the ITS SGIR register page
  2020-02-14 14:57 ` [PATCH v4 04/20] irqchip/gic-v4.1: Map the ITS SGIR register page Marc Zyngier
@ 2020-02-20  3:17   ` Zenghui Yu
  0 siblings, 0 replies; 47+ messages in thread
From: Zenghui Yu @ 2020-02-20  3:17 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Thomas Gleixner, Julien Thierry

On 2020/2/14 22:57, Marc Zyngier wrote:
> One of the new features of GICv4.1 is to allow virtual SGIs to be
> directly signaled to a VPE. For that, the ITS has grown a new
> 64kB page containing only a single register that is used to
> signal a SGI to a given VPE.
> 
> Add a second mapping covering this new 64kB range, and take this
> opportunity to limit the original mapping to 64kB, which is enough
> to cover the span of the ITS registers.

Yes, no need to do ioremap for the translation register.

> 
> Signed-off-by: Marc Zyngier<maz@kernel.org>

Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>



* Re: [PATCH v4 05/20] irqchip/gic-v4.1: Plumb skeletal VSGI irqchip
  2020-02-14 14:57 ` [PATCH v4 05/20] irqchip/gic-v4.1: Plumb skeletal VSGI irqchip Marc Zyngier
@ 2020-02-20  3:21   ` Zenghui Yu
  0 siblings, 0 replies; 47+ messages in thread
From: Zenghui Yu @ 2020-02-20  3:21 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Thomas Gleixner, Julien Thierry



On 2020/2/14 22:57, Marc Zyngier wrote:
> Since GICv4.1 has the capability to inject 16 SGIs into each VPE,
> and that I'm keen not to invent too many specific interfaces to
> manupulate these interrupts, let's pretend that each of these SGIs
manipulate?

> is an actual Linux interrupt.
> 
> For that matter, let's introduce a minimal irqchip and irqdomain
> setup that will get fleshed up in the following patches.
> 
> Signed-off-by: Marc Zyngier<maz@kernel.org>

Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>



* Re: [PATCH v4 06/20] irqchip/gic-v4.1: Add initial SGI configuration
  2020-02-18  9:46     ` Marc Zyngier
@ 2020-02-20  3:25       ` Zenghui Yu
  0 siblings, 0 replies; 47+ messages in thread
From: Zenghui Yu @ 2020-02-20  3:25 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Lorenzo Pieralisi, Jason Cooper, kvm, Suzuki K Poulose,
	linux-kernel, Eric Auger, Robert Richter, James Morse,
	Julien Thierry, Thomas Gleixner, kvmarm, linux-arm-kernel

Hi Marc,

On 2020/2/18 17:46, Marc Zyngier wrote:
> Hi Zenghui,
> 
> On 2020-02-18 07:25, Zenghui Yu wrote:
>> Hi Marc,
> 
> [...]
> 
>>>     static void its_sgi_irq_domain_deactivate(struct irq_domain *domain,
>>>                         struct irq_data *d)
>>>   {
>>> -    /* Nothing to do */
>>> +    struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
>>> +
>>> +    vpe->sgi_config[d->hwirq].enabled = false;
>>> +    its_configure_sgi(d, true);
>>
>> The spec says, when C==1, VSGI clears the pending state of the vSGI,
>> leaving the configuration unchanged.  So should we first clear the
>> pending state and then disable vSGI (let E==0)?
> 
> Right you are again. We need two commands, not just one (the pseudocode is
> pretty explicit).

With that change,

Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>


Thanks



* Re: [PATCH v4 07/20] irqchip/gic-v4.1: Plumb mask/unmask SGI callbacks
  2020-02-14 14:57 ` [PATCH v4 07/20] irqchip/gic-v4.1: Plumb mask/unmask SGI callbacks Marc Zyngier
@ 2020-02-20  3:32   ` Zenghui Yu
  0 siblings, 0 replies; 47+ messages in thread
From: Zenghui Yu @ 2020-02-20  3:32 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Thomas Gleixner, Julien Thierry

Hi Marc,

On 2020/2/14 22:57, Marc Zyngier wrote:
> Implement mask/unmask for virtual SGIs by calling into the
> configuration helper.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>



* Re: [PATCH v4 09/20] irqchip/gic-v4.1: Plumb set_vcpu_affinity SGI callbacks
  2020-02-14 14:57 ` [PATCH v4 09/20] irqchip/gic-v4.1: Plumb set_vcpu_affinity " Marc Zyngier
@ 2020-02-20  3:37   ` Zenghui Yu
  2020-02-28 19:00     ` Marc Zyngier
  0 siblings, 1 reply; 47+ messages in thread
From: Zenghui Yu @ 2020-02-20  3:37 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Thomas Gleixner, Julien Thierry

Hi Marc,

On 2020/2/14 22:57, Marc Zyngier wrote:
> As for VLPIs, there is a number of configuration bits that cannot

As for vSGIs,

> be directly communicated through the normal irqchip API, and we
> have to use our good old friend set_vcpu_affinity.
> 
> This is used to configure group and priority for a given vSGI.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>

> ---
>   drivers/irqchip/irq-gic-v3-its.c   | 18 ++++++++++++++++++
>   include/linux/irqchip/arm-gic-v4.h |  5 +++++
>   2 files changed, 23 insertions(+)
> 
> diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
> index a9753435c4ff..a2e824eae43f 100644
> --- a/drivers/irqchip/irq-gic-v3-its.c
> +++ b/drivers/irqchip/irq-gic-v3-its.c
> @@ -3969,6 +3969,23 @@ static int its_sgi_get_irqchip_state(struct irq_data *d,
>   	return 0;
>   }
>   
> +static int its_sgi_set_vcpu_affinity(struct irq_data *d, void *vcpu_info)
> +{
> +	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
> +	struct its_cmd_info *info = vcpu_info;
> +
> +	switch (info->cmd_type) {
> +	case PROP_UPDATE_SGI:
> +		vpe->sgi_config[d->hwirq].priority = info->priority;
> +		vpe->sgi_config[d->hwirq].group = info->group;
> +		its_configure_sgi(d, false);
> +		return 0;
> +
> +	default:
> +		return -EINVAL;
> +	}
> +}
> +
>   static struct irq_chip its_sgi_irq_chip = {
>   	.name			= "GICv4.1-sgi",
>   	.irq_mask		= its_sgi_mask_irq,
> @@ -3976,6 +3993,7 @@ static struct irq_chip its_sgi_irq_chip = {
>   	.irq_set_affinity	= its_sgi_set_affinity,
>   	.irq_set_irqchip_state	= its_sgi_set_irqchip_state,
>   	.irq_get_irqchip_state	= its_sgi_get_irqchip_state,
> +	.irq_set_vcpu_affinity	= its_sgi_set_vcpu_affinity,
>   };
>   
>   static int its_sgi_irq_domain_alloc(struct irq_domain *domain,
> diff --git a/include/linux/irqchip/arm-gic-v4.h b/include/linux/irqchip/arm-gic-v4.h
> index 30b4855bf766..a1a9d40266f5 100644
> --- a/include/linux/irqchip/arm-gic-v4.h
> +++ b/include/linux/irqchip/arm-gic-v4.h
> @@ -98,6 +98,7 @@ enum its_vcpu_info_cmd_type {
>   	SCHEDULE_VPE,
>   	DESCHEDULE_VPE,
>   	INVALL_VPE,
> +	PROP_UPDATE_SGI,

Maybe better to use 'PROP_UPDATE_VSGI'?


Thanks,
Zenghui

>   };
>   
>   struct its_cmd_info {
> @@ -110,6 +111,10 @@ struct its_cmd_info {
>   			bool		g0en;
>   			bool		g1en;
>   		};
> +		struct {
> +			u8		priority;
> +			bool		group;
> +		};
>   	};
>   };
>   
> 



* Re: [PATCH v4 16/20] KVM: arm64: GICv4.1: Allow SGIs to switch between HW and SW interrupts
  2020-02-14 14:57 ` [PATCH v4 16/20] KVM: arm64: GICv4.1: Allow SGIs to switch between HW and SW interrupts Marc Zyngier
@ 2020-02-20  3:55   ` Zenghui Yu
  2020-02-28 19:16     ` Marc Zyngier
  0 siblings, 1 reply; 47+ messages in thread
From: Zenghui Yu @ 2020-02-20  3:55 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, kvmarm, kvm, linux-kernel
  Cc: Lorenzo Pieralisi, Jason Cooper, Suzuki K Poulose, Eric Auger,
	Robert Richter, James Morse, Thomas Gleixner, Julien Thierry

Hi Marc,

On 2020/2/14 22:57, Marc Zyngier wrote:
> In order to let a guest buy in the new, active-less SGIs, we
> need to be able to switch between the two modes.
> 
> Handle this by stopping all guest activity, transfer the state
> from one mode to the other, and resume the guest.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>

[...]

> diff --git a/virt/kvm/arm/vgic/vgic-v3.c b/virt/kvm/arm/vgic/vgic-v3.c
> index 1bc09b523486..2c9fc13e2c59 100644
> --- a/virt/kvm/arm/vgic/vgic-v3.c
> +++ b/virt/kvm/arm/vgic/vgic-v3.c
> @@ -540,6 +540,8 @@ int vgic_v3_map_resources(struct kvm *kvm)
>   		goto out;
>   	}
>   
> +	if (kvm_vgic_global_state.has_gicv4_1)
> +		vgic_v4_configure_vsgis(kvm);
>   	dist->ready = true;
>   
>   out:

Is there any reason to invoke vgic_v4_configure_vsgis() here?
This is called on the first VCPU run, through kvm_vgic_map_resources().
Shouldn't the vSGI configuration only be driven by a GICD_CTLR.nASSGIreq
write (from the guest, or maybe from userspace)?


Thanks,
Zenghui



* Re: [PATCH v4 09/20] irqchip/gic-v4.1: Plumb set_vcpu_affinity SGI callbacks
  2020-02-20  3:37   ` Zenghui Yu
@ 2020-02-28 19:00     ` Marc Zyngier
  0 siblings, 0 replies; 47+ messages in thread
From: Marc Zyngier @ 2020-02-28 19:00 UTC (permalink / raw)
  To: Zenghui Yu
  Cc: Lorenzo Pieralisi, Jason Cooper, kvm, Suzuki K Poulose,
	linux-kernel, Eric Auger, Robert Richter, James Morse,
	Julien Thierry, Thomas Gleixner, kvmarm, linux-arm-kernel

Hi Zenghui,

On 2020-02-20 03:37, Zenghui Yu wrote:
> Hi Marc,
> 
> On 2020/2/14 22:57, Marc Zyngier wrote:
>> As for VLPIs, there is a number of configuration bits that cannot
> 
> As for vSGIs,

No, I think this is correct, if a bit ambiguous. What I'm trying to say
here is that vSGIs have the same requirements as vLPIs, in the sense
that some of the configuration aspects cannot be expressed in terms of
the irqchip API.

> 
>> be directly communicated through the normal irqchip API, and we
>> have to use our good old friend set_vcpu_affinity.
>> 
>> This is used to configure group and priority for a given vSGI.
>> 
>> Signed-off-by: Marc Zyngier <maz@kernel.org>
> 
> Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
> 
>> ---
>>   drivers/irqchip/irq-gic-v3-its.c   | 18 ++++++++++++++++++
>>   include/linux/irqchip/arm-gic-v4.h |  5 +++++
>>   2 files changed, 23 insertions(+)
>> 
>> diff --git a/drivers/irqchip/irq-gic-v3-its.c 
>> b/drivers/irqchip/irq-gic-v3-its.c
>> index a9753435c4ff..a2e824eae43f 100644
>> --- a/drivers/irqchip/irq-gic-v3-its.c
>> +++ b/drivers/irqchip/irq-gic-v3-its.c
>> @@ -3969,6 +3969,23 @@ static int its_sgi_get_irqchip_state(struct 
>> irq_data *d,
>>   	return 0;
>>   }
>>   +static int its_sgi_set_vcpu_affinity(struct irq_data *d, void 
>> *vcpu_info)
>> +{
>> +	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
>> +	struct its_cmd_info *info = vcpu_info;
>> +
>> +	switch (info->cmd_type) {
>> +	case PROP_UPDATE_SGI:
>> +		vpe->sgi_config[d->hwirq].priority = info->priority;
>> +		vpe->sgi_config[d->hwirq].group = info->group;
>> +		its_configure_sgi(d, false);
>> +		return 0;
>> +
>> +	default:
>> +		return -EINVAL;
>> +	}
>> +}
>> +
>>   static struct irq_chip its_sgi_irq_chip = {
>>   	.name			= "GICv4.1-sgi",
>>   	.irq_mask		= its_sgi_mask_irq,
>> @@ -3976,6 +3993,7 @@ static struct irq_chip its_sgi_irq_chip = {
>>   	.irq_set_affinity	= its_sgi_set_affinity,
>>   	.irq_set_irqchip_state	= its_sgi_set_irqchip_state,
>>   	.irq_get_irqchip_state	= its_sgi_get_irqchip_state,
>> +	.irq_set_vcpu_affinity	= its_sgi_set_vcpu_affinity,
>>   };
>>     static int its_sgi_irq_domain_alloc(struct irq_domain *domain,
>> diff --git a/include/linux/irqchip/arm-gic-v4.h 
>> b/include/linux/irqchip/arm-gic-v4.h
>> index 30b4855bf766..a1a9d40266f5 100644
>> --- a/include/linux/irqchip/arm-gic-v4.h
>> +++ b/include/linux/irqchip/arm-gic-v4.h
>> @@ -98,6 +98,7 @@ enum its_vcpu_info_cmd_type {
>>   	SCHEDULE_VPE,
>>   	DESCHEDULE_VPE,
>>   	INVALL_VPE,
>> +	PROP_UPDATE_SGI,
> 
> Maybe better to use 'PROP_UPDATE_VSGI'?

That's indeed better.

Thanks,

          M.
-- 
Jazz is not dead. It just smells funny...


* Re: [PATCH v4 16/20] KVM: arm64: GICv4.1: Allow SGIs to switch between HW and SW interrupts
  2020-02-20  3:55   ` Zenghui Yu
@ 2020-02-28 19:16     ` Marc Zyngier
  2020-03-02  2:40       ` Zenghui Yu
  0 siblings, 1 reply; 47+ messages in thread
From: Marc Zyngier @ 2020-02-28 19:16 UTC (permalink / raw)
  To: Zenghui Yu
  Cc: Lorenzo Pieralisi, Jason Cooper, kvm, Suzuki K Poulose,
	linux-kernel, Eric Auger, Robert Richter, James Morse,
	Julien Thierry, Thomas Gleixner, kvmarm, linux-arm-kernel

Hi Zenghui,

On 2020-02-20 03:55, Zenghui Yu wrote:
> Hi Marc,
> 
> On 2020/2/14 22:57, Marc Zyngier wrote:
>> In order to let a guest buy into the new, active-less SGIs, we
>> need to be able to switch between the two modes.
>> 
>> Handle this by stopping all guest activity, transferring the state
>> from one mode to the other, and resuming the guest.
>> 
>> Signed-off-by: Marc Zyngier <maz@kernel.org>
> 
> [...]
> 
>> diff --git a/virt/kvm/arm/vgic/vgic-v3.c b/virt/kvm/arm/vgic/vgic-v3.c
>> index 1bc09b523486..2c9fc13e2c59 100644
>> --- a/virt/kvm/arm/vgic/vgic-v3.c
>> +++ b/virt/kvm/arm/vgic/vgic-v3.c
>> @@ -540,6 +540,8 @@ int vgic_v3_map_resources(struct kvm *kvm)
>>   		goto out;
>>   	}
>>   +	if (kvm_vgic_global_state.has_gicv4_1)
>> +		vgic_v4_configure_vsgis(kvm);
>>   	dist->ready = true;
>>     out:
> 
> Is there any reason to invoke vgic_v4_configure_vsgis() here?
> This is called on the first VCPU run, through kvm_vgic_map_resources().
> Shouldn't the vSGI configuration only driven by a GICD_CTLR.nASSGIreq
> writing (from guest, or from userspace maybe)?

What I'm trying to catch here is the guest that has been restored with
nASSGIreq set. At the moment, we don't do anything on the userspace
side, because the vmm could decide to write that particular bit
multiple times, and switching between the two modes is expensive (not
to mention that all the vcpus may not have been created yet).

Moving it to the first run makes all these pitfalls go away (we have the
final nASSGIreq value, and all the vcpus are accounted for).
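
Roughly, the idea is something like this (names are illustrative only,
not the actual KVM code; the nassgireq field in particular is an
assumption here):

/* Sketch of vgic-internal code; the nassgireq field name is an assumption. */
static void example_uaccess_write_nassgireq(struct kvm *kvm, bool nassgireq)
{
        /* Userspace write to GICD_CTLR: just latch the requested mode. */
        kvm->arch.vgic.nassgireq = nassgireq;
}

static void example_first_run_vsgi_setup(struct kvm *kvm)
{
        /*
         * First vcpu run: all vcpus exist and nASSGIreq has its final
         * value, so a single (expensive) HW/SW switch is enough.
         */
        if (kvm_vgic_global_state.has_gicv4_1)
                vgic_v4_configure_vsgis(kvm);
}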

Does this make sense to you?

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v4 08/20] irqchip/gic-v4.1: Plumb get/set_irqchip_state SGI callbacks
  2020-02-20  3:11         ` Zenghui Yu
@ 2020-02-28 19:37           ` Marc Zyngier
  2020-03-01 19:00             ` Marc Zyngier
  0 siblings, 1 reply; 47+ messages in thread
From: Marc Zyngier @ 2020-02-28 19:37 UTC (permalink / raw)
  To: Zenghui Yu
  Cc: Lorenzo Pieralisi, Jason Cooper, kvm, Suzuki K Poulose,
	linux-kernel, Eric Auger, Robert Richter, James Morse,
	Julien Thierry, Thomas Gleixner, kvmarm, linux-arm-kernel

On 2020-02-20 03:11, Zenghui Yu wrote:
> Hi Marc,
> 
> On 2020/2/18 23:31, Marc Zyngier wrote:
>> diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
>> index 7656b353a95f..0ed286dba827 100644
>> --- a/drivers/irqchip/irq-gic-v3-its.c
>> +++ b/drivers/irqchip/irq-gic-v3-its.c
>> @@ -144,7 +144,7 @@ struct event_lpi_map {
>>       u16            *col_map;
>>       irq_hw_number_t        lpi_base;
>>       int            nr_lpis;
>> -    raw_spinlock_t        vlpi_lock;
>> +    raw_spinlock_t        map_lock;
> 
> So we use map_lock to protect both the LPIs' and VLPIs' mapping affinity
> of a device, and vpe_lock to protect the vPE's affinity, OK.
> 
>>       struct its_vm        *vm;
>>       struct its_vlpi_map    *vlpi_maps;
>>       int            nr_vlpis;
>> @@ -240,15 +240,33 @@ static struct its_vlpi_map *get_vlpi_map(struct irq_data *d)
>>       return NULL;
>>   }
>> 
>> -static int irq_to_cpuid(struct irq_data *d)
>> +static int irq_to_cpuid_lock(struct irq_data *d, unsigned long *flags)
>>   {
>> -    struct its_device *its_dev = irq_data_get_irq_chip_data(d);
>>       struct its_vlpi_map *map = get_vlpi_map(d);
>> +    int cpu;
>> 
>> -    if (map)
>> -        return map->vpe->col_idx;
>> +    if (map) {
>> +        raw_spin_lock_irqsave(&map->vpe->vpe_lock, *flags);
>> +        cpu = map->vpe->col_idx;
>> +    } else {
>> +        struct its_device *its_dev = irq_data_get_irq_chip_data(d);
>> +        raw_spin_lock_irqsave(&its_dev->event_map.map_lock, *flags);
>> +        cpu = its_dev->event_map.col_map[its_get_event_id(d)];
>> +    }
>> 
>> -    return its_dev->event_map.col_map[its_get_event_id(d)];
>> +    return cpu;
>> +}
> 
> This helper is correct for normal LPIs and VLPIs, but wrong for the
> per-vPE IRQ (doorbell) and for vSGIs. irq_data_get_irq_chip_data() gets
> confused by both of them.

Yes, I've fixed that in the current state of the tree last week. Do have a
look if you can, but it seems to survive on both the model with v4.1 and
my D05.
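
One way to handle it (a rough sketch of the shape of the fix, not
necessarily what the branch does) is to give the vPE-based irqchips their
own helper, since their chip data is a struct its_vpe rather than a
struct its_device:

static int vpe_to_cpuid_lock(struct its_vpe *vpe, unsigned long *flags)
{
        raw_spin_lock_irqsave(&vpe->vpe_lock, *flags);
        return vpe->col_idx;
}

static void vpe_to_cpuid_unlock(struct its_vpe *vpe, unsigned long flags)
{
        raw_spin_unlock_irqrestore(&vpe->vpe_lock, flags);
}

/* A vSGI or doorbell callback then resolves its target RD like this: */
static int its_sgi_cpuid_lock(struct irq_data *d, unsigned long *flags)
{
        /* chip data is the vPE here, not an its_device */
        struct its_vpe *vpe = irq_data_get_irq_chip_data(d);

        return vpe_to_cpuid_lock(vpe, flags);
}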

[...]

>> -        rdbase = per_cpu_ptr(gic_rdists->rdist, vpe->col_idx)->rd_base;
>> +        cpu = irq_to_cpuid_lock(d, &flags);
>> +        rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base;
>>           gic_write_lpir(d->parent_data->hwirq, rdbase + GICR_INVLPIR);
>>           wait_for_syncr(rdbase);
>> +        irq_to_cpuid_unlock(d, flags);
>>       } else {
>>           its_vpe_send_cmd(vpe, its_send_inv);
>>       }
> 
> Do we really need to grab the vpe_lock for those which belong to
> the same irqchip as its_vpe_set_affinity()? The IRQ core code should
> already ensure mutual exclusion among them, no?

I've been trying to think about that, but jet-lag keeps getting in the way.
I empirically think that you are right, but I need to go and check the
various code paths to be sure. Hopefully I'll have a bit more brain space
next week.

For sure this patch tries to do too many things at once...

         M.
-- 
Jazz is not dead. It just smells funny...


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v4 08/20] irqchip/gic-v4.1: Plumb get/set_irqchip_state SGI callbacks
  2020-02-28 19:37           ` Marc Zyngier
@ 2020-03-01 19:00             ` Marc Zyngier
  2020-03-02  8:18               ` Zenghui Yu
  0 siblings, 1 reply; 47+ messages in thread
From: Marc Zyngier @ 2020-03-01 19:00 UTC (permalink / raw)
  To: Zenghui Yu
  Cc: Lorenzo Pieralisi, Jason Cooper, kvm, Suzuki K Poulose,
	linux-kernel, Eric Auger, Robert Richter, James Morse,
	Julien Thierry, Thomas Gleixner, kvmarm, linux-arm-kernel

On 2020-02-28 19:37, Marc Zyngier wrote:
> On 2020-02-20 03:11, Zenghui Yu wrote:

>> Do we really need to grab the vpe_lock for those which belong to
>> the same irqchip as its_vpe_set_affinity()? The IRQ core code should
>> already ensure mutual exclusion among them, no?
> 
> I've been trying to think about that, but jet-lag keeps getting in the way.
> I empirically think that you are right, but I need to go and check the
> various code paths to be sure. Hopefully I'll have a bit more brain space
> next week.

So I slept on it and came back to my senses. The only case we actually need
to deal with is when an affinity change impacts *another* interrupt.

There are only two instances of this issue:

- vLPIs have their *physical* affinity impacted by the affinity of the
   vPE. Their virtual affinity is of course unchanged, but the physical
   one becomes important with direct invalidation. Taking a per-vPE lock
   in such a context should address the issue.

- vSGIs have the exact same issue, plus the need for an *extra* lock when
   reading the pending state, which requires an RMW on two different
   registers. This calls for an extra per-RD lock.

My original patch was stupidly complex, and the irq_desc lock is
perfectly enough to deal with anything that only affects the interrupt
state itself.

GICv4 + direct invalidation for vLPIs breaks this by bypassing the
serialization initially provided by the ITS, as the RD is completely
out of band. The per-vPE lock brings back this serialization.
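
To illustrate (a simplified sketch of the idea, not the actual code in
the branch; the helper name is made up):

static void example_direct_vlpi_inv(struct irq_data *d)
{
        struct its_vlpi_map *map = get_vlpi_map(d);
        void __iomem *rdbase;
        unsigned long flags;

        /* Freeze the vPE's physical affinity for the duration of the INV. */
        raw_spin_lock_irqsave(&map->vpe->vpe_lock, flags);

        rdbase = per_cpu_ptr(gic_rdists->rdist, map->vpe->col_idx)->rd_base;
        /* On GICv4.1 the written value also encodes the vPE ID; elided here. */
        gic_write_lpir(map->vintid, rdbase + GICR_INVLPIR);
        wait_for_syncr(rdbase);

        raw_spin_unlock_irqrestore(&map->vpe->vpe_lock, flags);
}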

I've updated the branch, which seems to run OK on D05. I still need
to run the usual tests on the FVP model though.

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v4 16/20] KVM: arm64: GICv4.1: Allow SGIs to switch between HW and SW interrupts
  2020-02-28 19:16     ` Marc Zyngier
@ 2020-03-02  2:40       ` Zenghui Yu
  0 siblings, 0 replies; 47+ messages in thread
From: Zenghui Yu @ 2020-03-02  2:40 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Lorenzo Pieralisi, Jason Cooper, kvm, Suzuki K Poulose,
	linux-kernel, Eric Auger, Robert Richter, James Morse,
	Julien Thierry, Thomas Gleixner, kvmarm, linux-arm-kernel

Hi Marc,

On 2020/2/29 3:16, Marc Zyngier wrote:
> Hi Zenghui,
> 
> On 2020-02-20 03:55, Zenghui Yu wrote:
>> Hi Marc,
>>
>> On 2020/2/14 22:57, Marc Zyngier wrote:
>>> In order to let a guest buy into the new, active-less SGIs, we
>>> need to be able to switch between the two modes.
>>>
>>> Handle this by stopping all guest activity, transferring the state
>>> from one mode to the other, and resuming the guest.
>>>
>>> Signed-off-by: Marc Zyngier <maz@kernel.org>
>>
>> [...]
>>
>>> diff --git a/virt/kvm/arm/vgic/vgic-v3.c b/virt/kvm/arm/vgic/vgic-v3.c
>>> index 1bc09b523486..2c9fc13e2c59 100644
>>> --- a/virt/kvm/arm/vgic/vgic-v3.c
>>> +++ b/virt/kvm/arm/vgic/vgic-v3.c
>>> @@ -540,6 +540,8 @@ int vgic_v3_map_resources(struct kvm *kvm)
>>>           goto out;
>>>       }
>>>   +    if (kvm_vgic_global_state.has_gicv4_1)
>>> +        vgic_v4_configure_vsgis(kvm);
>>>       dist->ready = true;
>>>     out:
>>
>> Is there any reason to invoke vgic_v4_configure_vsgis() here?
>> This is called on the first VCPU run, through kvm_vgic_map_resources().
>> Shouldn't the vSGI configuration only driven by a GICD_CTLR.nASSGIreq
>> writing (from guest, or from userspace maybe)?
> 
> What I'm trying to catch here is the guest that has been restored with
> nASSGIreq set. At the moment, we don't do anything on the userspace
> side, because the vmm could decide to write that particular bit
> multiple times, and switching between the two modes is expensive (not
> to mention that all the vcpus may not have been created yet).
> 
> Moving it to the first run makes all these pitfalls go away (we have the
> final nASSGIreq value, and all the vcpus are accounted for).

So what will happen on restoration is (roughly):

  - for GICR_ISPENDR0: We will restore the pending status of vSGIs into
    the software pending_latch, just as we do for normal SGIs.
  - for GICD_CTLR.nASSGIreq: We will only record the written value.
    (Note to myself: No invocation of configure_vsgis() in uaccess_write
     callback, I previously mixed it up with the guest write callback.)
  - Finally, you choose the first vcpu run as the appropriate point to
    potentially flush the pending status to HW according to the final
    nASSGIreq value.

> 
> Does this make sense to you?

Yeah, it sounds like a good idea! And please ignore what I replied to
patch #15; I obviously missed your intention at the time, sorry...

But can we move this hunk somewhere more appropriate, for example put it
together with the GICD_CTLR uaccess_write change? It might make things a
bit clearer for other reviewers. :-)


Thanks,
Zenghui



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v4 08/20] irqchip/gic-v4.1: Plumb get/set_irqchip_state SGI callbacks
  2020-03-01 19:00             ` Marc Zyngier
@ 2020-03-02  8:18               ` Zenghui Yu
  2020-03-02 12:09                 ` Marc Zyngier
  0 siblings, 1 reply; 47+ messages in thread
From: Zenghui Yu @ 2020-03-02  8:18 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Lorenzo Pieralisi, Jason Cooper, kvm, Suzuki K Poulose,
	linux-kernel, Eric Auger, Robert Richter, James Morse,
	Julien Thierry, Thomas Gleixner, kvmarm, linux-arm-kernel

On 2020/3/2 3:00, Marc Zyngier wrote:
> On 2020-02-28 19:37, Marc Zyngier wrote:
>> On 2020-02-20 03:11, Zenghui Yu wrote:
> 
>>> Do we really need to grab the vpe_lock for those which belong to
>>> the same irqchip as its_vpe_set_affinity()? The IRQ core code should
>>> already ensure mutual exclusion among them, no?
>>
>> I've been trying to think about that, but jet-lag keeps getting in the way.
>> I empirically think that you are right, but I need to go and check the
>> various code paths to be sure. Hopefully I'll have a bit more brain space
>> next week.
> 
> So I slept on it and came back to my senses. The only case we actually need
> to deal with is when an affinity change impacts *another* interrupt.
> 
> There are only two instances of this issue:
> 
> - vLPIs have their *physical* affinity impacted by the affinity of the
>    vPE. Their virtual affinity is of course unchanged, but the physical
>    one becomes important with direct invalidation. Taking a per-vPE lock
>    in such a context should address the issue.
> 
> - vSGIs have the exact same issue, plus the need for an *extra* lock when
>    reading the pending state, which requires an RMW on two different
>    registers. This calls for an extra per-RD lock.

Agreed with both!

> 
> My original patch was stupidly complex, and the irq_desc lock is
> perfectly enough to deal with anything that only affects the interrupt
> state itself.
> 
> GICv4 + direct invalidation for vLPIs breaks this by bypassing the
> serialization initially provided by the ITS, as the RD is completely
> out of band. The per-vPE lock brings back this serialization.
> 
> I've updated the branch, which seems to run OK on D05. I still need
> to run the usual tests on the FVP model though.

I have pulled the latest branch and it looks good to me, except for
one remaining concern:

GICR_INV{LPI, ALL}R + GICR_SYNCR can also be accessed concurrently
by multiple direct invalidations; should we also use the per-RD lock
to ensure mutual exclusion? It doesn't look too harmful though, as this
would only increase one's polling time on the Busy bit (in my view).

But I point it out again for confirmation.


Thanks,
Zenghui



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v4 08/20] irqchip/gic-v4.1: Plumb get/set_irqchip_state SGI callbacks
  2020-03-02  8:18               ` Zenghui Yu
@ 2020-03-02 12:09                 ` Marc Zyngier
  0 siblings, 0 replies; 47+ messages in thread
From: Marc Zyngier @ 2020-03-02 12:09 UTC (permalink / raw)
  To: Zenghui Yu
  Cc: Lorenzo Pieralisi, Jason Cooper, kvm, Suzuki K Poulose,
	linux-kernel, Eric Auger, Robert Richter, James Morse,
	Julien Thierry, Thomas Gleixner, kvmarm, linux-arm-kernel

Hi Zenghui,

On 2020-03-02 08:18, Zenghui Yu wrote:
> On 2020/3/2 3:00, Marc Zyngier wrote:
>> On 2020-02-28 19:37, Marc Zyngier wrote:
>>> On 2020-02-20 03:11, Zenghui Yu wrote:
>> 
>>>> Do we really need to grab the vpe_lock for those which belong to
>>>> the same irqchip as its_vpe_set_affinity()? The IRQ core code should
>>>> already ensure mutual exclusion among them, no?
>>> 
>>> I've been trying to think about that, but jet-lag keeps getting in the way.
>>> I empirically think that you are right, but I need to go and check the
>>> various code paths to be sure. Hopefully I'll have a bit more brain space
>>> next week.
>> 
>> So I slept on it and came back to my senses. The only case we actually need
>> to deal with is when an affinity change impacts *another* interrupt.
>> 
>> There are only two instances of this issue:
>> 
>> - vLPIs have their *physical* affinity impacted by the affinity of the
>>    vPE. Their virtual affinity is of course unchanged, but the physical
>>    one becomes important with direct invalidation. Taking a per-vPE lock
>>    in such a context should address the issue.
>> 
>> - vSGIs have the exact same issue, plus the need for an *extra* lock when
>>    reading the pending state, which requires an RMW on two different
>>    registers. This calls for an extra per-RD lock.
> 
> Agreed with both!
> 
>> 
>> My original patch was stupidly complex, and the irq_desc lock is
>> perfectly enough to deal with anything that only affects the interrupt
>> state itself.
>> 
>> GICv4 + direct invalidation for vLPIs breaks this by bypassing the
>> serialization initially provided by the ITS, as the RD is completely
>> out of band. The per-vPE lock brings back this serialization.
>> 
>> I've updated the branch, which seems to run OK on D05. I still need
>> to run the usual tests on the FVP model though.
> 
> I have pulled the latest branch and it looks good to me, except for
> one remaining concern:
> 
> GICR_INV{LPI, ALL}R + GICR_SYNCR can also be accessed concurrently
> by multiple direct invalidations; should we also use the per-RD lock
> to ensure mutual exclusion? It doesn't look too harmful though, as this
> would only increase one's polling time on the Busy bit (in my view).
> 
> But I point it out again for confirmation.

I was about to say that it doesn't really matter because it is only a
performance optimisation (and we're not quite there yet), until I spotted
this great nugget in the spec:

<quote>
Writing GICR_INVLPIR or GICR_INVALLR when GICR_SYNCR.Busy==1 is CONSTRAINED
UNPREDICTABLE:
- The write is IGNORED.
- The invalidate specified by the write is performed.
</quote>

So we really need some form of mutual exclusion on a per-RD basis to ensure
that no two invalidations occur at the same time, ensuring that Busy clears
between the two.
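
Which means something like this on each RD (a sketch only; the per-RD
lock itself is the assumption here, not code from the tree):

static void example_rd_invalidate(void __iomem *rd_base,
                                  raw_spinlock_t *rd_lock, u64 val)
{
        unsigned long flags;

        raw_spin_lock_irqsave(rd_lock, flags);

        gic_write_lpir(val, rd_base + GICR_INVLPIR);

        /* Let GICR_SYNCR.Busy clear before anyone else can invalidate. */
        while (readl_relaxed(rd_base + GICR_SYNCR) & 1)
                cpu_relax();

        raw_spin_unlock_irqrestore(rd_lock, flags);
}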

Thanks for the heads up,

         M.
-- 
Jazz is not dead. It just smells funny...


^ permalink raw reply	[flat|nested] 47+ messages in thread

end of thread, other threads:[~2020-03-02 12:09 UTC | newest]

Thread overview: 47+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-02-14 14:57 [PATCH v4 00/20] irqchip/gic-v4: GICv4.1 architecture support Marc Zyngier
2020-02-14 14:57 ` [PATCH v4 01/20] irqchip/gic-v4.1: Skip absent CPUs while iterating over redistributors Marc Zyngier
2020-02-17  9:11   ` Zenghui Yu
2020-02-14 14:57 ` [PATCH v4 02/20] irqchip/gic-v3: Use SGIs without active state if offered Marc Zyngier
2020-02-17  9:18   ` Zenghui Yu
2020-02-14 14:57 ` [PATCH v4 03/20] irqchip/gic-v4.1: Advertise support v4.1 to KVM Marc Zyngier
2020-02-17  9:09   ` Zenghui Yu
2020-02-14 14:57 ` [PATCH v4 04/20] irqchip/gic-v4.1: Map the ITS SGIR register page Marc Zyngier
2020-02-20  3:17   ` Zenghui Yu
2020-02-14 14:57 ` [PATCH v4 05/20] irqchip/gic-v4.1: Plumb skeletal VSGI irqchip Marc Zyngier
2020-02-20  3:21   ` Zenghui Yu
2020-02-14 14:57 ` [PATCH v4 06/20] irqchip/gic-v4.1: Add initial SGI configuration Marc Zyngier
2020-02-18  7:25   ` Zenghui Yu
2020-02-18  9:46     ` Marc Zyngier
2020-02-20  3:25       ` Zenghui Yu
2020-02-14 14:57 ` [PATCH v4 07/20] irqchip/gic-v4.1: Plumb mask/unmask SGI callbacks Marc Zyngier
2020-02-20  3:32   ` Zenghui Yu
2020-02-14 14:57 ` [PATCH v4 08/20] irqchip/gic-v4.1: Plumb get/set_irqchip_state " Marc Zyngier
2020-02-18  7:00   ` Zenghui Yu
2020-02-18  9:27     ` Marc Zyngier
2020-02-18 15:31       ` Marc Zyngier
2020-02-19 11:50         ` Zenghui Yu
2020-02-19 15:18           ` Zenghui Yu
2020-02-20  3:11         ` Zenghui Yu
2020-02-28 19:37           ` Marc Zyngier
2020-03-01 19:00             ` Marc Zyngier
2020-03-02  8:18               ` Zenghui Yu
2020-03-02 12:09                 ` Marc Zyngier
2020-02-14 14:57 ` [PATCH v4 09/20] irqchip/gic-v4.1: Plumb set_vcpu_affinity " Marc Zyngier
2020-02-20  3:37   ` Zenghui Yu
2020-02-28 19:00     ` Marc Zyngier
2020-02-14 14:57 ` [PATCH v4 10/20] irqchip/gic-v4.1: Move doorbell management to the GICv4 abstraction layer Marc Zyngier
2020-02-14 14:57 ` [PATCH v4 11/20] irqchip/gic-v4.1: Add VSGI allocation/teardown Marc Zyngier
2020-02-14 14:57 ` [PATCH v4 12/20] irqchip/gic-v4.1: Add VSGI property setup Marc Zyngier
2020-02-14 14:57 ` [PATCH v4 13/20] irqchip/gic-v4.1: Eagerly vmap vPEs Marc Zyngier
2020-02-14 14:57 ` [PATCH v4 14/20] KVM: arm64: GICv4.1: Let doorbells be auto-enabled Marc Zyngier
2020-02-14 14:57 ` [PATCH v4 15/20] KVM: arm64: GICv4.1: Add direct injection capability to SGI registers Marc Zyngier
2020-02-18  8:46   ` Zenghui Yu
2020-02-18  9:41     ` Marc Zyngier
2020-02-14 14:57 ` [PATCH v4 16/20] KVM: arm64: GICv4.1: Allow SGIs to switch between HW and SW interrupts Marc Zyngier
2020-02-20  3:55   ` Zenghui Yu
2020-02-28 19:16     ` Marc Zyngier
2020-03-02  2:40       ` Zenghui Yu
2020-02-14 14:57 ` [PATCH v4 17/20] KVM: arm64: GICv4.1: Plumb SGI implementation selection in the distributor Marc Zyngier
2020-02-14 14:57 ` [PATCH v4 18/20] KVM: arm64: GICv4.1: Reload VLPI configuration on distributor enable/disable Marc Zyngier
2020-02-14 14:57 ` [PATCH v4 19/20] KVM: arm64: GICv4.1: Allow non-trapping WFI when using HW SGIs Marc Zyngier
2020-02-14 14:57 ` [PATCH v4 20/20] KVM: arm64: GICv4.1: Expose HW-based SGIs in debugfs Marc Zyngier
