From: Thomas Gleixner <tglx@linutronix.de>
To: Anup Patel <apatel@ventanamicro.com>, Palmer Dabbelt <palmer@dabbelt.com>,
	Paul Walmsley <paul.walmsley@sifive.com>, Marc Zyngier <maz@kernel.org>,
	Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Atish Patra <atishp@atishpatra.org>, Alistair Francis <Alistair.Francis@wdc.com>,
	Anup Patel <anup@brainfault.org>, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org, Anup Patel <apatel@ventanamicro.com>
Subject: Re: [PATCH v14 3/8] genirq: Add mechanism to multiplex a single HW IPI
Date: Thu, 01 Dec 2022 18:20:25 +0100
Message-ID: <87v8mvqbvq.ffs@tglx>
In-Reply-To: <20221201130135.1115380-4-apatel@ventanamicro.com>

On Thu, Dec 01 2022 at 18:31, Anup Patel wrote:
> All RISC-V platforms have a single HW IPI provided by the INTC local
> interrupt controller. The HW method to trigger INTC IPI can be through
> external irqchip (e.g. RISC-V AIA), through platform specific device
> (e.g. SiFive CLINT timer), or through firmware (e.g. SBI IPI call).
>
> To support multiple IPIs on RISC-V, we add a generic IPI multiplexing

s/we//

> mechanism which help us create multiple virtual IPIs using a single
> HW IPI. This generic IPI multiplexing is inspired from the Apple AIC

s/from/by/

> irqchip driver and it is shared by various RISC-V irqchip drivers.

Sure, but now we have two copies of this. One in the Apple AIC and one
here. The obvious thing to do is:

  1) Provide generic infrastructure

  2) Convert AIC to use it

  3) Add RISCV users

No?

> +static void ipi_mux_mask(struct irq_data *d)
> +{
> +	struct ipi_mux_cpu *icpu = this_cpu_ptr(ipi_mux_pcpu);
> +
> +	atomic_andnot(BIT(irqd_to_hwirq(d)), &icpu->enable);
> +}
> +
> +static void ipi_mux_unmask(struct irq_data *d)
> +{
> +	u32 ibit = BIT(irqd_to_hwirq(d));
> +	struct ipi_mux_cpu *icpu = this_cpu_ptr(ipi_mux_pcpu);

The AIC code got the variable ordering correct ...

https://www.kernel.org/doc/html/latest/process/maintainer-tip.html#variable-declarations

> +	atomic_or(ibit, &icpu->enable);
> +
> +	/*
> +	 * The atomic_or() above must complete before the atomic_read()
> +	 * below to avoid racing ipi_mux_send_mask().
> +	 */
> +	smp_mb__after_atomic();
> +
> +	/* If a pending IPI was unmasked, raise a parent IPI immediately. */
> +	if (atomic_read(&icpu->bits) & ibit)
> +		ipi_mux_send(smp_processor_id());
> +}
> +
> +static void ipi_mux_send_mask(struct irq_data *d, const struct cpumask *mask)
> +{
> +	u32 ibit = BIT(irqd_to_hwirq(d));
> +	struct ipi_mux_cpu *icpu = this_cpu_ptr(ipi_mux_pcpu);
> +	unsigned long pending;
> +	int cpu;
> +
> +	for_each_cpu(cpu, mask) {
> +		icpu = per_cpu_ptr(ipi_mux_pcpu, cpu);
> +		pending = atomic_fetch_or_release(ibit, &icpu->bits);
> +
> +		/*
> +		 * The atomic_fetch_or_release() above must complete
> +		 * before the atomic_read() below to avoid racing with
> +		 * ipi_mux_unmask().
> +		 */
> +		smp_mb__after_atomic();
> +
> +		/*
> +		 * The flag writes must complete before the physical IPI is
> +		 * issued to another CPU. This is implied by the control
> +		 * dependency on the result of atomic_read() below, which is
> +		 * itself already ordered after the vIPI flag write.
> +		 */
> +		if (!(pending & ibit) && (atomic_read(&icpu->enable) & ibit))
> +			ipi_mux_send(cpu);
> +	}
> +}
> +
> +static const struct irq_chip ipi_mux_chip = {
> +	.name		= "IPI Mux",
> +	.irq_mask	= ipi_mux_mask,
> +	.irq_unmask	= ipi_mux_unmask,
> +	.ipi_send_mask	= ipi_mux_send_mask,
> +};
> +
> +static int ipi_mux_domain_alloc(struct irq_domain *d, unsigned int virq,
> +				unsigned int nr_irqs, void *arg)
> +{
> +	int i;
> +
> +	for (i = 0; i < nr_irqs; i++) {
> +		irq_set_percpu_devid(virq + i);
> +		irq_domain_set_info(d, virq + i, i, &ipi_mux_chip, NULL,
> +				    handle_percpu_devid_irq, NULL, NULL);
> +	}
> +
> +	return 0;
> +}
> +
> +static const struct irq_domain_ops ipi_mux_domain_ops = {
> +	.alloc		= ipi_mux_domain_alloc,
> +	.free		= irq_domain_free_irqs_top,
> +};
> +
> +/**
> + * ipi_mux_process - Process multiplexed virtual IPIs
> + */
> +void ipi_mux_process(void)
> +{
> +	struct ipi_mux_cpu *icpu = this_cpu_ptr(ipi_mux_pcpu);
> +	irq_hw_number_t hwirq;
> +	unsigned long ipis;
> +	unsigned int en;
> +
> +	/*
> +	 * Reading enable mask does not need to be ordered as long as
> +	 * this function called from interrupt handler because only
> +	 * the CPU itself can change it's own enable mask.
> +	 */
> +	en = atomic_read(&icpu->enable);
> +
> +	/*
> +	 * Clear the IPIs we are about to handle. This pairs with the
> +	 * atomic_fetch_or_release() in ipi_mux_send_mask().

The comments in the AIC code where you copied from are definitely
better...

Thanks,

        tglx