From: Anup Patel <apatel@ventanamicro.com>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: "Palmer Dabbelt" <palmer@dabbelt.com>,
	"Paul Walmsley" <paul.walmsley@sifive.com>,
	"Rob Herring" <robh+dt@kernel.org>,
	"Krzysztof Kozlowski" <krzysztof.kozlowski+dt@linaro.org>,
	"Frank Rowand" <frowand.list@gmail.com>,
	"Conor Dooley" <conor+dt@kernel.org>,
	"Marc Zyngier" <maz@kernel.org>, "Björn Töpel" <bjorn@kernel.org>,
	"Atish Patra" <atishp@atishpatra.org>,
	"Andrew Jones" <ajones@ventanamicro.com>,
	"Sunil V L" <sunilvl@ventanamicro.com>,
	"Saravana Kannan" <saravanak@google.com>,
	"Anup Patel" <anup@brainfault.org>,
	linux-riscv@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, devicetree@vger.kernel.org
Subject: Re: [PATCH v12 18/25] irqchip: Add RISC-V incoming MSI controller early driver
Date: Sun, 18 Feb 2024 18:46:14 +0530	[thread overview]
Message-ID: <CAK9=C2XvYB8qrTvxKvtXspdfiKUro89t-oLRVzeMqczDbG7nrQ@mail.gmail.com> (raw)
In-Reply-To: <87h6i8ckwg.ffs@tglx>

On Sat, Feb 17, 2024 at 12:10 AM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> On Sat, Jan 27 2024 at 21:47, Anup Patel wrote:
> > +
> > +#ifdef CONFIG_SMP
> > +static irqreturn_t imsic_local_sync_handler(int irq, void *data)
> > +{
> > +     imsic_local_sync();
> > +     return IRQ_HANDLED;
> > +}
> > +
> > +static void imsic_ipi_send(unsigned int cpu)
> > +{
> > +     struct imsic_local_config *local =
> > +                             per_cpu_ptr(imsic->global.local, cpu);
>
> Let it stick out. We switched to line length 100 quite some time
> ago. Applies to the rest of the series too.

Okay, I will update.

>
> > +     writel_relaxed(IMSIC_IPI_ID, local->msi_va);
> > +}
> > +
> > +static void imsic_ipi_starting_cpu(void)
> > +{
> > +     /* Enable IPIs for current CPU. */
> > +     __imsic_id_set_enable(IMSIC_IPI_ID);
> > +
> > +     /* Enable virtual IPI used for IMSIC ID synchronization */
> > +     enable_percpu_irq(imsic->ipi_virq, 0);
> > +}
> > +
> > +static void imsic_ipi_dying_cpu(void)
> > +{
> > +     /*
> > +      * Disable virtual IPI used for IMSIC ID synchronization so
> > +      * that we don't receive ID synchronization requests.
> > +      */
> > +     disable_percpu_irq(imsic->ipi_virq);
>
> Shouldn't this disable the hardware too, i.e.
>
>           __imsic_id_clear_enable()
>
> ?

Yes, it should, but somehow I missed it and never saw any issue.

I will update.
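
Something like this untested sketch is what I have in mind:

static void imsic_ipi_dying_cpu(void)
{
	/*
	 * Disable virtual IPI used for IMSIC ID synchronization so
	 * that we don't receive ID synchronization requests.
	 */
	disable_percpu_irq(imsic->ipi_virq);

	/* Disable IPIs for the current CPU in the IMSIC hardware. */
	__imsic_id_clear_enable(IMSIC_IPI_ID);
}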

>
> > +}
> > +
> > +static int __init imsic_ipi_domain_init(void)
> > +{
> > +     int virq;
> > +
> > +     /* Create IMSIC IPI multiplexing */
> > +     virq = ipi_mux_create(IMSIC_NR_IPI, imsic_ipi_send);
> > +     if (virq <= 0)
> > +             return (virq < 0) ? virq : -ENOMEM;
> > +     imsic->ipi_virq = virq;
> > +
> > +     /* First vIRQ is used for IMSIC ID synchronization */
> > +     virq = request_percpu_irq(imsic->ipi_virq, imsic_local_sync_handler,
> > +                               "riscv-imsic-lsync", imsic->global.local);
> > +     if (virq)
> > +             return virq;
>
> Please use a separate 'ret' variable. I had to read this 3 times to make
> sense of it.

Okay, I will update.
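
Roughly like this (untested), keeping "virq" only for the
ipi_mux_create() result:

static int __init imsic_ipi_domain_init(void)
{
	int virq, ret;

	/* Create IMSIC IPI multiplexing */
	virq = ipi_mux_create(IMSIC_NR_IPI, imsic_ipi_send);
	if (virq <= 0)
		return (virq < 0) ? virq : -ENOMEM;
	imsic->ipi_virq = virq;

	/* First vIRQ is used for IMSIC ID synchronization */
	ret = request_percpu_irq(imsic->ipi_virq, imsic_local_sync_handler,
				 "riscv-imsic-lsync", imsic->global.local);
	if (ret)
		return ret;

	irq_set_status_flags(imsic->ipi_virq, IRQ_HIDDEN);
	imsic->ipi_lsync_desc = irq_to_desc(imsic->ipi_virq);
	...
}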

>
> > +     irq_set_status_flags(imsic->ipi_virq, IRQ_HIDDEN);
> > +     imsic->ipi_lsync_desc = irq_to_desc(imsic->ipi_virq);
>
> What's so special about this particular IPI that it can't be handled
> like all the other IPIs?

We use this special under-the-hood IPI to synchronize IRQ
enable/disable state and IRQ movement across CPUs.

x86 takes a lazier approach using a per-CPU timer, so in
the next revision I will move to a similar approach. This means
both "ipi_virq" and "ipi_lsync_desc" will go away.

>
> > +static int __init imsic_early_probe(struct fwnode_handle *fwnode)
> > +{
> > +     int rc;
> > +     struct irq_domain *domain;
>
> https://www.kernel.org/doc/html/latest/process/maintainer-tip.html#variable-declarations

Okay, I will update.

>
> > +
> > +     /* Find parent domain and register chained handler */
> > +     domain = irq_find_matching_fwnode(riscv_get_intc_hwnode(),
> > +                                       DOMAIN_BUS_ANY);
> > +     if (!domain) {
> > +             pr_err("%pfwP: Failed to find INTC domain\n", fwnode);
> > +             return -ENOENT;
> > +     }
> > +     imsic_parent_irq = irq_create_mapping(domain, RV_IRQ_EXT);
> > +     if (!imsic_parent_irq) {
> > +             pr_err("%pfwP: Failed to create INTC mapping\n", fwnode);
> > +             return -ENOENT;
> > +     }
> > +     irq_set_chained_handler(imsic_parent_irq, imsic_handle_irq);
> > +
> > +     /* Initialize IPI domain */
> > +     rc = imsic_ipi_domain_init();
> > +     if (rc) {
> > +             pr_err("%pfwP: Failed to initialize IPI domain\n", fwnode);
> > +             return rc;
>
> Leaves the chained handler around and enabled.

Okay, I will set the chained handler after imsic_ipi_domain_init().
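
i.e. something along these lines (untested):

	/* Initialize IPI domain */
	rc = imsic_ipi_domain_init();
	if (rc) {
		pr_err("%pfwP: Failed to initialize IPI domain\n", fwnode);
		return rc;
	}

	/*
	 * Register the chained handler only after all steps which
	 * can fail have succeeded.
	 */
	irq_set_chained_handler(imsic_parent_irq, imsic_handle_irq);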

>
> > diff --git a/drivers/irqchip/irq-riscv-imsic-state.c b/drivers/irqchip/irq-riscv-imsic-state.c
> > +
> > +#define imsic_csr_write(__c, __v)            \
> > +do {                                         \
> > +     csr_write(CSR_ISELECT, __c);            \
> > +     csr_write(CSR_IREG, __v);               \
> > +} while (0)
>
> Any reason why these macros can't be inlines?

No particular reason. I am fine with both macros and inline functions.

I will update in the next revision.
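
The inline variant would look something like this (untested), and
similarly for imsic_csr_set()/imsic_csr_clear():

static inline void imsic_csr_write(unsigned long isel, unsigned long val)
{
	csr_write(CSR_ISELECT, isel);
	csr_write(CSR_IREG, val);
}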

>
> > +const struct imsic_global_config *imsic_get_global_config(void)
> > +{
> > +     return imsic ? &imsic->global : NULL;
> > +}
> > +EXPORT_SYMBOL_GPL(imsic_get_global_config);
>
> Why is this exported?

This is for the KVM RISC-V module. We have follow-up
KVM RISC-V patches which need to know the IMSIC global
configuration so that KVM can assign IMSIC guest files to a
Guest/VM.

>
> > +#define __imsic_id_read_clear_enabled(__id)          \
> > +     __imsic_eix_read_clear((__id), false)
> > +#define __imsic_id_read_clear_pending(__id)          \
> > +     __imsic_eix_read_clear((__id), true)
>
> Please use inlines.

Okay, I will update.
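
Assuming __imsic_eix_read_clear() returns unsigned long, something
like this (untested):

static inline unsigned long __imsic_id_read_clear_enabled(unsigned long id)
{
	return __imsic_eix_read_clear(id, false);
}

static inline unsigned long __imsic_id_read_clear_pending(unsigned long id)
{
	return __imsic_eix_read_clear(id, true);
}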

>
> > +void __imsic_eix_update(unsigned long base_id,
> > +                     unsigned long num_id, bool pend, bool val)
> > +{
> > +     unsigned long i, isel, ireg;
> > +     unsigned long id = base_id, last_id = base_id + num_id;
> > +
> > +     while (id < last_id) {
> > +             isel = id / BITS_PER_LONG;
> > +             isel *= BITS_PER_LONG / IMSIC_EIPx_BITS;
> > +             isel += (pend) ? IMSIC_EIP0 : IMSIC_EIE0;
> > +
> > +             ireg = 0;
> > +             for (i = id & (__riscv_xlen - 1);
> > +                  (id < last_id) && (i < __riscv_xlen); i++) {
> > +                     ireg |= BIT(i);
> > +                     id++;
> > +             }
>
> This lacks a comment what this is doing.

Okay, I will add a comment block.
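
Something along these lines, for example:

		/*
		 * Build a mask (ireg) of all IDs in the range
		 * [id, last_id) which land in the same XLEN-bit
		 * EIPx/EIEx register selected by isel above, and
		 * advance id past them.
		 */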

>
> > +
> > +             /*
> > +              * The IMSIC EIEx and EIPx registers are indirectly
> > +              * accessed via using ISELECT and IREG CSRs so we
> > +              * need to access these CSRs without getting preempted.
> > +              *
> > +              * All existing users of this function call this
> > +              * function with local IRQs disabled so we don't
> > +              * need to do anything special here.
> > +              */
> > +             if (val)
> > +                     imsic_csr_set(isel, ireg);
> > +             else
> > +                     imsic_csr_clear(isel, ireg);
> > +     }
> > +}
> > +
> > +void imsic_local_sync(void)
> > +{
> > +     struct imsic_local_priv *lpriv = this_cpu_ptr(imsic->lpriv);
> > +     struct imsic_local_config *mlocal;
> > +     struct imsic_vector *mvec;
> > +     unsigned long flags;
> > +     int i;
> > +
> > +     raw_spin_lock_irqsave(&lpriv->ids_lock, flags);
> > +     for (i = 1; i <= imsic->global.nr_ids; i++) {
> > +             if (i == IMSIC_IPI_ID)
> > +                     continue;
> > +
> > +             if (test_bit(i, lpriv->ids_enabled_bitmap))
> > +                     __imsic_id_set_enable(i);
> > +             else
> > +                     __imsic_id_clear_enable(i);
> > +
> > +             mvec = lpriv->ids_move[i];
> > +             lpriv->ids_move[i] = NULL;
> > +             if (mvec) {
> > +                     if (__imsic_id_read_clear_pending(i)) {
> > +                             mlocal = per_cpu_ptr(imsic->global.local,
> > +                                                  mvec->cpu);
> > +                             writel_relaxed(mvec->local_id, mlocal->msi_va);
> > +                     }
> > +
> > +                     imsic_vector_free(&lpriv->vectors[i]);
> > +             }
>
> Again an uncommented piece of magic which you will have forgotten what
> it does 3 months down the road :)

Sure, I will add a comment block.
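
Something like:

		/*
		 * If this ID is being moved to a different CPU then
		 * re-trigger the MSI on the new CPU when the ID was
		 * still pending here, so that the interrupt is not
		 * lost, and free the stale vector on this CPU.
		 */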

>
> > +
> > +     }
> > +     raw_spin_unlock_irqrestore(&lpriv->ids_lock, flags);
> > +}
> > +
> > +void imsic_local_delivery(bool enable)
> > +{
> > +     if (enable) {
> > +             imsic_csr_write(IMSIC_EITHRESHOLD, IMSIC_ENABLE_EITHRESHOLD);
> > +             imsic_csr_write(IMSIC_EIDELIVERY, IMSIC_ENABLE_EIDELIVERY);
> > +             return;
> > +     }
> > +
> > +     imsic_csr_write(IMSIC_EIDELIVERY, IMSIC_DISABLE_EIDELIVERY);
> > +     imsic_csr_write(IMSIC_EITHRESHOLD, IMSIC_DISABLE_EITHRESHOLD);
> > +}
> > +
> > +#ifdef CONFIG_SMP
> > +static void imsic_remote_sync(unsigned int cpu)
> > +{
> > +     /*
> > +      * We simply inject ID synchronization IPI to a target CPU
> > +      * if it is not same as the current CPU. The ipi_send_mask()
> > +      * implementation of IPI mux will inject ID synchronization
> > +      * IPI only for CPUs that have enabled it so offline CPUs
> > +      * won't receive IPI. An offline CPU will unconditionally
> > +      * synchronize IDs through imsic_starting_cpu() when the
> > +      * CPU is brought up.
> > +      */
> > +     if (cpu_online(cpu)) {
> > +             if (cpu != smp_processor_id())
> > +                     __ipi_send_mask(imsic->ipi_lsync_desc, cpumask_of(cpu));
>
> Still wondering why this can't use the regular API. There might be a
> reason, but then it wants to be documented.

As mentioned above, the "ipi_virq" and "ipi_lsync_desc" will
be replaced by a per-CPU timer in the next revision.

>
> > +             else
> > +                     imsic_local_sync();
> > +     }
> > +}
> > +#else
> > +static inline void imsic_remote_sync(unsigned int cpu)
> > +{
> > +     imsic_local_sync();
> > +}
> > +#endif
> > +
> > +void imsic_vector_mask(struct imsic_vector *vec)
> > +{
> > +     struct imsic_local_priv *lpriv;
> > +     unsigned long flags;
> > +
> > +     lpriv = per_cpu_ptr(imsic->lpriv, vec->cpu);
> > +     if (WARN_ON(&lpriv->vectors[vec->local_id] != vec))
> > +             return;
> > +
> > +     raw_spin_lock_irqsave(&lpriv->ids_lock, flags);
>
> AFAICT, this is used from an irqchip callback:
>
> static void imsic_irq_mask(struct irq_data *d)
> {
>         imsic_vector_mask(irq_data_get_irq_chip_data(d));
> }
>
> So no need to use irqsave() here. Those callbacks run always with
> interrupts disabled when called from the core.

Okay, I will update.
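
So this simply becomes (untested):

void imsic_vector_mask(struct imsic_vector *vec)
{
	struct imsic_local_priv *lpriv;

	lpriv = per_cpu_ptr(imsic->lpriv, vec->cpu);
	if (WARN_ON(&lpriv->vectors[vec->local_id] != vec))
		return;

	raw_spin_lock(&lpriv->ids_lock);
	...
	raw_spin_unlock(&lpriv->ids_lock);
}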

>
> > +void imsic_vector_move(struct imsic_vector *old_vec,
> > +                     struct imsic_vector *new_vec)
> > +{
> > +     struct imsic_local_priv *old_lpriv, *new_lpriv;
> > +     unsigned long flags, flags1;
> > +
> > +     if (WARN_ON(old_vec->cpu == new_vec->cpu))
> > +             return;
> > +
> > +     old_lpriv = per_cpu_ptr(imsic->lpriv, old_vec->cpu);
> > +     if (WARN_ON(&old_lpriv->vectors[old_vec->local_id] != old_vec))
> > +             return;
> > +
> > +     new_lpriv = per_cpu_ptr(imsic->lpriv, new_vec->cpu);
> > +     if (WARN_ON(&new_lpriv->vectors[new_vec->local_id] != new_vec))
> > +             return;
> > +
> > +     raw_spin_lock_irqsave(&old_lpriv->ids_lock, flags);
> > +     raw_spin_lock_irqsave(&new_lpriv->ids_lock, flags1);
>
> Lockdep should yell at you for this, rightfully so. And not only because
> of the missing nested() annotation.
>
> Assume there are two CPUs setting affinity for two different interrupts.
>
> CPU0 moves an interrupt to CPU1 and CPU1 moves another interrupt to
> CPU0. The resulting lock order is:
>
> CPU0                     CPU1
> lock(lpriv[CPU0]);       lock(lpriv[CPU1]);
> lock(lpriv[CPU1]);       lock(lpriv[CPU0]);
>
> a classic ABBA deadlock.
>
> You need to take those locks always in the same order. Look at
> double_raw_lock() in kernel/sched/sched.h.

I have simplified the locking to avoid these nested locks, so the
next revision will be much simpler without any lock nesting.
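
For the record, if the nested locking had to stay, the ordered
acquisition you are pointing at would look roughly like this
(untested, modeled on double_raw_lock()):

	if (old_lpriv < new_lpriv) {
		raw_spin_lock(&old_lpriv->ids_lock);
		raw_spin_lock_nested(&new_lpriv->ids_lock, SINGLE_DEPTH_NESTING);
	} else {
		raw_spin_lock(&new_lpriv->ids_lock);
		raw_spin_lock_nested(&old_lpriv->ids_lock, SINGLE_DEPTH_NESTING);
	}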

>
> > +     /* Unmask the new vector entry */
> > +     if (test_bit(old_vec->local_id, old_lpriv->ids_enabled_bitmap))
> > +             bitmap_set(new_lpriv->ids_enabled_bitmap,
> > +                        new_vec->local_id, 1);
>
> Either make that one line or please add brackets. See:
>
>   https://www.kernel.org/doc/html/latest/process/maintainer-tip.html#bracket-rules

Okay, I will update.

>
> > +static int __init imsic_local_init(void)
> > +{
> > +     struct imsic_global_config *global = &imsic->global;
> > +     struct imsic_local_priv *lpriv;
> > +     struct imsic_vector *vec;
> > +     int cpu, i;
> > +
> > +     /* Allocate per-CPU private state */
> > +     imsic->lpriv = alloc_percpu(typeof(*(imsic->lpriv)));
> > +     if (!imsic->lpriv)
> > +             return -ENOMEM;
> > +
> > +     /* Setup per-CPU private state */
> > +     for_each_possible_cpu(cpu) {
> > +             lpriv = per_cpu_ptr(imsic->lpriv, cpu);
> > +
> > +             raw_spin_lock_init(&lpriv->ids_lock);
> > +
> > +             /* Allocate enabled bitmap */
> > +             lpriv->ids_enabled_bitmap = bitmap_zalloc(global->nr_ids + 1,
> > +                                                       GFP_KERNEL);
> > +             if (!lpriv->ids_enabled_bitmap) {
> > +                     imsic_local_cleanup();
> > +                     return -ENOMEM;
> > +             }
> > +
> > +             /* Allocate move array */
> > +             lpriv->ids_move = kcalloc(global->nr_ids + 1,
> > +                                     sizeof(*lpriv->ids_move), GFP_KERNEL);
> > +             if (!lpriv->ids_move) {
> > +                     imsic_local_cleanup();
> > +                     return -ENOMEM;
> > +             }
> > +
> > +             /* Allocate vector array */
> > +             lpriv->vectors = kcalloc(global->nr_ids + 1,
> > +                                      sizeof(*lpriv->vectors), GFP_KERNEL);
> > +             if (!lpriv->vectors) {
> > +                     imsic_local_cleanup();
> > +                     return -ENOMEM;
>
> Third instance of the same pattern. goto cleanup; perhaps?

Okay, I will add a goto-based cleanup path here.
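
Something like this (untested), since all three failures need the
same cleanup and error code:

	/* Setup per-CPU private state */
	for_each_possible_cpu(cpu) {
		...
		/* Allocate enabled bitmap */
		lpriv->ids_enabled_bitmap = bitmap_zalloc(global->nr_ids + 1, GFP_KERNEL);
		if (!lpriv->ids_enabled_bitmap)
			goto fail_local_cleanup;
		...
	}

	...
	return 0;

fail_local_cleanup:
	imsic_local_cleanup();
	return -ENOMEM;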

>
> > +struct imsic_vector *imsic_vector_alloc(unsigned int hwirq,
> > +                                     const struct cpumask *mask)
> > +{
> > +     struct imsic_vector *vec = NULL;
> > +     struct imsic_local_priv *lpriv;
> > +     unsigned long flags;
> > +     unsigned int cpu;
> > +     int local_id;
> > +
> > +     raw_spin_lock_irqsave(&imsic->matrix_lock, flags);
> > +     local_id = irq_matrix_alloc(imsic->matrix, mask, false, &cpu);
> > +     raw_spin_unlock_irqrestore(&imsic->matrix_lock, flags);
> > +     if (local_id < 0)
> > +             return NULL;
> > +
> > +     lpriv = per_cpu_ptr(imsic->lpriv, cpu);
> > +     vec = &lpriv->vectors[local_id];
> > +     vec->hwirq = hwirq;
> > +
> > +     return vec;
> > +}
>
> ...
>
> > +int imsic_hwirq_alloc(void)
> > +{
> > +     int ret;
> > +     unsigned long flags;
> > +
> > +     raw_spin_lock_irqsave(&imsic->hwirqs_lock, flags);
> > +     ret = bitmap_find_free_region(imsic->hwirqs_used_bitmap,
> > +                                   imsic->nr_hwirqs, 0);
> > +     raw_spin_unlock_irqrestore(&imsic->hwirqs_lock, flags);
> > +
> > +     return ret;
> > +}
>
> This part is just to create a unique hwirq number, right?

Yes, this is only to create a unique hwirq. We can directly use
the virq instead of a separate hwirq, so this hwirq
allocation/management will go away in the next revision.

>
> > +
> > +     /* Find number of guest index bits in MSI address */
> > +     rc = of_property_read_u32(to_of_node(fwnode),
> > +                               "riscv,guest-index-bits",
> > +                               &global->guest_index_bits);
> > +     if (rc)
> > +             global->guest_index_bits = 0;
>
> So here you get the index bits, but then 50 lines further down you do
> sanity checking. Wouldn't it make sense to do that right here?
>
> Same for the other bits.

This is intentional because we already have an AIA ACPI series
where this helps to reduce the number of "if (acpi_disabled)"
checks.

>
> > +
> > +/*
> > + * The IMSIC driver uses 1 IPI for ID synchronization and
> > + * arch/riscv/kernel/smp.c require 6 IPIs so we fix the
> > + * total number of IPIs to 8.
> > + */
> > +#define IMSIC_IPI_ID                         1
> > +#define IMSIC_NR_IPI                         8
> > +
> > +struct imsic_vector {
> > +     /* Fixed details of the vector */
> > +     unsigned int cpu;
> > +     unsigned int local_id;
> > +     /* Details saved by driver in the vector */
> > +     unsigned int hwirq;
> > +};
> > +
> > +struct imsic_local_priv {
> > +     /* Local state of interrupt identities */
> > +     raw_spinlock_t ids_lock;
> > +     unsigned long *ids_enabled_bitmap;
> > +     struct imsic_vector **ids_move;
> > +
> > +     /* Local vector table */
> > +     struct imsic_vector *vectors;
>
> Please make those structs tabular:
>
> https://www.kernel.org/doc/html/latest/process/maintainer-tip.html#struct-declarations-and-initializers

Okay, I will update.
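
i.e. something like:

struct imsic_local_priv {
	/* Local state of interrupt identities */
	raw_spinlock_t			ids_lock;
	unsigned long			*ids_enabled_bitmap;
	struct imsic_vector		**ids_move;

	/* Local vector table */
	struct imsic_vector		*vectors;
};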

>
> > +void __imsic_eix_update(unsigned long base_id,
> > +                     unsigned long num_id, bool pend, bool val);
> > +
> > +#define __imsic_id_set_enable(__id)          \
> > +     __imsic_eix_update((__id), 1, false, true)
> > +#define __imsic_id_clear_enable(__id)        \
> > +     __imsic_eix_update((__id), 1, false, false)
>
> inlines please.

Okay, I will update.

Regards,
Anup
