From: Zong Li <zong.li@sifive.com>
To: Anup Patel <anup@brainfault.org>
Cc: Palmer Dabbelt <palmer@dabbelt.com>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	"linux-kernel@vger.kernel.org List"
	<linux-kernel@vger.kernel.org>,
	linux-riscv <linux-riscv@lists.infradead.org>,
	David Abdurachmanov <david.abdurachmanov@sifive.com>
Subject: Re: [PATCH] irqchip/sifive-plic: allow many cores to handle IRQs
Date: Sun, 26 Apr 2020 21:19:13 +0800
Message-ID: <CANXhq0qW9ORoZ5qc5g8ikO9QdeYX=p0fwoP8pyFFkk02a7imnw@mail.gmail.com>
In-Reply-To: <CAAhSdy3FCdzLV-nH03T=PBxB2tdZXhRrugcC2NcoA=22qpv+Lw@mail.gmail.com>

On Sun, Apr 26, 2020 at 8:47 PM Anup Patel <anup@brainfault.org> wrote:
>
> On Sun, Apr 26, 2020 at 4:37 PM Zong Li <zong.li@sifive.com> wrote:
> >
> > Currently, the driver forces all IRQs to be handled by a single core.
> > This patch provides a way to let other cores handle IRQs as well, so
> > users can decide how many cores should serve IRQs by default via a
> > boot argument.
> >
> > Use the 'irqaffinity' boot argument to determine the affinity. If no
> > irqaffinity is given in the dts or the kernel configuration, the
> > default IRQ affinity is used, so all harts would try to claim the IRQ.
> >
> > For example, add irqaffinity=0 to the chosen node to set the IRQ
> > affinity to hart 0. More than one hart can handle IRQs as well, e.g.
> > irqaffinity=0,3,4.
> >
> > You can also change the IRQ affinity from user space through procfs.
> > The smp_affinity file takes a hexadecimal CPU bitmask, so you can make
> > CPU0 and CPU2 serve the IRQ together with the following command:
> >
> > echo 5 > /proc/irq/<x>/smp_affinity
> >
> > Signed-off-by: Zong Li <zong.li@sifive.com>
> > ---
> >  drivers/irqchip/irq-sifive-plic.c | 21 +++++++--------------
> >  1 file changed, 7 insertions(+), 14 deletions(-)
> >
> > diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
> > index d0a71febdadc..bc1440d54185 100644
> > --- a/drivers/irqchip/irq-sifive-plic.c
> > +++ b/drivers/irqchip/irq-sifive-plic.c
> > @@ -111,15 +111,12 @@ static inline void plic_irq_toggle(const struct cpumask *mask,
> >  static void plic_irq_unmask(struct irq_data *d)
> >  {
> >         struct cpumask amask;
> > -       unsigned int cpu;
> >         struct plic_priv *priv = irq_get_chip_data(d->irq);
> >
> >         cpumask_and(&amask, &priv->lmask, cpu_online_mask);
> > -       cpu = cpumask_any_and(irq_data_get_affinity_mask(d),
> > -                                          &amask);
> > -       if (WARN_ON_ONCE(cpu >= nr_cpu_ids))
> > -               return;
> > -       plic_irq_toggle(cpumask_of(cpu), d, 1);
> > +       cpumask_and(&amask, &amask, irq_data_get_affinity_mask(d));
> > +
> > +       plic_irq_toggle(&amask, d, 1);
> >  }
> >
> >  static void plic_irq_mask(struct irq_data *d)
> > @@ -133,24 +130,20 @@ static void plic_irq_mask(struct irq_data *d)
> >  static int plic_set_affinity(struct irq_data *d,
> >                              const struct cpumask *mask_val, bool force)
> >  {
> > -       unsigned int cpu;
> >         struct cpumask amask;
> >         struct plic_priv *priv = irq_get_chip_data(d->irq);
> >
> >         cpumask_and(&amask, &priv->lmask, mask_val);
> >
> >         if (force)
> > -               cpu = cpumask_first(&amask);
> > +               cpumask_copy(&amask, mask_val);
> >         else
> > -               cpu = cpumask_any_and(&amask, cpu_online_mask);
> > -
> > -       if (cpu >= nr_cpu_ids)
> > -               return -EINVAL;
> > +               cpumask_and(&amask, &amask, cpu_online_mask);
> >
> >         plic_irq_toggle(&priv->lmask, d, 0);
> > -       plic_irq_toggle(cpumask_of(cpu), d, 1);
> > +       plic_irq_toggle(&amask, d, 1);
> >
> > -       irq_data_update_effective_affinity(d, cpumask_of(cpu));
> > +       irq_data_update_effective_affinity(d, &amask);
> >
> >         return IRQ_SET_MASK_OK_DONE;
> >  }
> > --
> > 2.26.1
> >
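For reference, plic_irq_toggle() (unchanged by this patch) walks the
given cpumask and sets or clears the per-hart enable bit for the hwirq.
A simplified sketch -- the real helper in irq-sifive-plic.c also checks
the handler's lmask before toggling:

static inline void plic_irq_toggle(const struct cpumask *mask,
				   struct irq_data *d, int enable)
{
	int cpu;
	struct plic_priv *priv = irq_get_chip_data(d->irq);

	/* A priority of 0 disables the source globally. */
	writel(enable, priv->regs + PRIORITY_BASE +
	       d->hwirq * PRIORITY_PER_ID);

	/* Flip the per-hart enable bit on every CPU in the mask. */
	for_each_cpu(cpu, mask) {
		struct plic_handler *handler = per_cpu_ptr(&plic_handlers, cpu);

		if (handler->present)
			plic_toggle(handler, d->hwirq, enable);
	}
}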
>
> I strongly oppose (NACK) this patch for performance reasons.
>
> In the PLIC, if we enable an IRQ X for N CPUs, then when IRQ X occurs:
> 1) All N CPUs will take the interrupt
> 2) All N CPUs will try to read the PLIC CLAIM register
> 3) Only one of the CPUs will see IRQ X via the CLAIM register,
> but the other N - 1 CPUs will see no interrupt and return to what
> they were doing. In other words, N - 1 CPUs will waste cycles
> every time IRQ X occurs (see the sketch below).
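The wasted work is visible in the claim loop itself: every hart that
takes the external interrupt reads its CLAIM register, and a hart that
loses the race reads 0 and falls straight out of the handler. A
slightly trimmed sketch of the driver's plic_handle_irq() (completion
is written back later in plic_irq_eoi()):

static void plic_handle_irq(struct pt_regs *regs)
{
	struct plic_handler *handler = this_cpu_ptr(&plic_handlers);
	void __iomem *claim = handler->hart_base + CONTEXT_CLAIM;
	irq_hw_number_t hwirq;

	csr_clear(CSR_IE, IE_EIE);
	/* Only one hart wins each pending IRQ; the other N - 1 read 0
	 * here and leave immediately -- the wasted wakeups above. */
	while ((hwirq = readl(claim))) {
		int irq = irq_find_mapping(handler->priv->irqdomain, hwirq);

		if (unlikely(irq <= 0))
			pr_warn_ratelimited("can't find mapping for hwirq %lu\n",
					    hwirq);
		else
			generic_handle_irq(irq);
	}
	csr_set(CSR_IE, IE_EIE);
}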
>
> Example 1: an application doing heavy network traffic will
> degrade the performance of other applications, because on every
> network RX/TX interrupt N - 1 CPUs will waste cycles trying to
> process the network interrupt.
>
> Example 2: an application doing heavy MMC/SD traffic will
> degrade the performance of other applications, because on every
> SPI read/write interrupt N - 1 CPUs will waste cycles trying to
> process it.
>
> In fact, the current PLIC approach is actually a performance
> optimization. This implementation also works fine with the
> in-kernel load balancer and user-space load balancers.
>

Yes, exactly; I understand the point you raised. But the idea of this
patch is simply to provide a way for users to enable other cores if
they want to; with this change it is still possible to enable only one
core. The purpose here is flexibility rather than limitation. Maybe a
happy medium would be to keep the default case enabling only one core,
as sketched below? It is a good point for open discussion.
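
For instance, the default could stay single-hart by seeding each IRQ's
affinity at map time. A hypothetical sketch only, on top of this patch
(the current driver seeds the full lmask here instead):

static int plic_irqdomain_map(struct irq_domain *d, unsigned int irq,
			      irq_hw_number_t hwirq)
{
	struct plic_priv *priv = d->host_data;

	irq_domain_set_info(d, irq, hwirq, &plic_chip, d->host_data,
			    handle_fasteoi_irq, NULL, NULL);
	irq_set_noprobe(irq);

	/* Hypothetical default: deliver to one hart only; users widen
	 * it later via irqaffinity= or /proc/irq/<x>/smp_affinity. */
	irq_set_affinity(irq, cpumask_of(cpumask_first(&priv->lmask)));

	return 0;
}

That would keep today's single-claimer performance out of the box while
still allowing a wider mask when someone explicitly asks for it.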

> Regards,
> Anup


Thread overview: 10+ messages
2020-04-26 11:07 [PATCH] irqchip/sifive-plic: allow many cores to handle IRQs Zong Li
2020-04-26 12:46 ` Anup Patel
2020-04-26 13:19   ` Zong Li [this message]
2020-04-26 13:38     ` Anup Patel
2020-04-26 15:12       ` Zong Li
2020-04-26 15:21         ` Anup Patel
2020-04-26 15:35           ` Zong Li
2020-04-27  6:49             ` Greentime Hu
2020-04-27  6:59               ` Anup Patel
2020-04-27  8:27       ` Marc Zyngier
