linux-pci.vger.kernel.org archive mirror
From: Marc Zyngier <maz@kernel.org>
To: Ray Jui <ray.jui@broadcom.com>
Cc: "Sandor Bodo-Merle" <sbodomerle@gmail.com>,
	"Pali Rohár" <pali@kernel.org>,
	linux-pci@vger.kernel.org, bcm-kernel-feedback-list@broadcom.com
Subject: Re: pcie-iproc-msi.c: Bug in Multi-MSI support?
Date: Wed, 26 May 2021 08:57:21 +0100
Message-ID: <874keqvsf2.wl-maz@kernel.org>
In-Reply-To: <13a7e409-646d-40a7-17a0-4e4be011efb2@broadcom.com>

On Tue, 25 May 2021 18:27:54 +0100,
Ray Jui <ray.jui@broadcom.com> wrote:
>
> On 5/24/2021 3:37 AM, Marc Zyngier wrote:
> > On Thu, 20 May 2021 18:11:32 +0100,
> > Ray Jui <ray.jui@broadcom.com> wrote:
> >>
> >> On 5/20/2021 7:22 AM, Sandor Bodo-Merle wrote:

[...]

> >> I guess I'm not too clear on what you mean by "multi-MSI interrupts
> >> need to be aligned to the number of requested interrupts". Would you
> >> be able to plug this into the above explanation so we can have a
> >> clearer understanding of what you mean here?
> > 
> > That's a generic PCI requirement: if you are providing a Multi-MSI
> > configuration, the base vector number has to be size-aligned
> > (2-aligned for 2 MSIs, 4-aligned for 4, up to 32), and the end-point
> > supplies up to 5 bits that are OR-ed into the base vector number,
> > with a *single* doorbell address. You effectively provide a single MSI
> > number and a single address, and the device knows how to drive 2^n MSIs.
> > 
> > This is different from MSI-X, which defines multiple individual
> > vectors, each with their own doorbell address.
> > 
> > The main problem you have here (other than the broken allocation
> > mechanism) is that moving an interrupt from one core to another
> > implies moving the doorbell address to that of another MSI
> > group. This isn't possible for Multi-MSI, as all the MSIs must have
> > the same doorbell address. As far as I can see, there is no way to
> > support Multi-MSI together with affinity change on this HW, and you
> > should stop advertising support for this feature.
> > 
> 
> I was not aware of the fact that multi-MSI needs to use the same
> doorbell address (aka the MSI posted write address?). Thank you for
> helping to point it out. In this case, yes, like you said, we cannot
> possibly support both multi-MSI and affinity at the same time, since
> supporting affinity requires us to move from one event queue (and IRQ)
> to another, which will have a different doorbell address.
> 
> Do you think it makes sense to only advertise multi-MSI capability in
> the single-CPU-core case (detected at runtime via 'num_possible_cpus')?
> This would at least allow multi-MSI to work on the single-core
> platforms that Sandor and Pali use.

I don't think this makes much sense. Single-CPU machines are an oddity
these days, and I'd rather you simplify this (already pretty
complicated) driver.
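
(As an aside, fixing the allocation so that it honours the size-alignment
rule quoted above is mostly a matter of using an order-based allocator; a
minimal sketch, with illustrative names rather than the driver's actual
ones, assuming a per-controller bitmap of hwirqs:)

/*
 * Hedged sketch only, not pcie-iproc-msi.c's current code: hwirq
 * allocation that satisfies the Multi-MSI alignment rule, because
 * bitmap_find_free_region() hands out a region of 2^order bits that
 * is naturally aligned to its size (2-aligned for 2, 4-aligned for 4,
 * and so on).
 */
#include <linux/bitmap.h>
#include <linux/log2.h>
#include <linux/spinlock.h>

static int example_msi_alloc_hwirq(unsigned long *bitmap, unsigned int nbits,
				   spinlock_t *lock, unsigned int nr_irqs)
{
	int hwirq;

	spin_lock(lock);
	/* nr_irqs is a power of two (1, 2, 4, ..., 32) for Multi-MSI */
	hwirq = bitmap_find_free_region(bitmap, nbits, order_base_2(nr_irqs));
	spin_unlock(lock);

	return hwirq;	/* negative if no suitably aligned region is free */
}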

> > There is also a more general problem here, which is the atomicity of
> > the update on affinity change. If you are moving an interrupt from one
> > CPU to the other, it seems you change both the vector number and the
> > target address. If that is the case, this isn't atomic, and you may
> > end up with the device generating a message based on a half-applied
> > update.
> 
> Are you referring to the 'irq_set_affinity' and 'irq_compose_msi_msg'
> callbacks? In that case, can you recommend a solution (or is there no
> solution with such an architecture)? It does not appear such atomicity
> can be enforced at the irq framework level.

irq_compose_msi_msg() is only one part of the problem. The core of the
issue is that the programming of the end-point is not atomic (you need
to update a 32-bit payload *and* a 64-bit address).
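
To make that concrete, here is a hedged sketch (the struct msi_msg
fields are real, the per-event-queue values are illustrative and not
taken from the driver) of a compose callback in which both halves
depend on the target queue:

/*
 * Illustration only: if both the doorbell address and the data are
 * derived from the target event queue, an affinity change alters all
 * three fields below, and pci_write_msi_msg() pushes them to the
 * endpoint as separate config-space writes; nothing makes the update
 * atomic from the device's point of view.
 */
#include <linux/kernel.h>
#include <linux/msi.h>

static void example_compose_msi_msg(struct msi_msg *msg,
				    phys_addr_t eq_doorbell, u32 eq_data)
{
	msg->address_lo = lower_32_bits(eq_doorbell);
	msg->address_hi = upper_32_bits(eq_doorbell);
	msg->data = eq_data;	/* today this changes along with the address */
}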

A solution to work around it would be to rework the way you allocate
the vectors, making them constant across all CPUs so that only the
address changes when changing the affinity.
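
In sketch form (again with illustrative names, and under the assumption
that every CPU's event queue can be programmed to accept the same data
value), that rework would look something like this:

/*
 * Hedged sketch of the suggested rework, not existing driver code: the
 * data is a function of the hwirq alone, so it is identical on every
 * CPU; only the doorbell address follows the affinity, which shrinks
 * the update to a single quantity (and to a single 32-bit write when
 * all doorbells share the same upper address bits).
 */
#include <linux/kernel.h>
#include <linux/msi.h>

static void example_compose_msg_fixed_data(struct msi_msg *msg,
					   phys_addr_t target_cpu_doorbell,
					   u32 hwirq)
{
	msg->address_lo = lower_32_bits(target_cpu_doorbell);
	msg->address_hi = upper_32_bits(target_cpu_doorbell);
	msg->data = hwirq;	/* constant across CPUs by construction */
}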

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


Thread overview: 20+ messages
2021-05-20 12:00 pcie-iproc-msi.c: Bug in Multi-MSI support? Pali Rohár
2021-05-20 13:47 ` Sandor Bodo-Merle
2021-05-20 14:05   ` Pali Rohár
2021-05-20 14:22     ` Sandor Bodo-Merle
2021-05-20 17:11       ` Ray Jui
2021-05-20 19:31         ` Sandor Bodo-Merle
2021-05-21 11:39         ` Sandor Bodo-Merle
2021-05-21 14:00           ` Sandor Bodo-Merle
2021-05-24 10:37         ` Marc Zyngier
2021-05-25 17:27           ` Ray Jui
2021-05-25 17:35             ` Pali Rohár
2021-05-25 17:39               ` Ray Jui
2021-05-26  7:57             ` Marc Zyngier [this message]
2021-05-26 16:10               ` Sandor Bodo-Merle
2021-06-02  8:34                 ` Marc Zyngier
2021-06-03 16:59                   ` Ray Jui
2021-06-04  9:17                     ` Sandor Bodo-Merle
2021-06-04 16:47                       ` Ray Jui
2021-05-26 17:34               ` Ray Jui
2021-05-26 20:18                 ` Sandor Bodo-Merle
