From: "Song Bao Hua (Barry Song)" <song.bao.hua@hisilicon.com>
To: Andy Shevchenko <andy.shevchenko@gmail.com>,
	luojiaxing <luojiaxing@huawei.com>
Cc: Linus Walleij <linus.walleij@linaro.org>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Grygorii Strashko <grygorii.strashko@ti.com>,
	Santosh Shilimkar <ssantosh@kernel.org>,
	"Kevin Hilman" <khilman@kernel.org>,
	"open list:GPIO SUBSYSTEM" <linux-gpio@vger.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	"linuxarm@openeuler.org" <linuxarm@openeuler.org>
Subject: RE: [Linuxarm]  Re: [PATCH for next v1 0/2] gpio: few clean up patches to replace spin_lock_irqsave with spin_lock
Date: Wed, 10 Feb 2021 11:50:45 +0000	[thread overview]
Message-ID: <947bcef0d56a4d0c82729d6899394f4a@hisilicon.com> (raw)
In-Reply-To: <CAHp75VdrskuNkvFr4MPbbg8c8=VSug0GT+vs=cMRMOqLr+-f5A@mail.gmail.com>



> -----Original Message-----
> From: Andy Shevchenko [mailto:andy.shevchenko@gmail.com]
> Sent: Wednesday, February 10, 2021 11:51 PM
> To: luojiaxing <luojiaxing@huawei.com>
> Cc: Linus Walleij <linus.walleij@linaro.org>; Andy Shevchenko
> <andriy.shevchenko@linux.intel.com>; Grygorii Strashko
> <grygorii.strashko@ti.com>; Santosh Shilimkar <ssantosh@kernel.org>; Kevin
> Hilman <khilman@kernel.org>; open list:GPIO SUBSYSTEM
> <linux-gpio@vger.kernel.org>; Linux Kernel Mailing List
> <linux-kernel@vger.kernel.org>; linuxarm@openeuler.org
> Subject: [Linuxarm] Re: [PATCH for next v1 0/2] gpio: few clean up patches to
> replace spin_lock_irqsave with spin_lock
> 
> On Wed, Feb 10, 2021 at 5:43 AM luojiaxing <luojiaxing@huawei.com> wrote:
> > On 2021/2/9 17:42, Andy Shevchenko wrote:
> > > On Tue, Feb 9, 2021 at 11:24 AM luojiaxing <luojiaxing@huawei.com> wrote:
> > >> On 2021/2/8 21:28, Andy Shevchenko wrote:
> > >>> On Mon, Feb 8, 2021 at 11:11 AM luojiaxing <luojiaxing@huawei.com> wrote:
> > >>>> On 2021/2/8 16:56, Luo Jiaxing wrote:
> > >>>>> There is no need to use the _irqsave API variants in a hard IRQ
> > >>>>> handler, so replace those with spin_lock.
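
(For reference, a minimal sketch of the transformation these patches
propose; the driver, struct, and register names are hypothetical, not
taken from the actual patches.)

    #include <linux/interrupt.h>
    #include <linux/io.h>
    #include <linux/spinlock.h>

    /* Hypothetical driver state. */
    struct foo_chip {
            spinlock_t lock;
            void __iomem *base;
            unsigned long events;
    };

    #define FOO_IRQ_STATUS 0x10     /* hypothetical register offset */

    static irqreturn_t foo_irq_handler(int irq, void *dev_id)
    {
            struct foo_chip *chip = dev_id;

            /*
             * Before the change:
             *
             *     unsigned long flags;
             *     spin_lock_irqsave(&chip->lock, flags);
             *     ...
             *     spin_unlock_irqrestore(&chip->lock, flags);
             *
             * After the change: a plain spin_lock(), relying on the
             * hard IRQ handler already running with local interrupts
             * disabled.
             */
            spin_lock(&chip->lock);
            writel(0, chip->base + FOO_IRQ_STATUS); /* ack the device */
            spin_unlock(&chip->lock);

            return IRQ_HANDLED;
    }
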
> > >>> How do you know that another CPU in the system can't serve the
> > > The keyword here is: *another*.
> >
> > Ooh, sorry, now I see your point.
> >
> > As for me, I don't think another CPU can serve the IRQ while one CPU
> > is running the hard IRQ handler,
> 
> Why is it so?
> Each CPU can serve IRQs separately.
> 
> > except when it's a per-CPU interrupt.
> 
> I didn't get how it is related.
> 
> > The following is the simplified call path when an IRQ comes in.
> >
> > elx_irq -> handle_arch_irq -> __handle_domain_irq -> desc->handle_irq ->
> > handle_irq_event
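
(As an aside, the generic end of that path can be sketched roughly like
this; a simplified paraphrase of __handle_domain_irq() from
kernel/irq/irqdesc.c around the v5.11 era, not a verbatim copy.)

    /* Simplified sketch: the arch IRQ vector funnels into this. */
    int __handle_domain_irq(struct irq_domain *domain, unsigned int hwirq,
                            bool lookup, struct pt_regs *regs)
    {
            struct pt_regs *old_regs = set_irq_regs(regs);
            unsigned int irq = hwirq;
            int ret = 0;

            irq_enter();            /* enter hard IRQ context */

            if (lookup)
                    irq = irq_find_mapping(domain, hwirq);

            if (unlikely(!irq || irq >= nr_irqs))
                    ret = -EINVAL;
            else
                    generic_handle_irq(irq); /* -> desc->handle_irq() */

            irq_exit();             /* may run pending softirqs */
            set_irq_regs(old_regs);
            return ret;
    }
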
> 
> What is `elx_irq()`? I haven't found any mention of this in the kernel
> source tree.
> But okay, it shouldn't prevent our discussion.
> 
> > Assume that two CPUs receive the same IRQ and enter the path above.
> > Both of them will go to desc->handle_irq().
> 
> Ah, I'm talking about the same IRQ by number (i.e. the same Linux IRQ
> number, meaning from the same source), but with different sequence
> numbers (meaning two consecutive events).
> 
> > In handle_irq(), raw_spin_lock(&desc->lock) is always called first.
> > Therefore, even if two CPUs are running handle_irq(), only one can
> > get the spin lock. Assume that CPU A obtains the spin lock. Then
> > CPU A sets the status of irq_data to IRQD_IRQ_INPROGRESS in
> > handle_irq_event() and releases the spin lock. Even though CPU B
> > gets the spin lock later and continues to run handle_irq(), the
> > check of irq_may_run(desc) causes it to exit.
> >
> > So I think we don't have a situation where two CPUs serve the hard
> > IRQ handler at the same time.
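
(That gating can be sketched like this; a simplified paraphrase of the
generic flow handlers in kernel/irq/chip.c, e.g. handle_level_irq(),
with the per-flow-type details elided.)

    /* Simplified sketch of a flow handler serializing two CPUs that
     * took the same interrupt line at the same time. */
    static void handle_some_irq(struct irq_desc *desc)
    {
            raw_spin_lock(&desc->lock);

            /* CPU B fails this test while CPU A is in progress,
             * because irq_may_run() sees IRQD_IRQ_INPROGRESS. */
            if (!irq_may_run(desc))
                    goto out;

            /*
             * handle_irq_event() sets IRQD_IRQ_INPROGRESS, drops
             * desc->lock, runs the driver handler(s), then retakes
             * the lock and clears IRQD_IRQ_INPROGRESS.
             */
            handle_irq_event(desc);
    out:
            raw_spin_unlock(&desc->lock);
    }
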
> 
> Okay. Assuming your analysis is correct, have you considered the case
> when all IRQ handlers are threaded? (The threadirqs kernel command
> line option forces this.)
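
(For illustration, this is roughly what an explicitly threaded
registration looks like; the foo_* names are hypothetical. With
threadirqs, even handlers registered the ordinary way are
force-threaded by the core.)

    #include <linux/device.h>
    #include <linux/interrupt.h>

    /* Runs in a kernel thread, in process context, with local
     * interrupts enabled -- so the hard-IRQ analysis above does not
     * cover it. */
    static irqreturn_t foo_thread_fn(int irq, void *dev_id)
    {
            /* ... slow work ... */
            return IRQ_HANDLED;
    }

    static int foo_request_irq(struct device *dev, int irq, void *data)
    {
            /* NULL hard handler: the core supplies a default one that
             * just wakes the thread running foo_thread_fn(). */
            return devm_request_threaded_irq(dev, irq, NULL, foo_thread_fn,
                                             IRQF_ONESHOT, "foo", data);
    }
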
> 
> > >>> following interrupt from the hardware at the same time?
> > >> Yes, I had some questions about this before.
> > >>
> > >> There is some similar discussion here; please take a look. Song
> > >> Baohua explained it more professionally.
> > >>
> > >>
> > >> https://lore.kernel.org/lkml/e949a474a9284ac6951813bfc8b34945@hisilicon.com/
> > >>
> > >> Here are some excerpts from the discussion:
> > >>
> > >> I think code disabling IRQs in a hard IRQ handler is simply wrong.
> > > Why?
> >
> >
> > I mentioned the following call path before.
> >
> > elx_irq -> handle_arch_irq -> __handle_domain_irq -> desc->handle_irq ->
> > handle_irq_event
> >
> >
> > __handle_domain_irq() will call irq_enter(); it ensures that the IRQ
> > processing on the current CPU cannot be preempted.
> >
> > So I think this is the reason why Song Baohua said there is no need
> > to disable IRQs in the hard IRQ handler.
> >
> > >> Since this commit
> > >>
> > >> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e58aa3d2d0cc
> > >> genirq: Run irq handlers with interrupts disabled
> > >>
> > >> interrupt handlers definitely run in an IRQ-disabled context,
> > >> unless a handler explicitly re-enables interrupts to permit
> > >> other interrupts.
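
(A quick way to convince yourself of this is an assertion in a handler;
a debugging sketch with hypothetical names, not something to ship.)

    #include <linux/interrupt.h>
    #include <linux/kernel.h>

    static irqreturn_t foo_check_handler(int irq, void *dev_id)
    {
            /*
             * Since commit e58aa3d2d0cc, a non-threaded handler is
             * entered with local interrupts disabled on the current
             * CPU, so this should never fire.
             */
            WARN_ON_ONCE(!irqs_disabled());
            return IRQ_HANDLED;
    }
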
> > > This doesn't explain any changes in the behaviour on SMP.
> > > An IRQ line can be disabled at a few stages:
> > >   a) on the source (IP that generates an event)
> > >   b) on IRQ router / controller
> > >   c) on CPU side
> >
> > yes, you are right.
> >
> > > The commit above is discussing (rightfully!) the problem when all
> > > interrupts are being served by a *single* core. Nobody prevents them
> > > from being served by *different* cores simultaneously. Also, see [1].
> > >
> > > [1]: https://www.kernel.org/doc/htmldocs/kernel-locking/cheatsheet.html
> >
> > I checked [1]; it's quite a useful description of locking, thanks.
> > But look at the Table of Locking Requirements:
> >
> > between IRQ handler A and IRQ handler A, there is no need for SLIS
> > (spin_lock_irqsave).
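
(For reference, the relevant corner of that table, reconstructed from
memory of Documentation/kernel-hacking/locking.rst -- double-check the
source. SLIS = spin_lock_irqsave, SLI = spin_lock_irq, SL = spin_lock.)

                    IRQ Handler A   IRQ Handler B   Softirq A
    IRQ Handler A   None
    IRQ Handler B   SLIS            None
    Softirq A       SLI             SLI             SL

The "None" on the diagonal for IRQ handler A against itself is exactly
the property being debated: the same handler (same Linux IRQ number) is
not expected to run concurrently with itself.
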
> 
> Right, but it's not the case in the patches you provided.

The code still holds the spin lock. So if two CPUs enter the same IRQ
handler, spin_lock makes the second one spin; and if interrupt handlers
are threaded, spin_lock makes the two threads run the same handler one
by one.
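
In other words (reusing the hypothetical struct foo_chip from the
sketch near the top), the same lock serializes both scenarios:

    /* Hypothetical handler shared by both scenarios. */
    static irqreturn_t foo_handler(int irq, void *dev_id)
    {
            struct foo_chip *chip = dev_id;

            /*
             * Hard IRQ on two CPUs: the second CPU spins here until
             * the first unlocks. Force-threaded on two CPUs: the
             * second thread spins here likewise (on !PREEMPT_RT,
             * spin_lock also disables preemption).
             */
            spin_lock(&chip->lock);
            chip->events++;                 /* the protected state */
            spin_unlock(&chip->lock);

            return IRQ_HANDLED;
    }
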

> 
> --
> With Best Regards,
> Andy Shevchenko

Thanks
Barry


Thread overview: 30+ messages
2021-02-08  8:56 [PATCH for next v1 0/2] gpio: few clean up patches to replace spin_lock_irqsave with spin_lock Luo Jiaxing
2021-02-08  8:56 ` [PATCH for next v1 1/2] gpio: omap: Replace raw_spin_lock_irqsave with raw_spin_lock in omap_gpio_irq_handler() Luo Jiaxing
2021-02-11 18:14   ` Grygorii Strashko
2021-02-11 19:39     ` Arnd Bergmann
2021-02-11 20:16       ` Grygorii Strashko
2021-02-12  5:05         ` [Linuxarm] " Song Bao Hua (Barry Song)
2021-02-12  9:45           ` Arnd Bergmann
2021-02-12 10:25             ` Song Bao Hua (Barry Song)
2021-02-12 10:27             ` Grygorii Strashko
2021-02-12 10:42               ` Song Bao Hua (Barry Song)
2021-02-12 10:57                 ` Andy Shevchenko
2021-02-12 11:29                   ` Song Bao Hua (Barry Song)
2021-02-12 11:53                     ` Grygorii Strashko
2021-02-12 13:12                       ` Song Bao Hua (Barry Song)
2021-02-12 14:08                         ` Grygorii Strashko
2021-02-12 20:06                           ` Song Bao Hua (Barry Song)
2021-02-12 20:23                       ` Arnd Bergmann
2021-02-12 20:49                         ` Song Bao Hua (Barry Song)
2021-02-12 10:59                 ` Arnd Bergmann
2021-02-12 11:35                   ` Andy Shevchenko
2021-02-08  9:11 ` [Linuxarm] [PATCH for next v1 0/2] gpio: few clean up patches to replace spin_lock_irqsave with spin_lock luojiaxing
2021-02-08 13:28   ` Andy Shevchenko
2021-02-09  9:24     ` luojiaxing
2021-02-09  9:42       ` Andy Shevchenko
2021-02-10  3:43         ` luojiaxing
2021-02-10 10:50           ` Andy Shevchenko
2021-02-10 11:50             ` Song Bao Hua (Barry Song) [this message]
2021-02-10 14:56               ` [Linuxarm] " Andy Shevchenko
2021-02-10 20:42                 ` Song Bao Hua (Barry Song)
2021-02-11  9:58                   ` Andy Shevchenko
