From: Dario Faggioli <dario.faggioli@citrix.com>
To: Meng Xu <xumengpanda@gmail.com>
Cc: "Xu, Quan" <quan.xu@intel.com>, Jan Beulich <jbeulich@suse.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [PATCH v3 1/2] IOMMU/spinlock: Fix a bug found in AMD IOMMU initialization
Date: Fri, 11 Mar 2016 17:12:01 +0100	[thread overview]
Message-ID: <1457712721.3102.610.camel@citrix.com> (raw)
In-Reply-To: <CAENZ-+nD8ASNi=7DYAVkvMJhWbPGgLKVYP3KT6YVT5ad_Z5=aQ@mail.gmail.com>


On Fri, 2016-03-11 at 09:41 -0500, Meng Xu wrote:
> Thank you very much for your explanation and education! They are
> really helpful! :-D
> 
> Let me summarize: ;-)
> >
> >                                  | spin_lock | spin_lock_irq | spin_lock_irqsave
> > Can run in irq context?          | No        | Yes           | Yes
> > Can run in irq disabled context? | No        | No            | Yes
>
The table came out mangled, at least in my mail reader. From what I can
see, I think I get the point you're trying to make, but it's hard to
say whether the table itself is right or wrong...

A table like this is probably just not the best way to represent
things.

In particular, you seem to be saying that spin_lock() can't be used
when interrupts are disabled, which is not true.

For instance, it is perfectly fine to do the following:

    CPU:
    .
    spin_lock_irq(l1);      /* IRQs are disabled from here on */
    .
    .
[1] spin_lock(l2);          /* plain spin_lock() is enough: IRQs are already off */
    .
    .
[2] spin_unlock(l2);
    .
    .
    spin_unlock_irq(l1);    /* IRQs are re-enabled here */
    .

Even if l2 is also taken in an interrupt handler. In fact, what we want
is to make sure that an interrupt handler which may take l2 does not
interrupt the CPU in the [1]--[2] time frame... But that can't happen,
because interrupts are already disabled, so just using spin_lock() is
ok, and should be done, as it is cheaper than spin_lock_irqsave().
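
For completeness, the interrupt handler side of the same example could
look like this (just an illustrative sketch with a made-up handler
name; l2 is the lock from the timeline above):

    void l2_irq_handler(void)
    {
        /* IRQs are off while the handler runs, so a plain  */
        /* spin_lock() is enough here as well.              */
        spin_lock(l2);
        /* ... touch whatever l2 protects ... */
        spin_unlock(l2);
    }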

The fact is, what really matters is whether or not a lock is taken both
inside and outside of IRQ context. It is indeed mostly the case that,
if a lock is ever taken inside an interrupt handler, then
spin_lock_irqsave() is what you want... but this does not justify
coming up with a table like the one above and claiming it to be
general.
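
To make the usual pattern concrete, here is a minimal sketch
(hypothetical names, Xen/Linux-style spinlock API) of a lock that
protects data touched both from an interrupt handler and from ordinary
code:

    static DEFINE_SPINLOCK(foo_lock);
    static unsigned long foo_events;

    /* Ordinary (non-IRQ) context: IRQs may be on here, so they must
     * be disabled while holding the lock, or the handler below could
     * interrupt us on this same CPU and spin on foo_lock forever. */
    void foo_update(void)
    {
        unsigned long flags;

        spin_lock_irqsave(&foo_lock, flags);
        foo_events++;
        spin_unlock_irqrestore(&foo_lock, flags);
    }

    /* Interrupt handler: IRQs are already off, so plain spin_lock()
     * is enough (and cheaper). */
    void foo_irq_handler(void)
    {
        spin_lock(&foo_lock);
        foo_events++;
        spin_unlock(&foo_lock);
    }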

> Why may deadlock occur if we mix spin_lock and spin_lock_irq(save)?
> If we mix spin_lock and spin_lock_irq(save), and a group of CPUs is
> rendezvousing in an IPI handler, we will have a deadlock. The CPU A
> that takes the lock with spin_lock will wait for the rendezvous
> condition to be satisfied, while the CPU B that takes the lock with
> spin_lock_irq(save) will not enter the rendezvous (since it has
> disabled interrupts). Then:
> CPU A waits for CPU B to handle the IPI and get out of the rendezvous
> (a kind of synchronization point), which B won't do until it gets the
> spinlock;
> CPU B waits for CPU A to release the spinlock, which A won't do until
> it gets out of the rendezvous.
> 
> Are my understanding and summary correct?
> 
Yes, this looks like a decent explanation of the rendezvous-related
spinlock problem... although I prefer the wording that we already have
in check_lock() and in start_secondary(). :-D
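
For reference, the idea behind that check can be sketched roughly like
this (a simplified illustration of what check_lock() does, not a
verbatim copy of the hypervisor code):

    /* Record whether a lock was first taken with IRQs enabled or
     * disabled, and complain if a later acquisition disagrees:
     * mixing the two is exactly what opens the door to the
     * rendezvous deadlock described above. */
    struct lock_debug {
        int irq_safe;  /* -1: unknown, 0: taken with IRQs on, 1: IRQs off */
    };

    static void check_lock(struct lock_debug *debug)
    {
        int irq_safe = !local_irq_is_enabled();

        if ( unlikely(debug->irq_safe != irq_safe) )
        {
            int seen = cmpxchg(&debug->irq_safe, -1, irq_safe);

            BUG_ON(seen == !irq_safe);
        }
    }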

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)





Thread overview: 20+ messages
2016-03-09 13:17 [PATCH v3 0/2] Make the pcidevs_lock a recursive one Quan Xu
2016-03-09 13:17 ` [PATCH v3 1/2] IOMMU/spinlock: Fix a bug found in AMD IOMMU initialization Quan Xu
2016-03-09 14:59   ` Dario Faggioli
2016-03-10  6:12     ` Xu, Quan
2016-03-11  3:24   ` Meng Xu
2016-03-11  6:54     ` Xu, Quan
2016-03-11 10:35       ` Dario Faggioli
2016-03-11 12:36         ` Xu, Quan
2016-03-11 13:58           ` Dario Faggioli
2016-03-11 14:49           ` Meng Xu
2016-03-11 15:55             ` Dario Faggioli
2016-03-11 17:17               ` Meng Xu
2016-03-11 14:41         ` Meng Xu
2016-03-11 16:12           ` Dario Faggioli [this message]
2016-03-09 13:17 ` [PATCH v3 2/2] IOMMU/spinlock: Make the pcidevs_lock a recursive one Quan Xu
2016-03-09 17:45   ` Dario Faggioli
2016-03-10  1:21     ` Xu, Quan
2016-03-10  9:52   ` Jan Beulich
2016-03-10 11:27     ` Xu, Quan
2016-03-10 13:06       ` Jan Beulich
