From: Srinath Mannam <srinath.mannam@broadcom.com>
To: bhelgaas@google.com
Cc: linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org,
	bcm-kernel-feedback-list@broadcom.com,
	Srinath Mannam <srinath.mannam@broadcom.com>
Subject: [RFC PATCH] PCI: Concurrency issue in NVMe Init through PCIe switch
Date: Mon,  8 May 2017 20:39:50 +0530
Message-ID: <1494256190-28993-1-git-send-email-srinath.mannam@broadcom.com>

We found a concurrency issue during NVMe initialization when multiple
NVMe devices connected through a PCIe switch are initialized in parallel.

Setup details:
 - SMP system with 8 ARMv8 cores running Linux kernel 4.11.
 - Two NVMe cards are connected to the PCIe RC through a bridge,
   as shown in the figure below.

                   [RC]
                    |
                 [BRIDGE]
                    |
               -----------
              |           |
            [NVMe]      [NVMe]

Issue description:
After PCIe enumeration completes, the NVMe driver's probe function is
called for both devices from two CPUs simultaneously. From nvme_probe,
pci_enable_device_mem is called for each EP; this path calls
pci_enable_bridge, which recurses upward until the RC.

Inside the pci_enable_bridge function, the concurrency issue is
observed at two places, traced below.
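
For reference, the relevant v4.11 code paths look roughly like this (a
simplified paraphrase of drivers/pci/pci.c, trimmed to the parts
involved in the race; not verbatim kernel code):

  /* nvme_probe() -> pci_enable_device_mem() lands here: */
  static int pci_enable_device_flags(struct pci_dev *dev, unsigned long flags)
  {
          struct pci_dev *bridge;

          if (atomic_inc_return(&dev->enable_cnt) > 1)
                  return 0;                       /* already enabled */

          bridge = pci_upstream_bridge(dev);
          if (bridge)
                  pci_enable_bridge(bridge);      /* recurses up to the RC */

          /*
           * ... do_pci_enable_device() -> pci_enable_resources(), which
           * read-modify-writes PCI_COMMAND to set memory/IO decoding ...
           */
          return 0;
  }

  static void pci_enable_bridge(struct pci_dev *dev)
  {
          struct pci_dev *bridge = pci_upstream_bridge(dev);

          if (bridge)
                  pci_enable_bridge(bridge);

          if (pci_is_enabled(dev)) {
                  if (!dev->is_busmaster)
                          pci_set_master(dev);    /* unlocked RMW of PCI_COMMAND */
                  return;
          }

          pci_enable_device(dev);                 /* enable also RMWs PCI_COMMAND */
          pci_set_master(dev);
  }

Nothing on this path serializes two CPUs that reach the same bridge at
the same time; both the enable and the bus-master update are plain
read-modify-write cycles on PCI_COMMAND.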

Place 1:
  CPU 0:
    1. Atomically incremented dev->enable_cnt
       in pci_enable_device_flags
    2. Entered pci_enable_resources
    3. Completed pci_read_config_word(dev, PCI_COMMAND, &cmd)
    4. Ready to set PCI_COMMAND_MEMORY (0x2) via
       pci_write_config_word(dev, PCI_COMMAND, cmd)
  CPU 1:
    1. Checked pci_is_enabled in pci_enable_bridge;
       it returned true
    2. Checked (!dev->is_busmaster); also true
    3. Entered pci_set_master
    4. Completed pci_read_config_word(dev, PCI_COMMAND, &old_cmd)
    5. Ready to set PCI_COMMAND_MASTER (0x4) via
       pci_write_config_word(dev, PCI_COMMAND, cmd)

At the last step, both CPUs have read the value 0 and are ready to
write 2 and 4 respectively. Whichever write lands second wins: the
final value in the PCI_COMMAND register is 4 instead of 6 (the sketch
below shows the interleaving).
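
The lost update is an unlocked read-modify-write collision on
PCI_COMMAND. Annotated sketch of the interleaving (paraphrasing
pci_enable_resources() and __pci_set_master() from v4.11; the timing
comments describe the scenario above, not kernel code):

  /* CPU 0 -- pci_enable_resources(), simplified */
  pci_read_config_word(dev, PCI_COMMAND, &cmd);     /* reads 0x0 */
  cmd |= PCI_COMMAND_MEMORY;                        /* cmd = 0x2 */
                                /* <-- CPU 1 performs its read here */
  pci_write_config_word(dev, PCI_COMMAND, cmd);     /* writes 0x2 */

  /* CPU 1 -- __pci_set_master(dev, true), simplified */
  pci_read_config_word(dev, PCI_COMMAND, &old_cmd); /* also reads 0x0 */
  cmd = old_cmd | PCI_COMMAND_MASTER;               /* cmd = 0x4 */
  pci_write_config_word(dev, PCI_COMMAND, cmd);     /* writes 0x4, losing 0x2 */

PCI_COMMAND ends up as 0x4 (bus mastering on, memory decode lost), so
MMIO to devices behind the bridge fails.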

Place 2:
  CPU 0:
    1. Atomically incremented dev->enable_cnt in
       pci_enable_device_flags
    2. Entered pci_enable_resources
    3. Completed pci_read_config_word(dev, PCI_COMMAND, &cmd)
    4. Ready to set PCI_COMMAND_MEMORY (0x2) via
       pci_write_config_word(dev, PCI_COMMAND, cmd)
  CPU 1:
    1. Atomic increment of dev->enable_cnt in
       pci_enable_device_flags found the device already
       enabled, so pci_enable_device_flags returned 0 there
    2. Entered pci_set_master
    3. Completed pci_read_config_word(dev, PCI_COMMAND, &old_cmd)
    4. Ready to set PCI_COMMAND_MASTER (0x4) via
       pci_write_config_word(dev, PCI_COMMAND, cmd)

As in Place 1, both CPUs have read the value 0 and are ready to write
2 and 4 respectively; the final value in the PCI_COMMAND register is
again 4 instead of 6.
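
Place 2 is possible because pci_enable_device_flags() uses enable_cnt
as its only gate: the second caller sees the count already raised and
returns success while the first caller is still mid-enable. From v4.11
(simplified excerpt):

  if (atomic_inc_return(&dev->enable_cnt) > 1)
          return 0;       /*
                           * "already enabled" -- but the first caller may
                           * still be inside pci_enable_resources() and has
                           * not yet set PCI_COMMAND_MEMORY
                           */

pci_enable_bridge() then treats that 0 as a completed enable and goes
straight to pci_set_master(), producing the same PCI_COMMAND collision
as Place 1.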

Signed-off-by: Srinath Mannam <srinath.mannam@broadcom.com>
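---
A note on the approach: the patch below open-codes the enable sequence
(the BAR-mask loops and the do_pci_enable_device() call, duplicated
from pci_enable_device_flags()) inside pci_enable_bridge(), rather than
wrapping the existing body in a lock. The enable path is re-entrant --
pci_enable_device() on a bridge recurses back into pci_enable_bridge()
for its upstream bridge -- so a naive single mutex can deadlock against
itself. Illustration only (bridge_mutex is hypothetical, not kernel
code):

  /*
   *   pci_enable_bridge(B)
   *     mutex_lock(&bridge_mutex)         <-- acquired
   *     pci_enable_device(B)
   *       pci_enable_device_flags(B)
   *         pci_enable_bridge(parent_of_B)
   *           ...
   *           mutex_lock(&bridge_mutex)   <-- blocks forever (already held)
   */

With the enable open-coded, the atomic_inc_return() on enable_cnt acts
as the gate: only the first CPU to raise the count touches PCI_COMMAND,
so the two interleavings described above can no longer occur on the
bridge in this scenario.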

diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 7904d02..6c5744e 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -1345,21 +1345,38 @@ static void pci_enable_bridge(struct pci_dev *dev)
 {
 	struct pci_dev *bridge;
-	int retval;
+	int err;
+	int i;
+	unsigned int bars = 0;
+	unsigned long flags = IORESOURCE_MEM | IORESOURCE_IO;
 
 	bridge = pci_upstream_bridge(dev);
 	if (bridge)
 		pci_enable_bridge(bridge);
 
-	if (pci_is_enabled(dev)) {
-		if (!dev->is_busmaster)
-			pci_set_master(dev);
+	if (dev->pm_cap) {
+		u16 pmcsr;
+
+		pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
+		dev->current_state = (pmcsr & PCI_PM_CTRL_STATE_MASK);
+	}
+
+	if (atomic_inc_return(&dev->enable_cnt) > 1)
+		return;		/* already enabled */
+
+	/* include all resources except the SR-IOV BARs */
+	for (i = 0; i <= PCI_ROM_RESOURCE; i++)
+		if (dev->resource[i].flags & flags)
+			bars |= (1 << i);
+	for (i = PCI_BRIDGE_RESOURCES; i < DEVICE_COUNT_RESOURCE; i++)
+		if (dev->resource[i].flags & flags)
+			bars |= (1 << i);
+
+	err = do_pci_enable_device(dev, bars);
+	if (err < 0) {
+		atomic_dec(&dev->enable_cnt);
 		return;
 	}
 
-	retval = pci_enable_device(dev);
-	if (retval)
-		dev_err(&dev->dev, "Error enabling bridge (%d), continuing\n",
-			retval);
 	pci_set_master(dev);
 }
 
-- 
2.7.4
