From: Yongxin Liu <yongxin.liu@windriver.com>
To: linux-kernel@vger.kernel.org, linux-rt-users@vger.kernel.org
Cc: linux-nvdimm@lists.01.org, bigeasy@linutronix.de,
	rostedt@goodmis.org, paul.gortmaker@windriver.com,
	tglx@linutronix.de
Subject: [PATCH RT] nvdimm: make lane acquirement RT aware
Date: Wed, 6 Mar 2019 17:57:09 +0800	[thread overview]
Message-ID: <20190306095709.23138-1-yongxin.liu@windriver.com> (raw)

Currently, the nvdimm driver isn't RT compatible:
nd_region_acquire_lane() disables preemption with get_cpu(), which
causes "scheduling while atomic" spews on RT when using fio to test
pmem as a block device.
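
For context, spinlocks are sleeping locks on RT, so the old code path
boils down to the following simplified sketch (not the literal driver
code; the lane bookkeeping is omitted):

	cpu = get_cpu();              /* disables preemption */
	spin_lock(&ndl_lock->lock);   /* sleeping lock on RT */
	...                           /* -> "scheduling while atomic" */
	spin_unlock(&ndl_lock->lock);
	put_cpu();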

In this change, we replace get_cpu()/put_cpu() with local_lock_cpu()/
local_unlock_cpu() and introduce the per-CPU local lock
"ndl_local_lock". Because tasks stay preemptible on RT, two tasks on
the same CPU could otherwise race for the same lane; this lock
serializes them. When the number of CPUs is greater than the number
of lanes, a lane can be shared among CPUs, and "ndl_lock->lock" is
used to protect the lane in that situation.
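
For illustration only (not part of this patch), callers keep using
the existing acquire/release pairing unchanged; a minimal sketch,
where do_one_io() is a made-up placeholder for the per-lane I/O work:

	unsigned int lane = nd_region_acquire_lane(nd_region);
	/*
	 * Preemptible on RT: only the per-CPU local lock and, when
	 * lanes are shared, ndl_lock->lock are held across the I/O.
	 */
	do_one_io(nd_region, lane);
	nd_region_release_lane(nd_region, lane);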

This patch is derived from Dan Williams and Pankaj Gupta's proposal at
https://www.mail-archive.com/linux-nvdimm@lists.01.org/msg13359.html
and https://www.spinics.net/lists/linux-rt-users/msg20280.html.
Many thanks to them.

Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Pankaj Gupta <pagupta@redhat.com>
Cc: linux-rt-users <linux-rt-users@vger.kernel.org>
Cc: linux-nvdimm <linux-nvdimm@lists.01.org>
Signed-off-by: Yongxin Liu <yongxin.liu@windriver.com>
---
 drivers/nvdimm/region_devs.c | 40 +++++++++++++++++++---------------------
 1 file changed, 19 insertions(+), 21 deletions(-)

diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
index fa37afcd43ff..6c5388cf2477 100644
--- a/drivers/nvdimm/region_devs.c
+++ b/drivers/nvdimm/region_devs.c
@@ -18,9 +18,13 @@
 #include <linux/sort.h>
 #include <linux/io.h>
 #include <linux/nd.h>
+#include <linux/locallock.h>
 #include "nd-core.h"
 #include "nd.h"
 
+/* lock for tasks on the same CPU to sequence the access to the lane */
+static DEFINE_LOCAL_IRQ_LOCK(ndl_local_lock);
+
 /*
  * For readq() and writeq() on 32-bit builds, the hi-lo, lo-hi order is
  * irrelevant.
@@ -935,18 +939,15 @@ int nd_blk_region_init(struct nd_region *nd_region)
 unsigned int nd_region_acquire_lane(struct nd_region *nd_region)
 {
 	unsigned int cpu, lane;
+	struct nd_percpu_lane *ndl_lock, *ndl_count;
 
-	cpu = get_cpu();
-	if (nd_region->num_lanes < nr_cpu_ids) {
-		struct nd_percpu_lane *ndl_lock, *ndl_count;
+	cpu = local_lock_cpu(ndl_local_lock);
 
-		lane = cpu % nd_region->num_lanes;
-		ndl_count = per_cpu_ptr(nd_region->lane, cpu);
-		ndl_lock = per_cpu_ptr(nd_region->lane, lane);
-		if (ndl_count->count++ == 0)
-			spin_lock(&ndl_lock->lock);
-	} else
-		lane = cpu;
+	lane = cpu % nd_region->num_lanes;
+	ndl_count = per_cpu_ptr(nd_region->lane, cpu);
+	ndl_lock = per_cpu_ptr(nd_region->lane, lane);
+	if (ndl_count->count++ == 0)
+		spin_lock(&ndl_lock->lock);
 
 	return lane;
 }
@@ -954,17 +955,14 @@ EXPORT_SYMBOL(nd_region_acquire_lane);
 
 void nd_region_release_lane(struct nd_region *nd_region, unsigned int lane)
 {
-	if (nd_region->num_lanes < nr_cpu_ids) {
-		unsigned int cpu = get_cpu();
-		struct nd_percpu_lane *ndl_lock, *ndl_count;
-
-		ndl_count = per_cpu_ptr(nd_region->lane, cpu);
-		ndl_lock = per_cpu_ptr(nd_region->lane, lane);
-		if (--ndl_count->count == 0)
-			spin_unlock(&ndl_lock->lock);
-		put_cpu();
-	}
-	put_cpu();
+	struct nd_percpu_lane *ndl_lock, *ndl_count;
+	unsigned int cpu = smp_processor_id();
+
+	ndl_count = per_cpu_ptr(nd_region->lane, cpu);
+	ndl_lock = per_cpu_ptr(nd_region->lane, lane);
+	if (--ndl_count->count == 0)
+		spin_unlock(&ndl_lock->lock);
+	local_unlock_cpu(ndl_local_lock);
 }
 EXPORT_SYMBOL(nd_region_release_lane);
 
-- 
2.14.4

Thread overview: 24+ messages

2019-03-06  9:57 Yongxin Liu [this message]
2019-03-06 16:35 ` Dan Williams
2019-03-07 14:33 ` Sebastian Andrzej Siewior
2019-03-08  0:07   ` Liu, Yongxin
2019-03-08  9:41     ` Sebastian Andrzej Siewior
2019-03-11  0:44       ` Liu, Yongxin
2019-03-15 16:42         ` Sebastian Andrzej Siewior
2019-03-18  1:41           ` Liu, Yongxin
2019-03-18 11:40             ` Sebastian Andrzej Siewior
2019-03-18 11:48               ` Liu, Yongxin
2019-03-28 17:38                 ` Sebastian Andrzej Siewior
2019-03-08  6:31 ` Pankaj Gupta
