* [RFC PATCHv2 0/3] Support for HwSpinlock reserved locks
From: Suman Anna @ 2014-09-12 20:56 UTC
  To: linux-arm-kernel

Hi Ohad,

This RFC patchset adds support for reserved locks to the HwSpinlock
core, so that the dynamic hwspin_lock_request() API can no longer hand
out reserved locks (specific locks requested by DT client users). The
patches are split out of the v5 hwspinlock/omap DT series [1] so as not
to block the DT support patches. The series builds on top of the
refreshed hwspinlock core/omap DT series [2] and is baselined on
3.17-rc3.
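
To make the intended behaviour concrete, here is a minimal, hypothetical
usage sketch of what a DT-boot client sees once the series is applied
(the function and lock ids are made up for illustration; error handling
is trimmed):

#include <linux/hwspinlock.h>

/* Illustration only (not part of this series): lock 2 is assumed to be
 * referenced by some client node's "hwlocks" property, while the lock
 * handed out by the dynamic API is chosen by the core. */
static void example_behaviour(void)
{
	struct hwspinlock *any, *rsvd;

	/* dynamic allocation now skips reserved locks entirely */
	any = hwspin_lock_request();

	/* a reserved lock has to be claimed explicitly by its id */
	rsvd = hwspin_lock_request_specific(2);

	if (any)
		hwspin_lock_free(any);
	if (rsvd)
		hwspin_lock_free(rsvd);
}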

Following is a summary of the changes:
- Patches 1 and 2 are equivalent to patches 9 and 10 from [1]; the only
  change is an update to the hwspinlock.txt documentation.
- Patch 3 is an updated version of the reworked patch 12 [3] from v5.
  Changes include some additional function documentation, a switch to an
  internal of_xlate function, and a fix for of_node reference maintenance.

The validation logs for all the applicable OMAP SoCs are at:
  OMAP4  - http://pastebin.ubuntu.com/8329417
  OMAP5  - http://pastebin.ubuntu.com/8329420
  DRA74x - http://pastebin.ubuntu.com/8329519
  AM33xx - http://pastebin.ubuntu.com/8329454
  AM43xx - http://pastebin.ubuntu.com/8329400

The above logs were generated with some additional test patches, staged
here for reference:
https://github.com/sumananna/omap-kernel/commits/hwspinlock/test/3.17-rc3-dtv6-rsrvdlocks
https://github.com/sumananna/omap-kernel/commits/hwspinlock/submit/3.17-rc3-dtv6-rsrvdlocks

regards
Suman

[1] http://marc.info/?l=linux-arm-kernel&m=139890465902747&w=2
[2] http://marc.info/?l=linux-arm-kernel&m=141055363113881&w=2
[3] http://marc.info/?l=linux-arm-kernel&m=139968477508013&w=2

Suman Anna (3):
  hwspinlock/core: prepare unregister code to support reserved locks
  hwspinlock/core: prepare core to support reserved locks
  hwspinlock/core: add support for reserved locks

 Documentation/hwspinlock.txt             |   2 +
 drivers/hwspinlock/hwspinlock_core.c     | 108 ++++++++++++++++++++++++-------
 drivers/hwspinlock/hwspinlock_internal.h |   2 +
 3 files changed, 89 insertions(+), 23 deletions(-)

-- 
2.0.4


* [RFC PATCHv2 1/3] hwspinlock/core: prepare unregister code to support reserved locks
From: Suman Anna @ 2014-09-12 20:56 UTC
  To: linux-arm-kernel

Rearrange the code between hwspin_lock_unregister() and the underlying
hwspin_lock_unregister_single() function so that their semantics mirror
those of the corresponding _register_ functions. This prepares the
hwspinlock core to better support unregistration of reserved locks.
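
For reference, the net effect on the single-lock helper signatures is
summarized below (a sketch only; the diff is authoritative):

/* before: the unregister helper looked the lock up by id and returned it */
static struct hwspinlock *hwspin_lock_unregister_single(unsigned int id);

/* after: it takes the lock and its id and returns an error code, matching
 * hwspin_lock_register_single() */
static int hwspin_lock_register_single(struct hwspinlock *hwlock, int id);
static int hwspin_lock_unregister_single(struct hwspinlock *hwlock, int id);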

Signed-off-by: Suman Anna <s-anna@ti.com>
---
 drivers/hwspinlock/hwspinlock_core.c | 37 +++++++++++++++++++-----------------
 1 file changed, 20 insertions(+), 17 deletions(-)

diff --git a/drivers/hwspinlock/hwspinlock_core.c b/drivers/hwspinlock/hwspinlock_core.c
index 7d9f749..5fad292 100644
--- a/drivers/hwspinlock/hwspinlock_core.c
+++ b/drivers/hwspinlock/hwspinlock_core.c
@@ -409,29 +409,33 @@ out:
 	return 0;
 }
 
-static struct hwspinlock *hwspin_lock_unregister_single(unsigned int id)
+static int hwspin_lock_unregister_single(struct hwspinlock *hwlock, int id)
 {
-	struct hwspinlock *hwlock = NULL;
-	int ret;
+	struct hwspinlock *tmp = NULL;
+	int ret = 0;
 
 	mutex_lock(&hwspinlock_tree_lock);
 
 	/* make sure the hwspinlock is not in use (tag is set) */
-	ret = radix_tree_tag_get(&hwspinlock_tree, id, HWSPINLOCK_UNUSED);
-	if (ret == 0) {
+	if (!radix_tree_tag_get(&hwspinlock_tree, id, HWSPINLOCK_UNUSED)) {
 		pr_err("hwspinlock %d still in use (or not present)\n", id);
+		ret = -EBUSY;
 		goto out;
 	}
 
-	hwlock = radix_tree_delete(&hwspinlock_tree, id);
-	if (!hwlock) {
+	tmp = radix_tree_delete(&hwspinlock_tree, id);
+	if (!tmp) {
 		pr_err("failed to delete hwspinlock %d\n", id);
+		ret = -EIO;
 		goto out;
 	}
 
+	/* self-sanity check that should never fail */
+	WARN_ON(tmp != hwlock);
+
 out:
 	mutex_unlock(&hwspinlock_tree_lock);
-	return hwlock;
+	return ret;
 }
 
 /*
@@ -520,8 +524,10 @@ int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev,
 	return 0;
 
 reg_failed:
-	while (--i >= 0)
-		hwspin_lock_unregister_single(base_id + i);
+	while (--i >= 0) {
+		hwlock = &bank->lock[i];
+		hwspin_lock_unregister_single(hwlock, base_id + i);
+	}
 	mutex_lock(&hwspinlock_tree_lock);
 	list_del(&bank->list);
 	mutex_unlock(&hwspinlock_tree_lock);
@@ -542,18 +548,15 @@ EXPORT_SYMBOL_GPL(hwspin_lock_register);
  */
 int hwspin_lock_unregister(struct hwspinlock_device *bank)
 {
-	struct hwspinlock *hwlock, *tmp;
-	int i;
+	struct hwspinlock *hwlock;
+	int i, ret;
 
 	for (i = 0; i < bank->num_locks; i++) {
 		hwlock = &bank->lock[i];
 
-		tmp = hwspin_lock_unregister_single(bank->base_id + i);
-		if (!tmp)
+		ret = hwspin_lock_unregister_single(hwlock, bank->base_id + i);
+		if (ret)
 			return -EBUSY;
-
-		/* self-sanity check that should never fail */
-		WARN_ON(tmp != hwlock);
 	}
 
 	mutex_lock(&hwspinlock_tree_lock);
-- 
2.0.4


* [RFC PATCHv2 2/3] hwspinlock/core: prepare core to support reserved locks
From: Suman Anna @ 2014-09-12 20:56 UTC
  To: linux-arm-kernel

The HwSpinlock core allows requesting either a specific lock or an
available normal lock. The specific locks are usually reserved at board
init time, while the normal available locks are intended to be assigned
at runtime.

This patch prepares the hwspinlock core to support this concept of
reserved locks. A new field is added to struct hwspinlock to identify
whether a lock is reserved for allocation through the
hwspin_lock_request_specific() API or available for dynamic allocation.
A new radix-tree tag, HWSPINLOCK_RESERVED, is introduced to mark such
reserved locks.
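
A minimal sketch of the bookkeeping this patch introduces (the diff below
is authoritative; the comment about the anonymous allocator refers to the
existing hwspin_lock_request() lookup path, which is not touched by this
patch):

/* radix tree tags: a lock's own tag stays set while it is unclaimed */
#define HWSPINLOCK_UNUSED	(0)	/* normal lock, free for dynamic requests */
#define HWSPINLOCK_RESERVED	(1)	/* lock set aside for a specific client */

struct hwspinlock {
	struct hwspinlock_device *bank;
	spinlock_t lock;
	unsigned int type;	/* which of the two tags this lock carries */
	void *priv;
};

/* register and free set the tag given by hwlock->type, request clears it;
 * since the anonymous allocator looks up only HWSPINLOCK_UNUSED-tagged
 * entries, reserved locks will never satisfy hwspin_lock_request() */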

Signed-off-by: Suman Anna <s-anna@ti.com>
---
 Documentation/hwspinlock.txt             |  2 ++
 drivers/hwspinlock/hwspinlock_core.c     | 14 ++++++++------
 drivers/hwspinlock/hwspinlock_internal.h |  2 ++
 3 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/Documentation/hwspinlock.txt b/Documentation/hwspinlock.txt
index c15dc9f..cdcfdff 100644
--- a/Documentation/hwspinlock.txt
+++ b/Documentation/hwspinlock.txt
@@ -300,11 +300,13 @@ of which represents a single hardware lock:
  * struct hwspinlock - this struct represents a single hwspinlock instance
  * @bank: the hwspinlock_device structure which owns this lock
  * @lock: initialized and used by hwspinlock core
+ * @type: type of lock, used to distinguish regular locks from reserved locks
  * @priv: private data, owned by the underlying platform-specific hwspinlock drv
  */
 struct hwspinlock {
 	struct hwspinlock_device *bank;
 	spinlock_t lock;
+	unsigned int type;
 	void *priv;
 };
 
diff --git a/drivers/hwspinlock/hwspinlock_core.c b/drivers/hwspinlock/hwspinlock_core.c
index 5fad292..f0d8475 100644
--- a/drivers/hwspinlock/hwspinlock_core.c
+++ b/drivers/hwspinlock/hwspinlock_core.c
@@ -33,6 +33,7 @@
 
 /* radix tree tags */
 #define HWSPINLOCK_UNUSED	(0) /* tags an hwspinlock as unused */
+#define HWSPINLOCK_RESERVED	(1) /* tags an hwspinlock as reserved */
 
 /*
  * A radix tree is used to maintain the available hwspinlock instances.
@@ -399,7 +400,7 @@ static int hwspin_lock_register_single(struct hwspinlock *hwlock, int id)
 	}
 
 	/* mark this hwspinlock as available */
-	tmp = radix_tree_tag_set(&hwspinlock_tree, id, HWSPINLOCK_UNUSED);
+	tmp = radix_tree_tag_set(&hwspinlock_tree, id, hwlock->type);
 
 	/* self-sanity check which should never fail */
 	WARN_ON(tmp != hwlock);
@@ -417,7 +418,7 @@ static int hwspin_lock_unregister_single(struct hwspinlock *hwlock, int id)
 	mutex_lock(&hwspinlock_tree_lock);
 
 	/* make sure the hwspinlock is not in use (tag is set) */
-	if (!radix_tree_tag_get(&hwspinlock_tree, id, HWSPINLOCK_UNUSED)) {
+	if (!radix_tree_tag_get(&hwspinlock_tree, id, hwlock->type)) {
 		pr_err("hwspinlock %d still in use (or not present)\n", id);
 		ret = -EBUSY;
 		goto out;
@@ -515,6 +516,7 @@ int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev,
 
 		spin_lock_init(&hwlock->lock);
 		hwlock->bank = bank;
+		hwlock->type = HWSPINLOCK_UNUSED;
 
 		ret = hwspin_lock_register_single(hwlock, base_id + i);
 		if (ret)
@@ -599,7 +601,7 @@ static int __hwspin_lock_request(struct hwspinlock *hwlock)
 
 	/* mark hwspinlock as used, should not fail */
 	tmp = radix_tree_tag_clear(&hwspinlock_tree, hwlock_to_id(hwlock),
-							HWSPINLOCK_UNUSED);
+							hwlock->type);
 
 	/* self-sanity check that should never fail */
 	WARN_ON(tmp != hwlock);
@@ -698,7 +700,7 @@ struct hwspinlock *hwspin_lock_request_specific(unsigned int id)
 	WARN_ON(hwlock_to_id(hwlock) != id);
 
 	/* make sure this hwspinlock is unused */
-	ret = radix_tree_tag_get(&hwspinlock_tree, id, HWSPINLOCK_UNUSED);
+	ret = radix_tree_tag_get(&hwspinlock_tree, id, hwlock->type);
 	if (ret == 0) {
 		pr_warn("hwspinlock %u is already in use\n", id);
 		hwlock = NULL;
@@ -744,7 +746,7 @@ int hwspin_lock_free(struct hwspinlock *hwlock)
 
 	/* make sure the hwspinlock is used */
 	ret = radix_tree_tag_get(&hwspinlock_tree, hwlock_to_id(hwlock),
-							HWSPINLOCK_UNUSED);
+							hwlock->type);
 	if (ret == 1) {
 		dev_err(dev, "%s: hwlock is already free\n", __func__);
 		dump_stack();
@@ -759,7 +761,7 @@ int hwspin_lock_free(struct hwspinlock *hwlock)
 
 	/* mark this hwspinlock as available */
 	tmp = radix_tree_tag_set(&hwspinlock_tree, hwlock_to_id(hwlock),
-							HWSPINLOCK_UNUSED);
+							hwlock->type);
 
 	/* sanity check (this shouldn't happen) */
 	WARN_ON(tmp != hwlock);
diff --git a/drivers/hwspinlock/hwspinlock_internal.h b/drivers/hwspinlock/hwspinlock_internal.h
index aff560c..4ebed1a 100644
--- a/drivers/hwspinlock/hwspinlock_internal.h
+++ b/drivers/hwspinlock/hwspinlock_internal.h
@@ -43,11 +43,13 @@ struct hwspinlock_ops {
  * struct hwspinlock - this struct represents a single hwspinlock instance
  * @bank: the hwspinlock_device structure which owns this lock
  * @lock: initialized and used by hwspinlock core
+ * @type: type of lock, used to distinguish regular locks from reserved locks
  * @priv: private data, owned by the underlying platform-specific hwspinlock drv
  */
 struct hwspinlock {
 	struct hwspinlock_device *bank;
 	spinlock_t lock;
+	unsigned int type;
 	void *priv;
 };
 
-- 
2.0.4


* [RFC PATCHv2 3/3] hwspinlock/core: add support for reserved locks
From: Suman Anna @ 2014-09-12 20:56 UTC
  To: linux-arm-kernel

The HwSpinlock core allows requesting either a specific lock or an
available normal lock. The specific locks are usually reserved at board
init time, while the normal available locks are intended to be assigned
at runtime to clients that do not care about which particular lock they
get.

This patch enhances the HwSpinlock core to:
  1. mark certain locks as 'reserved' by parsing the DT blob for any
     locks referenced by client nodes.
  2. restrict the anonymous hwspin_lock_request() API to allocate only
     non-reserved locks on DT boots.
  3. limit the reserved locks to allocation only through the
     hwspin_lock_request_specific() API on DT boots (a usage sketch
     follows this list).
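
For illustration, a hedged sketch of how a lock ends up reserved and then
claimed (the node names, addresses and lock id are hypothetical, and the
of_hwspin_lock_get_id() helper is provided by the prerequisite DT series
[2], not by this patch):

/*
 * Hypothetical DT fragment: the client references lock 2 of the bank,
 * which is what hwspin_mark_reserved_locks() picks up at registration:
 *
 *	hwspinlock: spinlock@4a0f6000 {
 *		compatible = "ti,omap4-hwspinlock";
 *		reg = <0x4a0f6000 0x1000>;
 *		#hwlock-cells = <1>;
 *	};
 *
 *	client@0 {
 *		hwlocks = <&hwspinlock 2>;
 *	};
 */
#include <linux/hwspinlock.h>
#include <linux/of.h>

static struct hwspinlock *example_claim_reserved(struct device_node *np)
{
	int id;

	/* resolve the client's "hwlocks" phandle to a global lock id */
	id = of_hwspin_lock_get_id(np, 0);
	if (id < 0)
		return NULL;

	/* on DT boots only reserved locks may be claimed this way */
	return hwspin_lock_request_specific(id);
}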

Signed-off-by: Suman Anna <s-anna@ti.com>
---
 drivers/hwspinlock/hwspinlock_core.c | 61 ++++++++++++++++++++++++++++++++++--
 1 file changed, 59 insertions(+), 2 deletions(-)

diff --git a/drivers/hwspinlock/hwspinlock_core.c b/drivers/hwspinlock/hwspinlock_core.c
index f0d8475..d49a077 100644
--- a/drivers/hwspinlock/hwspinlock_core.c
+++ b/drivers/hwspinlock/hwspinlock_core.c
@@ -474,6 +474,53 @@ static int hwspinlock_device_add(struct hwspinlock_device *bank)
 }
 
 /**
+ * hwspin_mark_reserved_locks() - mark reserved locks
+ *
+ * This is an internal function that marks all the reserved locks within
+ * a hwspinlock device during the registration phase, and is applicable
+ * only for device-tree boots. The locks are marked by browsing through
+ * all the user nodes with the property "hwlocks", so that a separate
+ * property is not needed in the hwspinlock device itself. Note that it
+ * also marks locks used on disabled user nodes.
+ */
+static void hwspin_mark_reserved_locks(struct hwspinlock_device *bank)
+{
+	struct device_node *np = bank->dev->of_node;
+	const char *prop_name = "hwlocks";
+	const char *cells_name = "#hwlock-cells";
+	struct device_node *node = NULL;
+	struct of_phandle_args args;
+	struct hwspinlock *hwlock;
+	int i, id, count, ret;
+
+	for_each_node_with_property(node, prop_name) {
+		count = of_count_phandle_with_args(node, prop_name, cells_name);
+		if (count <= 0)
+			continue;
+
+		for (i = 0; i < count; i++, of_node_put(args.np)) {
+			args.np = NULL;
+			ret = of_parse_phandle_with_args(node, prop_name,
+							 cells_name, i, &args);
+			if (ret || np != args.np)
+				continue;
+
+			id = of_hwspin_lock_simple_xlate(bank, &args);
+			if (id < 0 || id >= bank->num_locks)
+				continue;
+
+			hwlock = &bank->lock[id];
+			if (hwlock->type == HWSPINLOCK_RESERVED) {
+				dev_err(bank->dev, "potential reuse of hwspinlock %d between multiple clients on %s\n",
+					id, np->full_name);
+				continue;
+			}
+			hwlock->type = HWSPINLOCK_RESERVED;
+		}
+	}
+}
+
+/**
  * hwspin_lock_register() - register a new hw spinlock device
  * @bank: the hwspinlock device, which usually provides numerous hw locks
  * @dev: the backing device
@@ -511,12 +558,16 @@ int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev,
 	if (ret)
 		return ret;
 
+	if (dev->of_node)
+		hwspin_mark_reserved_locks(bank);
+
 	for (i = 0; i < num_locks; i++) {
 		hwlock = &bank->lock[i];
 
 		spin_lock_init(&hwlock->lock);
 		hwlock->bank = bank;
-		hwlock->type = HWSPINLOCK_UNUSED;
+		if (hwlock->type != HWSPINLOCK_RESERVED)
+			hwlock->type = HWSPINLOCK_UNUSED;
 
 		ret = hwspin_lock_register_single(hwlock, base_id + i);
 		if (ret)
@@ -699,7 +750,13 @@ struct hwspinlock *hwspin_lock_request_specific(unsigned int id)
 	/* sanity check (this shouldn't happen) */
 	WARN_ON(hwlock_to_id(hwlock) != id);
 
-	/* make sure this hwspinlock is unused */
+	if (hwlock->bank->dev->of_node && hwlock->type != HWSPINLOCK_RESERVED) {
+		pr_warn("hwspinlock %u is not a reserved lock\n", id);
+		hwlock = NULL;
+		goto out;
+	}
+
+	/* make sure this hwspinlock is an unused reserved lock */
 	ret = radix_tree_tag_get(&hwspinlock_tree, id, hwlock->type);
 	if (ret == 0) {
 		pr_warn("hwspinlock %u is already in use\n", id);
-- 
2.0.4

