linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/2] hwspinlock: add sunxi hardware spinlock support
@ 2020-11-18 10:01 Wilken Gottwalt
  2020-11-18 10:01 ` [PATCH 1/2] dt-bindings: hwlock: sunxi: add sunxi_hwspinlock documentation Wilken Gottwalt
  2020-11-18 10:02 ` [PATCH 2/2] hwspinlock: add sunxi hardware spinlock support Wilken Gottwalt
  0 siblings, 2 replies; 18+ messages in thread
From: Wilken Gottwalt @ 2020-11-18 10:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ohad Ben-Cohen, Bjorn Andersson, Baolin Wang, Rob Herring,
	Maxime Ripard, Chen-Yu Tsai, Jernej Skrabec

Most Allwinner SoCs contain a spinlock unit which can be used for
synchronization/locking between different subsystems. According to
several Allwinner datasheets, most of the sun8i and sun50i SoCs share the
same hardware with at least 32 (32-bit wide) registers for 32 spinlocks.
The implemented spinlock hardware can support 32, 64, 128 or 256
registers.

This driver supports all four register bank sizes and also provides some
additional information via debugfs. The driver can be built by setting
the HWSPINLOCK_SUNXI symbol and can also be compiled as a module.

This patch adds the driver for this hardware and updates the hwlock
documentation on how to use it. According to the datasheets the H2+, H3,
H5 and H6 SoCs share exactly the same spinlock hardware. But I'm pretty
sure that the whole sun8i family has the same hardware. The sun50i family
may be a different story. The H616 is missing the whole spinlock part in
the datasheets, so I assume this is a sun50i part that does not support
hwspinlocks.

The driver itself is not yet enabled in the devicetree files of the H2+,
H3, H5 and H6 SoCs, because for now I am only able to test the driver
against an H2+ device (OrangePi Zero).

This patch adds:
- hwspinlock driver sunxi_hwspinlock
- hwspinlock dt bindings documentation
- updates MAINTAINERS

Signed-off-by: Wilken Gottwalt <wilken.gottwalt@posteo.net>

Wilken Gottwalt (2):
  dt-bindings: hwlock: sunxi: add sunxi_hwspinlock documentation
  hwspinlock: add sunxi hardware spinlock support

 .../bindings/hwlock/sunxi-hwspinlock.yaml     |  64 ++++
 MAINTAINERS                                   |   6 +
 drivers/hwspinlock/Kconfig                    |   9 +
 drivers/hwspinlock/Makefile                   |   1 +
 drivers/hwspinlock/sunxi_hwspinlock.c         | 282 ++++++++++++++++++
 5 files changed, 362 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/hwlock/sunxi-hwspinlock.yaml
 create mode 100644 drivers/hwspinlock/sunxi_hwspinlock.c

-- 
2.29.2


^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH 1/2] dt-bindings: hwlock: sunxi: add sunxi_hwspinlock documentation
  2020-11-18 10:01 [PATCH 0/2] hwspinlock: add sunxi hardware spinlock support Wilken Gottwalt
@ 2020-11-18 10:01 ` Wilken Gottwalt
  2020-11-18 10:02 ` [PATCH 2/2] hwspinlock: add sunxi hardware spinlock support Wilken Gottwalt
  1 sibling, 0 replies; 18+ messages in thread
From: Wilken Gottwalt @ 2020-11-18 10:01 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ohad Ben-Cohen, Bjorn Andersson, Baolin Wang, Rob Herring,
	Maxime Ripard, Chen-Yu Tsai, Jernej Skrabec

Adds documentation on how to use/enable the sunxi_hwspinlock driver.

Signed-off-by: Wilken Gottwalt <wilken.gottwalt@posteo.net>
---
 .../bindings/hwlock/sunxi-hwspinlock.yaml     | 64 +++++++++++++++++++
 1 file changed, 64 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/hwlock/sunxi-hwspinlock.yaml

diff --git a/Documentation/devicetree/bindings/hwlock/sunxi-hwspinlock.yaml b/Documentation/devicetree/bindings/hwlock/sunxi-hwspinlock.yaml
new file mode 100644
index 000000000000..773eaa6b33ca
--- /dev/null
+++ b/Documentation/devicetree/bindings/hwlock/sunxi-hwspinlock.yaml
@@ -0,0 +1,64 @@
+# SPDX-License-Identifier: (GPL-2.0-only or BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/hwlock/sunxi-hwspinlock.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: SUNXI hardware spinlock for Allwinner based SoCs
+
+maintainers:
+  - Wilken Gottwalt <wilken.gottwalt@posteo.net>
+
+properties:
+  compatible:
+    enum:
+      - allwinner,sun8i-hwspinlock
+      - allwinner,sun50i-hwspinlock
+
+  reg:          # 0x01C18000 (H2+, H3, H5), 0x03004000 (H6), length 0x400 (max 256 * 32bit)
+    maxItems: 1
+
+  clocks:       # phandle to the reference clock
+    maxItems: 1
+
+  clock-names:  # name of the bus clock ("ahb")
+    maxItems: 1
+
+  resets:       # phandle to the reset control
+    maxItems: 1
+
+  reset-names:  # name of the reset line ("ahb")
+    maxItems: 1
+
+required:
+  - compatible
+  - reg
+  - clocks
+  - clock-names
+  - resets
+  - reset-names
+
+additionalProperties: false
+
+examples:
+
+  - |
+    /* H2+ based OrangePi Zero */
+    hwspinlock: hwspinlock@1c18000 {
+      compatible = "allwinner,sun8i-hwspinlock";
+      reg = <0x01c18000 0x400>;
+      clocks = <&ccu CLK_BUS_SPINLOCK>;
+      clock-names = "ahb";
+      resets = <&ccu RST_BUS_SPINLOCK>;
+      reset-names = "ahb";
+    };
+
+    /* H6 based OrangePi 3 */
+    hwspinlock@3004000 {
+      compatible = "allwinner,sun50i-hwspinlock";
+      reg = <0x03004000 0x400>;
+      clocks = <&ccu CLK_BUS_SPINLOCK>;
+      clock-names = "ahb";
+      resets = <&ccu RST_BUS_SPINLOCK>;
+      reset-names = "ahb";
+    };
-- 
2.29.2



* [PATCH 2/2] hwspinlock: add sunxi hardware spinlock support
  2020-11-18 10:01 [PATCH 0/2] hwspinlock: add sunxi hardware spinlock support Wilken Gottwalt
  2020-11-18 10:01 ` [PATCH 1/2] dt-bindings: hwlock: sunxi: add sunxi_hwspinlock documentation Wilken Gottwalt
@ 2020-11-18 10:02 ` Wilken Gottwalt
  2020-11-18 15:37   ` Maxime Ripard
  2020-11-22  5:19   ` Bjorn Andersson
  1 sibling, 2 replies; 18+ messages in thread
From: Wilken Gottwalt @ 2020-11-18 10:02 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ohad Ben-Cohen, Bjorn Andersson, Baolin Wang, Rob Herring,
	Maxime Ripard, Chen-Yu Tsai, Jernej Skrabec

Adds the sunxi_hwspinlock driver and updates makefiles/maintainers.

Signed-off-by: Wilken Gottwalt <wilken.gottwalt@posteo.net>
---
 MAINTAINERS                           |   6 +
 drivers/hwspinlock/Kconfig            |   9 +
 drivers/hwspinlock/Makefile           |   1 +
 drivers/hwspinlock/sunxi_hwspinlock.c | 282 ++++++++++++++++++++++++++
 4 files changed, 298 insertions(+)
 create mode 100644 drivers/hwspinlock/sunxi_hwspinlock.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 5cc595ac7b28..7765da172f10 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -722,6 +722,12 @@ L:	linux-crypto@vger.kernel.org
 S:	Maintained
 F:	drivers/crypto/allwinner/
 
+ALLWINNER HARDWARE SPINLOCK SUPPORT
+M:	Wilken Gottwalt <wilken.gottwalt@posteo.net>
+S:	Maintained
+F:	Documentation/devicetree/bindings/hwlock/sunxi-hwspinlock.yaml
+F:	drivers/hwspinlock/sunxi_hwspinlock.c
+
 ALLWINNER THERMAL DRIVER
 M:	Vasily Khoruzhick <anarsoul@gmail.com>
 M:	Yangtao Li <tiny.windzz@gmail.com>
diff --git a/drivers/hwspinlock/Kconfig b/drivers/hwspinlock/Kconfig
index 32cd26352f38..27b6e22126f7 100644
--- a/drivers/hwspinlock/Kconfig
+++ b/drivers/hwspinlock/Kconfig
@@ -55,6 +55,15 @@ config HWSPINLOCK_STM32
 
 	  If unsure, say N.
 
+config HWSPINLOCK_SUNXI
+	tristate "SUNXI Hardware Spinlock device"
+	depends on ARCH_SUNXI || COMPILE_TEST
+	help
+	  Say Y here to support the Allwinner hardware mutex device available
+	  in the H2+, H3, H5 and H6 SoCs.
+
+	  If unsure, say N.
+
 config HSEM_U8500
 	tristate "STE Hardware Semaphore functionality"
 	depends on ARCH_U8500 || COMPILE_TEST
diff --git a/drivers/hwspinlock/Makefile b/drivers/hwspinlock/Makefile
index ed053e3f02be..bf46bee95226 100644
--- a/drivers/hwspinlock/Makefile
+++ b/drivers/hwspinlock/Makefile
@@ -9,4 +9,5 @@ obj-$(CONFIG_HWSPINLOCK_QCOM)		+= qcom_hwspinlock.o
 obj-$(CONFIG_HWSPINLOCK_SIRF)		+= sirf_hwspinlock.o
 obj-$(CONFIG_HWSPINLOCK_SPRD)		+= sprd_hwspinlock.o
 obj-$(CONFIG_HWSPINLOCK_STM32)		+= stm32_hwspinlock.o
+obj-$(CONFIG_HWSPINLOCK_SUNXI)		+= sunxi_hwspinlock.o
 obj-$(CONFIG_HSEM_U8500)		+= u8500_hsem.o
diff --git a/drivers/hwspinlock/sunxi_hwspinlock.c b/drivers/hwspinlock/sunxi_hwspinlock.c
new file mode 100644
index 000000000000..54e6ad3ac1c2
--- /dev/null
+++ b/drivers/hwspinlock/sunxi_hwspinlock.c
@@ -0,0 +1,282 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * sunxi_hwspinlock.c - hardware spinlock driver for Allwinner SoCs
+ * Copyright (C) 2020 Wilken Gottwalt <wilken.gottwalt@posteo.net>
+ */
+
+#include <linux/bitops.h>
+#include <linux/clk.h>
+#include <linux/debugfs.h>
+#include <linux/errno.h>
+#include <linux/hwspinlock.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/reset.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+
+#include "hwspinlock_internal.h"
+
+#define DRIVER_NAME		"sunxi_hwspinlock"
+
+#define SPINLOCK_BASE_ID	0 /* there is only one hwspinlock device in each SoC */
+#define SPINLOCK_SYSSTATUS_REG	0x0000
+#define SPINLOCK_STATUS_REG	0x0010
+#define SPINLOCK_LOCK_REGN	0x0100
+#define SPINLOCK_NOTTAKEN	0
+#define SPINLOCK_TAKEN		1
+
+struct sunxi_hwspinlock {
+	void __iomem *io_base;
+};
+
+struct sunxi_hwspinlock_data {
+	void __iomem *io_base;
+	struct hwspinlock_device *bank;
+	struct reset_control *reset;
+	struct clk *ahb_clock;
+	struct dentry *debugfs;
+	int nlocks;
+};
+
+#ifdef CONFIG_DEBUG_FS
+
+static int hwlocks_supported_show(struct seq_file *seqf, void *unused)
+{
+	struct sunxi_hwspinlock_data *priv = seqf->private;
+
+	seq_printf(seqf, "%d\n", priv->nlocks);
+
+	return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(hwlocks_supported);
+
+static int hwlocks_inuse_show(struct seq_file *seqf, void *unused)
+{
+	struct sunxi_hwspinlock_data *priv = seqf->private;
+	int inuse;
+
+	/* getting the status of only the main 32 spinlocks is supported */
+	inuse = hweight32(readl(priv->io_base + SPINLOCK_STATUS_REG));
+	seq_printf(seqf, "%d\n", inuse);
+
+	return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(hwlocks_inuse);
+
+static void sunxi_hwspinlock_debugfs_init(struct sunxi_hwspinlock_data *priv)
+{
+	priv->debugfs = debugfs_create_dir(DRIVER_NAME, NULL);
+	debugfs_create_file("hwlocks_supported", 0444, priv->debugfs, priv,
+			    &hwlocks_supported_fops);
+	debugfs_create_file("hwlocks_inuse", 0444, priv->debugfs, priv, &hwlocks_inuse_fops);
+}
+
+#else
+
+static void sunxi_hwspinlock_debugfs_init(struct sunxi_hwspinlock_data *priv)
+{
+}
+
+#endif
+
+static int sunxi_hwspinlock_trylock(struct hwspinlock *lock)
+{
+	struct sunxi_hwspinlock *priv = lock->priv;
+	void __iomem *lock_addr = priv->io_base;
+
+	/* reading the lock register returns NOTTAKEN and takes the lock */
+	return (readl(lock_addr) == SPINLOCK_NOTTAKEN);
+}
+
+static void sunxi_hwspinlock_unlock(struct hwspinlock *lock)
+{
+	struct sunxi_hwspinlock *priv = lock->priv;
+	void __iomem *lock_addr = priv->io_base;
+
+	writel(SPINLOCK_NOTTAKEN, lock_addr);
+}
+
+static const struct hwspinlock_ops sunxi_hwspinlock_ops = {
+	.trylock	= sunxi_hwspinlock_trylock,
+	.unlock		= sunxi_hwspinlock_unlock,
+};
+
+static int sunxi_hwspinlock_probe(struct platform_device *pdev)
+{
+	struct device_node *node = pdev->dev.of_node;
+	struct sunxi_hwspinlock_data *priv;
+	struct sunxi_hwspinlock *hwpriv;
+	struct hwspinlock *hwlock;
+	struct resource *res;
+	int num_banks, err, i;
+
+	if (!node)
+		return -ENODEV;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res)
+		return -ENODEV;
+
+	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+
+	priv->io_base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(priv->io_base)) {
+		err = PTR_ERR(priv->io_base);
+		dev_err(&pdev->dev, "unable to request MMIO (%d)\n", err);
+		return err;
+	}
+
+	priv->ahb_clock = devm_clk_get(&pdev->dev, "ahb");
+	if (IS_ERR(priv->ahb_clock)) {
+		err = PTR_ERR(priv->ahb_clock);
+		dev_err(&pdev->dev, "unable to get AHB clock (%d)\n", err);
+		return err;
+	}
+
+	priv->reset = devm_reset_control_get_optional(&pdev->dev, "ahb");
+	if (IS_ERR(priv->reset)) {
+		err = PTR_ERR(priv->reset);
+		if (err == -EPROBE_DEFER)
+			return err;
+
+		dev_info(&pdev->dev, "no optional reset control available (%d)\n", err);
+		priv->reset = NULL;
+	}
+
+	if (priv->reset) {
+		err = reset_control_deassert(priv->reset);
+		if (err) {
+			dev_err(&pdev->dev, "deassert reset control failure (%d)\n", err);
+			return err;
+		}
+	}
+
+	err = clk_prepare_enable(priv->ahb_clock);
+	if (err) {
+		dev_err(&pdev->dev, "unable to prepare AHB clock (%d)\n", err);
+		goto reset_fail;
+	}
+
+	/* bits 28 and 29 hold the number of spinlock banks */
+	num_banks = (readl(priv->io_base + SPINLOCK_SYSSTATUS_REG) >> 28) & 0x03;
+	switch (num_banks) {
+	case 1 ... 4:
+		/*
+		 * 32, 64, 128 and 256 spinlocks are supported by the hardware implementation,
+		 * though most only support 32 spinlocks
+		 */
+		priv->nlocks = 1 << (4 + num_banks);
+		break;
+	default:
+		dev_err(&pdev->dev, "unsupported hwspinlock setup (%d)\n", num_banks);
+		err = -EINVAL;
+		goto fail;
+	}
+
+	/*
+	 * the Allwinner hwspinlock device uses 32bit registers for representing every single
+	 * spinlock, which is a real waste of space
+	 */
+	priv->bank = devm_kzalloc(&pdev->dev, priv->nlocks * sizeof(*hwlock) + sizeof(*priv->bank),
+				  GFP_KERNEL);
+	if (!priv->bank) {
+		err = -ENOMEM;
+		goto fail;
+	}
+
+	for (i = 0, hwlock = &priv->bank->lock[0]; i < priv->nlocks; ++i, ++hwlock) {
+		hwlock->priv = devm_kzalloc(&pdev->dev, sizeof(struct sunxi_hwspinlock),
+					    GFP_KERNEL);
+		if (!hwlock->priv) {
+			err = -ENOMEM;
+			goto fail;
+		}
+
+		hwpriv = hwlock->priv;
+		hwpriv->io_base = priv->io_base + SPINLOCK_LOCK_REGN + sizeof(u32) * i;
+	}
+
+	err = hwspin_lock_register(priv->bank, &pdev->dev, &sunxi_hwspinlock_ops, SPINLOCK_BASE_ID,
+				   priv->nlocks);
+	if (err) {
+		dev_err(&pdev->dev, "unable to register hwspinlocks (%d)\n", err);
+		goto fail;
+	}
+
+	platform_set_drvdata(pdev, priv);
+
+	sunxi_hwspinlock_debugfs_init(priv);
+
+	dev_dbg(&pdev->dev, "SUNXI hardware spinlock driver enabled (%d locks)\n", priv->nlocks);
+
+	return 0;
+
+fail:
+	clk_disable_unprepare(priv->ahb_clock);
+
+reset_fail:
+	if (priv->reset)
+		reset_control_assert(priv->reset);
+
+	return err;
+}
+
+static int sunxi_hwspinlock_remove(struct platform_device *pdev)
+{
+	struct sunxi_hwspinlock_data *priv = platform_get_drvdata(pdev);
+	int err;
+
+	debugfs_remove_recursive(priv->debugfs);
+
+	err = hwspin_lock_unregister(priv->bank);
+	if (err) {
+		dev_err(&pdev->dev, "unregister device failed (%d)\n", err);
+		return err;
+	}
+
+	if (priv->reset)
+		reset_control_assert(priv->reset);
+
+	clk_disable_unprepare(priv->ahb_clock);
+
+	return 0;
+}
+
+static const struct of_device_id sunxi_hwspinlock_ids[] = {
+	{ .compatible = "allwinner,sun8i-hwspinlock", },
+	{ .compatible = "allwinner,sun50i-hwspinlock", },
+	{},
+};
+MODULE_DEVICE_TABLE(of, sunxi_hwspinlock_ids);
+
+static struct platform_driver sunxi_hwspinlock_driver = {
+	.probe	= sunxi_hwspinlock_probe,
+	.remove	= sunxi_hwspinlock_remove,
+	.driver	= {
+		.name		= DRIVER_NAME,
+		.of_match_table	= of_match_ptr(sunxi_hwspinlock_ids),
+	},
+};
+
+static int __init sunxi_hwspinlock_init(void)
+{
+	return platform_driver_register(&sunxi_hwspinlock_driver);
+}
+postcore_initcall(sunxi_hwspinlock_init);
+
+static void __exit sunxi_hwspinlock_exit(void)
+{
+	platform_driver_unregister(&sunxi_hwspinlock_driver);
+}
+module_exit(sunxi_hwspinlock_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("SUNXI hardware spinlock driver");
+MODULE_AUTHOR("Wilken Gottwalt <wilken.gottwalt@posteo.net>");
-- 
2.29.2



* Re: [PATCH 2/2] hwspinlock: add sunxi hardware spinlock support
  2020-11-18 10:02 ` [PATCH 2/2] hwspinlock: add sunxi hardware spinlock support Wilken Gottwalt
@ 2020-11-18 15:37   ` Maxime Ripard
  2020-11-18 19:36     ` Wilken Gottwalt
  2020-11-22  5:19   ` Bjorn Andersson
  1 sibling, 1 reply; 18+ messages in thread
From: Maxime Ripard @ 2020-11-18 15:37 UTC (permalink / raw)
  To: Wilken Gottwalt
  Cc: linux-kernel, Ohad Ben-Cohen, Bjorn Andersson, Baolin Wang,
	Rob Herring, Chen-Yu Tsai, Jernej Skrabec


Hi Wilken,

On Wed, Nov 18, 2020 at 11:02:40AM +0100, Wilken Gottwalt wrote:
> Adds the sunxi_hwspinlock driver and updates makefiles/maintainers.
> 
> Signed-off-by: Wilken Gottwalt <wilken.gottwalt@posteo.net>

A more descriptive commit log would be welcome here, for example
containing on which SoC this driver can be used, and on which it was
tested.

This is the third attempt at that driver, and you can find the previous
versions here:
https://patchwork.kernel.org/project/linux-arm-kernel/cover/20200210170143.20007-1-nborisov@suse.com/
https://lore.kernel.org/patchwork/patch/706512/

Most of the comments on those series still apply to yours.

Most importantly, this hwspinlock is used to synchronize the ARM cores
and the ARISC. How did you test this driver?

Thanks!
Maxime



* Re: [PATCH 2/2] hwspinlock: add sunxi hardware spinlock support
  2020-11-18 15:37   ` Maxime Ripard
@ 2020-11-18 19:36     ` Wilken Gottwalt
  2020-11-19  7:15       ` Maxime Ripard
  0 siblings, 1 reply; 18+ messages in thread
From: Wilken Gottwalt @ 2020-11-18 19:36 UTC (permalink / raw)
  To: Maxime Ripard
  Cc: linux-kernel, Ohad Ben-Cohen, Bjorn Andersson, Baolin Wang,
	Rob Herring, Chen-Yu Tsai, Jernej Skrabec

Hi Maxime,

On Wed, 18 Nov 2020 16:37:33 +0100
Maxime Ripard <maxime@cerno.tech> wrote:
> Hi Wilken,
> 
> On Wed, Nov 18, 2020 at 11:02:40AM +0100, Wilken Gottwalt wrote:
> > Adds the sunxi_hwspinlock driver and updates makefiles/maintainers.
> > 
> > Signed-off-by: Wilken Gottwalt <wilken.gottwalt@posteo.net>
> 
> A more descriptive commit log would be welcome here, for example
> containing on which SoC this driver can be used, and on which it was
> tested.

can you help me here a bit? I still try to figure out how to do patch sets
properly. Some kernel submitting documentation says everything goes into the
coverletter and other documentation only tells how to split the patches. So
what would be the right way? A quick example based on my patch set would be
really helpful.

> This is the third attempt at that driver, and you can find the previous
> versions here:
> https://patchwork.kernel.org/project/linux-arm-kernel/cover/20200210170143.20007-1-nborisov@suse.com/
> https://lore.kernel.org/patchwork/patch/706512/
> 
> Most of the comments on those series still apply to yours.

Oh, I wrote my driver 2-3 years ago and just prepared it for mainline. I
wasn't aware of other attempts. I really should have checked this. Though,
I really want to get to the point where this driver is good enough for
mainline. Hmmm, it is interesting how similar these drivers are. Looks like
the other developers also got inspired by the already existing hwspinlock
drivers. :D

> Most importantly, this hwspinlock is used to synchronize the ARM cores
> and the ARISC. How did you test this driver?

Yes, you are right, I should have mentioned this. I have a simple test kernel
module for this. But I must admit, testing the ARISC is very hard and I have
no real idea how to do it. Testing the hwspinlocks in general seems to work
with my test kernel module, but I'm not sure if this is really sufficient. I
can provide the code for it if you like. What would be the best way? Github?
Just mailing a patch?

The test module produces these results:

# insmod /lib/modules/5.9.8/kernel/drivers/hwspinlock/sunxi_hwspinlock_test.ko 
[   45.395672] [init] sunxi hwspinlock test driver start
[   45.400775] [init] start test locks
[   45.404263] [run ] testing 32 locks
[   45.407804] [test] testing lock 0 -----
[   45.411652] [test] taking lock attempt #0 succeded
[   45.416438] [test] try taken lock attempt #0
[   45.420735] [test] unlock/take attempt #0
[   45.424752] [test] taking lock attempt #1 succeded
[   45.429556] [test] try taken lock attempt #1
[   45.433823] [test] unlock/take attempt #1
[   45.437862] [test] testing lock 1 -----
[   45.441699] [test] taking lock attempt #0 succeded
[   45.446484] [test] try taken lock attempt #0
[   45.450768] [test] unlock/take attempt #0
[   45.454774] [test] taking lock attempt #1 succeded
[   45.459576] [test] try taken lock attempt #1
[   45.463843] [test] unlock/take attempt #1
.
.
.
[   46.309925] [test] testing lock 30 -----
[   46.313852] [test] taking lock attempt #0 succeded
[   46.318654] [test] try taken lock attempt #0
[   46.322920] [test] unlock/take attempt #0
[   46.326944] [test] taking lock attempt #1 succeded
[   46.331729] [test] try taken lock attempt #1
[   46.335994] [test] unlock/take attempt #1
[   46.340021] [test] testing lock 31 -----
[   46.343947] [test] taking lock attempt #0 succeded
[   46.348749] [test] try taken lock attempt #0
[   46.353016] [test] unlock/take attempt #0
[   46.357040] [test] taking lock attempt #1 succeded
[   46.361825] [test] try taken lock attempt #1
[   46.366090] [test] unlock/take attempt #1
[   46.370112] [init] end test locks

Greetings,
Wilken


* Re: [PATCH 2/2] hwspinlock: add sunxi hardware spinlock support
  2020-11-18 19:36     ` Wilken Gottwalt
@ 2020-11-19  7:15       ` Maxime Ripard
  2020-11-19 10:13         ` Wilken Gottwalt
  0 siblings, 1 reply; 18+ messages in thread
From: Maxime Ripard @ 2020-11-19  7:15 UTC (permalink / raw)
  To: Wilken Gottwalt
  Cc: linux-kernel, Ohad Ben-Cohen, Bjorn Andersson, Baolin Wang,
	Rob Herring, Chen-Yu Tsai, Jernej Skrabec


On Wed, Nov 18, 2020 at 08:36:24PM +0100, Wilken Gottwalt wrote:
> On Wed, 18 Nov 2020 16:37:33 +0100
> Maxime Ripard <maxime@cerno.tech> wrote:
> > Hi Wilken,
> > 
> > On Wed, Nov 18, 2020 at 11:02:40AM +0100, Wilken Gottwalt wrote:
> > > Adds the sunxi_hwspinlock driver and updates makefiles/maintainers.
> > > 
> > > Signed-off-by: Wilken Gottwalt <wilken.gottwalt@posteo.net>
> > 
> > A more descriptive commit log would be welcome here, for example
> > containing on which SoC this driver can be used, and on which it was
> > tested.
> 
> can you help me here a bit? I still try to figure out how to do patch sets
> properly. Some kernel submitting documentation says everything goes into the
> coverletter and other documentation only tells how to split the patches. So
> what would be the right way? A quick example based on my patch set would be
> really helpful.

I mean, the split between your patches and so on is good, you got that right

The thing I wanted better details on is the commit log itself, so the
message attached to that patch.

> > This is the third attempt at that driver, and you can find the previous
> > versions here:
> > https://patchwork.kernel.org/project/linux-arm-kernel/cover/20200210170143.20007-1-nborisov@suse.com/
> > https://lore.kernel.org/patchwork/patch/706512/
> > 
> > Most of the comments on those series still apply to yours.
> 
> Oh, I wrote my driver 2-3 years ago and just prepared it for mainline. I
> wasn't aware of other attempts. I really should have checked this. Though,
> I really want to get to the point where this driver is good enough for
> mainline. Hmmm, it is interesting how similar these drivers are. Looks like
> the other developers also got inspired by the already existing hwspinlock
> drivers. :D

Yeah, it looks like you all got the same inspiration :)

> > Most importantly, this hwspinlock is used to synchronize the ARM cores
> > and the ARISC. How did you test this driver?
> 
> Yes, you are right, I should have mentioned this. I have a simple test kernel
> module for this. But I must admit, testing the ARISC is very hard and I have
> no real idea how to do it. Testing the hwspinlocks in general seems to work
> with my test kernel module, but I'm not sure if this is really sufficient. I
> can provide the code for it if you like. What would be the best way? Github?
> Just mailing a patch?
> 
> The test module produces these results:
> 
> # insmod /lib/modules/5.9.8/kernel/drivers/hwspinlock/sunxi_hwspinlock_test.ko 
> [   45.395672] [init] sunxi hwspinlock test driver start
> [   45.400775] [init] start test locks
> [   45.404263] [run ] testing 32 locks
> [   45.407804] [test] testing lock 0 -----
> [   45.411652] [test] taking lock attempt #0 succeded
> [   45.416438] [test] try taken lock attempt #0
> [   45.420735] [test] unlock/take attempt #0
> [   45.424752] [test] taking lock attempt #1 succeded
> [   45.429556] [test] try taken lock attempt #1
> [   45.433823] [test] unlock/take attempt #1
> [   45.437862] [test] testing lock 1 -----
> [   45.441699] [test] taking lock attempt #0 succeded
> [   45.446484] [test] try taken lock attempt #0
> [   45.450768] [test] unlock/take attempt #0
> [   45.454774] [test] taking lock attempt #1 succeded
> [   45.459576] [test] try taken lock attempt #1
> [   45.463843] [test] unlock/take attempt #1
> .
> .
> .
> [   46.309925] [test] testing lock 30 -----
> [   46.313852] [test] taking lock attempt #0 succeded
> [   46.318654] [test] try taken lock attempt #0
> [   46.322920] [test] unlock/take attempt #0
> [   46.326944] [test] taking lock attempt #1 succeded
> [   46.331729] [test] try taken lock attempt #1
> [   46.335994] [test] unlock/take attempt #1
> [   46.340021] [test] testing lock 31 -----
> [   46.343947] [test] taking lock attempt #0 succeded
> [   46.348749] [test] try taken lock attempt #0
> [   46.353016] [test] unlock/take attempt #0
> [   46.357040] [test] taking lock attempt #1 succeded
> [   46.361825] [test] try taken lock attempt #1
> [   46.366090] [test] unlock/take attempt #1
> [   46.370112] [init] end test locks

That doesn't really test for contention though, and dealing with
contention is mostly what this hardware is about. Could you make a small
test with crust to see if when the arisc has taken the lock, the ARM
cores can't take it?

Maxime



* Re: [PATCH 2/2] hwspinlock: add sunxi hardware spinlock support
  2020-11-19  7:15       ` Maxime Ripard
@ 2020-11-19 10:13         ` Wilken Gottwalt
  2020-11-20 16:42           ` Maxime Ripard
  0 siblings, 1 reply; 18+ messages in thread
From: Wilken Gottwalt @ 2020-11-19 10:13 UTC (permalink / raw)
  To: Maxime Ripard
  Cc: linux-kernel, Ohad Ben-Cohen, Bjorn Andersson, Baolin Wang,
	Rob Herring, Chen-Yu Tsai, Jernej Skrabec

On Thu, 19 Nov 2020 08:15:23 +0100
Maxime Ripard <maxime@cerno.tech> wrote:
> > can you help me here a bit? I still try to figure out how to do patch sets
> > properly. Some kernel submitting documentation says everything goes into the
> > coverletter and other documentation only tells how to split the patches. So
> > what would be the right way? A quick example based on my patch set would be
> > really helpful.
> 
> I mean, the split between your patches and so on is good, you got that right
> 
> The thing I wanted better details on is the commit log itself, so the
> message attached to that patch.

Ah yes, I think I got it now. So basically add a nice summary of the coverletter
there.

> > > Most importantly, this hwspinlock is used to synchronize the ARM cores
> > > and the ARISC. How did you test this driver?
> > 
> > Yes, you are right, I should have mentioned this. I have a simple test kernel
> > module for this. But I must admit, testing the ARISC is very hard and I have
> > no real idea how to do it. Testing the hwspinlocks in general seems to work
> > with my test kernel module, but I'm not sure if this is really sufficient. I
> > can provide the code for it if you like. What would be the best way? Github?
> > Just mailing a patch?
> > 
> > The test module produces these results:
> > 
> > # insmod /lib/modules/5.9.8/kernel/drivers/hwspinlock/sunxi_hwspinlock_test.ko 
> > [   45.395672] [init] sunxi hwspinlock test driver start
> > [   45.400775] [init] start test locks
> > [   45.404263] [run ] testing 32 locks
> > [   45.407804] [test] testing lock 0 -----
> > [   45.411652] [test] taking lock attempt #0 succeded
> > [   45.416438] [test] try taken lock attempt #0
> > [   45.420735] [test] unlock/take attempt #0
> > [   45.424752] [test] taking lock attempt #1 succeded
> > [   45.429556] [test] try taken lock attempt #1
> > [   45.433823] [test] unlock/take attempt #1
> > [   45.437862] [test] testing lock 1 -----
> 
> That doesn't really test for contention though, and dealing with
> contention is mostly what this hardware is about. Could you make a small
> test with crust to see if when the arisc has taken the lock, the ARM
> cores can't take it?

So the best solution would be to write a bare metal program that runs on the
arisc and can be triggered from the linux side (the test kernel module) to take
a spinlock ... or at least take spinlocks periodically for a while and watch it
on the linux side. Okay, I think I can do this. Though, I have to dig through
all this new stuff first.

Greetings,
Wilken


* Re: [PATCH 2/2] hwspinlock: add sunxi hardware spinlock support
  2020-11-19 10:13         ` Wilken Gottwalt
@ 2020-11-20 16:42           ` Maxime Ripard
  2020-11-21 12:22             ` fuyao
  0 siblings, 1 reply; 18+ messages in thread
From: Maxime Ripard @ 2020-11-20 16:42 UTC (permalink / raw)
  To: Wilken Gottwalt
  Cc: linux-kernel, Ohad Ben-Cohen, Bjorn Andersson, Baolin Wang,
	Rob Herring, Chen-Yu Tsai, Jernej Skrabec


Hi,

On Thu, Nov 19, 2020 at 11:13:43AM +0100, Wilken Gottwalt wrote:
> On Thu, 19 Nov 2020 08:15:23 +0100
> Maxime Ripard <maxime@cerno.tech> wrote:
> > > can you help me here a bit? I still try to figure out how to do patch sets
> > > properly. Some kernel submitting documentation says everything goes into the
> > > coverletter and other documentation only tells how to split the patches. So
> > > what would be the right way? A quick example based on my patch set would be
> > > really helpful.
> > 
> > I mean, the split between your patches and so on is good, you got that right
> > 
> > The thing I wanted better details on is the commit log itself, so the
> > message attached to that patch.
> 
> Ah yes, I think I got it now. So basically add a nice summary of the coverletter
> there.

Yes, a bit more context as well. Eventually, this should be the
motivation on why this patch is useful. So what it can be used for, what
are the challenges, how it was tested, etc.

The cover letter is usually here more to provide some meta-context: what
you expect from the maintainers / reviewers if it's an RFC, if there's
any feature missing or that could be added later on, etc.

> > > > Most importantly, this hwspinlock is used to synchronize the ARM cores
> > > > and the ARISC. How did you test this driver?
> > > 
> > > Yes, you are right, I should have mentioned this. I have a simple test kernel
> > > module for this. But I must admit, testing the ARISC is very hard and I have
> > > no real idea how to do it. Testing the hwspinlocks in general seems to work
> > > with my test kernel module, but I'm not sure if this is really sufficient. I
> > > can provide the code for it if you like. What would be the best way? Github?
> > > Just mailing a patch?
> > > 
> > > The test module produces these results:
> > > 
> > > # insmod /lib/modules/5.9.8/kernel/drivers/hwspinlock/sunxi_hwspinlock_test.ko 
> > > [   45.395672] [init] sunxi hwspinlock test driver start
> > > [   45.400775] [init] start test locks
> > > [   45.404263] [run ] testing 32 locks
> > > [   45.407804] [test] testing lock 0 -----
> > > [   45.411652] [test] taking lock attempt #0 succeded
> > > [   45.416438] [test] try taken lock attempt #0
> > > [   45.420735] [test] unlock/take attempt #0
> > > [   45.424752] [test] taking lock attempt #1 succeded
> > > [   45.429556] [test] try taken lock attempt #1
> > > [   45.433823] [test] unlock/take attempt #1
> > > [   45.437862] [test] testing lock 1 -----
> > 
> > That doesn't really test for contention though, and dealing with
> > contention is mostly what this hardware is about. Could you make a small
> > test with crust to see if when the arisc has taken the lock, the ARM
> > cores can't take it?
> 
> So the best solution would be to write a bare metal program that runs on the
> arisc and can be triggered from the linux side (the test kernel module) to take
> a spinlock ... or at least take spinlocks periodically for a while and watch it
> on the linux side. Okay, I think I can do this. Though, I have to dig through
> all this new stuff first.

It doesn't have to be super complicated, just a loop that takes a lock,
sleeps for some time, and releases the lock should be enough to at least
validate that the lock is actually working

Maxime


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 2/2] hwspinlock: add sunxi hardware spinlock support
  2020-11-20 16:42           ` Maxime Ripard
@ 2020-11-21 12:22             ` fuyao
  2020-11-21 16:44               ` Maxime Ripard
  0 siblings, 1 reply; 18+ messages in thread
From: fuyao @ 2020-11-21 12:22 UTC (permalink / raw)
  To: Maxime Ripard
  Cc: Wilken Gottwalt, linux-kernel, Ohad Ben-Cohen, Bjorn Andersson,
	Baolin Wang, Rob Herring, Chen-Yu Tsai, Jernej Skrabec

On Fri, Nov 20, 2020 at 05:42:31PM +0100, Maxime Ripard wrote:
> Hi,
> 
> On Thu, Nov 19, 2020 at 11:13:43AM +0100, Wilken Gottwalt wrote:
> > On Thu, 19 Nov 2020 08:15:23 +0100
> > Maxime Ripard <maxime@cerno.tech> wrote:
> > > > can you help me here a bit? I still try to figure out how to do patch sets
> > > > properly. Some kernel submitting documentation says everything goes into the
> > > > coverletter and other documentation only tells how to split the patches. So
> > > > what would be the right way? A quick example based on my patch set would be
> > > > really helpful.
> > > 
> > > I mean, the split between your patches and so on is good, you got that right
> > > 
> > > The thing I wanted better details on is the commit log itself, so the
> > > message attached to that patch.
> > 
> > Ah yes, I think I got it now. So basically add a nice summary of the coverletter
> > there.
> 
> Yes, a bit more context as well. Eventually, this should be the
> motivation on why this patch is useful. So what it can be used for, what
> are the challenges, how it was tested, etc.
> 
> The cover letter is usually here more to provide some meta-context: what
> you expect from the maintainers / reviewers if it's an RFC, if there's
> any feature missing or that could be added later on, etc.
> 
> > > > > Most importantly, this hwspinlock is used to synchronize the ARM cores
> > > > > and the ARISC. How did you test this driver?
> > > > 
> > > > Yes, you are right, I should have mentioned this. I have a simple test kernel
> > > > module for this. But I must admit, testing the ARISC is very hard and I have
> > > > no real idea how to do it. Testing the hwspinlocks in general seems to work
> > > > with my test kernel module, but I'm not sure if this is really sufficient. I
> > > > can provide the code for it if you like. What would be the best way? Github?
> > > > Just mailing a patch?
> > > > 
> > > > The test module produces these results:
> > > > 
> > > > # insmod /lib/modules/5.9.8/kernel/drivers/hwspinlock/sunxi_hwspinlock_test.ko 
> > > > [   45.395672] [init] sunxi hwspinlock test driver start
> > > > [   45.400775] [init] start test locks
> > > > [   45.404263] [run ] testing 32 locks
> > > > [   45.407804] [test] testing lock 0 -----
> > > > [   45.411652] [test] taking lock attempt #0 succeded
> > > > [   45.416438] [test] try taken lock attempt #0
> > > > [   45.420735] [test] unlock/take attempt #0
> > > > [   45.424752] [test] taking lock attempt #1 succeded
> > > > [   45.429556] [test] try taken lock attempt #1
> > > > [   45.433823] [test] unlock/take attempt #1
> > > > [   45.437862] [test] testing lock 1 -----
> > > 
> > > That doesn't really test for contention though, and dealing with
> > > contention is mostly what this hardware is about. Could you make a small
> > > test with crust to see if when the arisc has taken the lock, the ARM
> > > cores can't take it?
> > 
> > So the best solution would be to write a bare metal program that runs on the
> > arisc and can be triggered from the linux side (the test kernel module) to take
> > a spinlock ... or at least take spinlocks periodically for a while and watch it
> > on the linux side. Okay, I think I can do this. Though, I have to dig through
> > all this new stuff first.
> 
> It doesn't have to be super complicated, just a loop that takes a lock,
> sleeps for some time, and releases the lock should be enough to at least
> validate that the lock is actually working
> 
I think the difficulty is that the bare metal program on the arisc has little
documentation.

The arisc is usually used for standby, so Allwinner doesn't provide detailed
documentation for it, and it is started from boot0 or U-Boot.

Maybe we can use remoteproc to start it instead.




^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 2/2] hwspinlock: add sunxi hardware spinlock support
  2020-11-21 12:22             ` fuyao
@ 2020-11-21 16:44               ` Maxime Ripard
  2020-11-23 18:32                 ` Wilken Gottwalt
  0 siblings, 1 reply; 18+ messages in thread
From: Maxime Ripard @ 2020-11-21 16:44 UTC (permalink / raw)
  To: Wilken Gottwalt, linux-kernel, Ohad Ben-Cohen, Bjorn Andersson,
	Baolin Wang, Rob Herring, Chen-Yu Tsai, Jernej Skrabec


On Sat, Nov 21, 2020 at 08:22:55PM +0800, fuyao wrote:
> On Fri, Nov 20, 2020 at 05:42:31PM +0100, Maxime Ripard wrote:
> > Hi,
> > 
> > On Thu, Nov 19, 2020 at 11:13:43AM +0100, Wilken Gottwalt wrote:
> > > On Thu, 19 Nov 2020 08:15:23 +0100
> > > Maxime Ripard <maxime@cerno.tech> wrote:
> > > > > can you help me here a bit? I still try to figure out how to do patch sets
> > > > > properly. Some kernel submitting documentation says everything goes into the
> > > > > coverletter and other documentation only tells how to split the patches. So
> > > > > what would be the right way? A quick example based on my patch set would be
> > > > > really helpful.
> > > > 
> > > > I mean, the split between your patches and so on is good, you got that right
> > > > 
> > > > The thing I wanted better details on is the commit log itself, so the
> > > > message attached to that patch.
> > > 
> > > Ah yes, I think I got it now. So basically add a nice summary of the coverletter
> > > there.
> > 
> > Yes, a bit more context as well. Eventually, this should be the
> > motivation on why this patch is useful. So what it can be used for, what
> > are the challenges, how it was tested, etc.
> > 
> > The cover letter is usually here more to provide some meta-context: what
> > you expect from the maintainers / reviewers if it's an RFC, if there's
> > any feature missing or that could be added later on, etc.
> > 
> > > > > > Most importantly, this hwspinlock is used to synchronize the ARM cores
> > > > > > and the ARISC. How did you test this driver?
> > > > > 
> > > > > Yes, you are right, I should have mentioned this. I have a simple test kernel
> > > > > module for this. But I must admit, testing the ARISC is very hard and I have
> > > > > no real idea how to do it. Testing the hwspinlocks in general seems to work
> > > > > with my test kernel module, but I'm not sure if this is really sufficient. I
> > > > > can provide the code for it if you like. What would be the best way? Github?
> > > > > Just mailing a patch?
> > > > > 
> > > > > The test module produces these results:
> > > > > 
> > > > > # insmod /lib/modules/5.9.8/kernel/drivers/hwspinlock/sunxi_hwspinlock_test.ko 
> > > > > [   45.395672] [init] sunxi hwspinlock test driver start
> > > > > [   45.400775] [init] start test locks
> > > > > [   45.404263] [run ] testing 32 locks
> > > > > [   45.407804] [test] testing lock 0 -----
> > > > > [   45.411652] [test] taking lock attempt #0 succeded
> > > > > [   45.416438] [test] try taken lock attempt #0
> > > > > [   45.420735] [test] unlock/take attempt #0
> > > > > [   45.424752] [test] taking lock attempt #1 succeded
> > > > > [   45.429556] [test] try taken lock attempt #1
> > > > > [   45.433823] [test] unlock/take attempt #1
> > > > > [   45.437862] [test] testing lock 1 -----
> > > > 
> > > > That doesn't really test for contention though, and dealing with
> > > > contention is mostly what this hardware is about. Could you make a small
> > > > test with crust to see if when the arisc has taken the lock, the ARM
> > > > cores can't take it?
> > > 
> > > So the best solution would be to write a bare metal program that runs on the
> > > arisc and can be triggered from the linux side (the test kernel module) to take
> > > a spinlock ... or at least take spinlocks periodically for a while and watch it
> > > on the linux side. Okay, I think I can do this. Though, I have to dig through
> > > all this new stuff first.
> > 
> > It doesn't have to be super complicated, just a loop that takes a lock,
> > sleeps for some time, and releases the lock should be enough to at least
> > validate that the lock is actually working
> >
>
> I think the difficulty is that the bare metal program on the arisc has
> little documentation.

crust has mostly figured it out:
https://github.com/crust-firmware/crust

Maxime


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 2/2] hwspinlock: add sunxi hardware spinlock support
  2020-11-18 10:02 ` [PATCH 2/2] hwspinlock: add sunxi hardware spinlock support Wilken Gottwalt
  2020-11-18 15:37   ` Maxime Ripard
@ 2020-11-22  5:19   ` Bjorn Andersson
  2020-11-23 18:17     ` Wilken Gottwalt
  1 sibling, 1 reply; 18+ messages in thread
From: Bjorn Andersson @ 2020-11-22  5:19 UTC (permalink / raw)
  To: Wilken Gottwalt
  Cc: linux-kernel, Ohad Ben-Cohen, Baolin Wang, Rob Herring,
	Maxime Ripard, Chen-Yu Tsai, Jernej Skrabec

On Wed 18 Nov 04:02 CST 2020, Wilken Gottwalt wrote:

> Adds the sunxi_hwspinlock driver and updates makefiles/maintainers.
> 
> Signed-off-by: Wilken Gottwalt <wilken.gottwalt@posteo.net>
> ---
>  MAINTAINERS                           |   6 +
>  drivers/hwspinlock/Kconfig            |   9 +
>  drivers/hwspinlock/Makefile           |   1 +
>  drivers/hwspinlock/sunxi_hwspinlock.c | 282 ++++++++++++++++++++++++++
>  4 files changed, 298 insertions(+)
>  create mode 100644 drivers/hwspinlock/sunxi_hwspinlock.c
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 5cc595ac7b28..7765da172f10 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -722,6 +722,12 @@ L:	linux-crypto@vger.kernel.org
>  S:	Maintained
>  F:	drivers/crypto/allwinner/
>  
> +ALLWINNER HARDWARE SPINLOCK SUPPORT
> +M:	Wilken Gottwalt <wilken.gottwalt@posteo.net>
> +S:	Maintained
> +F:	Documentation/devicetree/bindings/hwlock/sunxi-hwspinlock.yaml
> +F:	drivers/hwspinlock/sunxi_hwspinlock.c
> +
>  ALLWINNER THERMAL DRIVER
>  M:	Vasily Khoruzhick <anarsoul@gmail.com>
>  M:	Yangtao Li <tiny.windzz@gmail.com>
> diff --git a/drivers/hwspinlock/Kconfig b/drivers/hwspinlock/Kconfig
> index 32cd26352f38..27b6e22126f7 100644
> --- a/drivers/hwspinlock/Kconfig
> +++ b/drivers/hwspinlock/Kconfig
> @@ -55,6 +55,15 @@ config HWSPINLOCK_STM32
>  
>  	  If unsure, say N.
>  
> +config HWSPINLOCK_SUNXI
> +	tristate "SUNXI Hardware Spinlock device"
> +	depends on ARCH_SUNXI || COMPILE_TEST
> +	help
> +	  Say y here to support the Allwinner hardware mutex device available
> +	  in the H2+, H3, H5 and H6 SoCs.
> +
> +	  If unsure, say N.
> +
>  config HSEM_U8500
>  	tristate "STE Hardware Semaphore functionality"
>  	depends on ARCH_U8500 || COMPILE_TEST
> diff --git a/drivers/hwspinlock/Makefile b/drivers/hwspinlock/Makefile
> index ed053e3f02be..bf46bee95226 100644
> --- a/drivers/hwspinlock/Makefile
> +++ b/drivers/hwspinlock/Makefile
> @@ -9,4 +9,5 @@ obj-$(CONFIG_HWSPINLOCK_QCOM)		+= qcom_hwspinlock.o
>  obj-$(CONFIG_HWSPINLOCK_SIRF)		+= sirf_hwspinlock.o
>  obj-$(CONFIG_HWSPINLOCK_SPRD)		+= sprd_hwspinlock.o
>  obj-$(CONFIG_HWSPINLOCK_STM32)		+= stm32_hwspinlock.o
> +obj-$(CONFIG_HWSPINLOCK_SUNXI)		+= sunxi_hwspinlock.o
>  obj-$(CONFIG_HSEM_U8500)		+= u8500_hsem.o
> diff --git a/drivers/hwspinlock/sunxi_hwspinlock.c b/drivers/hwspinlock/sunxi_hwspinlock.c
> new file mode 100644
> index 000000000000..54e6ad3ac1c2
> --- /dev/null
> +++ b/drivers/hwspinlock/sunxi_hwspinlock.c
> @@ -0,0 +1,282 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/*
> + * sunxi_hwspinlock.c - hardware spinlock driver for Allwinner SoCs
> + * Copyright (C) 2020 Wilken Gottwalt <wilken.gottwalt@posteo.net>
> + */
> +
> +#include <linux/bitops.h>
> +#include <linux/clk.h>
> +#include <linux/debugfs.h>
> +#include <linux/errno.h>
> +#include <linux/hwspinlock.h>
> +#include <linux/io.h>
> +#include <linux/module.h>
> +#include <linux/of.h>
> +#include <linux/platform_device.h>
> +#include <linux/reset.h>
> +#include <linux/slab.h>
> +#include <linux/spinlock.h>
> +#include <linux/types.h>
> +
> +#include "hwspinlock_internal.h"
> +
> +#define DRIVER_NAME		"sunxi_hwspinlock"
> +
> +#define SPINLOCK_BASE_ID	0 /* there is only one hwspinlock device in each SoC */
> +#define SPINLOCK_SYSSTATUS_REG	0x0000
> +#define SPINLOCK_STATUS_REG	0x0010
> +#define SPINLOCK_LOCK_REGN	0x0100
> +#define SPINLOCK_NOTTAKEN	0
> +#define SPINLOCK_TAKEN		1
> +
> +struct sunxi_hwspinlock {
> +	void __iomem *io_base;
> +};
> +
> +struct sunxi_hwspinlock_data {
> +	void __iomem *io_base;
> +	struct hwspinlock_device *bank;
> +	struct reset_control *reset;
> +	struct clk *ahb_clock;
> +	struct dentry *debugfs;
> +	int nlocks;
> +};
> +
> +#ifdef CONFIG_DEBUG_FS
> +
> +static int hwlocks_supported_show(struct seq_file *seqf, void *unused)
> +{
> +	struct sunxi_hwspinlock_data *priv = seqf->private;
> +
> +	seq_printf(seqf, "%d\n", priv->nlocks);
> +
> +	return 0;
> +}
> +DEFINE_SHOW_ATTRIBUTE(hwlocks_supported);
> +
> +static int hwlocks_inuse_show(struct seq_file *seqf, void *unused)
> +{
> +	struct sunxi_hwspinlock_data *priv = seqf->private;
> +	int inuse;
> +
> +	/* getting the status of only the main 32 spinlocks is supported */
> +	inuse = hweight32(readl(priv->io_base + SPINLOCK_STATUS_REG));

So this returns how many of the locks are taken? How is that useful?

> +	seq_printf(seqf, "%d\n", inuse);
> +
> +	return 0;
> +}
> +DEFINE_SHOW_ATTRIBUTE(hwlocks_inuse);
> +
> +static void sunxi_hwspinlock_debugfs_init(struct sunxi_hwspinlock_data *priv)
> +{
> +	char name[32];
> +
> +	scnprintf(name, sizeof(name), "%s", DRIVER_NAME);

Why not just pass DRIVER_NAME directly to debugfs_create_dir()?

> +
> +	priv->debugfs = debugfs_create_dir(name, NULL);
> +	debugfs_create_file("hwlocks_supported", 0444, priv->debugfs, priv,
> +			    &hwlocks_supported_fops);
> +	debugfs_create_file("hwlocks_inuse", 0444, priv->debugfs, priv, &hwlocks_inuse_fops);
> +}
> +
> +#else
> +
> +static void sunxi_hwspinlock_debugfs_init(struct sunxi_hwspinlock_data *priv)
> +{
> +}
> +
> +#endif
> +
> +static int sunxi_hwspinlock_trylock(struct hwspinlock *lock)
> +{
> +	struct sunxi_hwspinlock_data *priv = lock->priv;
> +	void __iomem *lock_addr = priv->io_base;
> +
> +	return (readl(lock_addr) == SPINLOCK_NOTTAKEN);
> +}
> +
> +static void sunxi_hwspinlock_unlock(struct hwspinlock *lock)
> +{
> +	struct sunxi_hwspinlock_data *priv = lock->priv;
> +	void __iomem *lock_addr = priv->io_base;
> +
> +	writel(SPINLOCK_NOTTAKEN, lock_addr);
> +}
> +
> +static const struct hwspinlock_ops sunxi_hwspinlock_ops = {
> +	.trylock	= sunxi_hwspinlock_trylock,
> +	.unlock		= sunxi_hwspinlock_unlock,
> +};
> +
> +static int sunxi_hwspinlock_probe(struct platform_device *pdev)
> +{
> +	struct device_node *node = pdev->dev.of_node;

I don't see a need for this, or the check to fail if it's NULL. Please
remove it.

> +	struct sunxi_hwspinlock_data *priv;
> +	struct sunxi_hwspinlock *hwpriv;
> +	struct hwspinlock *hwlock;
> +	struct resource *res;
> +	int num_banks, err, i;
> +
> +	if (!node)
> +		return -ENODEV;
> +
> +	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> +	if (!res)
> +		return -ENODEV;
> +
> +	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
> +	if (!priv)
> +		return -ENOMEM;
> +
> +	priv->io_base = devm_ioremap_resource(&pdev->dev, res);

Please use devm_platform_ioremap_resource() instead.

> +	if (IS_ERR(priv->io_base)) {
> +		err = PTR_ERR(priv->io_base);
> +		dev_err(&pdev->dev, "unable to request MMIO (%d)\n", err);
> +		return err;
> +	}
> +
> +	priv->ahb_clock = devm_clk_get(&pdev->dev, "ahb");
> +	if (IS_ERR(priv->ahb_clock)) {
> +		err = PTR_ERR(priv->ahb_clock);
> +		dev_err(&pdev->dev, "unable to get AHB clock (%d)\n", err);
> +		return err;
> +	}
> +
> +	priv->reset = devm_reset_control_get_optional(&pdev->dev, "ahb");
> +	if (IS_ERR(priv->reset)) {
> +		err = PTR_ERR(priv->reset);
> +		if (err == -EPROBE_DEFER)
> +			return err;
> +
> +		dev_info(&pdev->dev, "no optional reset control available (%d)\n", err);

In the event that no "ahb" reset is specified, priv->reset is NULL, so
if you get here something is wrong - it's better to fail here.

And you can use the convenient dev_err_probe() function to deal with the
EPROBE_DEFER.

> +		priv->reset = NULL;
> +	}
> +
> +	if (priv->reset) {

It's perfectly fine to call reset_control_deassert(NULL); so you can
omit this check.

> +		err = reset_control_deassert(priv->reset);
> +		if (err) {
> +			dev_err(&pdev->dev, "deassert reset control failure (%d)\n", err);
> +			return err;
> +		}
> +	}
> +
> +	err = clk_prepare_enable(priv->ahb_clock);
> +	if (err) {
> +		dev_err(&pdev->dev, "unable to prepare AHB clock (%d)\n", err);
> +		goto reset_fail;
> +	}
> +
> +	/* bit 28 and 29 hold the amount of spinlock banks */
> +	num_banks = (readl(priv->io_base + SPINLOCK_SYSSTATUS_REG) >> 28) & 0x03;
> +	switch (num_banks) {
> +	case 1 ... 4:

But the comment above says...and num_banks was & 0x3, so how can it be ...4?

> +		/*
> +		 * 32, 64, 128 and 256 spinlocks are supported by the hardware implementation,
> +		 * though most only support 32 spinlocks
> +		 */
> +		priv->nlocks = 1 << (4 + num_banks);
> +		break;
> +	default:

Given the mask above I believe this would only happen if bits 28 and 29
are both 0...

Regardless I think that this would be better written as a

if (num_banks == invalid) {
	...
	goto fail;
}

priv->nlocks = ...;

> +		dev_err(&pdev->dev, "unsupported hwspinlock setup (%d)\n", num_banks);
> +		err = -EINVAL;
> +		goto fail;
> +	}
> +
> +	/*
> +	 * the Allwinner hwspinlock device uses 32bit registers for representing every single
> +	 * spinlock, which is a real waste of space
> +	 */

Might be a waste, but having them be the natural write size in hardware
makes sense. I'm however not sure what this comment has to do with the
following allocation.

> +	priv->bank = devm_kzalloc(&pdev->dev, priv->nlocks * sizeof(*hwlock) + sizeof(*priv->bank),

Consider using struct_size() here.

> +				  GFP_KERNEL);
> +	if (!priv->bank) {
> +		err = -ENOMEM;
> +		goto fail;

If you do this allocation before you start the clock and deassert the
reset you can just return -ENOMEM here, instead of the goto.

> +	}
> +
> +	for (i = 0, hwlock = &priv->bank->lock[0]; i < priv->nlocks; ++i, ++hwlock) {
> +		hwlock->priv = devm_kzalloc(&pdev->dev, sizeof(struct sunxi_hwspinlock),
> +					    GFP_KERNEL);

		hwpriv = devm_kzalloc(&pdev->dev, sizeof(*hwpriv), GFP_KERNEL);
		if (!hwpriv)
			return -ENOMEM;

		hwpriv->io_base = ...;
		priv->bank->lock[i].priv = hwpriv;


> +		if (!hwlock->priv) {
> +			err = -ENOMEM;
> +			goto fail;
> +		}
> +
> +		hwpriv = hwlock->priv;
> +		hwpriv->io_base = priv->io_base + SPINLOCK_LOCK_REGN + sizeof(u32) * i;
> +	}
> +
> +	err = hwspin_lock_register(priv->bank, &pdev->dev, &sunxi_hwspinlock_ops, SPINLOCK_BASE_ID,
> +				   priv->nlocks);
> +	if (err) {
> +		dev_err(&pdev->dev, "unable to register hwspinlocks (%d)\n", err);
> +		goto fail;
> +	}
> +
> +	sunxi_hwspinlock_debugfs_init(priv);
> +
> +	dev_dbg(&pdev->dev, "SUNXI hardware spinlock driver enabled (%d locks)\n", priv->nlocks);
> +
> +	return 0;
> +
> +fail:
> +	clk_disable_unprepare(priv->ahb_clock);
> +
> +reset_fail:
> +	if (priv->reset)
> +		reset_control_assert(priv->reset);
> +
> +	return err;
> +}
> +
> +static int sunxi_hwspinlock_remove(struct platform_device *pdev)
> +{
> +	struct sunxi_hwspinlock_data *priv = platform_get_drvdata(pdev);
> +	int err;
> +
> +	debugfs_remove_recursive(priv->debugfs);
> +
> +	err = hwspin_lock_unregister(priv->bank);
> +	if (err) {
> +		dev_err(&pdev->dev, "unregister device failed (%d)\n", err);
> +		return err;
> +	}
> +
> +	if (priv->reset)
> +		reset_control_assert(priv->reset);
> +
> +	clk_disable_unprepare(priv->ahb_clock);

By using devm_add_action_or_reset() to set up an "assert and unprepare"-
function you can use devm_hwspin_lock_register(), you can drop the
remove function and you can clean up your sunxi_hwspinlock_data (e.g.
you no longer need a pointer to priv->bank).

> +
> +	return 0;
> +}
> +
> +static const struct of_device_id sunxi_hwspinlock_ids[] = {
> +	{ .compatible = "allwinner,sun8i-hwspinlock", },
> +	{ .compatible = "allwinner,sun50i-hwspinlock", },
> +	{},
> +};
> +MODULE_DEVICE_TABLE(of, sunxi_hwspinlock_ids);
> +
> +static struct platform_driver sunxi_hwspinlock_driver = {
> +	.probe	= sunxi_hwspinlock_probe,
> +	.remove	= sunxi_hwspinlock_remove,
> +	.driver	= {
> +		.name		= DRIVER_NAME,
> +		.of_match_table	= of_match_ptr(sunxi_hwspinlock_ids),

Please avoid of_match_ptr, as this will cause warnings about unused
variables when COMPILE_TEST without OF.

Regards,
Bjorn

> +	},
> +};
> +
> +static int __init sunxi_hwspinlock_init(void)
> +{
> +	return platform_driver_register(&sunxi_hwspinlock_driver);
> +}
> +postcore_initcall(sunxi_hwspinlock_init);
> +
> +static void __exit sunxi_hwspinlock_exit(void)
> +{
> +	platform_driver_unregister(&sunxi_hwspinlock_driver);
> +}
> +module_exit(sunxi_hwspinlock_exit);
> +
> +MODULE_LICENSE("GPL");
> +MODULE_DESCRIPTION("SUNXI hardware spinlock driver");
> +MODULE_AUTHOR("Wilken Gottwalt <wilken.gottwalt@posteo.net>");
> -- 
> 2.29.2
> 

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 2/2] hwspinlock: add sunxi hardware spinlock support
  2020-11-22  5:19   ` Bjorn Andersson
@ 2020-11-23 18:17     ` Wilken Gottwalt
  2020-11-24 14:54       ` Bjorn Andersson
  0 siblings, 1 reply; 18+ messages in thread
From: Wilken Gottwalt @ 2020-11-23 18:17 UTC (permalink / raw)
  To: Bjorn Andersson
  Cc: linux-kernel, Ohad Ben-Cohen, Baolin Wang, Rob Herring,
	Maxime Ripard, Chen-Yu Tsai, Jernej Skrabec

On Sat, 21 Nov 2020 23:19:00 -0600
Bjorn Andersson <bjorn.andersson@linaro.org> wrote:
> > +static int hwlocks_inuse_show(struct seq_file *seqf, void *unused)
> > +{
> > +	struct sunxi_hwspinlock_data *priv = seqf->private;
> > +	int inuse;
> > +
> > +	/* getting the status of only the main 32 spinlocks is supported */
> > +	inuse = hweight32(readl(priv->io_base + SPINLOCK_STATUS_REG));
> 
> So this returns how many of the locks are taken? How is that useful?

It is a way to see if locks were taken from linux or the arisc core without
touching the actual hwspinlock abi or the locks. So it is a nice way to debug
hwspinlocks, hence it is exposed via debugfs.

> > +	seq_printf(seqf, "%d\n", inuse);
> > +
> > +	return 0;
> > +}
> > +DEFINE_SHOW_ATTRIBUTE(hwlocks_inuse);
> > +
> > +static void sunxi_hwspinlock_debugfs_init(struct sunxi_hwspinlock_data *priv)
> > +{
> > +	char name[32];
> > +
> > +	scnprintf(name, sizeof(name), "%s", DRIVER_NAME);
> 
> Why not just pass DRIVER_NAME directly to debugfs_create_dir()?

Uuuh, you're right, that wasn't very clever. I think I changed the name creation
to something simpler and just missed the most obvious one.

> > +static int sunxi_hwspinlock_probe(struct platform_device *pdev)
> > +{
> > +	struct device_node *node = pdev->dev.of_node;
> 
> I don't see a need for this, or the check to fail if it's NULL. Please
> remove it.

Yeah, will remove it.

> > +	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
> > +	if (!priv)
> > +		return -ENOMEM;
> > +
> > +	priv->io_base = devm_ioremap_resource(&pdev->dev, res);
> 
> Please use devm_platform_ioremap_resource() instead.

Hmm, so there is a platform version of it, too. Will change it.

> > +	priv->reset = devm_reset_control_get_optional(&pdev->dev, "ahb");
> > +	if (IS_ERR(priv->reset)) {
> > +		err = PTR_ERR(priv->reset);
> > +		if (err == -EPROBE_DEFER)
> > +			return err;
> > +
> > +		dev_info(&pdev->dev, "no optional reset control available (%d)\n", err);
> 
> In the event that no "ahb" reset is specified, priv->reset is NULL, so
> if you get here something is wrong - it's better to fail here.
> 
> And you can use the convenient dev_err_probe() function to deal with the
> EPROBE_DEFER.
> 
> > +		priv->reset = NULL;
> > +	}
> > +
> > +	if (priv->reset) {
> 
> It's perfectly fine to call reset_control_deassert(NULL); so you can
> omit this check.

Also will update these.

> > +	/* bit 28 and 29 hold the amount of spinlock banks */
> > +	num_banks = (readl(priv->io_base + SPINLOCK_SYSSTATUS_REG) >> 28) & 0x03;
> > +	switch (num_banks) {
> > +	case 1 ... 4:
> 
> But the comment above says...and num_banks was & 0x3, so how can it be ...4?
>
> > +		/*
> > +		 * 32, 64, 128 and 256 spinlocks are supported by the hardware implementation,
> > +		 * though most only support 32 spinlocks
> > +		 */
> > +		priv->nlocks = 1 << (4 + num_banks);
> > +		break;
> > +	default:
> 
> Given the mask above I believe this would only happen if bits 28 and 29
> are both 0...
> 
> Regardless I think that this would be better written as a
> 
> if (num_banks == invalid) {
> 	...
> 	goto fail;
> }
> 
> priv->nlocks = ...;

This one is a really odd one that I noticed right after I submitted the patch. I
added the & 0x03 after reading the datasheets again. But I think the datasheets
are not fully correct here. They say that the values 0-4 represent 0, 32, 64,
128 and 256 locks, while at the same time stating that bits 28/29 encode this
value and bits 30/31 are reserved. But you can't represent 5 values with 2 bits.
So I'm pretty sure the reserved bits are also used for it, at least bit 30. I
will change this to something clearer. Oddly, the H3, H5 and H6 datasheets all
state exactly the same thing.

> > +		dev_err(&pdev->dev, "unsupported hwspinlock setup (%d)\n", num_banks);
> > +		err = -EINVAL;
> > +		goto fail;
> > +	}
> > +
> > +	/*
> > +	 * the Allwinner hwspinlock device uses 32bit registers for representing every single
> > +	 * spinlock, which is a real waste of space
> > +	 */
> 
> Might be a waste, but having them be the natural write size in hardware
> makes sense. I'm however not sure what this comment has to do with the
> following allocation.
> 
> > +	priv->bank = devm_kzalloc(&pdev->dev, priv->nlocks * sizeof(*hwlock) +
> > sizeof(*priv->bank),
> 
> Consider using struct_size() here.
>
> > +				  GFP_KERNEL);
> > +	if (!priv->bank) {
> > +		err = -ENOMEM;
> > +		goto fail;
> 
> If you do this allocation before you start the clock and deassert the
> reset you can just return -ENOMEM here, instead of the goto.
> 
> > +	}
> > +
> > +	for (i = 0, hwlock = &priv->bank->lock[0]; i < priv->nlocks; ++i, ++hwlock) {
> > +		hwlock->priv = devm_kzalloc(&pdev->dev, sizeof(struct sunxi_hwspinlock),
> > +					    GFP_KERNEL);
> 
> 		hwpriv = devm_kzalloc(&pdev->dev, sizeof(*hwpriv), GFP_KERNEL);
> 		if (!hwpriv)
> 			return -ENOMEM;
> 
> 		hwpriv->io_base = ...;
> 		priv->bank->lock[i].priv = hwpriv;
> 
>

You're right, I will update this.

> > +		if (!hwlock->priv) {
> > +			err = -ENOMEM;
> > +			goto fail;
> > +		}
> > +
> > +		hwpriv = hwlock->priv;
> > +		hwpriv->io_base = priv->io_base + SPINLOCK_LOCK_REGN + sizeof(u32) * i;
> > +	}
> > +
> > +	err = hwspin_lock_register(priv->bank, &pdev->dev, &sunxi_hwspinlock_ops,
> > SPINLOCK_BASE_ID,
> > +				   priv->nlocks);
> > +	if (err) {
> > +		dev_err(&pdev->dev, "unable to register hwspinlocks (%d)\n", err);
> > +		goto fail;
> > +	}
> > +
> > +	sunxi_hwspinlock_debugfs_init(priv);
> > +
> > +	dev_dbg(&pdev->dev, "SUNXI hardware spinlock driver enabled (%d locks)\n",
> > priv->nlocks); +
> > +	return 0;
> > +
> > +fail:
> > +	clk_disable_unprepare(priv->ahb_clock);
> > +
> > +reset_fail:
> > +	if (priv->reset)
> > +		reset_control_assert(priv->reset);
> > +
> > +	return err;
> > +}
> > +
> > +static int sunxi_hwspinlock_remove(struct platform_device *pdev)
> > +{
> > +	struct sunxi_hwspinlock_data *priv = platform_get_drvdata(pdev);
> > +	int err;
> > +
> > +	debugfs_remove_recursive(priv->debugfs);
> > +
> > +	err = hwspin_lock_unregister(priv->bank);
> > +	if (err) {
> > +		dev_err(&pdev->dev, "unregister device failed (%d)\n", err);
> > +		return err;
> > +	}
> > +
> > +	if (priv->reset)
> > +		reset_control_assert(priv->reset);
> > +
> > +	clk_disable_unprepare(priv->ahb_clock);
> 
> By using devm_add_action_or_reset() to set up an "assert and unprepare"-
> function you can use devm_hwspin_lock_register(), you can drop the
> remove function and you can clean up your sunxi_hwspinlock_data (e.g.
> you no longer need a pointer to priv->bank).

I'm not very used to these devm_* functions yet, but I will try to use them.

> > +
> > +	return 0;
> > +}
> > +
> > +static const struct of_device_id sunxi_hwspinlock_ids[] = {
> > +	{ .compatible = "allwinner,sun8i-hwspinlock", },
> > +	{ .compatible = "allwinner,sun50i-hwspinlock", },
> > +	{},
> > +};
> > +MODULE_DEVICE_TABLE(of, sunxi_hwspinlock_ids);
> > +
> > +static struct platform_driver sunxi_hwspinlock_driver = {
> > +	.probe	= sunxi_hwspinlock_probe,
> > +	.remove	= sunxi_hwspinlock_remove,
> > +	.driver	= {
> > +		.name		= DRIVER_NAME,
> > +		.of_match_table	= of_match_ptr(sunxi_hwspinlock_ids),
> 
> Please avoid of_match_ptr, as this will cause warnings about unused
> variables when COMPILE_TEST without OF.

So did you mean to leave it out completely?

Thanks for looking through this, this really helps a lot. :-)

greetings,
Wilken

> Regards,
> Bjorn
> 
> > +	},
> > +};
> > +
> > +static int __init sunxi_hwspinlock_init(void)
> > +{
> > +	return platform_driver_register(&sunxi_hwspinlock_driver);
> > +}
> > +postcore_initcall(sunxi_hwspinlock_init);
> > +
> > +static void __exit sunxi_hwspinlock_exit(void)
> > +{
> > +	platform_driver_unregister(&sunxi_hwspinlock_driver);
> > +}
> > +module_exit(sunxi_hwspinlock_exit);
> > +
> > +MODULE_LICENSE("GPL");
> > +MODULE_DESCRIPTION("SUNXI hardware spinlock driver");
> > +MODULE_AUTHOR("Wilken Gottwalt <wilken.gottwalt@posteo.net>");
> > -- 
> > 2.29.2
> > 


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 2/2] hwspinlock: add sunxi hardware spinlock support
  2020-11-21 16:44               ` Maxime Ripard
@ 2020-11-23 18:32                 ` Wilken Gottwalt
  2020-11-24  3:35                   ` Samuel Holland
  0 siblings, 1 reply; 18+ messages in thread
From: Wilken Gottwalt @ 2020-11-23 18:32 UTC (permalink / raw)
  To: Maxime Ripard
  Cc: linux-kernel, Ohad Ben-Cohen, Bjorn Andersson, Baolin Wang,
	Rob Herring, Chen-Yu Tsai, Jernej Skrabec

On Sat, 21 Nov 2020 17:44:18 +0100
Maxime Ripard <maxime@cerno.tech> wrote:

> On Sat, Nov 21, 2020 at 08:22:55PM +0800, fuyao wrote:
> > On Fri, Nov 20, 2020 at 05:42:31PM +0100, Maxime Ripard wrote:
> > > Hi,
> > > 
> > > On Thu, Nov 19, 2020 at 11:13:43AM +0100, Wilken Gottwalt wrote:
> > > > On Thu, 19 Nov 2020 08:15:23 +0100
> > > > Maxime Ripard <maxime@cerno.tech> wrote:
> > > > > > can you help me here a bit? I still try to figure out how to do patch sets
> > > > > > properly. Some kernel submitting documentation says everything goes into the
> > > > > > coverletter and other documentation only tells how to split the patches. So
> > > > > > what would be the right way? A quick example based on my patch set would be
> > > > > > really helpful.
> > > > > 
> > > > > I mean, the split between your patches and so on is good, you got that right
> > > > > 
> > > > > The thing I wanted better details on is the commit log itself, so the
> > > > > message attached to that patch.
> > > > 
> > > > Ah yes, I think I got it now. So basically add a nice summary of the coverletter
> > > > there.
> > > 
> > > Yes, a bit more context as well. Eventually, this should be the
> > > motivation on why this patch is useful. So what it can be used for, what
> > > are the challenges, how it was tested, etc.
> > > 
> > > The cover letter is usually here more to provide some meta-context: what
> > > you expect from the maintainers / reviewers if it's an RFC, if there's
> > > any feature missing or that could be added later on, etc.
> > > 
> > > > > > > Most importantly, this hwspinlock is used to synchronize the ARM cores
> > > > > > > and the ARISC. How did you test this driver?
> > > > > > 
> > > > > > Yes, you are right, I should have mentioned this. I have a simple test kernel
> > > > > > module for this. But I must admit, testing the ARISC is very hard and I have
> > > > > > no real idea how to do it. Testing the hwspinlocks in general seems to work
> > > > > > with my test kernel module, but I'm not sure if this is really sufficient. I
> > > > > > can provide the code for it if you like. What would be the best way? Github?
> > > > > > Just mailing a patch?
> > > > > > 
> > > > > > The test module produces these results:
> > > > > > 
> > > > > > # insmod /lib/modules/5.9.8/kernel/drivers/hwspinlock/sunxi_hwspinlock_test.ko 
> > > > > > [   45.395672] [init] sunxi hwspinlock test driver start
> > > > > > [   45.400775] [init] start test locks
> > > > > > [   45.404263] [run ] testing 32 locks
> > > > > > [   45.407804] [test] testing lock 0 -----
> > > > > > [   45.411652] [test] taking lock attempt #0 succeded
> > > > > > [   45.416438] [test] try taken lock attempt #0
> > > > > > [   45.420735] [test] unlock/take attempt #0
> > > > > > [   45.424752] [test] taking lock attempt #1 succeded
> > > > > > [   45.429556] [test] try taken lock attempt #1
> > > > > > [   45.433823] [test] unlock/take attempt #1
> > > > > > [   45.437862] [test] testing lock 1 -----
> > > > > 
> > > > > That doesn't really test for contention though, and dealing with
> > > > > contention is mostly what this hardware is about. Could you make a small
> > > > > test with crust to see if when the arisc has taken the lock, the ARM
> > > > > cores can't take it?
> > > > 
> > > > So the best solution would be to write a bare metal program that runs on the
> > > > arisc and can be triggered from the linux side (the test kernel module) to take
> > > > a spinlock ... or at least take spinlocks periodically for a while and watch it
> > > > on the linux side. Okay, I think I can do this. Though, I have to dig through
> > > > all this new stuff first.
> > > 
> > > It doesn't have to be super complicated, just a loop that takes a lock,
> > > sleeps for some time, and releases the lock should be enough to at least
> > > validate that the lock is actually working
> > >
> >
> > I think the difficulty is the bare metal program in arisc has little
> > documentation.
> 
> crust has mostly figured it out:
> https://github.com/crust-firmware/crust

I'm actually having serious trouble getting crust running. It compiles for the
H2+/H3, but I can't figure out whether it runs at all. I will switch to an H5
based device, which is confirmed to work. If I see this correctly, crust does
nothing with spinlocks yet, so I may end up also working on crust and adding
the spinlocks there too. I don't know yet how long it will take me to
understand every detail, but I will report progress.

Greetings,
Wilken


* Re: [PATCH 2/2] hwspinlock: add sunxi hardware spinlock support
  2020-11-23 18:32                 ` Wilken Gottwalt
@ 2020-11-24  3:35                   ` Samuel Holland
  2020-11-24 14:28                     ` Maxime Ripard
  2020-11-26 13:10                     ` Wilken Gottwalt
  0 siblings, 2 replies; 18+ messages in thread
From: Samuel Holland @ 2020-11-24  3:35 UTC (permalink / raw)
  To: Wilken Gottwalt, Maxime Ripard
  Cc: linux-kernel, Ohad Ben-Cohen, Bjorn Andersson, Baolin Wang,
	Rob Herring, Chen-Yu Tsai, Jernej Skrabec

On 11/23/20 12:32 PM, Wilken Gottwalt wrote:
> On Sat, 21 Nov 2020 17:44:18 +0100
> Maxime Ripard <maxime@cerno.tech> wrote:
> 
>> On Sat, Nov 21, 2020 at 08:22:55PM +0800, fuyao wrote:
>>> On Fri, Nov 20, 2020 at 05:42:31PM +0100, Maxime Ripard wrote:
>>>> Hi,
>>>>
>>>> On Thu, Nov 19, 2020 at 11:13:43AM +0100, Wilken Gottwalt wrote:
>>>>> On Thu, 19 Nov 2020 08:15:23 +0100
>>>>> Maxime Ripard <maxime@cerno.tech> wrote:
>>>>>>> can you help me here a bit? I still try to figure out how to do patch sets
>>>>>>> properly. Some kernel submitting documentation says everything goes into the
>>>>>>> coverletter and other documentation only tells how to split the patches. So
>>>>>>> what would be the right way? A quick example based on my patch set would be
>>>>>>> really helpful.
>>>>>>
>>>>>> I mean, the split between your patches and so on is good, you got that right
>>>>>>
>>>>>> The thing I wanted better details on is the commit log itself, so the
>>>>>> message attached to that patch.
>>>>>
>>>>> Ah yes, I think I got it now. So basically add a nice summary of the coverletter
>>>>> there.
>>>>
>>>> Yes, a bit more context as well. Eventually, this should be the
>>>> motivation on why this patch is useful. So what it can be used for, what
>>>> are the challenges, how it was tested, etc.
>>>>
>>>> The cover letter is usually here more to provide some meta-context: what
>>>> you expect from the maintainers / reviewers if it's an RFC, if there's
>>>> any feature missing or that could be added later on, etc.
>>>>
>>>>>>>> Most importantly, this hwspinlock is used to synchronize the ARM cores
>>>>>>>> and the ARISC. How did you test this driver?
>>>>>>>
>>>>>>> Yes, you are right, I should have mentioned this. I have a simple test kernel
>>>>>>> module for this. But I must admit, testing the ARISC is very hard and I have
>>>>>>> no real idea how to do it. Testing the hwspinlocks in general seems to work
>>>>>>> with my test kernel module, but I'm not sure if this is really sufficient. I
>>>>>>> can provide the code for it if you like. What would be the best way? Github?
>>>>>>> Just mailing a patch?
>>>>>>>
>>>>>>> The test module produces these results:
>>>>>>>
>>>>>>> # insmod /lib/modules/5.9.8/kernel/drivers/hwspinlock/sunxi_hwspinlock_test.ko 
>>>>>>> [   45.395672] [init] sunxi hwspinlock test driver start
>>>>>>> [   45.400775] [init] start test locks
>>>>>>> [   45.404263] [run ] testing 32 locks
>>>>>>> [   45.407804] [test] testing lock 0 -----
>>>>>>> [   45.411652] [test] taking lock attempt #0 succeded
>>>>>>> [   45.416438] [test] try taken lock attempt #0
>>>>>>> [   45.420735] [test] unlock/take attempt #0
>>>>>>> [   45.424752] [test] taking lock attempt #1 succeded
>>>>>>> [   45.429556] [test] try taken lock attempt #1
>>>>>>> [   45.433823] [test] unlock/take attempt #1
>>>>>>> [   45.437862] [test] testing lock 1 -----
>>>>>>
>>>>>> That doesn't really test for contention though, and dealing with
>>>>>> contention is mostly what this hardware is about. Could you make a small
>>>>>> test with crust to see if when the arisc has taken the lock, the ARM
>>>>>> cores can't take it?
>>>>>
>>>>> So the best solution would be to write a bare metal program that runs on the
>>>>> arisc and can be triggered from the linux side (the test kernel module) to take
>>>>> a spinlock ... or at least take spinlocks periodically for a while and watch it
>>>>> on the linux side. Okay, I think I can do this. Though, I have to dig through
>>>>> all this new stuff first.
>>>>
>>>> It doesn't have to be super complicated, just a loop that takes a lock,
>>>> sleeps for some time, and releases the lock should be enough to at least
>>>> validate that the lock is actually working
>>>>
>>>
>>> I think the difficulty is the bare metal program in arisc has little
>>> documentation.
>>
>> crust has mostly figured it out:
>> https://github.com/crust-firmware/crust
> 
> I actually have serious trouble to get crust running. It compiles for H2+/H3, but
> I can't figure out if it runs at all. I will switch to a H5 based device which is

Crust does not yet support the H2+/H3 (it is active WIP). H5 should work
well.

> confirmed to work. If I see this correctly crust is doing nothing with spinlocks
> yet, so I may end up also working on crust, adding the spinlocks there too.
> Don't know yet how long I will take to understand every detail, but I will
> report progress.

Correct. There is currently no hwspinlock driver in crust. For testing,
you can poke MMIO from the main loop, near the call to scpi_poll() in
common/system.c. You can use the timeout.h functions for timing.

If you want to write a full driver, I would like to know how you expect
to use the hwspinlocks. Allocating the locks has to be coordinated among
all of the users: Linux, U-Boot, crust, any other ARISC firmware, etc.

> Greetings,
> Wilken

Cheers,
Samuel


* Re: [PATCH 2/2] hwspinlock: add sunxi hardware spinlock support
  2020-11-24  3:35                   ` Samuel Holland
@ 2020-11-24 14:28                     ` Maxime Ripard
  2020-11-26 13:10                     ` Wilken Gottwalt
  1 sibling, 0 replies; 18+ messages in thread
From: Maxime Ripard @ 2020-11-24 14:28 UTC (permalink / raw)
  To: Samuel Holland
  Cc: Wilken Gottwalt, linux-kernel, Ohad Ben-Cohen, Bjorn Andersson,
	Baolin Wang, Rob Herring, Chen-Yu Tsai, Jernej Skrabec

On Mon, Nov 23, 2020 at 09:35:52PM -0600, Samuel Holland wrote:
> On 11/23/20 12:32 PM, Wilken Gottwalt wrote:
> > On Sat, 21 Nov 2020 17:44:18 +0100
> > Maxime Ripard <maxime@cerno.tech> wrote:
> > 
> >> On Sat, Nov 21, 2020 at 08:22:55PM +0800, fuyao wrote:
> >>> On Fri, Nov 20, 2020 at 05:42:31PM +0100, Maxime Ripard wrote:
> >>>> Hi,
> >>>>
> >>>> On Thu, Nov 19, 2020 at 11:13:43AM +0100, Wilken Gottwalt wrote:
> >>>>> On Thu, 19 Nov 2020 08:15:23 +0100
> >>>>> Maxime Ripard <maxime@cerno.tech> wrote:
> >>>>>>> can you help me here a bit? I still try to figure out how to do patch sets
> >>>>>>> properly. Some kernel submitting documentation says everything goes into the
> >>>>>>> coverletter and other documentation only tells how to split the patches. So
> >>>>>>> what would be the right way? A quick example based on my patch set would be
> >>>>>>> really helpful.
> >>>>>>
> >>>>>> I mean, the split between your patches and so on is good, you got that right
> >>>>>>
> >>>>>> The thing I wanted better details on is the commit log itself, so the
> >>>>>> message attached to that patch.
> >>>>>
> >>>>> Ah yes, I think I got it now. So basically add a nice summary of the coverletter
> >>>>> there.
> >>>>
> >>>> Yes, a bit more context as well. Eventually, this should be the
> >>>> motivation on why this patch is useful. So what it can be used for, what
> >>>> are the challenges, how it was tested, etc.
> >>>>
> >>>> The cover letter is usually here more to provide some meta-context: what
> >>>> you expect from the maintainers / reviewers if it's an RFC, if there's
> >>>> any feature missing or that could be added later on, etc.
> >>>>
> >>>>>>>> Most importantly, this hwspinlock is used to synchronize the ARM cores
> >>>>>>>> and the ARISC. How did you test this driver?
> >>>>>>>
> >>>>>>> Yes, you are right, I should have mentioned this. I have a simple test kernel
> >>>>>>> module for this. But I must admit, testing the ARISC is very hard and I have
> >>>>>>> no real idea how to do it. Testing the hwspinlocks in general seems to work
> >>>>>>> with my test kernel module, but I'm not sure if this is really sufficient. I
> >>>>>>> can provide the code for it if you like. What would be the best way? Github?
> >>>>>>> Just mailing a patch?
> >>>>>>>
> >>>>>>> The test module produces these results:
> >>>>>>>
> >>>>>>> # insmod /lib/modules/5.9.8/kernel/drivers/hwspinlock/sunxi_hwspinlock_test.ko 
> >>>>>>> [   45.395672] [init] sunxi hwspinlock test driver start
> >>>>>>> [   45.400775] [init] start test locks
> >>>>>>> [   45.404263] [run ] testing 32 locks
> >>>>>>> [   45.407804] [test] testing lock 0 -----
> >>>>>>> [   45.411652] [test] taking lock attempt #0 succeded
> >>>>>>> [   45.416438] [test] try taken lock attempt #0
> >>>>>>> [   45.420735] [test] unlock/take attempt #0
> >>>>>>> [   45.424752] [test] taking lock attempt #1 succeded
> >>>>>>> [   45.429556] [test] try taken lock attempt #1
> >>>>>>> [   45.433823] [test] unlock/take attempt #1
> >>>>>>> [   45.437862] [test] testing lock 1 -----
> >>>>>>
> >>>>>> That doesn't really test for contention though, and dealing with
> >>>>>> contention is mostly what this hardware is about. Could you make a small
> >>>>>> test with crust to see if when the arisc has taken the lock, the ARM
> >>>>>> cores can't take it?
> >>>>>
> >>>>> So the best solution would be to write a bare metal program that runs on the
> >>>>> arisc and can be triggered from the linux side (the test kernel module) to take
> >>>>> a spinlock ... or at least take spinlocks periodically for a while and watch it
> >>>>> on the linux side. Okay, I think I can do this. Though, I have to dig through
> >>>>> all this new stuff first.
> >>>>
> >>>> It doesn't have to be super complicated, just a loop that takes a lock,
> >>>> sleeps for some time, and releases the lock should be enough to at least
> >>>> validate that the lock is actually working
> >>>>
> >>>
> >>> I think the difficulty is the bare metal program in arisc has little
> >>> documentation.
> >>
> >> crust has mostly figured it out:
> >> https://github.com/crust-firmware/crust
> > 
> > I actually have serious trouble to get crust running. It compiles for H2+/H3, but
> > I can't figure out if it runs at all. I will switch to a H5 based device which is
> 
> Crust does not yet support the H2+/H3 (it is active WIP). H5 should work
> well.
> 
> > confirmed to work. If I see this correctly crust is doing nothing
> > with spinlocks yet, so I may end up also working on crust, adding
> > the spinlocks there too. Don't know yet how long I will take to
> > understand every detail, but I will report progress.
> 
> Correct. There is currently no hwspinlock driver in crust. For testing,
> you can poke MMIO from the main loop, near the call to scpi_poll() in
> common/system.c. You can use the timeout.h functions for timing.

Yeah, that would be enough for me. We only really need to make sure that
the concurrency is properly handled in order to merge the driver.

> If you want to write a full driver, I would like to know how you expect
> to use the hwspinlocks. Allocating the locks has to be coordinated among
> all of the users: Linux, U-Boot, crust, any other ARISC firmware, etc.

While that can come as a second step, I wouldn't make it a requirement for
merging the hwspinlock driver into Linux.

Maxime



* Re: [PATCH 2/2] hwspinlock: add sunxi hardware spinlock support
  2020-11-23 18:17     ` Wilken Gottwalt
@ 2020-11-24 14:54       ` Bjorn Andersson
  2020-11-26 13:31         ` Wilken Gottwalt
  0 siblings, 1 reply; 18+ messages in thread
From: Bjorn Andersson @ 2020-11-24 14:54 UTC (permalink / raw)
  To: Wilken Gottwalt
  Cc: linux-kernel, Ohad Ben-Cohen, Baolin Wang, Rob Herring,
	Maxime Ripard, Chen-Yu Tsai, Jernej Skrabec

On Mon 23 Nov 12:17 CST 2020, Wilken Gottwalt wrote:

> On Sat, 21 Nov 2020 23:19:00 -0600
> Bjorn Andersson <bjorn.andersson@linaro.org> wrote:
> > > +static int hwlocks_inuse_show(struct seq_file *seqf, void *unused)
> > > +{
> > > +	struct sunxi_hwspinlock_data *priv = seqf->private;
> > > +	int inuse;
> > > +
> > > +	/* getting the status of only the main 32 spinlocks is supported */
> > > +	inuse = hweight32(readl(priv->io_base + SPINLOCK_STATUS_REG));
> > 
> > So this returns how many of the locks are taken? How is that useful?
> 
> It is a way to see if locks were taken from linux or the arisc core without
> touching the actual hwspinlock abi or the locks. So it is a nice way to debug
> hwspinlocks, hence it is part of debugfs.
> 

So in a scenario where two remote processors ping-pong the lock between
them, this will always read 1 and you won't know why?

> > > +	seq_printf(seqf, "%d\n", inuse);
[..]
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static const struct of_device_id sunxi_hwspinlock_ids[] = {
> > > +	{ .compatible = "allwinner,sun8i-hwspinlock", },
> > > +	{ .compatible = "allwinner,sun50i-hwspinlock", },
> > > +	{},
> > > +};
> > > +MODULE_DEVICE_TABLE(of, sunxi_hwspinlock_ids);
> > > +
> > > +static struct platform_driver sunxi_hwspinlock_driver = {
> > > +	.probe	= sunxi_hwspinlock_probe,
> > > +	.remove	= sunxi_hwspinlock_remove,
> > > +	.driver	= {
> > > +		.name		= DRIVER_NAME,
> > > +		.of_match_table	= of_match_ptr(sunxi_hwspinlock_ids),
> > 
> > Please avoid of_match_ptr, as this will cause warnings about unused
> > variables when COMPILE_TEST without OF.
> 
> So did you mean to leave it out completely?
> 

Yes, "worst case" is that you include the reference to
sunxi_hwspinlock_ids on a build without CONFIG_OF and wasting a little
bit of memory.

Using of_match_ptr() with CONFIG_OF=n will result in NULL and as such
we'll get a compile warning that nothing references sunxi_hwspinlock_ids
- so then that will have to be marked __maybe_unused, or wrapped in an
#if...

So better just leave it as:
	.of_match_table = sunxi_hwspinlock_ids,

Regards,
Bjorn


* Re: [PATCH 2/2] hwspinlock: add sunxi hardware spinlock support
  2020-11-24  3:35                   ` Samuel Holland
  2020-11-24 14:28                     ` Maxime Ripard
@ 2020-11-26 13:10                     ` Wilken Gottwalt
  1 sibling, 0 replies; 18+ messages in thread
From: Wilken Gottwalt @ 2020-11-26 13:10 UTC (permalink / raw)
  To: Samuel Holland
  Cc: Maxime Ripard, linux-kernel, Ohad Ben-Cohen, Bjorn Andersson,
	Baolin Wang, Rob Herring, Chen-Yu Tsai, Jernej Skrabec

On Mon, 23 Nov 2020 21:35:52 -0600
Samuel Holland <samuel@sholland.org> wrote:

> On 11/23/20 12:32 PM, Wilken Gottwalt wrote:
> > On Sat, 21 Nov 2020 17:44:18 +0100
> > Maxime Ripard <maxime@cerno.tech> wrote:
> > 
> >> On Sat, Nov 21, 2020 at 08:22:55PM +0800, fuyao wrote:
> >>> On Fri, Nov 20, 2020 at 05:42:31PM +0100, Maxime Ripard wrote:
> >>>> Hi,
> >>>>
> >>>> On Thu, Nov 19, 2020 at 11:13:43AM +0100, Wilken Gottwalt wrote:
> >>>>> On Thu, 19 Nov 2020 08:15:23 +0100
> >>>>> Maxime Ripard <maxime@cerno.tech> wrote:
> >>>>>>> can you help me here a bit? I still try to figure out how to do patch sets
> >>>>>>> properly. Some kernel submitting documentation says everything goes into the
> >>>>>>> coverletter and other documentation only tells how to split the patches. So
> >>>>>>> what would be the right way? A quick example based on my patch set would be
> >>>>>>> really helpful.
> >>>>>>
> >>>>>> I mean, the split between your patches and so on is good, you got that right
> >>>>>>
> >>>>>> The thing I wanted better details on is the commit log itself, so the
> >>>>>> message attached to that patch.
> >>>>>
> >>>>> Ah yes, I think I got it now. So basically add a nice summary of the coverletter
> >>>>> there.
> >>>>
> >>>> Yes, a bit more context as well. Eventually, this should be the
> >>>> motivation on why this patch is useful. So what it can be used for, what
> >>>> are the challenges, how it was tested, etc.
> >>>>
> >>>> The cover letter is usually here more to provide some meta-context: what
> >>>> you expect from the maintainers / reviewers if it's an RFC, if there's
> >>>> any feature missing or that could be added later on, etc.
> >>>>
> >>>>>>>> Most importantly, this hwspinlock is used to synchronize the ARM cores
> >>>>>>>> and the ARISC. How did you test this driver?
> >>>>>>>
> >>>>>>> Yes, you are right, I should have mentioned this. I have a simple test kernel
> >>>>>>> module for this. But I must admit, testing the ARISC is very hard and I have
> >>>>>>> no real idea how to do it. Testing the hwspinlocks in general seems to work
> >>>>>>> with my test kernel module, but I'm not sure if this is really sufficient. I
> >>>>>>> can provide the code for it if you like. What would be the best way? Github?
> >>>>>>> Just mailing a patch?
> >>>>>>>
> >>>>>>> The test module produces these results:
> >>>>>>>
> >>>>>>> # insmod /lib/modules/5.9.8/kernel/drivers/hwspinlock/sunxi_hwspinlock_test.ko 
> >>>>>>> [   45.395672] [init] sunxi hwspinlock test driver start
> >>>>>>> [   45.400775] [init] start test locks
> >>>>>>> [   45.404263] [run ] testing 32 locks
> >>>>>>> [   45.407804] [test] testing lock 0 -----
> >>>>>>> [   45.411652] [test] taking lock attempt #0 succeded
> >>>>>>> [   45.416438] [test] try taken lock attempt #0
> >>>>>>> [   45.420735] [test] unlock/take attempt #0
> >>>>>>> [   45.424752] [test] taking lock attempt #1 succeded
> >>>>>>> [   45.429556] [test] try taken lock attempt #1
> >>>>>>> [   45.433823] [test] unlock/take attempt #1
> >>>>>>> [   45.437862] [test] testing lock 1 -----
> >>>>>>
> >>>>>> That doesn't really test for contention though, and dealing with
> >>>>>> contention is mostly what this hardware is about. Could you make a small
> >>>>>> test with crust to see if when the arisc has taken the lock, the ARM
> >>>>>> cores can't take it?
> >>>>>
> >>>>> So the best solution would be to write a bare metal program that runs on the
> >>>>> arisc and can be triggered from the linux side (the test kernel module) to take
> >>>>> a spinlock ... or at least take spinlocks periodically for a while and watch it
> >>>>> on the linux side. Okay, I think I can do this. Though, I have to dig through
> >>>>> all this new stuff first.
> >>>>
> >>>> It doesn't have to be super complicated, just a loop that takes a lock,
> >>>> sleeps for some time, and releases the lock should be enough to at least
> >>>> validate that the lock is actually working
> >>>>
> >>>
> >>> I think the difficulty is the bare metal program in arisc has little
> >>> documentation.
> >>
> >> crust has mostly figured it out:
> >> https://github.com/crust-firmware/crust
> > 
> > I actually have serious trouble to get crust running. It compiles for H2+/H3, but
> > I can't figure out if it runs at all. I will switch to a H5 based device which is
> 
> Crust does not yet support the H2+/H3 (it is active WIP). H5 should work
> well.
> 
> > confirmed to work. If I see this correctly crust is doing nothing with spinlocks
> > yet, so I may end up also working on crust, adding the spinlocks there too.
> > Don't know yet how long I will take to understand every detail, but I will
> > report progress.
> 
> Correct. There is currently no hwspinlock driver in crust. For testing,
> you can poke MMIO from the main loop, near the call to scpi_poll() in
> common/system.c. You can use the timeout.h functions for timing.

Thank you very much for the hint. I already have a very simple test running
where crust changes the state of the first spinlock every 250ms (with the high
timeout it is much easier to catch).
 
> If you want to write a full driver, I would like to know how you expect
> to use the hwspinlocks. Allocating the locks has to be coordinated among
> all of the users: Linux, U-Boot, crust, any other ARISC firmware, etc.

I will think about this once the Linux hwspinlock driver is in an acceptable
state and I can easily show that it works. I want to create more complex
tests first.

Can I actually print messages from crust to the debug UART while Linux runs?
It doesn't matter if the messages get scrambled while both write to the UART.
It would just be nice to see this playing out in "real time".

> > Greetings,
> > Wilken
> 
> Cheers,
> Samuel



* Re: [PATCH 2/2] hwspinlock: add sunxi hardware spinlock support
  2020-11-24 14:54       ` Bjorn Andersson
@ 2020-11-26 13:31         ` Wilken Gottwalt
  0 siblings, 0 replies; 18+ messages in thread
From: Wilken Gottwalt @ 2020-11-26 13:31 UTC (permalink / raw)
  To: Bjorn Andersson
  Cc: linux-kernel, Ohad Ben-Cohen, Baolin Wang, Rob Herring,
	Maxime Ripard, Chen-Yu Tsai, Jernej Skrabec

On Tue, 24 Nov 2020 08:54:25 -0600
Bjorn Andersson <bjorn.andersson@linaro.org> wrote:

> On Mon 23 Nov 12:17 CST 2020, Wilken Gottwalt wrote:
> 
> > On Sat, 21 Nov 2020 23:19:00 -0600
> > Bjorn Andersson <bjorn.andersson@linaro.org> wrote:
> > > > +static int hwlocks_inuse_show(struct seq_file *seqf, void *unused)
> > > > +{
> > > > +	struct sunxi_hwspinlock_data *priv = seqf->private;
> > > > +	int inuse;
> > > > +
> > > > +	/* getting the status of only the main 32 spinlocks is supported */
> > > > +	inuse = hweight32(readl(priv->io_base + SPINLOCK_STATUS_REG));
> > > 
> > > So this returns how many of the locks are taken? How is that useful?
> > 
> > It is a way to see if locks were taken from linux or the arisc core without
> > touching the actual hwspinlock abi or the locks. So it is a nice way to debug
> > hwspinlocks, hence it is part of debugfs.
> > 
> 
> So in a scenario where two remote processors ping-pong the lock between
> them, this will always read 1 and you won't know why?

I know it is not perfect. I will change it to actually report which locks are
taken. Currently the crust firmware does not use the locks, and on the Linux
side there are only a handful of drivers/components using hwspinlocks, none
of which are active in a kernel compiled for an H5. So it really is a nice way
to check/debug the hwspinlocks. I already have a simple test running where
crust sets a spinlock and I can see it on Linux, without touching the actual
locks, thanks to this status register.

> > > > +	seq_printf(seqf, "%d\n", inuse);
> [..]
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +
> > > > +static const struct of_device_id sunxi_hwspinlock_ids[] = {
> > > > +	{ .compatible = "allwinner,sun8i-hwspinlock", },
> > > > +	{ .compatible = "allwinner,sun50i-hwspinlock", },
> > > > +	{},
> > > > +};
> > > > +MODULE_DEVICE_TABLE(of, sunxi_hwspinlock_ids);
> > > > +
> > > > +static struct platform_driver sunxi_hwspinlock_driver = {
> > > > +	.probe	= sunxi_hwspinlock_probe,
> > > > +	.remove	= sunxi_hwspinlock_remove,
> > > > +	.driver	= {
> > > > +		.name		= DRIVER_NAME,
> > > > +		.of_match_table	= of_match_ptr(sunxi_hwspinlock_ids),
> > > 
> > > Please avoid of_match_ptr, as this will cause warnings about unused
> > > variables when COMPILE_TEST without OF.
> > 
> > So did you mean to leave it out completely?
> > 
> 
> Yes, "worst case" is that you include the reference to
> sunxi_hwspinlock_ids on a build without CONFIG_OF and wasting a little
> bit of memory.
> 
> Using of_match_ptr() with CONFIG_OF=n will result in NULL and as such
> we'll get a compile warning that nothing references sunxi_hwspinlock_ids
> - so then that will have to be marked __maybe_unused, or wrapped in an
> #if...
> 
> So better just leave it as:
> 	.of_match_table = sunxi_hwspinlock_ids,

Thank you for the explanation. I really like to know the details and the
reasoning behind this.

greetings,
Wilken

> Regards,
> Bjorn



end of thread, other threads:[~2020-11-26 13:31 UTC | newest]

Thread overview: 18+ messages
2020-11-18 10:01 [PATCH 0/2] hwspinlock: add sunxi hardware spinlock support Wilken Gottwalt
2020-11-18 10:01 ` [PATCH 1/2] dt-bindings: hwlock: sunxi: add sunxi_hwspinlock documentation Wilken Gottwalt
2020-11-18 10:02 ` [PATCH 2/2] hwspinlock: add sunxi hardware spinlock support Wilken Gottwalt
2020-11-18 15:37   ` Maxime Ripard
2020-11-18 19:36     ` Wilken Gottwalt
2020-11-19  7:15       ` Maxime Ripard
2020-11-19 10:13         ` Wilken Gottwalt
2020-11-20 16:42           ` Maxime Ripard
2020-11-21 12:22             ` fuyao
2020-11-21 16:44               ` Maxime Ripard
2020-11-23 18:32                 ` Wilken Gottwalt
2020-11-24  3:35                   ` Samuel Holland
2020-11-24 14:28                     ` Maxime Ripard
2020-11-26 13:10                     ` Wilken Gottwalt
2020-11-22  5:19   ` Bjorn Andersson
2020-11-23 18:17     ` Wilken Gottwalt
2020-11-24 14:54       ` Bjorn Andersson
2020-11-26 13:31         ` Wilken Gottwalt
