All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH v4 00/13] SCMI Notifications Core Support
@ 2020-03-04 16:25 ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: sudeep.holla, lukasz.luba, james.quinlan, Jonathan.Cameron,
	cristian.marussi

Hi all,

this series wants to introduce SCMI Notification Support, built on top of
the standard Kernel notification chain subsystem.

At initialization time each SCMI Protocol takes care to register with the
new SCMI notification core the set of its own events which it intends to
support.

Using the API exposed via scmi_handle.notify_ops a Kernel user can register
its own notifier_t callback (via a notifier_block as usual) against any
registered event as identified by the tuple:

		(proto_id, event_id, src_id)

where src_id represents a generic source identifier which is protocol
dependent like domain_id, performance_id, sensor_id and so forth.
(users can anyway do NOT provide any src_id, and subscribe instead to ALL
 the existing (if any) src_id sources for that proto_id/evt_id combination)

Each of the above tuple-specified event will be served on its own dedicated
blocking notification chain, dynamically allocated on-demand when at least
one user has shown interest on that event.

Upon a notification delivery all the users' registered notifier_t callbacks
will be in turn invoked and fed with the event_id as @action param and a
generated custom per-event struct _report as @data param.
(as in include/linux/scmi_protocol.h)

The final step of notification delivery via users' callback invocation is
instead delegated to a pool of deferred workers (Kernel cmwq): each
SCMI protocol has its own dedicated worker and dedicated queue to push
events from the rx ISR to the worker.

Based on scmi-next 5.6 [1], on top of:

commit 5c8a47a5a91d ("firmware: arm_scmi: Make scmi core independent of
		      the transport type")

This series has been tested on JUNO with an experimental firmware only
supporting Perf Notifications.

Thanks

Cristian
----

v3 --> v4:
- dropped RFC tag
- avoid one unneeded evt payload memcpy on the ISR RC code path by
  redesigning dispatcher to handle partial queue-reads (in_flight events,
  only header)
- fixed the initialization issue exposed by late SCMI modules loading by
  reviewing the init process to support possible late events registrations
  by protocols and early callbacks registrations by users (pending)
- cleanup/simplification of exit path: SCMI protocols are generally never
  de-initialized after the initial device creation, so do not deinit
  notification core either (we do halt the delivery, stop the wq and empty
  the queues though)
- reduced contention on regustered_events_handler to the minimum during
  delivery by splitting the common registered_events_handlers hashtable
  into a number of per-protocol tables
- converted registered_protocols and registered_events hastable to
  fixed size arrays: simpler and lockless in our usage scenario

v2 --> v3:
- added platform instance awareness to the notification core: a
  notification instance is created for each known handle
- reviewed notification core initialization and shutdown process
- removed generic non-handle-rooted registration API
- added WQ_SYSFS flag to workqueue instance

v1 --> v2:
- dropped anti-tampering patch
- rebased on top of scmi-for-next-5.6, which includes Viresh series that
  make SCMI core independent of transport (5c8a47a5a91d)
- add a few new SCMI transport methods on top of Viresh patch to address
  needs of SCMI Notifications
- reviewed/renamed scmi_handle_xfer_delayed_resp()
- split main SCMI Notification core patch (~1k lines) into three chunks:
  protocol-registration / callbacks-registration / dispatch-and-delivery
- removed awkward usage of IDR maps in favour of pure hashtables
- added enable/disable refcounting in notification core (was broken in v1)
- removed per-protocol candidate API: a single generic API is now proposed
  instead of scmi_register_<proto>_event_notifier(evt_id, *src_id, *nb)
- added handle->notify_ops as an alternative notification API
  for scmi_driver
- moved ALL_SRCIDs enabled handling from protocol code to core code
- reviewed protocol registration/unregistration logic to use devres
- reviewed cleanup phase on shutdown
- fixed  ERROR: reference preceded by free as reported by kbuild test robot

[1] git://git.kernel.org/pub/scm/linux/kernel/git/sudeep.holla/linux.git


Cristian Marussi (10):
  firmware: arm_scmi: Add notifications support in transport layer
  firmware: arm_scmi: Add notification protocol-registration
  firmware: arm_scmi: Add notification callbacks-registration
  firmware: arm_scmi: Add notification dispatch and delivery
  firmware: arm_scmi: Enable notification core
  firmware: arm_scmi: Add Power notifications support
  firmware: arm_scmi: Add Perf notifications support
  firmware: arm_scmi: Add Sensor notifications support
  firmware: arm_scmi: Add Reset notifications support
  firmware: arm_scmi: Add Base notifications support

Sudeep Holla (3):
  firmware: arm_scmi: Add receive buffer support for notifications
  firmware: arm_scmi: Update protocol commands and notification list
  firmware: arm_scmi: Add support for notifications message processing

 drivers/firmware/arm_scmi/Makefile  |    2 +-
 drivers/firmware/arm_scmi/base.c    |  116 +++
 drivers/firmware/arm_scmi/common.h  |   12 +
 drivers/firmware/arm_scmi/driver.c  |  118 ++-
 drivers/firmware/arm_scmi/mailbox.c |   17 +
 drivers/firmware/arm_scmi/notify.c  | 1471 +++++++++++++++++++++++++++
 drivers/firmware/arm_scmi/notify.h  |   78 ++
 drivers/firmware/arm_scmi/perf.c    |  135 +++
 drivers/firmware/arm_scmi/power.c   |  129 +++
 drivers/firmware/arm_scmi/reset.c   |   96 ++
 drivers/firmware/arm_scmi/sensors.c |   73 ++
 drivers/firmware/arm_scmi/shmem.c   |   15 +
 include/linux/scmi_protocol.h       |  110 ++
 13 files changed, 2345 insertions(+), 27 deletions(-)
 create mode 100644 drivers/firmware/arm_scmi/notify.c
 create mode 100644 drivers/firmware/arm_scmi/notify.h

-- 
2.17.1


^ permalink raw reply	[flat|nested] 70+ messages in thread

* [PATCH v4 00/13] SCMI Notifications Core Support
@ 2020-03-04 16:25 ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: Jonathan.Cameron, cristian.marussi, james.quinlan, lukasz.luba,
	sudeep.holla

Hi all,

this series wants to introduce SCMI Notification Support, built on top of
the standard Kernel notification chain subsystem.

At initialization time each SCMI Protocol takes care to register with the
new SCMI notification core the set of its own events which it intends to
support.

Using the API exposed via scmi_handle.notify_ops a Kernel user can register
its own notifier_t callback (via a notifier_block as usual) against any
registered event as identified by the tuple:

		(proto_id, event_id, src_id)

where src_id represents a generic source identifier which is protocol
dependent like domain_id, performance_id, sensor_id and so forth.
(users can anyway do NOT provide any src_id, and subscribe instead to ALL
 the existing (if any) src_id sources for that proto_id/evt_id combination)

Each of the above tuple-specified event will be served on its own dedicated
blocking notification chain, dynamically allocated on-demand when at least
one user has shown interest on that event.

Upon a notification delivery all the users' registered notifier_t callbacks
will be in turn invoked and fed with the event_id as @action param and a
generated custom per-event struct _report as @data param.
(as in include/linux/scmi_protocol.h)

The final step of notification delivery via users' callback invocation is
instead delegated to a pool of deferred workers (Kernel cmwq): each
SCMI protocol has its own dedicated worker and dedicated queue to push
events from the rx ISR to the worker.

Based on scmi-next 5.6 [1], on top of:

commit 5c8a47a5a91d ("firmware: arm_scmi: Make scmi core independent of
		      the transport type")

This series has been tested on JUNO with an experimental firmware only
supporting Perf Notifications.

Thanks

Cristian
----

v3 --> v4:
- dropped RFC tag
- avoid one unneeded evt payload memcpy on the ISR RC code path by
  redesigning dispatcher to handle partial queue-reads (in_flight events,
  only header)
- fixed the initialization issue exposed by late SCMI modules loading by
  reviewing the init process to support possible late events registrations
  by protocols and early callbacks registrations by users (pending)
- cleanup/simplification of exit path: SCMI protocols are generally never
  de-initialized after the initial device creation, so do not deinit
  notification core either (we do halt the delivery, stop the wq and empty
  the queues though)
- reduced contention on regustered_events_handler to the minimum during
  delivery by splitting the common registered_events_handlers hashtable
  into a number of per-protocol tables
- converted registered_protocols and registered_events hastable to
  fixed size arrays: simpler and lockless in our usage scenario

v2 --> v3:
- added platform instance awareness to the notification core: a
  notification instance is created for each known handle
- reviewed notification core initialization and shutdown process
- removed generic non-handle-rooted registration API
- added WQ_SYSFS flag to workqueue instance

v1 --> v2:
- dropped anti-tampering patch
- rebased on top of scmi-for-next-5.6, which includes Viresh series that
  make SCMI core independent of transport (5c8a47a5a91d)
- add a few new SCMI transport methods on top of Viresh patch to address
  needs of SCMI Notifications
- reviewed/renamed scmi_handle_xfer_delayed_resp()
- split main SCMI Notification core patch (~1k lines) into three chunks:
  protocol-registration / callbacks-registration / dispatch-and-delivery
- removed awkward usage of IDR maps in favour of pure hashtables
- added enable/disable refcounting in notification core (was broken in v1)
- removed per-protocol candidate API: a single generic API is now proposed
  instead of scmi_register_<proto>_event_notifier(evt_id, *src_id, *nb)
- added handle->notify_ops as an alternative notification API
  for scmi_driver
- moved ALL_SRCIDs enabled handling from protocol code to core code
- reviewed protocol registration/unregistration logic to use devres
- reviewed cleanup phase on shutdown
- fixed  ERROR: reference preceded by free as reported by kbuild test robot

[1] git://git.kernel.org/pub/scm/linux/kernel/git/sudeep.holla/linux.git


Cristian Marussi (10):
  firmware: arm_scmi: Add notifications support in transport layer
  firmware: arm_scmi: Add notification protocol-registration
  firmware: arm_scmi: Add notification callbacks-registration
  firmware: arm_scmi: Add notification dispatch and delivery
  firmware: arm_scmi: Enable notification core
  firmware: arm_scmi: Add Power notifications support
  firmware: arm_scmi: Add Perf notifications support
  firmware: arm_scmi: Add Sensor notifications support
  firmware: arm_scmi: Add Reset notifications support
  firmware: arm_scmi: Add Base notifications support

Sudeep Holla (3):
  firmware: arm_scmi: Add receive buffer support for notifications
  firmware: arm_scmi: Update protocol commands and notification list
  firmware: arm_scmi: Add support for notifications message processing

 drivers/firmware/arm_scmi/Makefile  |    2 +-
 drivers/firmware/arm_scmi/base.c    |  116 +++
 drivers/firmware/arm_scmi/common.h  |   12 +
 drivers/firmware/arm_scmi/driver.c  |  118 ++-
 drivers/firmware/arm_scmi/mailbox.c |   17 +
 drivers/firmware/arm_scmi/notify.c  | 1471 +++++++++++++++++++++++++++
 drivers/firmware/arm_scmi/notify.h  |   78 ++
 drivers/firmware/arm_scmi/perf.c    |  135 +++
 drivers/firmware/arm_scmi/power.c   |  129 +++
 drivers/firmware/arm_scmi/reset.c   |   96 ++
 drivers/firmware/arm_scmi/sensors.c |   73 ++
 drivers/firmware/arm_scmi/shmem.c   |   15 +
 include/linux/scmi_protocol.h       |  110 ++
 13 files changed, 2345 insertions(+), 27 deletions(-)
 create mode 100644 drivers/firmware/arm_scmi/notify.c
 create mode 100644 drivers/firmware/arm_scmi/notify.h

-- 
2.17.1


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 70+ messages in thread

* [PATCH v4 01/13] firmware: arm_scmi: Add receive buffer support for notifications
  2020-03-04 16:25 ` Cristian Marussi
@ 2020-03-04 16:25   ` Cristian Marussi
  -1 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: sudeep.holla, lukasz.luba, james.quinlan, Jonathan.Cameron,
	cristian.marussi

From: Sudeep Holla <sudeep.holla@arm.com>

With all the plumbing in place, let's just add the separate dedicated
receive buffers to handle notifications that can arrive asynchronously
from the platform firmware to OS.

Also add one check to see if the platform supports any receive channels
before allocating the receive buffers: since those buffers are optionally
supported though, the whole xfer initialization is also postponed to be
able to check for their existence in advance.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
[Changed parameters in __scmi_xfer_info_init()]
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
V1 --> V2:
- reviewed commit message
- reviewed parameters of __scmi_xfer_info_init()
---
 drivers/firmware/arm_scmi/driver.c | 24 ++++++++++++++++++------
 1 file changed, 18 insertions(+), 6 deletions(-)

diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
index dbec767222e9..efb660c34b57 100644
--- a/drivers/firmware/arm_scmi/driver.c
+++ b/drivers/firmware/arm_scmi/driver.c
@@ -76,6 +76,7 @@ struct scmi_xfers_info {
  *	implementation version and (sub-)vendor identification.
  * @handle: Instance of SCMI handle to send to clients
  * @tx_minfo: Universal Transmit Message management info
+ * @rx_minfo: Universal Receive Message management info
  * @tx_idr: IDR object to map protocol id to Tx channel info pointer
  * @rx_idr: IDR object to map protocol id to Rx channel info pointer
  * @protocols_imp: List of protocols implemented, currently maximum of
@@ -89,6 +90,7 @@ struct scmi_info {
 	struct scmi_revision_info version;
 	struct scmi_handle handle;
 	struct scmi_xfers_info tx_minfo;
+	struct scmi_xfers_info rx_minfo;
 	struct idr tx_idr;
 	struct idr rx_idr;
 	u8 *protocols_imp;
@@ -525,13 +527,13 @@ int scmi_handle_put(const struct scmi_handle *handle)
 	return 0;
 }
 
-static int scmi_xfer_info_init(struct scmi_info *sinfo)
+static int __scmi_xfer_info_init(struct scmi_info *sinfo,
+				 struct scmi_xfers_info *info)
 {
 	int i;
 	struct scmi_xfer *xfer;
 	struct device *dev = sinfo->dev;
 	const struct scmi_desc *desc = sinfo->desc;
-	struct scmi_xfers_info *info = &sinfo->tx_minfo;
 
 	/* Pre-allocated messages, no more than what hdr.seq can support */
 	if (WARN_ON(desc->max_msg >= MSG_TOKEN_MAX)) {
@@ -566,6 +568,16 @@ static int scmi_xfer_info_init(struct scmi_info *sinfo)
 	return 0;
 }
 
+static int scmi_xfer_info_init(struct scmi_info *sinfo)
+{
+	int ret = __scmi_xfer_info_init(sinfo, &sinfo->tx_minfo);
+
+	if (!ret && idr_find(&sinfo->rx_idr, SCMI_PROTOCOL_BASE))
+		ret = __scmi_xfer_info_init(sinfo, &sinfo->rx_minfo);
+
+	return ret;
+}
+
 static int scmi_chan_setup(struct scmi_info *info, struct device *dev,
 			   int prot_id, bool tx)
 {
@@ -699,10 +711,6 @@ static int scmi_probe(struct platform_device *pdev)
 	info->desc = desc;
 	INIT_LIST_HEAD(&info->node);
 
-	ret = scmi_xfer_info_init(info);
-	if (ret)
-		return ret;
-
 	platform_set_drvdata(pdev, info);
 	idr_init(&info->tx_idr);
 	idr_init(&info->rx_idr);
@@ -715,6 +723,10 @@ static int scmi_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;
 
+	ret = scmi_xfer_info_init(info);
+	if (ret)
+		return ret;
+
 	ret = scmi_base_protocol_init(handle);
 	if (ret) {
 		dev_err(dev, "unable to communicate with SCMI(%d)\n", ret);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 01/13] firmware: arm_scmi: Add receive buffer support for notifications
@ 2020-03-04 16:25   ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: Jonathan.Cameron, cristian.marussi, james.quinlan, lukasz.luba,
	sudeep.holla

From: Sudeep Holla <sudeep.holla@arm.com>

With all the plumbing in place, let's just add the separate dedicated
receive buffers to handle notifications that can arrive asynchronously
from the platform firmware to OS.

Also add one check to see if the platform supports any receive channels
before allocating the receive buffers: since those buffers are optionally
supported though, the whole xfer initialization is also postponed to be
able to check for their existence in advance.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
[Changed parameters in __scmi_xfer_info_init()]
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
V1 --> V2:
- reviewed commit message
- reviewed parameters of __scmi_xfer_info_init()
---
 drivers/firmware/arm_scmi/driver.c | 24 ++++++++++++++++++------
 1 file changed, 18 insertions(+), 6 deletions(-)

diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
index dbec767222e9..efb660c34b57 100644
--- a/drivers/firmware/arm_scmi/driver.c
+++ b/drivers/firmware/arm_scmi/driver.c
@@ -76,6 +76,7 @@ struct scmi_xfers_info {
  *	implementation version and (sub-)vendor identification.
  * @handle: Instance of SCMI handle to send to clients
  * @tx_minfo: Universal Transmit Message management info
+ * @rx_minfo: Universal Receive Message management info
  * @tx_idr: IDR object to map protocol id to Tx channel info pointer
  * @rx_idr: IDR object to map protocol id to Rx channel info pointer
  * @protocols_imp: List of protocols implemented, currently maximum of
@@ -89,6 +90,7 @@ struct scmi_info {
 	struct scmi_revision_info version;
 	struct scmi_handle handle;
 	struct scmi_xfers_info tx_minfo;
+	struct scmi_xfers_info rx_minfo;
 	struct idr tx_idr;
 	struct idr rx_idr;
 	u8 *protocols_imp;
@@ -525,13 +527,13 @@ int scmi_handle_put(const struct scmi_handle *handle)
 	return 0;
 }
 
-static int scmi_xfer_info_init(struct scmi_info *sinfo)
+static int __scmi_xfer_info_init(struct scmi_info *sinfo,
+				 struct scmi_xfers_info *info)
 {
 	int i;
 	struct scmi_xfer *xfer;
 	struct device *dev = sinfo->dev;
 	const struct scmi_desc *desc = sinfo->desc;
-	struct scmi_xfers_info *info = &sinfo->tx_minfo;
 
 	/* Pre-allocated messages, no more than what hdr.seq can support */
 	if (WARN_ON(desc->max_msg >= MSG_TOKEN_MAX)) {
@@ -566,6 +568,16 @@ static int scmi_xfer_info_init(struct scmi_info *sinfo)
 	return 0;
 }
 
+static int scmi_xfer_info_init(struct scmi_info *sinfo)
+{
+	int ret = __scmi_xfer_info_init(sinfo, &sinfo->tx_minfo);
+
+	if (!ret && idr_find(&sinfo->rx_idr, SCMI_PROTOCOL_BASE))
+		ret = __scmi_xfer_info_init(sinfo, &sinfo->rx_minfo);
+
+	return ret;
+}
+
 static int scmi_chan_setup(struct scmi_info *info, struct device *dev,
 			   int prot_id, bool tx)
 {
@@ -699,10 +711,6 @@ static int scmi_probe(struct platform_device *pdev)
 	info->desc = desc;
 	INIT_LIST_HEAD(&info->node);
 
-	ret = scmi_xfer_info_init(info);
-	if (ret)
-		return ret;
-
 	platform_set_drvdata(pdev, info);
 	idr_init(&info->tx_idr);
 	idr_init(&info->rx_idr);
@@ -715,6 +723,10 @@ static int scmi_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;
 
+	ret = scmi_xfer_info_init(info);
+	if (ret)
+		return ret;
+
 	ret = scmi_base_protocol_init(handle);
 	if (ret) {
 		dev_err(dev, "unable to communicate with SCMI(%d)\n", ret);
-- 
2.17.1


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 02/13] firmware: arm_scmi: Update protocol commands and notification list
  2020-03-04 16:25 ` Cristian Marussi
@ 2020-03-04 16:25   ` Cristian Marussi
  -1 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: sudeep.holla, lukasz.luba, james.quinlan, Jonathan.Cameron,
	cristian.marussi

From: Sudeep Holla <sudeep.holla@arm.com>

Add commands' enumerations and messages definitions for all existing
notify-enable commands across all protocols.

Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
 drivers/firmware/arm_scmi/base.c    | 7 +++++++
 drivers/firmware/arm_scmi/perf.c    | 5 +++++
 drivers/firmware/arm_scmi/power.c   | 6 ++++++
 drivers/firmware/arm_scmi/sensors.c | 4 ++++
 4 files changed, 22 insertions(+)

diff --git a/drivers/firmware/arm_scmi/base.c b/drivers/firmware/arm_scmi/base.c
index f804e8af6521..ce7d9203e41b 100644
--- a/drivers/firmware/arm_scmi/base.c
+++ b/drivers/firmware/arm_scmi/base.c
@@ -14,6 +14,13 @@ enum scmi_base_protocol_cmd {
 	BASE_DISCOVER_LIST_PROTOCOLS = 0x6,
 	BASE_DISCOVER_AGENT = 0x7,
 	BASE_NOTIFY_ERRORS = 0x8,
+	BASE_SET_DEVICE_PERMISSIONS = 0x9,
+	BASE_SET_PROTOCOL_PERMISSIONS = 0xa,
+	BASE_RESET_AGENT_CONFIGURATION = 0xb,
+};
+
+enum scmi_base_protocol_notify {
+	BASE_ERROR_EVENT = 0x0,
 };
 
 struct scmi_msg_resp_base_attributes {
diff --git a/drivers/firmware/arm_scmi/perf.c b/drivers/firmware/arm_scmi/perf.c
index ec81e6f7e7a4..88509ec637d0 100644
--- a/drivers/firmware/arm_scmi/perf.c
+++ b/drivers/firmware/arm_scmi/perf.c
@@ -27,6 +27,11 @@ enum scmi_performance_protocol_cmd {
 	PERF_DESCRIBE_FASTCHANNEL = 0xb,
 };
 
+enum scmi_performance_protocol_notify {
+	PERFORMANCE_LIMITS_CHANGED = 0x0,
+	PERFORMANCE_LEVEL_CHANGED = 0x1,
+};
+
 struct scmi_opp {
 	u32 perf;
 	u32 power;
diff --git a/drivers/firmware/arm_scmi/power.c b/drivers/firmware/arm_scmi/power.c
index 214886ce84f1..cf7f0312381b 100644
--- a/drivers/firmware/arm_scmi/power.c
+++ b/drivers/firmware/arm_scmi/power.c
@@ -12,6 +12,12 @@ enum scmi_power_protocol_cmd {
 	POWER_STATE_SET = 0x4,
 	POWER_STATE_GET = 0x5,
 	POWER_STATE_NOTIFY = 0x6,
+	POWER_STATE_CHANGE_REQUESTED_NOTIFY = 0x7,
+};
+
+enum scmi_power_protocol_notify {
+	POWER_STATE_CHANGED = 0x0,
+	POWER_STATE_CHANGE_REQUESTED = 0x1,
 };
 
 struct scmi_msg_resp_power_attributes {
diff --git a/drivers/firmware/arm_scmi/sensors.c b/drivers/firmware/arm_scmi/sensors.c
index eba61b9c1f53..db1b1ab303da 100644
--- a/drivers/firmware/arm_scmi/sensors.c
+++ b/drivers/firmware/arm_scmi/sensors.c
@@ -14,6 +14,10 @@ enum scmi_sensor_protocol_cmd {
 	SENSOR_READING_GET = 0x6,
 };
 
+enum scmi_sensor_protocol_notify {
+	SENSOR_TRIP_POINT_EVENT = 0x0,
+};
+
 struct scmi_msg_resp_sensor_attributes {
 	__le16 num_sensors;
 	u8 max_requests;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 02/13] firmware: arm_scmi: Update protocol commands and notification list
@ 2020-03-04 16:25   ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: Jonathan.Cameron, cristian.marussi, james.quinlan, lukasz.luba,
	sudeep.holla

From: Sudeep Holla <sudeep.holla@arm.com>

Add commands' enumerations and messages definitions for all existing
notify-enable commands across all protocols.

Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
 drivers/firmware/arm_scmi/base.c    | 7 +++++++
 drivers/firmware/arm_scmi/perf.c    | 5 +++++
 drivers/firmware/arm_scmi/power.c   | 6 ++++++
 drivers/firmware/arm_scmi/sensors.c | 4 ++++
 4 files changed, 22 insertions(+)

diff --git a/drivers/firmware/arm_scmi/base.c b/drivers/firmware/arm_scmi/base.c
index f804e8af6521..ce7d9203e41b 100644
--- a/drivers/firmware/arm_scmi/base.c
+++ b/drivers/firmware/arm_scmi/base.c
@@ -14,6 +14,13 @@ enum scmi_base_protocol_cmd {
 	BASE_DISCOVER_LIST_PROTOCOLS = 0x6,
 	BASE_DISCOVER_AGENT = 0x7,
 	BASE_NOTIFY_ERRORS = 0x8,
+	BASE_SET_DEVICE_PERMISSIONS = 0x9,
+	BASE_SET_PROTOCOL_PERMISSIONS = 0xa,
+	BASE_RESET_AGENT_CONFIGURATION = 0xb,
+};
+
+enum scmi_base_protocol_notify {
+	BASE_ERROR_EVENT = 0x0,
 };
 
 struct scmi_msg_resp_base_attributes {
diff --git a/drivers/firmware/arm_scmi/perf.c b/drivers/firmware/arm_scmi/perf.c
index ec81e6f7e7a4..88509ec637d0 100644
--- a/drivers/firmware/arm_scmi/perf.c
+++ b/drivers/firmware/arm_scmi/perf.c
@@ -27,6 +27,11 @@ enum scmi_performance_protocol_cmd {
 	PERF_DESCRIBE_FASTCHANNEL = 0xb,
 };
 
+enum scmi_performance_protocol_notify {
+	PERFORMANCE_LIMITS_CHANGED = 0x0,
+	PERFORMANCE_LEVEL_CHANGED = 0x1,
+};
+
 struct scmi_opp {
 	u32 perf;
 	u32 power;
diff --git a/drivers/firmware/arm_scmi/power.c b/drivers/firmware/arm_scmi/power.c
index 214886ce84f1..cf7f0312381b 100644
--- a/drivers/firmware/arm_scmi/power.c
+++ b/drivers/firmware/arm_scmi/power.c
@@ -12,6 +12,12 @@ enum scmi_power_protocol_cmd {
 	POWER_STATE_SET = 0x4,
 	POWER_STATE_GET = 0x5,
 	POWER_STATE_NOTIFY = 0x6,
+	POWER_STATE_CHANGE_REQUESTED_NOTIFY = 0x7,
+};
+
+enum scmi_power_protocol_notify {
+	POWER_STATE_CHANGED = 0x0,
+	POWER_STATE_CHANGE_REQUESTED = 0x1,
 };
 
 struct scmi_msg_resp_power_attributes {
diff --git a/drivers/firmware/arm_scmi/sensors.c b/drivers/firmware/arm_scmi/sensors.c
index eba61b9c1f53..db1b1ab303da 100644
--- a/drivers/firmware/arm_scmi/sensors.c
+++ b/drivers/firmware/arm_scmi/sensors.c
@@ -14,6 +14,10 @@ enum scmi_sensor_protocol_cmd {
 	SENSOR_READING_GET = 0x6,
 };
 
+enum scmi_sensor_protocol_notify {
+	SENSOR_TRIP_POINT_EVENT = 0x0,
+};
+
 struct scmi_msg_resp_sensor_attributes {
 	__le16 num_sensors;
 	u8 max_requests;
-- 
2.17.1


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 03/13] firmware: arm_scmi: Add notifications support in transport layer
  2020-03-04 16:25 ` Cristian Marussi
@ 2020-03-04 16:25   ` Cristian Marussi
  -1 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: sudeep.holla, lukasz.luba, james.quinlan, Jonathan.Cameron,
	cristian.marussi

Add common transport-layer methods to:
 - fetch a notification instead of a response
 - clear a pending notification

Add also all the needed support in mailbox/shmem transports.

Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
 drivers/firmware/arm_scmi/common.h  |  8 ++++++++
 drivers/firmware/arm_scmi/mailbox.c | 17 +++++++++++++++++
 drivers/firmware/arm_scmi/shmem.c   | 15 +++++++++++++++
 3 files changed, 40 insertions(+)

diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
index 5ac06469b01c..3c2e5d0d7b68 100644
--- a/drivers/firmware/arm_scmi/common.h
+++ b/drivers/firmware/arm_scmi/common.h
@@ -178,6 +178,8 @@ struct scmi_chan_info {
  * @send_message: Callback to send a message
  * @mark_txdone: Callback to mark tx as done
  * @fetch_response: Callback to fetch response
+ * @fetch_notification: Callback to fetch notification
+ * @clear_notification: Callback to clear a pending notification
  * @poll_done: Callback to poll transfer status
  */
 struct scmi_transport_ops {
@@ -190,6 +192,9 @@ struct scmi_transport_ops {
 	void (*mark_txdone)(struct scmi_chan_info *cinfo, int ret);
 	void (*fetch_response)(struct scmi_chan_info *cinfo,
 			       struct scmi_xfer *xfer);
+	void (*fetch_notification)(struct scmi_chan_info *cinfo,
+				   size_t max_len, struct scmi_xfer *xfer);
+	void (*clear_notification)(struct scmi_chan_info *cinfo);
 	bool (*poll_done)(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer);
 };
 
@@ -222,5 +227,8 @@ void shmem_tx_prepare(struct scmi_shared_mem __iomem *shmem,
 u32 shmem_read_header(struct scmi_shared_mem __iomem *shmem);
 void shmem_fetch_response(struct scmi_shared_mem __iomem *shmem,
 			  struct scmi_xfer *xfer);
+void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem,
+			      size_t max_len, struct scmi_xfer *xfer);
+void shmem_clear_notification(struct scmi_shared_mem __iomem *shmem);
 bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem,
 		     struct scmi_xfer *xfer);
diff --git a/drivers/firmware/arm_scmi/mailbox.c b/drivers/firmware/arm_scmi/mailbox.c
index 73077bbc4ad9..19ee058f9f44 100644
--- a/drivers/firmware/arm_scmi/mailbox.c
+++ b/drivers/firmware/arm_scmi/mailbox.c
@@ -158,6 +158,21 @@ static void mailbox_fetch_response(struct scmi_chan_info *cinfo,
 	shmem_fetch_response(smbox->shmem, xfer);
 }
 
+static void mailbox_fetch_notification(struct scmi_chan_info *cinfo,
+				       size_t max_len, struct scmi_xfer *xfer)
+{
+	struct scmi_mailbox *smbox = cinfo->transport_info;
+
+	shmem_fetch_notification(smbox->shmem, max_len, xfer);
+}
+
+static void mailbox_clear_notification(struct scmi_chan_info *cinfo)
+{
+	struct scmi_mailbox *smbox = cinfo->transport_info;
+
+	shmem_clear_notification(smbox->shmem);
+}
+
 static bool
 mailbox_poll_done(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer)
 {
@@ -173,6 +188,8 @@ static struct scmi_transport_ops scmi_mailbox_ops = {
 	.send_message = mailbox_send_message,
 	.mark_txdone = mailbox_mark_txdone,
 	.fetch_response = mailbox_fetch_response,
+	.fetch_notification = mailbox_fetch_notification,
+	.clear_notification = mailbox_clear_notification,
 	.poll_done = mailbox_poll_done,
 };
 
diff --git a/drivers/firmware/arm_scmi/shmem.c b/drivers/firmware/arm_scmi/shmem.c
index ca0ffd302ea2..e1ab05be90e3 100644
--- a/drivers/firmware/arm_scmi/shmem.c
+++ b/drivers/firmware/arm_scmi/shmem.c
@@ -67,6 +67,21 @@ void shmem_fetch_response(struct scmi_shared_mem __iomem *shmem,
 	memcpy_fromio(xfer->rx.buf, shmem->msg_payload + 4, xfer->rx.len);
 }
 
+void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem,
+			      size_t max_len, struct scmi_xfer *xfer)
+{
+	/* Skip only the length of header in shmem area i.e 4 bytes */
+	xfer->rx.len = min_t(size_t, max_len, ioread32(&shmem->length) - 4);
+
+	/* Take a copy to the rx buffer.. */
+	memcpy_fromio(xfer->rx.buf, shmem->msg_payload, xfer->rx.len);
+}
+
+void shmem_clear_notification(struct scmi_shared_mem __iomem *shmem)
+{
+	iowrite32(SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE, &shmem->channel_status);
+}
+
 bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem,
 		     struct scmi_xfer *xfer)
 {
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 03/13] firmware: arm_scmi: Add notifications support in transport layer
@ 2020-03-04 16:25   ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: Jonathan.Cameron, cristian.marussi, james.quinlan, lukasz.luba,
	sudeep.holla

Add common transport-layer methods to:
 - fetch a notification instead of a response
 - clear a pending notification

Add also all the needed support in mailbox/shmem transports.

Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
 drivers/firmware/arm_scmi/common.h  |  8 ++++++++
 drivers/firmware/arm_scmi/mailbox.c | 17 +++++++++++++++++
 drivers/firmware/arm_scmi/shmem.c   | 15 +++++++++++++++
 3 files changed, 40 insertions(+)

diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
index 5ac06469b01c..3c2e5d0d7b68 100644
--- a/drivers/firmware/arm_scmi/common.h
+++ b/drivers/firmware/arm_scmi/common.h
@@ -178,6 +178,8 @@ struct scmi_chan_info {
  * @send_message: Callback to send a message
  * @mark_txdone: Callback to mark tx as done
  * @fetch_response: Callback to fetch response
+ * @fetch_notification: Callback to fetch notification
+ * @clear_notification: Callback to clear a pending notification
  * @poll_done: Callback to poll transfer status
  */
 struct scmi_transport_ops {
@@ -190,6 +192,9 @@ struct scmi_transport_ops {
 	void (*mark_txdone)(struct scmi_chan_info *cinfo, int ret);
 	void (*fetch_response)(struct scmi_chan_info *cinfo,
 			       struct scmi_xfer *xfer);
+	void (*fetch_notification)(struct scmi_chan_info *cinfo,
+				   size_t max_len, struct scmi_xfer *xfer);
+	void (*clear_notification)(struct scmi_chan_info *cinfo);
 	bool (*poll_done)(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer);
 };
 
@@ -222,5 +227,8 @@ void shmem_tx_prepare(struct scmi_shared_mem __iomem *shmem,
 u32 shmem_read_header(struct scmi_shared_mem __iomem *shmem);
 void shmem_fetch_response(struct scmi_shared_mem __iomem *shmem,
 			  struct scmi_xfer *xfer);
+void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem,
+			      size_t max_len, struct scmi_xfer *xfer);
+void shmem_clear_notification(struct scmi_shared_mem __iomem *shmem);
 bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem,
 		     struct scmi_xfer *xfer);
diff --git a/drivers/firmware/arm_scmi/mailbox.c b/drivers/firmware/arm_scmi/mailbox.c
index 73077bbc4ad9..19ee058f9f44 100644
--- a/drivers/firmware/arm_scmi/mailbox.c
+++ b/drivers/firmware/arm_scmi/mailbox.c
@@ -158,6 +158,21 @@ static void mailbox_fetch_response(struct scmi_chan_info *cinfo,
 	shmem_fetch_response(smbox->shmem, xfer);
 }
 
+static void mailbox_fetch_notification(struct scmi_chan_info *cinfo,
+				       size_t max_len, struct scmi_xfer *xfer)
+{
+	struct scmi_mailbox *smbox = cinfo->transport_info;
+
+	shmem_fetch_notification(smbox->shmem, max_len, xfer);
+}
+
+static void mailbox_clear_notification(struct scmi_chan_info *cinfo)
+{
+	struct scmi_mailbox *smbox = cinfo->transport_info;
+
+	shmem_clear_notification(smbox->shmem);
+}
+
 static bool
 mailbox_poll_done(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer)
 {
@@ -173,6 +188,8 @@ static struct scmi_transport_ops scmi_mailbox_ops = {
 	.send_message = mailbox_send_message,
 	.mark_txdone = mailbox_mark_txdone,
 	.fetch_response = mailbox_fetch_response,
+	.fetch_notification = mailbox_fetch_notification,
+	.clear_notification = mailbox_clear_notification,
 	.poll_done = mailbox_poll_done,
 };
 
diff --git a/drivers/firmware/arm_scmi/shmem.c b/drivers/firmware/arm_scmi/shmem.c
index ca0ffd302ea2..e1ab05be90e3 100644
--- a/drivers/firmware/arm_scmi/shmem.c
+++ b/drivers/firmware/arm_scmi/shmem.c
@@ -67,6 +67,21 @@ void shmem_fetch_response(struct scmi_shared_mem __iomem *shmem,
 	memcpy_fromio(xfer->rx.buf, shmem->msg_payload + 4, xfer->rx.len);
 }
 
+void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem,
+			      size_t max_len, struct scmi_xfer *xfer)
+{
+	/* Skip only the length of header in shmem area i.e 4 bytes */
+	xfer->rx.len = min_t(size_t, max_len, ioread32(&shmem->length) - 4);
+
+	/* Take a copy to the rx buffer.. */
+	memcpy_fromio(xfer->rx.buf, shmem->msg_payload, xfer->rx.len);
+}
+
+void shmem_clear_notification(struct scmi_shared_mem __iomem *shmem)
+{
+	iowrite32(SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE, &shmem->channel_status);
+}
+
 bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem,
 		     struct scmi_xfer *xfer)
 {
-- 
2.17.1


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 04/13] firmware: arm_scmi: Add support for notifications message processing
  2020-03-04 16:25 ` Cristian Marussi
@ 2020-03-04 16:25   ` Cristian Marussi
  -1 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: sudeep.holla, lukasz.luba, james.quinlan, Jonathan.Cameron,
	cristian.marussi

From: Sudeep Holla <sudeep.holla@arm.com>

Add the mechanisms to distinguish notifications from delayed responses and
to properly fetch notification messages upon reception: notifications
processing does not continue further after the fetch phase.

Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
[Reworked/renamed scmi_handle_xfer_delayed_resp()]
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
V1 --> V2
- switch the notif/delayed_resp message processing logic to use new
  transport independent layer methods
- reviewed logic of scmi_handle_xfer_delayed_resp() while renaming it as
  scmi_handle_response()
- properly relocated tracer points
---
 drivers/firmware/arm_scmi/driver.c | 84 +++++++++++++++++++++++-------
 1 file changed, 64 insertions(+), 20 deletions(-)

diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
index efb660c34b57..868cc36a07c9 100644
--- a/drivers/firmware/arm_scmi/driver.c
+++ b/drivers/firmware/arm_scmi/driver.c
@@ -202,29 +202,42 @@ __scmi_xfer_put(struct scmi_xfers_info *minfo, struct scmi_xfer *xfer)
 	spin_unlock_irqrestore(&minfo->xfer_lock, flags);
 }
 
-/**
- * scmi_rx_callback() - callback for receiving messages
- *
- * @cinfo: SCMI channel info
- * @msg_hdr: Message header
- *
- * Processes one received message to appropriate transfer information and
- * signals completion of the transfer.
- *
- * NOTE: This function will be invoked in IRQ context, hence should be
- * as optimal as possible.
- */
-void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr)
+static void scmi_handle_notification(struct scmi_chan_info *cinfo, u32 msg_hdr)
 {
-	struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
-	struct scmi_xfers_info *minfo = &info->tx_minfo;
-	u16 xfer_id = MSG_XTRACT_TOKEN(msg_hdr);
-	u8 msg_type = MSG_XTRACT_TYPE(msg_hdr);
-	struct device *dev = cinfo->dev;
 	struct scmi_xfer *xfer;
+	struct device *dev = cinfo->dev;
+	struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
+	struct scmi_xfers_info *minfo = &info->rx_minfo;
+
+	xfer = scmi_xfer_get(cinfo->handle, minfo);
+	if (IS_ERR(xfer)) {
+		dev_err(dev, "failed to get free message slot (%ld)\n",
+			PTR_ERR(xfer));
+		info->desc->ops->clear_notification(cinfo);
+		return;
+	}
+
+	unpack_scmi_header(msg_hdr, &xfer->hdr);
+	scmi_dump_header_dbg(dev, &xfer->hdr);
+	info->desc->ops->fetch_notification(cinfo, info->desc->max_msg_size,
+					    xfer);
+
+	trace_scmi_rx_done(xfer->transfer_id, xfer->hdr.id,
+			   xfer->hdr.protocol_id, xfer->hdr.seq,
+			   MSG_TYPE_NOTIFICATION);
 
-	if (msg_type == MSG_TYPE_NOTIFICATION)
-		return; /* Notifications not yet supported */
+	__scmi_xfer_put(minfo, xfer);
+
+	info->desc->ops->clear_notification(cinfo);
+}
+
+static void scmi_handle_response(struct scmi_chan_info *cinfo,
+				 u16 xfer_id, u8 msg_type)
+{
+	struct scmi_xfer *xfer;
+	struct device *dev = cinfo->dev;
+	struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
+	struct scmi_xfers_info *minfo = &info->tx_minfo;
 
 	/* Are we even expecting this? */
 	if (!test_bit(xfer_id, minfo->xfer_alloc_table)) {
@@ -248,6 +261,37 @@ void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr)
 		complete(&xfer->done);
 }
 
+/**
+ * scmi_rx_callback() - callback for receiving messages
+ *
+ * @cinfo: SCMI channel info
+ * @msg_hdr: Message header
+ *
+ * Processes one received message to appropriate transfer information and
+ * signals completion of the transfer.
+ *
+ * NOTE: This function will be invoked in IRQ context, hence should be
+ * as optimal as possible.
+ */
+void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr)
+{
+	u16 xfer_id = MSG_XTRACT_TOKEN(msg_hdr);
+	u8 msg_type = MSG_XTRACT_TYPE(msg_hdr);
+
+	switch (msg_type) {
+	case MSG_TYPE_NOTIFICATION:
+		scmi_handle_notification(cinfo, msg_hdr);
+		break;
+	case MSG_TYPE_COMMAND:
+	case MSG_TYPE_DELAYED_RESP:
+		scmi_handle_response(cinfo, xfer_id, msg_type);
+		break;
+	default:
+		WARN_ONCE(1, "received unknown msg_type:%d\n", msg_type);
+		break;
+	}
+}
+
 /**
  * scmi_xfer_put() - Release a transmit message
  *
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 04/13] firmware: arm_scmi: Add support for notifications message processing
@ 2020-03-04 16:25   ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: Jonathan.Cameron, cristian.marussi, james.quinlan, lukasz.luba,
	sudeep.holla

From: Sudeep Holla <sudeep.holla@arm.com>

Add the mechanisms to distinguish notifications from delayed responses and
to properly fetch notification messages upon reception: notifications
processing does not continue further after the fetch phase.

Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
[Reworked/renamed scmi_handle_xfer_delayed_resp()]
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
V1 --> V2
- switch the notif/delayed_resp message processing logic to use new
  transport independent layer methods
- reviewed logic of scmi_handle_xfer_delayed_resp() while renaming it as
  scmi_handle_response()
- properly relocated tracer points
---
 drivers/firmware/arm_scmi/driver.c | 84 +++++++++++++++++++++++-------
 1 file changed, 64 insertions(+), 20 deletions(-)

diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
index efb660c34b57..868cc36a07c9 100644
--- a/drivers/firmware/arm_scmi/driver.c
+++ b/drivers/firmware/arm_scmi/driver.c
@@ -202,29 +202,42 @@ __scmi_xfer_put(struct scmi_xfers_info *minfo, struct scmi_xfer *xfer)
 	spin_unlock_irqrestore(&minfo->xfer_lock, flags);
 }
 
-/**
- * scmi_rx_callback() - callback for receiving messages
- *
- * @cinfo: SCMI channel info
- * @msg_hdr: Message header
- *
- * Processes one received message to appropriate transfer information and
- * signals completion of the transfer.
- *
- * NOTE: This function will be invoked in IRQ context, hence should be
- * as optimal as possible.
- */
-void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr)
+static void scmi_handle_notification(struct scmi_chan_info *cinfo, u32 msg_hdr)
 {
-	struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
-	struct scmi_xfers_info *minfo = &info->tx_minfo;
-	u16 xfer_id = MSG_XTRACT_TOKEN(msg_hdr);
-	u8 msg_type = MSG_XTRACT_TYPE(msg_hdr);
-	struct device *dev = cinfo->dev;
 	struct scmi_xfer *xfer;
+	struct device *dev = cinfo->dev;
+	struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
+	struct scmi_xfers_info *minfo = &info->rx_minfo;
+
+	xfer = scmi_xfer_get(cinfo->handle, minfo);
+	if (IS_ERR(xfer)) {
+		dev_err(dev, "failed to get free message slot (%ld)\n",
+			PTR_ERR(xfer));
+		info->desc->ops->clear_notification(cinfo);
+		return;
+	}
+
+	unpack_scmi_header(msg_hdr, &xfer->hdr);
+	scmi_dump_header_dbg(dev, &xfer->hdr);
+	info->desc->ops->fetch_notification(cinfo, info->desc->max_msg_size,
+					    xfer);
+
+	trace_scmi_rx_done(xfer->transfer_id, xfer->hdr.id,
+			   xfer->hdr.protocol_id, xfer->hdr.seq,
+			   MSG_TYPE_NOTIFICATION);
 
-	if (msg_type == MSG_TYPE_NOTIFICATION)
-		return; /* Notifications not yet supported */
+	__scmi_xfer_put(minfo, xfer);
+
+	info->desc->ops->clear_notification(cinfo);
+}
+
+static void scmi_handle_response(struct scmi_chan_info *cinfo,
+				 u16 xfer_id, u8 msg_type)
+{
+	struct scmi_xfer *xfer;
+	struct device *dev = cinfo->dev;
+	struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
+	struct scmi_xfers_info *minfo = &info->tx_minfo;
 
 	/* Are we even expecting this? */
 	if (!test_bit(xfer_id, minfo->xfer_alloc_table)) {
@@ -248,6 +261,37 @@ void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr)
 		complete(&xfer->done);
 }
 
+/**
+ * scmi_rx_callback() - callback for receiving messages
+ *
+ * @cinfo: SCMI channel info
+ * @msg_hdr: Message header
+ *
+ * Processes one received message to appropriate transfer information and
+ * signals completion of the transfer.
+ *
+ * NOTE: This function will be invoked in IRQ context, hence should be
+ * as optimal as possible.
+ */
+void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr)
+{
+	u16 xfer_id = MSG_XTRACT_TOKEN(msg_hdr);
+	u8 msg_type = MSG_XTRACT_TYPE(msg_hdr);
+
+	switch (msg_type) {
+	case MSG_TYPE_NOTIFICATION:
+		scmi_handle_notification(cinfo, msg_hdr);
+		break;
+	case MSG_TYPE_COMMAND:
+	case MSG_TYPE_DELAYED_RESP:
+		scmi_handle_response(cinfo, xfer_id, msg_type);
+		break;
+	default:
+		WARN_ONCE(1, "received unknown msg_type:%d\n", msg_type);
+		break;
+	}
+}
+
 /**
  * scmi_xfer_put() - Release a transmit message
  *
-- 
2.17.1


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 05/13] firmware: arm_scmi: Add notification protocol-registration
  2020-03-04 16:25 ` Cristian Marussi
@ 2020-03-04 16:25   ` Cristian Marussi
  -1 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: sudeep.holla, lukasz.luba, james.quinlan, Jonathan.Cameron,
	cristian.marussi

Add core SCMI Notifications protocol-registration support: allow protocols
to register their own set of supported events, during their initialization
phase. Notification core can track multiple platform instances by their
handles.

Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
V3 --> V4
- removed scratch ISR buffer, move scratch BH buffer into protocol
  descriptor
- converted registered_protocols and registered_events from hashtables
  into bare fixed-sized arrays
- removed unregister protocols' routines (never called really)
V2 --> V3
- added scmi_notify_instance to track target platform instance
V1 --> V2
- splitted out of V1 patch 04
- moved from IDR maps to real HashTables to store events
- scmi_notifications_initialized is now an atomic_t
- reviewed protocol registration/unregistration to use devres
- fixed:
  drivers/firmware/arm_scmi/notify.c:483:18-23: ERROR:
  	reference preceded by free on line 482

Reported-by: kbuild test robot <lkp@intel.com>
Reported-by: Julia Lawall <julia.lawall@lip6.fr>
---
 drivers/firmware/arm_scmi/Makefile |   2 +-
 drivers/firmware/arm_scmi/common.h |   4 +
 drivers/firmware/arm_scmi/notify.c | 439 +++++++++++++++++++++++++++++
 drivers/firmware/arm_scmi/notify.h |  57 ++++
 include/linux/scmi_protocol.h      |   9 +
 5 files changed, 510 insertions(+), 1 deletion(-)
 create mode 100644 drivers/firmware/arm_scmi/notify.c
 create mode 100644 drivers/firmware/arm_scmi/notify.h

diff --git a/drivers/firmware/arm_scmi/Makefile b/drivers/firmware/arm_scmi/Makefile
index 6694d0d908d6..24a03a36aee4 100644
--- a/drivers/firmware/arm_scmi/Makefile
+++ b/drivers/firmware/arm_scmi/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0-only
 obj-y	= scmi-bus.o scmi-driver.o scmi-protocols.o scmi-transport.o
 scmi-bus-y = bus.o
-scmi-driver-y = driver.o
+scmi-driver-y = driver.o notify.o
 scmi-transport-y = mailbox.o shmem.o
 scmi-protocols-y = base.o clock.o perf.o power.o reset.o sensors.o
 obj-$(CONFIG_ARM_SCMI_POWER_DOMAIN) += scmi_pm_domain.o
diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
index 3c2e5d0d7b68..2106c35195ce 100644
--- a/drivers/firmware/arm_scmi/common.h
+++ b/drivers/firmware/arm_scmi/common.h
@@ -6,6 +6,8 @@
  *
  * Copyright (C) 2018 ARM Ltd.
  */
+#ifndef _SCMI_COMMON_H
+#define _SCMI_COMMON_H
 
 #include <linux/bitfield.h>
 #include <linux/completion.h>
@@ -232,3 +234,5 @@ void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem,
 void shmem_clear_notification(struct scmi_shared_mem __iomem *shmem);
 bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem,
 		     struct scmi_xfer *xfer);
+
+#endif /* _SCMI_COMMON_H */
diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
new file mode 100644
index 000000000000..31e49cb7d88e
--- /dev/null
+++ b/drivers/firmware/arm_scmi/notify.c
@@ -0,0 +1,439 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * System Control and Management Interface (SCMI) Notification support
+ *
+ * Copyright (C) 2020 ARM Ltd.
+ *
+ * SCMI Protocol specification allows the platform to signal events to
+ * interested agents via notification messages: this is an implementation
+ * of the dispatch and delivery of such notifications to the interested users
+ * inside the Linux kernel.
+ *
+ * An SCMI Notification core instance is initialized for each active platform
+ * instance identified by the means of the usual @scmi_handle.
+ *
+ * Each SCMI Protocol implementation, during its initialization, registers with
+ * this core its set of supported events using @scmi_register_protocol_events():
+ * all the needed descriptors are stored in the @registered_protocols and
+ * @registered_events arrays.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/atomic.h>
+#include <linux/bug.h>
+#include <linux/compiler.h>
+#include <linux/device.h>
+#include <linux/err.h>
+#include <linux/kernel.h>
+#include <linux/kfifo.h>
+#include <linux/mutex.h>
+#include <linux/refcount.h>
+#include <linux/scmi_protocol.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+
+#include "notify.h"
+
+#define	SCMI_MAX_PROTO			256
+#define	SCMI_ALL_SRC_IDS		0xffffUL
+/*
+ * Builds an unsigned 32bit key from the given input tuple to be used
+ * as a key in hashtables.
+ */
+#define MAKE_HASH_KEY(p, e, s)			\
+	((u32)(((p) << 24) | ((e) << 16) | ((s) & SCMI_ALL_SRC_IDS)))
+
+#define MAKE_ALL_SRCS_KEY(p, e)			\
+	MAKE_HASH_KEY((p), (e), SCMI_ALL_SRC_IDS)
+
+struct scmi_registered_protocol_events_desc;
+
+/**
+ * scmi_notify_instance  - Represents an instance of the notification core
+ *
+ * Each platform instance, represented by a handle, has its own instance of
+ * the notification subsystem represented by this structure.
+ *
+ * @gid: GroupID used for devres
+ * @handle: A reference to the platform instance
+ * @initialized: A flag that indicates if the core resources have been allocated
+ *		 and protocols are allowed to register their supported events
+ * @enabled: A flag to indicate events can be enabled and start flowing
+ * @registered_protocols: An statically allocated array containing pointers to
+ *			  all the registered protocol-level specific information
+ *			  related to events' handling
+ */
+struct scmi_notify_instance {
+	void						*gid;
+	struct scmi_handle				*handle;
+	atomic_t					initialized;
+	atomic_t					enabled;
+	struct scmi_registered_protocol_events_desc	**registered_protocols;
+};
+
+/**
+ * events_queue  - Describes a queue and its associated worker
+ *
+ * Each protocol has its own dedicated events_queue descriptor.
+ *
+ * @sz: Size in bytes of the related kfifo
+ * @qbuf: Pre-allocated buffer of @sz bytes to be used by the kfifo
+ * @kfifo: A dedicated Kernel kfifo descriptor
+ */
+struct events_queue {
+	size_t				sz;
+	u8				*qbuf;
+	struct kfifo			kfifo;
+};
+
+/**
+ * scmi_event_header  - A utility header
+ *
+ * This header is prepended to each received event message payload before
+ * queueing it on the related events_queue.
+ *
+ * @timestamp: The timestamp, in nanoseconds (boottime), which was associated
+ *	       to this event as soon as it entered the SCMI RX ISR
+ * @evt_id: Event ID (corresponds to the Event MsgID for this Protocol)
+ * @payld_sz: Effective size of the embedded message payload which follows
+ * @payld: A reference to the embedded event payload
+ */
+struct scmi_event_header {
+	u64	timestamp;
+	u8	evt_id;
+	size_t	payld_sz;
+	u8	payld[];
+} __packed;
+
+struct scmi_registered_event;
+
+/**
+ * scmi_registered_protocol_events_desc  - Protocol Specific information
+ *
+ * All protocols that registers at least one event have their protocol-specific
+ * information stored here, together with the embedded allocated events_queue.
+ * These descriptors are stored in the @registered_protocols array at protocol
+ * registration time.
+ *
+ * Once these descriptors are successfully registered, they are NEVER again
+ * removed or modified since protocols do not unregister ever, so that once we
+ * safely grab a NON-NULL reference from the array we can keep it and use it.
+ *
+ * @id: Protocol ID
+ * @ops: Protocol specific and event-related operations
+ * @equeue: The embedded per-protocol events_queue
+ * @ni: A reference to the initialized instance descriptor
+ * @eh: A reference to pre-allocated buffer to be used as a scratch area by the
+ *	deferred worker when fetching data from the kfifo
+ * @eh_sz: Size of the pre-allocated buffer @eh
+ * @in_flight: A reference to an in flight @scmi_registered_event
+ * @num_events: Number of events in @registered_events
+ * @registered_events: A dynamically allocated array holding all the registered
+ *		       events' descriptors, whose fixed-size is determined at
+ *		       compile time.
+ */
+struct scmi_registered_protocol_events_desc {
+	u8					id;
+	const struct scmi_protocol_event_ops	*ops;
+	struct events_queue			equeue;
+	struct scmi_notify_instance		*ni;
+	struct scmi_event_header		*eh;
+	size_t					eh_sz;
+	void					*in_flight;
+	int					num_events;
+	struct scmi_registered_event		**registered_events;
+};
+
+/**
+ * scmi_registered_event  - Event Specific Information
+ *
+ * All registered events are represented by one of these structures that are
+ * stored in the @registered_events array at protocol registration time.
+ *
+ * Once these descriptors are successfully registered, they are NEVER again
+ * removed or modified since protocols do not unregister ever, so that once we
+ * safely grab a NON-NULL reference from the table we can keep it and use it.
+ *
+ * @proto: A reference to the associated protocol descriptor
+ * @evt: A reference to the associated event descriptor (as provided at
+ *       registration time)
+ * @report: A pre-allocated buffer used by the deferred worker to fill a
+ *	    customized event report
+ * @num_sources: The number of possible sources for this event as stated at
+ *		 events' registration time
+ * @sources: A reference to a dynamically allocated array used to refcount the
+ *	     events' enable requests for all the existing sources
+ * @sources_mtx: A mutex to serialize the access to @sources
+ */
+struct scmi_registered_event {
+	struct scmi_registered_protocol_events_desc	*proto;
+	const struct scmi_event				*evt;
+	void						*report;
+	u32						num_sources;
+	refcount_t					*sources;
+	struct mutex					sources_mtx;
+};
+
+/**
+ * scmi_initialize_events_queue  - Allocate/Initialize a kfifo buffer
+ *
+ * Allocate a buffer for the kfifo and initialize it.
+ *
+ * @ni: A reference to the notification instance to use
+ * @equeue: The events_queue to initialize
+ * @sz: Size of the kfifo buffer to allocate
+ *
+ * Return: 0 on Success
+ */
+static int scmi_initialize_events_queue(struct scmi_notify_instance *ni,
+					struct events_queue *equeue, size_t sz)
+{
+	equeue->qbuf = devm_kzalloc(ni->handle->dev, sz, GFP_KERNEL);
+	if (!equeue->qbuf)
+		return -ENOMEM;
+	equeue->sz = sz;
+
+	return kfifo_init(&equeue->kfifo, equeue->qbuf, equeue->sz);
+}
+
+/**
+ * scmi_allocate_registered_protocol_desc  - Allocate a registered protocol
+ * events' descriptor
+ *
+ * It is supposed to be called only once for each protocol at protocol
+ * initialization time, so it warns if the requested protocol is found
+ * already registered.
+ *
+ * @ni: A reference to the notification instance to use
+ * @proto_id: Protocol ID
+ * @queue_sz: Size of the associated queue to allocate
+ * @eh_sz: Size of the event header scratch area to pre-allocate
+ * @num_events: Number of events to support (size of @registered_events)
+ * @ops: Pointer to a struct holding references to protocol specific helpers
+ *	 needed during events handling
+ *
+ * Returns the allocated and registered descriptor on Success
+ */
+static struct scmi_registered_protocol_events_desc *
+scmi_allocate_registered_protocol_desc(struct scmi_notify_instance *ni,
+				       u8 proto_id, size_t queue_sz,
+				       size_t eh_sz, int num_events,
+				const struct scmi_protocol_event_ops *ops)
+{
+	int ret;
+	struct scmi_registered_protocol_events_desc *pd;
+
+	pd = READ_ONCE(ni->registered_protocols[proto_id]);
+	if (pd) {
+		WARN_ON(1);
+		return ERR_PTR(-EINVAL);
+	}
+
+	pd = devm_kzalloc(ni->handle->dev, sizeof(*pd), GFP_KERNEL);
+	if (!pd)
+		return ERR_PTR(-ENOMEM);
+	pd->id = proto_id;
+	pd->ops = ops;
+	pd->ni = ni;
+
+	ret = scmi_initialize_events_queue(ni, &pd->equeue, queue_sz);
+	if (ret)
+		return ERR_PTR(ret);
+
+	pd->eh = devm_kzalloc(ni->handle->dev, eh_sz, GFP_KERNEL);
+	if (!pd->eh)
+		return ERR_PTR(-ENOMEM);
+	pd->eh_sz = eh_sz;
+
+	pd->registered_events = devm_kcalloc(ni->handle->dev, num_events,
+					     sizeof(char *), GFP_KERNEL);
+	if (!pd->registered_events)
+		return ERR_PTR(-ENOMEM);
+	pd->num_events = num_events;
+
+	return pd;
+}
+
+/**
+ * scmi_register_protocol_events  - Register Protocol Events with the core
+ *
+ * Used by SCMI Protocols initialization code to register with the notification
+ * core the list of supported events and their descriptors: takes care to
+ * pre-allocate and store all needed descriptors, scratch buffers and event
+ * queues.
+ *
+ * @handle: The handle identifying the platform instance against which the
+ *	    the protocol's events are registered
+ * @proto_id: Protocol ID
+ * @queue_sz: Size in bytes of the associated queue to be allocated
+ * @ops: Protocol specific event-related operations
+ * @evt: Event descriptor array
+ * @num_events: Number of events in @evt array
+ * @num_sources: Number of possible sources for this protocol on this
+ *		 platform.
+ *
+ * Return: 0 on Success
+ */
+int scmi_register_protocol_events(const struct scmi_handle *handle,
+				  u8 proto_id, size_t queue_sz,
+				  const struct scmi_protocol_event_ops *ops,
+				  const struct scmi_event *evt, int num_events,
+				  int num_sources)
+{
+	int i;
+	size_t payld_sz = 0;
+	struct scmi_registered_protocol_events_desc *pd;
+	struct scmi_notify_instance *ni = handle->notify_priv;
+
+	if (!ops || !evt || proto_id >= SCMI_MAX_PROTO)
+		return -EINVAL;
+
+	/* Ensure atomic value is updated */
+	smp_mb__before_atomic();
+	if (unlikely(!ni || !atomic_read(&ni->initialized)))
+		return -EAGAIN;
+
+	/* Attach to the notification main devres group */
+	if (!devres_open_group(ni->handle->dev, ni->gid, GFP_KERNEL))
+		return -ENOMEM;
+
+	for (i = 0; i < num_events; i++)
+		payld_sz = max_t(size_t, payld_sz, evt[i].max_payld_sz);
+	pd = scmi_allocate_registered_protocol_desc(ni, proto_id, queue_sz,
+				    sizeof(struct scmi_event_header) + payld_sz,
+						    num_events, ops);
+	if (IS_ERR(pd))
+		goto err;
+
+	for (i = 0; i < num_events; i++, evt++) {
+		struct scmi_registered_event *r_evt;
+
+		r_evt = devm_kzalloc(ni->handle->dev, sizeof(*r_evt),
+				     GFP_KERNEL);
+		if (!r_evt)
+			goto err;
+		r_evt->proto = pd;
+		r_evt->evt = evt;
+
+		r_evt->sources = devm_kcalloc(ni->handle->dev, num_sources,
+					      sizeof(refcount_t), GFP_KERNEL);
+		if (!r_evt->sources)
+			goto err;
+		r_evt->num_sources = num_sources;
+		mutex_init(&r_evt->sources_mtx);
+
+		r_evt->report = devm_kzalloc(ni->handle->dev,
+					     evt->max_report_sz, GFP_KERNEL);
+		if (!r_evt->report)
+			goto err;
+
+		WRITE_ONCE(pd->registered_events[i], r_evt);
+		pr_info("SCMI Notifications: registered event - %X\n",
+			MAKE_ALL_SRCS_KEY(r_evt->proto->id, r_evt->evt->id));
+	}
+
+	/* Register protocol and events...it will never be removed */
+	WRITE_ONCE(ni->registered_protocols[proto_id], pd);
+
+	devres_close_group(ni->handle->dev, ni->gid);
+
+	return 0;
+
+err:
+	pr_warn("SCMI Notifications - Proto:%X - Registration Failed !\n",
+		proto_id);
+	/* A failing protocol registration does not trigger full failure */
+	devres_close_group(ni->handle->dev, ni->gid);
+
+	return -ENOMEM;
+}
+
+/**
+ * scmi_notification_init  - Initializes Notification Core Support
+ *
+ * This function lays out all the basic resources needed by the notification
+ * core instance identified by the provided handle: once done, all of the
+ * SCMI Protocols can register their events with the core during their own
+ * initializations.
+ *
+ * Note that failing to initialize the core notifications support does not
+ * cause the whole SCMI Protocols stack to fail its initialization.
+ *
+ * SCMI Notification Initialization happens in 2 steps:
+ *
+ *  - initialization: basic common allocations (this function) -> .initialized
+ *  - registration: protocols asynchronously come into life and registers their
+ *		    own supported list of events with the core; this causes
+ *		    further per-protocol allocations.
+ *
+ * Any user's callback registration attempt, referring a still not registered
+ * event, will be registered as pending and finalized later (if possible)
+ * by @scmi_protocols_late_init work.
+ * This allows for lazy initialization of SCMI Protocols due to late (or
+ * missing) SCMI drivers' modules loading.
+ *
+ * @handle: The handle identifying the platform instance to initialize
+ *
+ * Return: 0 on Success
+ */
+int scmi_notification_init(struct scmi_handle *handle)
+{
+	void *gid;
+	struct scmi_notify_instance *ni;
+
+	gid = devres_open_group(handle->dev, NULL, GFP_KERNEL);
+	if (!gid)
+		return -ENOMEM;
+
+	ni = devm_kzalloc(handle->dev, sizeof(*ni), GFP_KERNEL);
+	if (!ni)
+		goto err;
+
+	ni->gid = gid;
+	ni->handle = handle;
+
+	ni->registered_protocols = devm_kcalloc(handle->dev, SCMI_MAX_PROTO,
+						sizeof(char *), GFP_KERNEL);
+	if (!ni->registered_protocols)
+		goto err;
+
+	handle->notify_priv = ni;
+
+	atomic_set(&ni->initialized, 1);
+	atomic_set(&ni->enabled, 1);
+	/* Ensure atomic values are updated */
+	smp_mb__after_atomic();
+
+	pr_info("SCMI Notifications Core Initialized.\n");
+
+	devres_close_group(handle->dev, ni->gid);
+
+	return 0;
+
+err:
+	pr_warn("SCMI Notifications - Initialization Failed.\n");
+	devres_release_group(handle->dev, NULL);
+	return -ENOMEM;
+}
+
+/**
+ * scmi_notification_exit  - Shutdown and clean Notification core
+ *
+ * @handle: The handle identifying the platform instance to shutdown
+ */
+void scmi_notification_exit(struct scmi_handle *handle)
+{
+	struct scmi_notify_instance *ni = handle->notify_priv;
+
+	if (unlikely(!ni || !atomic_read(&ni->initialized)))
+		return;
+
+	atomic_set(&ni->enabled, 0);
+	/* Ensure atomic values are updated */
+	smp_mb__after_atomic();
+
+	devres_release_group(ni->handle->dev, ni->gid);
+
+	pr_info("SCMI Notifications Core Shutdown.\n");
+}
diff --git a/drivers/firmware/arm_scmi/notify.h b/drivers/firmware/arm_scmi/notify.h
new file mode 100644
index 000000000000..a7ece64e8842
--- /dev/null
+++ b/drivers/firmware/arm_scmi/notify.h
@@ -0,0 +1,57 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * System Control and Management Interface (SCMI) Message Protocol
+ * notification header file containing some definitions, structures
+ * and function prototypes related to SCMI Notification handling.
+ *
+ * Copyright (C) 2019 ARM Ltd.
+ */
+#ifndef _SCMI_NOTIFY_H
+#define _SCMI_NOTIFY_H
+
+#include <linux/device.h>
+#include <linux/types.h>
+
+/**
+ * scmi_event  - Describes an event to be supported
+ *
+ * Each SCMI protocol, during its initialization phase, can describe the events
+ * it wishes to support in a few struct scmi_event and pass them to the core
+ * using scmi_register_protocol_events().
+ *
+ * @id: Event ID
+ * @max_payld_sz: Max possible size for the payload of a notif msg of this kind
+ * @max_report_sz: Max possible size for the report of a notif msg of this kind
+ */
+struct scmi_event {
+	u8	id;
+	size_t	max_payld_sz;
+	size_t	max_report_sz;
+
+};
+
+/**
+ * scmi_protocol_event_ops  - Helpers called by notification core.
+ *
+ * These are called only in process context.
+ *
+ * @set_notify_enabled: Enable/disable the required evt_id/src_id notifications
+ *			using the proper custom protocol commands.
+ *			Return true if at least one the required src_id
+ *			has been successfully enabled/disabled
+ */
+struct scmi_protocol_event_ops {
+	bool (*set_notify_enabled)(const struct scmi_handle *handle,
+				   u8 evt_id, u32 src_id, bool enabled);
+};
+
+int scmi_notification_init(struct scmi_handle *handle);
+void scmi_notification_exit(struct scmi_handle *handle);
+
+int scmi_register_protocol_events(const struct scmi_handle *handle,
+				  u8 proto_id, size_t queue_sz,
+				  const struct scmi_protocol_event_ops *ops,
+				  const struct scmi_event *evt, int num_events,
+				  int num_sources);
+
+#endif /* _SCMI_NOTIFY_H */
diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
index 5c873a59b387..0679f10ab05e 100644
--- a/include/linux/scmi_protocol.h
+++ b/include/linux/scmi_protocol.h
@@ -4,6 +4,10 @@
  *
  * Copyright (C) 2018 ARM Ltd.
  */
+
+#ifndef _LINUX_SCMI_PROTOCOL_H
+#define _LINUX_SCMI_PROTOCOL_H
+
 #include <linux/device.h>
 #include <linux/types.h>
 
@@ -227,6 +231,8 @@ struct scmi_reset_ops {
  *	protocol(for internal use only)
  * @reset_priv: pointer to private data structure specific to reset
  *	protocol(for internal use only)
+ * @notify_priv: pointer to private data structure specific to notifications
+ *	(for internal use only)
  */
 struct scmi_handle {
 	struct device *dev;
@@ -242,6 +248,7 @@ struct scmi_handle {
 	void *power_priv;
 	void *sensor_priv;
 	void *reset_priv;
+	void *notify_priv;
 };
 
 enum scmi_std_protocol {
@@ -319,3 +326,5 @@ static inline void scmi_driver_unregister(struct scmi_driver *driver) {}
 typedef int (*scmi_prot_init_fn_t)(struct scmi_handle *);
 int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn);
 void scmi_protocol_unregister(int protocol_id);
+
+#endif /* _LINUX_SCMI_PROTOCOL_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 05/13] firmware: arm_scmi: Add notification protocol-registration
@ 2020-03-04 16:25   ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: Jonathan.Cameron, cristian.marussi, james.quinlan, lukasz.luba,
	sudeep.holla

Add core SCMI Notifications protocol-registration support: allow protocols
to register their own set of supported events, during their initialization
phase. Notification core can track multiple platform instances by their
handles.

Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
V3 --> V4
- removed scratch ISR buffer, move scratch BH buffer into protocol
  descriptor
- converted registered_protocols and registered_events from hashtables
  into bare fixed-sized arrays
- removed unregister protocols' routines (never called really)
V2 --> V3
- added scmi_notify_instance to track target platform instance
V1 --> V2
- splitted out of V1 patch 04
- moved from IDR maps to real HashTables to store events
- scmi_notifications_initialized is now an atomic_t
- reviewed protocol registration/unregistration to use devres
- fixed:
  drivers/firmware/arm_scmi/notify.c:483:18-23: ERROR:
  	reference preceded by free on line 482

Reported-by: kbuild test robot <lkp@intel.com>
Reported-by: Julia Lawall <julia.lawall@lip6.fr>
---
 drivers/firmware/arm_scmi/Makefile |   2 +-
 drivers/firmware/arm_scmi/common.h |   4 +
 drivers/firmware/arm_scmi/notify.c | 439 +++++++++++++++++++++++++++++
 drivers/firmware/arm_scmi/notify.h |  57 ++++
 include/linux/scmi_protocol.h      |   9 +
 5 files changed, 510 insertions(+), 1 deletion(-)
 create mode 100644 drivers/firmware/arm_scmi/notify.c
 create mode 100644 drivers/firmware/arm_scmi/notify.h

diff --git a/drivers/firmware/arm_scmi/Makefile b/drivers/firmware/arm_scmi/Makefile
index 6694d0d908d6..24a03a36aee4 100644
--- a/drivers/firmware/arm_scmi/Makefile
+++ b/drivers/firmware/arm_scmi/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0-only
 obj-y	= scmi-bus.o scmi-driver.o scmi-protocols.o scmi-transport.o
 scmi-bus-y = bus.o
-scmi-driver-y = driver.o
+scmi-driver-y = driver.o notify.o
 scmi-transport-y = mailbox.o shmem.o
 scmi-protocols-y = base.o clock.o perf.o power.o reset.o sensors.o
 obj-$(CONFIG_ARM_SCMI_POWER_DOMAIN) += scmi_pm_domain.o
diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
index 3c2e5d0d7b68..2106c35195ce 100644
--- a/drivers/firmware/arm_scmi/common.h
+++ b/drivers/firmware/arm_scmi/common.h
@@ -6,6 +6,8 @@
  *
  * Copyright (C) 2018 ARM Ltd.
  */
+#ifndef _SCMI_COMMON_H
+#define _SCMI_COMMON_H
 
 #include <linux/bitfield.h>
 #include <linux/completion.h>
@@ -232,3 +234,5 @@ void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem,
 void shmem_clear_notification(struct scmi_shared_mem __iomem *shmem);
 bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem,
 		     struct scmi_xfer *xfer);
+
+#endif /* _SCMI_COMMON_H */
diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
new file mode 100644
index 000000000000..31e49cb7d88e
--- /dev/null
+++ b/drivers/firmware/arm_scmi/notify.c
@@ -0,0 +1,439 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * System Control and Management Interface (SCMI) Notification support
+ *
+ * Copyright (C) 2020 ARM Ltd.
+ *
+ * SCMI Protocol specification allows the platform to signal events to
+ * interested agents via notification messages: this is an implementation
+ * of the dispatch and delivery of such notifications to the interested users
+ * inside the Linux kernel.
+ *
+ * An SCMI Notification core instance is initialized for each active platform
+ * instance identified by the means of the usual @scmi_handle.
+ *
+ * Each SCMI Protocol implementation, during its initialization, registers with
+ * this core its set of supported events using @scmi_register_protocol_events():
+ * all the needed descriptors are stored in the @registered_protocols and
+ * @registered_events arrays.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/atomic.h>
+#include <linux/bug.h>
+#include <linux/compiler.h>
+#include <linux/device.h>
+#include <linux/err.h>
+#include <linux/kernel.h>
+#include <linux/kfifo.h>
+#include <linux/mutex.h>
+#include <linux/refcount.h>
+#include <linux/scmi_protocol.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+
+#include "notify.h"
+
+#define	SCMI_MAX_PROTO			256
+#define	SCMI_ALL_SRC_IDS		0xffffUL
+/*
+ * Builds an unsigned 32bit key from the given input tuple to be used
+ * as a key in hashtables.
+ */
+#define MAKE_HASH_KEY(p, e, s)			\
+	((u32)(((p) << 24) | ((e) << 16) | ((s) & SCMI_ALL_SRC_IDS)))
+
+#define MAKE_ALL_SRCS_KEY(p, e)			\
+	MAKE_HASH_KEY((p), (e), SCMI_ALL_SRC_IDS)
+
+struct scmi_registered_protocol_events_desc;
+
+/**
+ * scmi_notify_instance  - Represents an instance of the notification core
+ *
+ * Each platform instance, represented by a handle, has its own instance of
+ * the notification subsystem represented by this structure.
+ *
+ * @gid: GroupID used for devres
+ * @handle: A reference to the platform instance
+ * @initialized: A flag that indicates if the core resources have been allocated
+ *		 and protocols are allowed to register their supported events
+ * @enabled: A flag to indicate events can be enabled and start flowing
+ * @registered_protocols: An statically allocated array containing pointers to
+ *			  all the registered protocol-level specific information
+ *			  related to events' handling
+ */
+struct scmi_notify_instance {
+	void						*gid;
+	struct scmi_handle				*handle;
+	atomic_t					initialized;
+	atomic_t					enabled;
+	struct scmi_registered_protocol_events_desc	**registered_protocols;
+};
+
+/**
+ * events_queue  - Describes a queue and its associated worker
+ *
+ * Each protocol has its own dedicated events_queue descriptor.
+ *
+ * @sz: Size in bytes of the related kfifo
+ * @qbuf: Pre-allocated buffer of @sz bytes to be used by the kfifo
+ * @kfifo: A dedicated Kernel kfifo descriptor
+ */
+struct events_queue {
+	size_t				sz;
+	u8				*qbuf;
+	struct kfifo			kfifo;
+};
+
+/**
+ * scmi_event_header  - A utility header
+ *
+ * This header is prepended to each received event message payload before
+ * queueing it on the related events_queue.
+ *
+ * @timestamp: The timestamp, in nanoseconds (boottime), which was associated
+ *	       to this event as soon as it entered the SCMI RX ISR
+ * @evt_id: Event ID (corresponds to the Event MsgID for this Protocol)
+ * @payld_sz: Effective size of the embedded message payload which follows
+ * @payld: A reference to the embedded event payload
+ */
+struct scmi_event_header {
+	u64	timestamp;
+	u8	evt_id;
+	size_t	payld_sz;
+	u8	payld[];
+} __packed;
+
+struct scmi_registered_event;
+
+/**
+ * scmi_registered_protocol_events_desc  - Protocol Specific information
+ *
+ * All protocols that registers at least one event have their protocol-specific
+ * information stored here, together with the embedded allocated events_queue.
+ * These descriptors are stored in the @registered_protocols array at protocol
+ * registration time.
+ *
+ * Once these descriptors are successfully registered, they are NEVER again
+ * removed or modified since protocols do not unregister ever, so that once we
+ * safely grab a NON-NULL reference from the array we can keep it and use it.
+ *
+ * @id: Protocol ID
+ * @ops: Protocol specific and event-related operations
+ * @equeue: The embedded per-protocol events_queue
+ * @ni: A reference to the initialized instance descriptor
+ * @eh: A reference to pre-allocated buffer to be used as a scratch area by the
+ *	deferred worker when fetching data from the kfifo
+ * @eh_sz: Size of the pre-allocated buffer @eh
+ * @in_flight: A reference to an in flight @scmi_registered_event
+ * @num_events: Number of events in @registered_events
+ * @registered_events: A dynamically allocated array holding all the registered
+ *		       events' descriptors, whose fixed-size is determined at
+ *		       compile time.
+ */
+struct scmi_registered_protocol_events_desc {
+	u8					id;
+	const struct scmi_protocol_event_ops	*ops;
+	struct events_queue			equeue;
+	struct scmi_notify_instance		*ni;
+	struct scmi_event_header		*eh;
+	size_t					eh_sz;
+	void					*in_flight;
+	int					num_events;
+	struct scmi_registered_event		**registered_events;
+};
+
+/**
+ * scmi_registered_event  - Event Specific Information
+ *
+ * All registered events are represented by one of these structures that are
+ * stored in the @registered_events array at protocol registration time.
+ *
+ * Once these descriptors are successfully registered, they are NEVER again
+ * removed or modified since protocols do not unregister ever, so that once we
+ * safely grab a NON-NULL reference from the table we can keep it and use it.
+ *
+ * @proto: A reference to the associated protocol descriptor
+ * @evt: A reference to the associated event descriptor (as provided at
+ *       registration time)
+ * @report: A pre-allocated buffer used by the deferred worker to fill a
+ *	    customized event report
+ * @num_sources: The number of possible sources for this event as stated at
+ *		 events' registration time
+ * @sources: A reference to a dynamically allocated array used to refcount the
+ *	     events' enable requests for all the existing sources
+ * @sources_mtx: A mutex to serialize the access to @sources
+ */
+struct scmi_registered_event {
+	struct scmi_registered_protocol_events_desc	*proto;
+	const struct scmi_event				*evt;
+	void						*report;
+	u32						num_sources;
+	refcount_t					*sources;
+	struct mutex					sources_mtx;
+};
+
+/**
+ * scmi_initialize_events_queue  - Allocate/Initialize a kfifo buffer
+ *
+ * Allocate a buffer for the kfifo and initialize it.
+ *
+ * @ni: A reference to the notification instance to use
+ * @equeue: The events_queue to initialize
+ * @sz: Size of the kfifo buffer to allocate
+ *
+ * Return: 0 on Success
+ */
+static int scmi_initialize_events_queue(struct scmi_notify_instance *ni,
+					struct events_queue *equeue, size_t sz)
+{
+	equeue->qbuf = devm_kzalloc(ni->handle->dev, sz, GFP_KERNEL);
+	if (!equeue->qbuf)
+		return -ENOMEM;
+	equeue->sz = sz;
+
+	return kfifo_init(&equeue->kfifo, equeue->qbuf, equeue->sz);
+}
+
+/**
+ * scmi_allocate_registered_protocol_desc  - Allocate a registered protocol
+ * events' descriptor
+ *
+ * It is supposed to be called only once for each protocol at protocol
+ * initialization time, so it warns if the requested protocol is found
+ * already registered.
+ *
+ * @ni: A reference to the notification instance to use
+ * @proto_id: Protocol ID
+ * @queue_sz: Size of the associated queue to allocate
+ * @eh_sz: Size of the event header scratch area to pre-allocate
+ * @num_events: Number of events to support (size of @registered_events)
+ * @ops: Pointer to a struct holding references to protocol specific helpers
+ *	 needed during events handling
+ *
+ * Returns the allocated and registered descriptor on Success
+ */
+static struct scmi_registered_protocol_events_desc *
+scmi_allocate_registered_protocol_desc(struct scmi_notify_instance *ni,
+				       u8 proto_id, size_t queue_sz,
+				       size_t eh_sz, int num_events,
+				const struct scmi_protocol_event_ops *ops)
+{
+	int ret;
+	struct scmi_registered_protocol_events_desc *pd;
+
+	pd = READ_ONCE(ni->registered_protocols[proto_id]);
+	if (pd) {
+		WARN_ON(1);
+		return ERR_PTR(-EINVAL);
+	}
+
+	pd = devm_kzalloc(ni->handle->dev, sizeof(*pd), GFP_KERNEL);
+	if (!pd)
+		return ERR_PTR(-ENOMEM);
+	pd->id = proto_id;
+	pd->ops = ops;
+	pd->ni = ni;
+
+	ret = scmi_initialize_events_queue(ni, &pd->equeue, queue_sz);
+	if (ret)
+		return ERR_PTR(ret);
+
+	pd->eh = devm_kzalloc(ni->handle->dev, eh_sz, GFP_KERNEL);
+	if (!pd->eh)
+		return ERR_PTR(-ENOMEM);
+	pd->eh_sz = eh_sz;
+
+	pd->registered_events = devm_kcalloc(ni->handle->dev, num_events,
+					     sizeof(char *), GFP_KERNEL);
+	if (!pd->registered_events)
+		return ERR_PTR(-ENOMEM);
+	pd->num_events = num_events;
+
+	return pd;
+}
+
+/**
+ * scmi_register_protocol_events  - Register Protocol Events with the core
+ *
+ * Used by SCMI Protocols initialization code to register with the notification
+ * core the list of supported events and their descriptors: takes care to
+ * pre-allocate and store all needed descriptors, scratch buffers and event
+ * queues.
+ *
+ * @handle: The handle identifying the platform instance against which the
+ *	    the protocol's events are registered
+ * @proto_id: Protocol ID
+ * @queue_sz: Size in bytes of the associated queue to be allocated
+ * @ops: Protocol specific event-related operations
+ * @evt: Event descriptor array
+ * @num_events: Number of events in @evt array
+ * @num_sources: Number of possible sources for this protocol on this
+ *		 platform.
+ *
+ * Return: 0 on Success
+ */
+int scmi_register_protocol_events(const struct scmi_handle *handle,
+				  u8 proto_id, size_t queue_sz,
+				  const struct scmi_protocol_event_ops *ops,
+				  const struct scmi_event *evt, int num_events,
+				  int num_sources)
+{
+	int i;
+	size_t payld_sz = 0;
+	struct scmi_registered_protocol_events_desc *pd;
+	struct scmi_notify_instance *ni = handle->notify_priv;
+
+	if (!ops || !evt || proto_id >= SCMI_MAX_PROTO)
+		return -EINVAL;
+
+	/* Ensure atomic value is updated */
+	smp_mb__before_atomic();
+	if (unlikely(!ni || !atomic_read(&ni->initialized)))
+		return -EAGAIN;
+
+	/* Attach to the notification main devres group */
+	if (!devres_open_group(ni->handle->dev, ni->gid, GFP_KERNEL))
+		return -ENOMEM;
+
+	for (i = 0; i < num_events; i++)
+		payld_sz = max_t(size_t, payld_sz, evt[i].max_payld_sz);
+	pd = scmi_allocate_registered_protocol_desc(ni, proto_id, queue_sz,
+				    sizeof(struct scmi_event_header) + payld_sz,
+						    num_events, ops);
+	if (IS_ERR(pd))
+		goto err;
+
+	for (i = 0; i < num_events; i++, evt++) {
+		struct scmi_registered_event *r_evt;
+
+		r_evt = devm_kzalloc(ni->handle->dev, sizeof(*r_evt),
+				     GFP_KERNEL);
+		if (!r_evt)
+			goto err;
+		r_evt->proto = pd;
+		r_evt->evt = evt;
+
+		r_evt->sources = devm_kcalloc(ni->handle->dev, num_sources,
+					      sizeof(refcount_t), GFP_KERNEL);
+		if (!r_evt->sources)
+			goto err;
+		r_evt->num_sources = num_sources;
+		mutex_init(&r_evt->sources_mtx);
+
+		r_evt->report = devm_kzalloc(ni->handle->dev,
+					     evt->max_report_sz, GFP_KERNEL);
+		if (!r_evt->report)
+			goto err;
+
+		WRITE_ONCE(pd->registered_events[i], r_evt);
+		pr_info("SCMI Notifications: registered event - %X\n",
+			MAKE_ALL_SRCS_KEY(r_evt->proto->id, r_evt->evt->id));
+	}
+
+	/* Register protocol and events...it will never be removed */
+	WRITE_ONCE(ni->registered_protocols[proto_id], pd);
+
+	devres_close_group(ni->handle->dev, ni->gid);
+
+	return 0;
+
+err:
+	pr_warn("SCMI Notifications - Proto:%X - Registration Failed !\n",
+		proto_id);
+	/* A failing protocol registration does not trigger full failure */
+	devres_close_group(ni->handle->dev, ni->gid);
+
+	return -ENOMEM;
+}
+
+/**
+ * scmi_notification_init  - Initializes Notification Core Support
+ *
+ * This function lays out all the basic resources needed by the notification
+ * core instance identified by the provided handle: once done, all of the
+ * SCMI Protocols can register their events with the core during their own
+ * initializations.
+ *
+ * Note that failing to initialize the core notifications support does not
+ * cause the whole SCMI Protocols stack to fail its initialization.
+ *
+ * SCMI Notification Initialization happens in 2 steps:
+ *
+ *  - initialization: basic common allocations (this function) -> .initialized
+ *  - registration: protocols asynchronously come into life and registers their
+ *		    own supported list of events with the core; this causes
+ *		    further per-protocol allocations.
+ *
+ * Any user's callback registration attempt, referring a still not registered
+ * event, will be registered as pending and finalized later (if possible)
+ * by @scmi_protocols_late_init work.
+ * This allows for lazy initialization of SCMI Protocols due to late (or
+ * missing) SCMI drivers' modules loading.
+ *
+ * @handle: The handle identifying the platform instance to initialize
+ *
+ * Return: 0 on Success
+ */
+int scmi_notification_init(struct scmi_handle *handle)
+{
+	void *gid;
+	struct scmi_notify_instance *ni;
+
+	gid = devres_open_group(handle->dev, NULL, GFP_KERNEL);
+	if (!gid)
+		return -ENOMEM;
+
+	ni = devm_kzalloc(handle->dev, sizeof(*ni), GFP_KERNEL);
+	if (!ni)
+		goto err;
+
+	ni->gid = gid;
+	ni->handle = handle;
+
+	ni->registered_protocols = devm_kcalloc(handle->dev, SCMI_MAX_PROTO,
+						sizeof(char *), GFP_KERNEL);
+	if (!ni->registered_protocols)
+		goto err;
+
+	handle->notify_priv = ni;
+
+	atomic_set(&ni->initialized, 1);
+	atomic_set(&ni->enabled, 1);
+	/* Ensure atomic values are updated */
+	smp_mb__after_atomic();
+
+	pr_info("SCMI Notifications Core Initialized.\n");
+
+	devres_close_group(handle->dev, ni->gid);
+
+	return 0;
+
+err:
+	pr_warn("SCMI Notifications - Initialization Failed.\n");
+	devres_release_group(handle->dev, NULL);
+	return -ENOMEM;
+}
+
+/**
+ * scmi_notification_exit  - Shutdown and clean Notification core
+ *
+ * @handle: The handle identifying the platform instance to shutdown
+ */
+void scmi_notification_exit(struct scmi_handle *handle)
+{
+	struct scmi_notify_instance *ni = handle->notify_priv;
+
+	if (unlikely(!ni || !atomic_read(&ni->initialized)))
+		return;
+
+	atomic_set(&ni->enabled, 0);
+	/* Ensure atomic values are updated */
+	smp_mb__after_atomic();
+
+	devres_release_group(ni->handle->dev, ni->gid);
+
+	pr_info("SCMI Notifications Core Shutdown.\n");
+}
diff --git a/drivers/firmware/arm_scmi/notify.h b/drivers/firmware/arm_scmi/notify.h
new file mode 100644
index 000000000000..a7ece64e8842
--- /dev/null
+++ b/drivers/firmware/arm_scmi/notify.h
@@ -0,0 +1,57 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * System Control and Management Interface (SCMI) Message Protocol
+ * notification header file containing some definitions, structures
+ * and function prototypes related to SCMI Notification handling.
+ *
+ * Copyright (C) 2019 ARM Ltd.
+ */
+#ifndef _SCMI_NOTIFY_H
+#define _SCMI_NOTIFY_H
+
+#include <linux/device.h>
+#include <linux/types.h>
+
+/**
+ * scmi_event  - Describes an event to be supported
+ *
+ * Each SCMI protocol, during its initialization phase, can describe the events
+ * it wishes to support in a few struct scmi_event and pass them to the core
+ * using scmi_register_protocol_events().
+ *
+ * @id: Event ID
+ * @max_payld_sz: Max possible size for the payload of a notif msg of this kind
+ * @max_report_sz: Max possible size for the report of a notif msg of this kind
+ */
+struct scmi_event {
+	u8	id;
+	size_t	max_payld_sz;
+	size_t	max_report_sz;
+
+};
+
+/**
+ * scmi_protocol_event_ops  - Helpers called by notification core.
+ *
+ * These are called only in process context.
+ *
+ * @set_notify_enabled: Enable/disable the required evt_id/src_id notifications
+ *			using the proper custom protocol commands.
+ *			Return true if at least one the required src_id
+ *			has been successfully enabled/disabled
+ */
+struct scmi_protocol_event_ops {
+	bool (*set_notify_enabled)(const struct scmi_handle *handle,
+				   u8 evt_id, u32 src_id, bool enabled);
+};
+
+int scmi_notification_init(struct scmi_handle *handle);
+void scmi_notification_exit(struct scmi_handle *handle);
+
+int scmi_register_protocol_events(const struct scmi_handle *handle,
+				  u8 proto_id, size_t queue_sz,
+				  const struct scmi_protocol_event_ops *ops,
+				  const struct scmi_event *evt, int num_events,
+				  int num_sources);
+
+#endif /* _SCMI_NOTIFY_H */
diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
index 5c873a59b387..0679f10ab05e 100644
--- a/include/linux/scmi_protocol.h
+++ b/include/linux/scmi_protocol.h
@@ -4,6 +4,10 @@
  *
  * Copyright (C) 2018 ARM Ltd.
  */
+
+#ifndef _LINUX_SCMI_PROTOCOL_H
+#define _LINUX_SCMI_PROTOCOL_H
+
 #include <linux/device.h>
 #include <linux/types.h>
 
@@ -227,6 +231,8 @@ struct scmi_reset_ops {
  *	protocol(for internal use only)
  * @reset_priv: pointer to private data structure specific to reset
  *	protocol(for internal use only)
+ * @notify_priv: pointer to private data structure specific to notifications
+ *	(for internal use only)
  */
 struct scmi_handle {
 	struct device *dev;
@@ -242,6 +248,7 @@ struct scmi_handle {
 	void *power_priv;
 	void *sensor_priv;
 	void *reset_priv;
+	void *notify_priv;
 };
 
 enum scmi_std_protocol {
@@ -319,3 +326,5 @@ static inline void scmi_driver_unregister(struct scmi_driver *driver) {}
 typedef int (*scmi_prot_init_fn_t)(struct scmi_handle *);
 int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn);
 void scmi_protocol_unregister(int protocol_id);
+
+#endif /* _LINUX_SCMI_PROTOCOL_H */
-- 
2.17.1


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 06/13] firmware: arm_scmi: Add notification callbacks-registration
  2020-03-04 16:25 ` Cristian Marussi
@ 2020-03-04 16:25   ` Cristian Marussi
  -1 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: sudeep.holla, lukasz.luba, james.quinlan, Jonathan.Cameron,
	cristian.marussi

Add core SCMI Notifications callbacks-registration support: allow users
to register their own callbacks against the desired events.
Whenever a registration request is issued against a still non existent
event, mark such request as pending for later processing, in order to
account for possible late initializations of SCMI Protocols associated
to loadable drivers.

Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
V3 --> V4
- split registered_handlers hashtable on a per-protocol basis to reduce
  unneeded contention
- introduced pending_handlers table and related late_init worker to finalize
  handlers registration upon effective protocols' registrations
- introduced further safe accessors macros for registered_protocols
  and registered_events arrays
V2 --> V3
- refactored get/put event_handler
- removed generic non-handle-based API
V1 --> V2
- splitted out of V1 patch 04
- moved from IDR maps to real HashTables to store event_handlers
- added proper enable_events refcounting via __scmi_enable_evt()
  [was broken in V1 when using ALL_SRCIDs notification chains]
- reviewed hashtable cleanup strategy in scmi_notifications_exit()
- added scmi_register_event_notifier()/scmi_unregister_event_notifier()
  to include/linux/scmi_protocol.h as a candidate user API
  [no EXPORTs still]
- added notify_ops to handle during initialization as an additional
  internal API for scmi_drivers
---
 drivers/firmware/arm_scmi/notify.c | 700 +++++++++++++++++++++++++++++
 drivers/firmware/arm_scmi/notify.h |  12 +
 include/linux/scmi_protocol.h      |  50 +++
 3 files changed, 762 insertions(+)

diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
index 31e49cb7d88e..d6c08cce3c63 100644
--- a/drivers/firmware/arm_scmi/notify.c
+++ b/drivers/firmware/arm_scmi/notify.c
@@ -16,18 +16,50 @@
  * this core its set of supported events using @scmi_register_protocol_events():
  * all the needed descriptors are stored in the @registered_protocols and
  * @registered_events arrays.
+ *
+ * Kernel users interested in some specific event can register their callbacks
+ * providing the usual notifier_block descriptor, since this core implements
+ * events' delivery using the standard Kernel notification chains machinery.
+ *
+ * Given the number of possible events defined by SCMI and the extensibility
+ * of the SCMI Protocol itself, the underlying notification chains are created
+ * and destroyed dynamically on demand depending on the number of users
+ * effectively registered for an event, so that no support structures or chains
+ * are allocated until at least one user has registered a notifier_block for
+ * such event. Similarly, events' generation itself is enabled at the platform
+ * level only after at least one user has registered, and it is shutdown after
+ * the last user for that event has gone.
+ *
+ * All users provided callbacks and allocated notification-chains are stored in
+ * the @registered_events_handlers hashtable. Callbacks' registration requests
+ * for still to be registered events are instead kept in the dedicated common
+ * hashtable @pending_events_handlers.
+ *
+ * An event is identified univocally by the tuple (proto_id, evt_id, src_id)
+ * and is served by its own dedicated notification chain; information contained
+ * in such tuples is used, in a few different ways, to generate the needed
+ * hash-keys.
+ *
+ * Here proto_id and evt_id are simply the protocol_id and message_id numbers
+ * as described in the SCMI Protocol specification, while src_id represents an
+ * optional, protocol dependent, source identifier (like domain_id, perf_id
+ * or sensor_id and so forth).
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
 #include <linux/atomic.h>
+#include <linux/bitfield.h>
 #include <linux/bug.h>
 #include <linux/compiler.h>
 #include <linux/device.h>
 #include <linux/err.h>
+#include <linux/hashtable.h>
 #include <linux/kernel.h>
 #include <linux/kfifo.h>
+#include <linux/list.h>
 #include <linux/mutex.h>
+#include <linux/notifier.h>
 #include <linux/refcount.h>
 #include <linux/scmi_protocol.h>
 #include <linux/slab.h>
@@ -47,6 +79,71 @@
 #define MAKE_ALL_SRCS_KEY(p, e)			\
 	MAKE_HASH_KEY((p), (e), SCMI_ALL_SRC_IDS)
 
+/**
+ * Assumes that the stored obj includes its own hash-key in a field named 'key':
+ * with this simplification this macro can be equally used for all the objects'
+ * types hashed by this implementation.
+ *
+ * @__ht: The hashtable name
+ * @__obj: A pointer to the object type to be retrieved from the hashtable;
+ *	   it will be used as a cursor while scanning the hastable and it will
+ *	   be possibly left as NULL when @__k is not found
+ * @__k: The key to search for
+ */
+#define KEY_FIND(__ht, __obj, __k)				\
+({								\
+	hash_for_each_possible((__ht), (__obj), hash, (__k))	\
+		if (likely((__obj)->key == (__k)))		\
+			break;					\
+	__obj;							\
+})
+
+#define PROTO_ID_MASK			GENMASK(31, 24)
+#define EVT_ID_MASK			GENMASK(23, 16)
+#define SRC_ID_MASK			GENMASK(15, 0)
+#define KEY_XTRACT_PROTO_ID(key)	FIELD_GET(PROTO_ID_MASK, (key))
+#define KEY_XTRACT_EVT_ID(key)		FIELD_GET(EVT_ID_MASK, (key))
+#define KEY_XTRACT_SRC_ID(key)		FIELD_GET(SRC_ID_MASK, (key))
+
+/**
+ * A set of macros used to access safely @registered_protocols and
+ * @registered_events arrays; these are fixed in size and each entry is possibly
+ * populated at protocols' registration time and then only read but NEVER
+ * modified or removed.
+ */
+#define SCMI_GET_PROTO(__ni, __pid)					\
+({									\
+	struct scmi_registered_protocol_events_desc *__pd = NULL;	\
+									\
+	if ((__ni) && (__pid) < SCMI_MAX_PROTO)				\
+		__pd = READ_ONCE((__ni)->registered_protocols[(__pid)]);\
+	__pd;								\
+})
+
+#define SCMI_GET_REVT_FROM_PD(__pd, __eid)				\
+({									\
+	struct scmi_registered_event *__revt = NULL;			\
+									\
+	if ((__pd) && (__eid) < (__pd)->num_events)			\
+		__revt = READ_ONCE((__pd)->registered_events[(__eid)]);	\
+	__revt;								\
+})
+
+#define SCMI_GET_REVT(__ni, __pid, __eid)				\
+({									\
+	struct scmi_registered_event *__revt = NULL;			\
+	struct scmi_registered_protocol_events_desc *__pd = NULL;	\
+									\
+	__pd = SCMI_GET_PROTO((__ni), (__pid));				\
+	__revt = SCMI_GET_REVT_FROM_PD(__pd, (__eid));			\
+	__revt;								\
+})
+
+/* A couple of utility macros to limit cruft when calling protocols' helpers */
+#define REVT_NOTIFY_ENABLE(revt, ...)	\
+	((revt)->proto->ops->set_notify_enabled((revt)->proto->ni->handle,     \
+						__VA_ARGS__))
+
 struct scmi_registered_protocol_events_desc;
 
 /**
@@ -60,16 +157,25 @@ struct scmi_registered_protocol_events_desc;
  * @initialized: A flag that indicates if the core resources have been allocated
  *		 and protocols are allowed to register their supported events
  * @enabled: A flag to indicate events can be enabled and start flowing
+ * @init_work: A work item to perform final initializations of pending handlers
+ * @pending_mtx: A mutex to protect @pending_events_handlers
  * @registered_protocols: An statically allocated array containing pointers to
  *			  all the registered protocol-level specific information
  *			  related to events' handling
+ * @pending_events_handlers: An hashtable containing all pending events'
+ *			     handlers descriptors
  */
 struct scmi_notify_instance {
 	void						*gid;
 	struct scmi_handle				*handle;
 	atomic_t					initialized;
 	atomic_t					enabled;
+
+	struct work_struct				init_work;
+
+	struct mutex					pending_mtx;
 	struct scmi_registered_protocol_events_desc	**registered_protocols;
+	DECLARE_HASHTABLE(pending_events_handlers, 8);
 };
 
 /**
@@ -132,6 +238,9 @@ struct scmi_registered_event;
  * @registered_events: A dynamically allocated array holding all the registered
  *		       events' descriptors, whose fixed-size is determined at
  *		       compile time.
+ * @registered_mtx: A mutex to protect @registered_events_handlers
+ * @registered_events_handlers: An hashtable containing all events' handlers
+ *				descriptors registered for this protocol
  */
 struct scmi_registered_protocol_events_desc {
 	u8					id;
@@ -143,6 +252,8 @@ struct scmi_registered_protocol_events_desc {
 	void					*in_flight;
 	int					num_events;
 	struct scmi_registered_event		**registered_events;
+	struct mutex				registered_mtx;
+	DECLARE_HASHTABLE(registered_events_handlers, 8);
 };
 
 /**
@@ -175,6 +286,38 @@ struct scmi_registered_event {
 	struct mutex					sources_mtx;
 };
 
+/**
+ * scmi_event_handler  - Event handler information
+ *
+ * This structure collects all the information needed to process a received
+ * event identified by the tuple (proto_id, evt_id, src_id).
+ * These descriptors are stored in a per-protocol @registered_events_handlers
+ * table using as a key a value derived from that tuple.
+ *
+ * @key: The used hashkey
+ * @users: A reference count for number of active users for this handler
+ * @r_evt: A reference to the associated registered event; when this is NULL
+ *	   this handler is pending, which means that identifies a set of
+ *	   callbacks intended to be attached to an event which is still not
+ *	   known nor registered by any protocol at that point in time
+ * @chain: The notification chain dedicated to this specific event tuple
+ * @hash: The hlist_node used for collision handling
+ * @enabled: A boolean which records if event's generation has been already
+ *	     enabled for this handler as a whole
+ */
+struct scmi_event_handler {
+	u32				key;
+	refcount_t			users;
+	struct scmi_registered_event	*r_evt;
+	struct blocking_notifier_head	chain;
+	struct hlist_node		hash;
+	bool				enabled;
+};
+
+#define IS_HNDL_PENDING(hndl)	((hndl)->r_evt == NULL)
+
+static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
+				      struct scmi_event_handler *hndl);
 /**
  * scmi_initialize_events_queue  - Allocate/Initialize a kfifo buffer
  *
@@ -252,6 +395,10 @@ scmi_allocate_registered_protocol_desc(struct scmi_notify_instance *ni,
 		return ERR_PTR(-ENOMEM);
 	pd->num_events = num_events;
 
+	/* Initialize per protocol handlers table */
+	mutex_init(&pd->registered_mtx);
+	hash_init(pd->registered_events_handlers);
+
 	return pd;
 }
 
@@ -338,6 +485,12 @@ int scmi_register_protocol_events(const struct scmi_handle *handle,
 
 	devres_close_group(ni->handle->dev, ni->gid);
 
+	/*
+	 * Finalize any pending events' handler which could have been waiting
+	 * for this protocol's events registration.
+	 */
+	schedule_work(&ni->init_work);
+
 	return 0;
 
 err:
@@ -349,6 +502,547 @@ int scmi_register_protocol_events(const struct scmi_handle *handle,
 	return -ENOMEM;
 }
 
+/**
+ * scmi_allocate_event_handler  - Allocate Event handler
+ *
+ * Allocate an event handler and related notification chain associated with
+ * the provided event handler key.
+ * Note that, at this point, a related registered_event is still to be
+ * associated to this handler descriptor (hndl->r_evt == NULL), so the handler
+ * is initialized as pending.
+ *
+ * Assumes to be called with @pending_mtx already acquired.
+ *
+ * @ni: A reference to the notification instance to use
+ * @evt_key: 32bit key uniquely bind to the event identified by the tuple
+ *	     (proto_id, evt_id, src_id)
+ *
+ * Return: the freshly allocated structure on Success
+ */
+static struct scmi_event_handler *
+scmi_allocate_event_handler(struct scmi_notify_instance *ni, u32 evt_key)
+{
+	struct scmi_event_handler *hndl;
+
+	hndl = kzalloc(sizeof(*hndl), GFP_KERNEL);
+	if (!hndl)
+		return ERR_PTR(-ENOMEM);
+	hndl->key = evt_key;
+	BLOCKING_INIT_NOTIFIER_HEAD(&hndl->chain);
+	refcount_set(&hndl->users, 1);
+	/* New handlers are created pending */
+	hash_add(ni->pending_events_handlers, &hndl->hash, hndl->key);
+
+	return hndl;
+}
+
+/**
+ * scmi_free_event_handler  - Free the provided Event handler
+ *
+ * Assumes to be called with proper locking acquired depending on the situation.
+ *
+ * @hndl: The event handler structure to free
+ */
+static void scmi_free_event_handler(struct scmi_event_handler *hndl)
+{
+	hash_del(&hndl->hash);
+	kfree(hndl);
+}
+
+/**
+ * scmi_bind_event_handler  - Helper to attempt binding an handler to an event
+ *
+ * If an associated registered event is found, move the handler from the pending
+ * into the registered table.
+ *
+ * Assumes to be called with @pending_mtx already acquired.
+ *
+ * @ni: A reference to the notification instance to use
+ * @hndl: The event handler to bind
+ *
+ * Return: True if bind was successful, False otherwise
+ */
+static inline bool scmi_bind_event_handler(struct scmi_notify_instance *ni,
+					   struct scmi_event_handler *hndl)
+{
+	struct scmi_registered_event *r_evt;
+
+
+	r_evt = SCMI_GET_REVT(ni, KEY_XTRACT_PROTO_ID(hndl->key),
+			      KEY_XTRACT_EVT_ID(hndl->key));
+	if (unlikely(!r_evt))
+		return false;
+
+	/* Remove from pending and insert into registered */
+	hash_del(&hndl->hash);
+	hndl->r_evt = r_evt;
+	mutex_lock(&r_evt->proto->registered_mtx);
+	hash_add(r_evt->proto->registered_events_handlers,
+		 &hndl->hash, hndl->key);
+	mutex_unlock(&r_evt->proto->registered_mtx);
+
+	return true;
+}
+
+/**
+ * scmi_valid_pending_handler  - Helper to check pending status of handlers
+ *
+ * An handler is considered pending when its r_evt == NULL, because the related
+ * event was still unknown at handler's registration time; anyway, since all
+ * protocols register their supported events once for all at protocols'
+ * initialization time, a pending handler cannot be considered valid anymore if
+ * the underlying event (which it is waiting for), belongs to an already
+ * initialized and registered protocol.
+ *
+ * @ni: A reference to the notification instance to use
+ * @hndl: The event handler to check
+ *
+ * Return: True if pending registration is still valid, False otherwise.
+ */
+static inline bool scmi_valid_pending_handler(struct scmi_notify_instance *ni,
+					      struct scmi_event_handler *hndl)
+{
+	struct scmi_registered_protocol_events_desc *pd;
+
+	if (unlikely(!IS_HNDL_PENDING(hndl)))
+		return false;
+
+	pd = SCMI_GET_PROTO(ni, KEY_XTRACT_PROTO_ID(hndl->key));
+	if (pd)
+		return false;
+
+	return true;
+}
+
+/**
+ * scmi_register_event_handler  - Register whenever possible an Event handler
+ *
+ * At first try to bind an event handler to its associated event, then check if
+ * it was at least a valid pending handler: if it was not bound nor valid return
+ * false.
+ *
+ * Valid pending incomplete bindings will be periodically retried by a dedicated
+ * worker which is kicked each time a new protocol completes its own
+ * registration phase.
+ *
+ * Assumes to be called with @pending_mtx acquired.
+ *
+ * @ni: A reference to the notification instance to use
+ * @hndl: The event handler to register
+ *
+ * Return: True if a normal or a valid pending registration has been completed,
+ *	   False otherwise
+ */
+static bool scmi_register_event_handler(struct scmi_notify_instance *ni,
+					struct scmi_event_handler *hndl)
+{
+	bool ret;
+
+	ret = scmi_bind_event_handler(ni, hndl);
+	if (ret) {
+		pr_info("SCMI Notifications: registered NEW handler - key:%X\n",
+			hndl->key);
+	} else {
+		ret = scmi_valid_pending_handler(ni, hndl);
+		if (ret)
+			pr_info("SCMI Notifications: registered PENDING handler - key:%X\n",
+				hndl->key);
+	}
+
+	return ret;
+}
+
+/**
+ * __scmi_event_handler_get_ops  - Utility to get or create an event handler
+ *
+ * Search for the desired handler matching the key in both the per-protocol
+ * registered table and the common pending table:
+ *  - if found adjust users refcount
+ *  - if not found and @create is true, create and register the new handler:
+ *    handler could end up being registered as pending if no matching event
+ *    could be found.
+ *
+ * An handler is guaranteed to reside in one and only one of the tables at
+ * any one time; to ensure this the whole search and create is performed
+ * holding the @pending_mtx lock, with @registered_mtx additionally acquired
+ * if needed.
+ * Note that when a nested acquisition of these mutexes is needed the locking
+ * order is always (same as in @init_work):
+ *	1. pending_mtx
+ *	2. registered_mtx
+ *
+ * Events generation is NOT enabled right after creation within this routine
+ * since at creation time we usually want to have all setup and ready before
+ * events really start flowing.
+ *
+ * @ni: A reference to the notification instance to use
+ * @evt_key: The event key to use
+ * @create: A boolean flag to specify if a handler must be created when
+ *	    not already existent
+ *
+ * Return: A properly refcounted handler on Success, NULL on Failure
+ */
+static inline struct scmi_event_handler *
+__scmi_event_handler_get_ops(struct scmi_notify_instance *ni,
+			     u32 evt_key, bool create)
+{
+	struct scmi_registered_event *r_evt;
+	struct scmi_event_handler *hndl = NULL;
+
+	r_evt = SCMI_GET_REVT(ni, KEY_XTRACT_PROTO_ID(evt_key),
+			      KEY_XTRACT_EVT_ID(evt_key));
+
+	mutex_lock(&ni->pending_mtx);
+	/* Search registered events at first ... if possible at all */
+	if (likely(r_evt)) {
+		mutex_lock(&r_evt->proto->registered_mtx);
+		hndl = KEY_FIND(r_evt->proto->registered_events_handlers,
+				hndl, evt_key);
+		if (likely(hndl))
+			refcount_inc(&hndl->users);
+		mutex_unlock(&r_evt->proto->registered_mtx);
+	}
+
+	/* ...then amongst pending. */
+	if (unlikely(!hndl)) {
+		hndl = KEY_FIND(ni->pending_events_handlers, hndl, evt_key);
+		if (likely(hndl))
+			refcount_inc(&hndl->users);
+	}
+
+	/* Create if still not found and required */
+	if (!hndl && create) {
+		hndl = scmi_allocate_event_handler(ni, evt_key);
+		if (!IS_ERR_OR_NULL(hndl)) {
+			if (!scmi_register_event_handler(ni, hndl)) {
+				pr_info("SCMI Notifications: purging UNKNOWN handler - key:%X\n",
+					hndl->key);
+				/* this hndl can be only a pending one */
+				scmi_put_handler_unlocked(ni, hndl);
+				hndl = NULL;
+			}
+		}
+	}
+	mutex_unlock(&ni->pending_mtx);
+
+	return hndl;
+}
+
+static struct scmi_event_handler *
+scmi_get_handler(struct scmi_notify_instance *ni, u32 evt_key)
+{
+	return __scmi_event_handler_get_ops(ni, evt_key, false);
+}
+
+static struct scmi_event_handler *
+scmi_get_or_create_handler(struct scmi_notify_instance *ni, u32 evt_key)
+{
+	return __scmi_event_handler_get_ops(ni, evt_key, true);
+}
+
+/**
+ * __scmi_enable_evt  - Enable/disable events generation
+ *
+ * Takes care of proper refcounting while performing enable/disable: handles
+ * the special case of ALL sources requests by itself.
+ *
+ * @r_evt: The registered event to act upon
+ * @src_id: The src_id to act upon
+ * @enable: The action to perform: true->Enable, false->Disable
+ *
+ * Return: True when the required @action has been successfully executed
+ */
+static inline bool __scmi_enable_evt(struct scmi_registered_event *r_evt,
+				     u32 src_id, bool enable)
+{
+	int ret = 0;
+	u32 num_sources;
+	refcount_t *sid;
+
+	if (src_id == SCMI_ALL_SRC_IDS) {
+		src_id = 0;
+		num_sources = r_evt->num_sources;
+	} else if (src_id < r_evt->num_sources) {
+		num_sources = 1;
+	} else {
+		return ret;
+	}
+
+	mutex_lock(&r_evt->sources_mtx);
+	if (enable) {
+		for (; num_sources; src_id++, num_sources--) {
+			bool r;
+
+			sid = &r_evt->sources[src_id];
+			if (refcount_read(sid) == 0) {
+				r = REVT_NOTIFY_ENABLE(r_evt,
+						       r_evt->evt->id,
+						       src_id, enable);
+				if (r)
+					refcount_set(sid, 1);
+			} else {
+				refcount_inc(sid);
+				r = true;
+			}
+			ret += r;
+		}
+	} else {
+		for (; num_sources; src_id++, num_sources--) {
+			sid = &r_evt->sources[src_id];
+			if (refcount_dec_and_test(sid))
+				REVT_NOTIFY_ENABLE(r_evt,
+						   r_evt->evt->id,
+						   src_id, enable);
+		}
+		ret = 1;
+	}
+	mutex_unlock(&r_evt->sources_mtx);
+
+	return ret;
+}
+
+static bool scmi_enable_events(struct scmi_event_handler *hndl)
+{
+	if (!hndl->enabled)
+		hndl->enabled = __scmi_enable_evt(hndl->r_evt,
+						  KEY_XTRACT_SRC_ID(hndl->key),
+						  true);
+	return hndl->enabled;
+}
+
+static bool scmi_disable_events(struct scmi_event_handler *hndl)
+{
+	if (hndl->enabled)
+		hndl->enabled = !__scmi_enable_evt(hndl->r_evt,
+						   KEY_XTRACT_SRC_ID(hndl->key),
+						   false);
+	return !hndl->enabled;
+}
+
+/**
+ * scmi_put_handler_unlocked  - Put an event handler
+ *
+ * After having got exclusive access to the registered handlers hashtable,
+ * update the refcount and if @hndl is no more in use by anyone:
+ *
+ *  - ask for events' generation disabling
+ *  - unregister and free the handler itself
+ *
+ *  Assumes all the proper locking has been managed by the caller.
+ *
+ * @ni: A reference to the notification instance to use
+ * @hndl: The event handler to act upon
+ */
+
+static void
+scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
+				struct scmi_event_handler *hndl)
+{
+	if (refcount_dec_and_test(&hndl->users)) {
+		if (likely(!IS_HNDL_PENDING(hndl)))
+			scmi_disable_events(hndl);
+		scmi_free_event_handler(hndl);
+	}
+}
+
+static void scmi_put_handler(struct scmi_notify_instance *ni,
+			     struct scmi_event_handler *hndl)
+{
+	struct scmi_registered_event *r_evt = hndl->r_evt;
+
+	mutex_lock(&ni->pending_mtx);
+	if (r_evt)
+		mutex_lock(&r_evt->proto->registered_mtx);
+
+	scmi_put_handler_unlocked(ni, hndl);
+
+	if (r_evt)
+		mutex_unlock(&r_evt->proto->registered_mtx);
+	mutex_unlock(&ni->pending_mtx);
+}
+
+/**
+ * scmi_event_handler_enable_events  - Enable events associated to an handler
+ *
+ * @hndl: The Event handler to act upon
+ *
+ * Return: True on success
+ */
+static bool scmi_event_handler_enable_events(struct scmi_event_handler *hndl)
+{
+	if (!scmi_enable_events(hndl)) {
+		pr_err("SCMI Notifications: Failed to ENABLE events for key:%X !\n",
+		       hndl->key);
+		return false;
+	}
+
+	return true;
+}
+
+/**
+ * scmi_register_notifier  - Register a notifier_block for an event
+ *
+ * Generic helper to register a notifier_block against a protocol event.
+ *
+ * A notifier_block @nb will be registered for each distinct event identified
+ * by the tuple (proto_id, evt_id, src_id) on a dedicated notification chain
+ * so that:
+ *
+ *	(proto_X, evt_Y, src_Z) --> chain_X_Y_Z
+ *
+ * @src_id meaning is protocol specific and identifies the origin of the event
+ * (like domain_id, sensor_id and so forth).
+ *
+ * @src_id can be NULL to signify that the caller is interested in receiving
+ * notifications from ALL the available sources for that protocol OR simply that
+ * the protocol does not support distinct sources.
+ *
+ * As soon as one user for the specified tuple appears, an handler is created,
+ * and that specific event's generation is enabled at the platform level, unless
+ * an associated registered event is found missing, meaning that the needed
+ * protocol is still to be initialized and the handler has just been registered
+ * as still pending.
+ *
+ * @handle: The handle identifying the platform instance against which the
+ *	    callback is registered
+ * @proto_id: Protocol ID
+ * @evt_id: Event ID
+ * @src_id: Source ID, when NULL register for events coming form ALL possible
+ *	    sources
+ * @nb: A standard notifier block to register for the specified event
+ *
+ * Return: Return 0 on Success
+ */
+static int scmi_register_notifier(const struct scmi_handle *handle,
+				  u8 proto_id, u8 evt_id, u32 *src_id,
+				  struct notifier_block *nb)
+{
+	int ret = 0;
+	u32 evt_key;
+	struct scmi_event_handler *hndl;
+	struct scmi_notify_instance *ni = handle->notify_priv;
+
+	if (unlikely(!ni || !atomic_read(&ni->initialized)))
+		return 0;
+
+	evt_key = MAKE_HASH_KEY(proto_id, evt_id,
+				src_id ? *src_id : SCMI_ALL_SRC_IDS);
+	hndl = scmi_get_or_create_handler(ni, evt_key);
+	if (IS_ERR_OR_NULL(hndl))
+		return PTR_ERR(hndl);
+
+	blocking_notifier_chain_register(&hndl->chain, nb);
+
+	/* Enable events for not pending handlers */
+	if (likely(!IS_HNDL_PENDING(hndl))) {
+		if (!scmi_event_handler_enable_events(hndl)) {
+			scmi_put_handler(ni, hndl);
+			ret = -EINVAL;
+		}
+	}
+
+	return ret;
+}
+
+/**
+ * scmi_unregister_notifier  - Unregister a notifier_block for an event
+ *
+ * Takes care to unregister the provided @nb from the notification chain
+ * associated to the specified event and, if there are no more users for the
+ * event handler, frees also the associated event handler structures.
+ * (this could possibly cause disabling of event's generation at platform level)
+ *
+ * @handle: The handle identifying the platform instance against which the
+ *	    callback is unregistered
+ * @proto_id: Protocol ID
+ * @evt_id: Event ID
+ * @src_id: Source ID
+ * @nb: The notifier_block to unregister
+ *
+ * Return: 0 on Success
+ */
+static int scmi_unregister_notifier(const struct scmi_handle *handle,
+				    u8 proto_id, u8 evt_id, u32 *src_id,
+				    struct notifier_block *nb)
+{
+	u32 evt_key;
+	struct scmi_event_handler *hndl;
+	struct scmi_notify_instance *ni = handle->notify_priv;
+
+	if (unlikely(!ni || !atomic_read(&ni->initialized)))
+		return 0;
+
+	evt_key = MAKE_HASH_KEY(proto_id, evt_id,
+				src_id ? *src_id : SCMI_ALL_SRC_IDS);
+	hndl = scmi_get_handler(ni, evt_key);
+	if (IS_ERR_OR_NULL(hndl))
+		return -EINVAL;
+
+	blocking_notifier_chain_unregister(&hndl->chain, nb);
+	scmi_put_handler(ni, hndl);
+
+	/*
+	 * Free the handler (and stop events) if this happens to be the last
+	 * known user callback for this handler; a possible concurrently ongoing
+	 * run of @scmi_lookup_and_call_event_chain will cause this to happen
+	 * in that context safely instead.
+	 */
+	scmi_put_handler(ni, hndl);
+
+	return 0;
+}
+
+/**
+ * scmi_protocols_late_init  - Worker for late initialization
+ *
+ * This kicks in whenever a new protocol has completed its own registration via
+ * scmi_register_protocol_events(): it is in charge of scanning the table of
+ * pending handlers (registered by users while the related protocol was still
+ * not initialized) and finalizing their initialization whenever possible;
+ * invalid pending handlers are purged at this point in time.
+ *
+ * @work: The work item to use associated to the proper SCMI instance
+ */
+static void scmi_protocols_late_init(struct work_struct *work)
+{
+	int bkt;
+	struct scmi_event_handler *hndl;
+	struct scmi_notify_instance *ni;
+	struct hlist_node *tmp;
+
+	ni = container_of(work, struct scmi_notify_instance, init_work);
+
+	mutex_lock(&ni->pending_mtx);
+	hash_for_each_safe(ni->pending_events_handlers, bkt, tmp, hndl, hash) {
+		bool ret;
+
+		ret = scmi_bind_event_handler(ni, hndl);
+		if (ret) {
+			pr_info("SCMI Notifications: finalized PENDING handler - key:%X\n",
+				hndl->key);
+			ret = scmi_event_handler_enable_events(hndl);
+		} else {
+			ret = scmi_valid_pending_handler(ni, hndl);
+		}
+		if (!ret) {
+			pr_info("SCMI Notifications: purging PENDING handler - key:%X\n",
+				hndl->key);
+			/* this hndl can be only a pending one */
+			scmi_put_handler_unlocked(ni, hndl);
+		}
+	}
+	mutex_unlock(&ni->pending_mtx);
+}
+
+/*
+ * notify_ops are attached to the handle so that can be accessed
+ * directly from an scmi_driver to register its own notifiers.
+ */
+static struct scmi_notify_ops notify_ops = {
+	.register_event_notifier = scmi_register_notifier,
+	.unregister_event_notifier = scmi_unregister_notifier,
+};
+
 /**
  * scmi_notification_init  - Initializes Notification Core Support
  *
@@ -398,7 +1092,13 @@ int scmi_notification_init(struct scmi_handle *handle)
 	if (!ni->registered_protocols)
 		goto err;
 
+	mutex_init(&ni->pending_mtx);
+	hash_init(ni->pending_events_handlers);
+
+	INIT_WORK(&ni->init_work, scmi_protocols_late_init);
+
 	handle->notify_priv = ni;
+	handle->notify_ops = &notify_ops;
 
 	atomic_set(&ni->initialized, 1);
 	atomic_set(&ni->enabled, 1);
diff --git a/drivers/firmware/arm_scmi/notify.h b/drivers/firmware/arm_scmi/notify.h
index a7ece64e8842..f765acda2311 100644
--- a/drivers/firmware/arm_scmi/notify.h
+++ b/drivers/firmware/arm_scmi/notify.h
@@ -9,9 +9,21 @@
 #ifndef _SCMI_NOTIFY_H
 #define _SCMI_NOTIFY_H
 
+#include <linux/bug.h>
 #include <linux/device.h>
 #include <linux/types.h>
 
+#define MAP_EVT_TO_ENABLE_CMD(id, map)			\
+({							\
+	int ret = -1;					\
+							\
+	if (likely((id) < ARRAY_SIZE((map))))		\
+		ret = (map)[(id)];			\
+	else						\
+		WARN(1, "UN-KNOWN evt_id:%d\n", (id));	\
+	ret;						\
+})
+
 /**
  * scmi_event  - Describes an event to be supported
  *
diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
index 0679f10ab05e..797e1e03ae52 100644
--- a/include/linux/scmi_protocol.h
+++ b/include/linux/scmi_protocol.h
@@ -9,6 +9,8 @@
 #define _LINUX_SCMI_PROTOCOL_H
 
 #include <linux/device.h>
+#include <linux/ktime.h>
+#include <linux/notifier.h>
 #include <linux/types.h>
 
 #define SCMI_MAX_STR_SIZE	16
@@ -211,6 +213,52 @@ struct scmi_reset_ops {
 	int (*deassert)(const struct scmi_handle *handle, u32 domain);
 };
 
+/**
+ * scmi_notify_ops  - represents notifications' operations provided by SCMI core
+ *
+ * A user can register/unregister its own notifier_block against the wanted
+ * platform instance regarding the desired event identified by the
+ * tuple: (proto_id, evt_id, src_id)
+ *
+ * @register_event_notifier: Register a notifier_block for the requested event
+ * @unregister_event_notifier: Unregister a notifier_block for the requested
+ *			       event
+ *
+ * where:
+ *
+ * @handle: The handle identifying the platform instance to use
+ * @proto_id: The protocol ID as in SCMI Specification
+ * @evt_id: The message ID of the desired event as in SCMI Specification
+ * @src_id: A pointer to the desired source ID if different sources are
+ *	    possible for the protocol (like domain_id, sensor_id...etc)
+ *
+ * @src_id can be provided as NULL if it simply does NOT make sense for
+ * the protocol at hand, OR if the user is explicitly interested in
+ * receiving notifications from ANY existent source associated to the
+ * specified proto_id / evt_id.
+ *
+ * Received notifications are finally delivered to the registered users,
+ * invoking the callback provided with the notifier_block *nb as follows:
+ *
+ *	int user_cb(nb, evt_id, report)
+ *
+ * with:
+ *
+ * @nb: The notifier block provided by the user
+ * @evt_id: The message ID of the delivered event
+ * @report: A custom struct describing the specific event delivered
+ *
+ * Events' customized report structs are detailed in the following.
+ */
+struct scmi_notify_ops {
+	int (*register_event_notifier)(const struct scmi_handle *handle,
+				       u8 proto_id, u8 evt_id, u32 *src_id,
+				       struct notifier_block *nb);
+	int (*unregister_event_notifier)(const struct scmi_handle *handle,
+					 u8 proto_id, u8 evt_id, u32 *src_id,
+					 struct notifier_block *nb);
+};
+
 /**
  * struct scmi_handle - Handle returned to ARM SCMI clients for usage.
  *
@@ -221,6 +269,7 @@ struct scmi_reset_ops {
  * @clk_ops: pointer to set of clock protocol operations
  * @sensor_ops: pointer to set of sensor protocol operations
  * @reset_ops: pointer to set of reset protocol operations
+ * @notify_ops: pointer to set of notifications related operations
  * @perf_priv: pointer to private data structure specific to performance
  *	protocol(for internal use only)
  * @clk_priv: pointer to private data structure specific to clock
@@ -242,6 +291,7 @@ struct scmi_handle {
 	struct scmi_power_ops *power_ops;
 	struct scmi_sensor_ops *sensor_ops;
 	struct scmi_reset_ops *reset_ops;
+	struct scmi_notify_ops *notify_ops;
 	/* for protocol internal use */
 	void *perf_priv;
 	void *clk_priv;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 06/13] firmware: arm_scmi: Add notification callbacks-registration
@ 2020-03-04 16:25   ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: Jonathan.Cameron, cristian.marussi, james.quinlan, lukasz.luba,
	sudeep.holla

Add core SCMI Notifications callbacks-registration support: allow users
to register their own callbacks against the desired events.
Whenever a registration request is issued against a still non existent
event, mark such request as pending for later processing, in order to
account for possible late initializations of SCMI Protocols associated
to loadable drivers.

Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
V3 --> V4
- split registered_handlers hashtable on a per-protocol basis to reduce
  unneeded contention
- introduced pending_handlers table and related late_init worker to finalize
  handlers registration upon effective protocols' registrations
- introduced further safe accessors macros for registered_protocols
  and registered_events arrays
V2 --> V3
- refactored get/put event_handler
- removed generic non-handle-based API
V1 --> V2
- splitted out of V1 patch 04
- moved from IDR maps to real HashTables to store event_handlers
- added proper enable_events refcounting via __scmi_enable_evt()
  [was broken in V1 when using ALL_SRCIDs notification chains]
- reviewed hashtable cleanup strategy in scmi_notifications_exit()
- added scmi_register_event_notifier()/scmi_unregister_event_notifier()
  to include/linux/scmi_protocol.h as a candidate user API
  [no EXPORTs still]
- added notify_ops to handle during initialization as an additional
  internal API for scmi_drivers
---
 drivers/firmware/arm_scmi/notify.c | 700 +++++++++++++++++++++++++++++
 drivers/firmware/arm_scmi/notify.h |  12 +
 include/linux/scmi_protocol.h      |  50 +++
 3 files changed, 762 insertions(+)

diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
index 31e49cb7d88e..d6c08cce3c63 100644
--- a/drivers/firmware/arm_scmi/notify.c
+++ b/drivers/firmware/arm_scmi/notify.c
@@ -16,18 +16,50 @@
  * this core its set of supported events using @scmi_register_protocol_events():
  * all the needed descriptors are stored in the @registered_protocols and
  * @registered_events arrays.
+ *
+ * Kernel users interested in some specific event can register their callbacks
+ * providing the usual notifier_block descriptor, since this core implements
+ * events' delivery using the standard Kernel notification chains machinery.
+ *
+ * Given the number of possible events defined by SCMI and the extensibility
+ * of the SCMI Protocol itself, the underlying notification chains are created
+ * and destroyed dynamically on demand depending on the number of users
+ * effectively registered for an event, so that no support structures or chains
+ * are allocated until at least one user has registered a notifier_block for
+ * such event. Similarly, events' generation itself is enabled at the platform
+ * level only after at least one user has registered, and it is shutdown after
+ * the last user for that event has gone.
+ *
+ * All users provided callbacks and allocated notification-chains are stored in
+ * the @registered_events_handlers hashtable. Callbacks' registration requests
+ * for still to be registered events are instead kept in the dedicated common
+ * hashtable @pending_events_handlers.
+ *
+ * An event is identified univocally by the tuple (proto_id, evt_id, src_id)
+ * and is served by its own dedicated notification chain; information contained
+ * in such tuples is used, in a few different ways, to generate the needed
+ * hash-keys.
+ *
+ * Here proto_id and evt_id are simply the protocol_id and message_id numbers
+ * as described in the SCMI Protocol specification, while src_id represents an
+ * optional, protocol dependent, source identifier (like domain_id, perf_id
+ * or sensor_id and so forth).
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
 #include <linux/atomic.h>
+#include <linux/bitfield.h>
 #include <linux/bug.h>
 #include <linux/compiler.h>
 #include <linux/device.h>
 #include <linux/err.h>
+#include <linux/hashtable.h>
 #include <linux/kernel.h>
 #include <linux/kfifo.h>
+#include <linux/list.h>
 #include <linux/mutex.h>
+#include <linux/notifier.h>
 #include <linux/refcount.h>
 #include <linux/scmi_protocol.h>
 #include <linux/slab.h>
@@ -47,6 +79,71 @@
 #define MAKE_ALL_SRCS_KEY(p, e)			\
 	MAKE_HASH_KEY((p), (e), SCMI_ALL_SRC_IDS)
 
+/**
+ * Assumes that the stored obj includes its own hash-key in a field named 'key':
+ * with this simplification this macro can be equally used for all the objects'
+ * types hashed by this implementation.
+ *
+ * @__ht: The hashtable name
+ * @__obj: A pointer to the object type to be retrieved from the hashtable;
+ *	   it will be used as a cursor while scanning the hastable and it will
+ *	   be possibly left as NULL when @__k is not found
+ * @__k: The key to search for
+ */
+#define KEY_FIND(__ht, __obj, __k)				\
+({								\
+	hash_for_each_possible((__ht), (__obj), hash, (__k))	\
+		if (likely((__obj)->key == (__k)))		\
+			break;					\
+	__obj;							\
+})
+
+#define PROTO_ID_MASK			GENMASK(31, 24)
+#define EVT_ID_MASK			GENMASK(23, 16)
+#define SRC_ID_MASK			GENMASK(15, 0)
+#define KEY_XTRACT_PROTO_ID(key)	FIELD_GET(PROTO_ID_MASK, (key))
+#define KEY_XTRACT_EVT_ID(key)		FIELD_GET(EVT_ID_MASK, (key))
+#define KEY_XTRACT_SRC_ID(key)		FIELD_GET(SRC_ID_MASK, (key))
+
+/**
+ * A set of macros used to access safely @registered_protocols and
+ * @registered_events arrays; these are fixed in size and each entry is possibly
+ * populated at protocols' registration time and then only read but NEVER
+ * modified or removed.
+ */
+#define SCMI_GET_PROTO(__ni, __pid)					\
+({									\
+	struct scmi_registered_protocol_events_desc *__pd = NULL;	\
+									\
+	if ((__ni) && (__pid) < SCMI_MAX_PROTO)				\
+		__pd = READ_ONCE((__ni)->registered_protocols[(__pid)]);\
+	__pd;								\
+})
+
+#define SCMI_GET_REVT_FROM_PD(__pd, __eid)				\
+({									\
+	struct scmi_registered_event *__revt = NULL;			\
+									\
+	if ((__pd) && (__eid) < (__pd)->num_events)			\
+		__revt = READ_ONCE((__pd)->registered_events[(__eid)]);	\
+	__revt;								\
+})
+
+#define SCMI_GET_REVT(__ni, __pid, __eid)				\
+({									\
+	struct scmi_registered_event *__revt = NULL;			\
+	struct scmi_registered_protocol_events_desc *__pd = NULL;	\
+									\
+	__pd = SCMI_GET_PROTO((__ni), (__pid));				\
+	__revt = SCMI_GET_REVT_FROM_PD(__pd, (__eid));			\
+	__revt;								\
+})
+
+/* A couple of utility macros to limit cruft when calling protocols' helpers */
+#define REVT_NOTIFY_ENABLE(revt, ...)	\
+	((revt)->proto->ops->set_notify_enabled((revt)->proto->ni->handle,     \
+						__VA_ARGS__))
+
 struct scmi_registered_protocol_events_desc;
 
 /**
@@ -60,16 +157,25 @@ struct scmi_registered_protocol_events_desc;
  * @initialized: A flag that indicates if the core resources have been allocated
  *		 and protocols are allowed to register their supported events
  * @enabled: A flag to indicate events can be enabled and start flowing
+ * @init_work: A work item to perform final initializations of pending handlers
+ * @pending_mtx: A mutex to protect @pending_events_handlers
  * @registered_protocols: An statically allocated array containing pointers to
  *			  all the registered protocol-level specific information
  *			  related to events' handling
+ * @pending_events_handlers: An hashtable containing all pending events'
+ *			     handlers descriptors
  */
 struct scmi_notify_instance {
 	void						*gid;
 	struct scmi_handle				*handle;
 	atomic_t					initialized;
 	atomic_t					enabled;
+
+	struct work_struct				init_work;
+
+	struct mutex					pending_mtx;
 	struct scmi_registered_protocol_events_desc	**registered_protocols;
+	DECLARE_HASHTABLE(pending_events_handlers, 8);
 };
 
 /**
@@ -132,6 +238,9 @@ struct scmi_registered_event;
  * @registered_events: A dynamically allocated array holding all the registered
  *		       events' descriptors, whose fixed-size is determined at
  *		       compile time.
+ * @registered_mtx: A mutex to protect @registered_events_handlers
+ * @registered_events_handlers: An hashtable containing all events' handlers
+ *				descriptors registered for this protocol
  */
 struct scmi_registered_protocol_events_desc {
 	u8					id;
@@ -143,6 +252,8 @@ struct scmi_registered_protocol_events_desc {
 	void					*in_flight;
 	int					num_events;
 	struct scmi_registered_event		**registered_events;
+	struct mutex				registered_mtx;
+	DECLARE_HASHTABLE(registered_events_handlers, 8);
 };
 
 /**
@@ -175,6 +286,38 @@ struct scmi_registered_event {
 	struct mutex					sources_mtx;
 };
 
+/**
+ * scmi_event_handler  - Event handler information
+ *
+ * This structure collects all the information needed to process a received
+ * event identified by the tuple (proto_id, evt_id, src_id).
+ * These descriptors are stored in a per-protocol @registered_events_handlers
+ * table using as a key a value derived from that tuple.
+ *
+ * @key: The used hashkey
+ * @users: A reference count for number of active users for this handler
+ * @r_evt: A reference to the associated registered event; when this is NULL
+ *	   this handler is pending, which means that identifies a set of
+ *	   callbacks intended to be attached to an event which is still not
+ *	   known nor registered by any protocol at that point in time
+ * @chain: The notification chain dedicated to this specific event tuple
+ * @hash: The hlist_node used for collision handling
+ * @enabled: A boolean which records if event's generation has been already
+ *	     enabled for this handler as a whole
+ */
+struct scmi_event_handler {
+	u32				key;
+	refcount_t			users;
+	struct scmi_registered_event	*r_evt;
+	struct blocking_notifier_head	chain;
+	struct hlist_node		hash;
+	bool				enabled;
+};
+
+#define IS_HNDL_PENDING(hndl)	((hndl)->r_evt == NULL)
+
+static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
+				      struct scmi_event_handler *hndl);
 /**
  * scmi_initialize_events_queue  - Allocate/Initialize a kfifo buffer
  *
@@ -252,6 +395,10 @@ scmi_allocate_registered_protocol_desc(struct scmi_notify_instance *ni,
 		return ERR_PTR(-ENOMEM);
 	pd->num_events = num_events;
 
+	/* Initialize per protocol handlers table */
+	mutex_init(&pd->registered_mtx);
+	hash_init(pd->registered_events_handlers);
+
 	return pd;
 }
 
@@ -338,6 +485,12 @@ int scmi_register_protocol_events(const struct scmi_handle *handle,
 
 	devres_close_group(ni->handle->dev, ni->gid);
 
+	/*
+	 * Finalize any pending events' handler which could have been waiting
+	 * for this protocol's events registration.
+	 */
+	schedule_work(&ni->init_work);
+
 	return 0;
 
 err:
@@ -349,6 +502,547 @@ int scmi_register_protocol_events(const struct scmi_handle *handle,
 	return -ENOMEM;
 }
 
+/**
+ * scmi_allocate_event_handler  - Allocate Event handler
+ *
+ * Allocate an event handler and related notification chain associated with
+ * the provided event handler key.
+ * Note that, at this point, a related registered_event is still to be
+ * associated to this handler descriptor (hndl->r_evt == NULL), so the handler
+ * is initialized as pending.
+ *
+ * Assumes to be called with @pending_mtx already acquired.
+ *
+ * @ni: A reference to the notification instance to use
+ * @evt_key: 32bit key uniquely bind to the event identified by the tuple
+ *	     (proto_id, evt_id, src_id)
+ *
+ * Return: the freshly allocated structure on Success
+ */
+static struct scmi_event_handler *
+scmi_allocate_event_handler(struct scmi_notify_instance *ni, u32 evt_key)
+{
+	struct scmi_event_handler *hndl;
+
+	hndl = kzalloc(sizeof(*hndl), GFP_KERNEL);
+	if (!hndl)
+		return ERR_PTR(-ENOMEM);
+	hndl->key = evt_key;
+	BLOCKING_INIT_NOTIFIER_HEAD(&hndl->chain);
+	refcount_set(&hndl->users, 1);
+	/* New handlers are created pending */
+	hash_add(ni->pending_events_handlers, &hndl->hash, hndl->key);
+
+	return hndl;
+}
+
+/**
+ * scmi_free_event_handler  - Free the provided Event handler
+ *
+ * Assumes to be called with proper locking acquired depending on the situation.
+ *
+ * @hndl: The event handler structure to free
+ */
+static void scmi_free_event_handler(struct scmi_event_handler *hndl)
+{
+	hash_del(&hndl->hash);
+	kfree(hndl);
+}
+
+/**
+ * scmi_bind_event_handler  - Helper to attempt binding an handler to an event
+ *
+ * If an associated registered event is found, move the handler from the pending
+ * into the registered table.
+ *
+ * Assumes to be called with @pending_mtx already acquired.
+ *
+ * @ni: A reference to the notification instance to use
+ * @hndl: The event handler to bind
+ *
+ * Return: True if bind was successful, False otherwise
+ */
+static inline bool scmi_bind_event_handler(struct scmi_notify_instance *ni,
+					   struct scmi_event_handler *hndl)
+{
+	struct scmi_registered_event *r_evt;
+
+
+	r_evt = SCMI_GET_REVT(ni, KEY_XTRACT_PROTO_ID(hndl->key),
+			      KEY_XTRACT_EVT_ID(hndl->key));
+	if (unlikely(!r_evt))
+		return false;
+
+	/* Remove from pending and insert into registered */
+	hash_del(&hndl->hash);
+	hndl->r_evt = r_evt;
+	mutex_lock(&r_evt->proto->registered_mtx);
+	hash_add(r_evt->proto->registered_events_handlers,
+		 &hndl->hash, hndl->key);
+	mutex_unlock(&r_evt->proto->registered_mtx);
+
+	return true;
+}
+
+/**
+ * scmi_valid_pending_handler  - Helper to check pending status of handlers
+ *
+ * An handler is considered pending when its r_evt == NULL, because the related
+ * event was still unknown at handler's registration time; anyway, since all
+ * protocols register their supported events once for all at protocols'
+ * initialization time, a pending handler cannot be considered valid anymore if
+ * the underlying event (which it is waiting for), belongs to an already
+ * initialized and registered protocol.
+ *
+ * @ni: A reference to the notification instance to use
+ * @hndl: The event handler to check
+ *
+ * Return: True if pending registration is still valid, False otherwise.
+ */
+static inline bool scmi_valid_pending_handler(struct scmi_notify_instance *ni,
+					      struct scmi_event_handler *hndl)
+{
+	struct scmi_registered_protocol_events_desc *pd;
+
+	if (unlikely(!IS_HNDL_PENDING(hndl)))
+		return false;
+
+	pd = SCMI_GET_PROTO(ni, KEY_XTRACT_PROTO_ID(hndl->key));
+	if (pd)
+		return false;
+
+	return true;
+}
+
+/**
+ * scmi_register_event_handler  - Register whenever possible an Event handler
+ *
+ * At first try to bind an event handler to its associated event, then check if
+ * it was at least a valid pending handler: if it was not bound nor valid return
+ * false.
+ *
+ * Valid pending incomplete bindings will be periodically retried by a dedicated
+ * worker which is kicked each time a new protocol completes its own
+ * registration phase.
+ *
+ * Assumes to be called with @pending_mtx acquired.
+ *
+ * @ni: A reference to the notification instance to use
+ * @hndl: The event handler to register
+ *
+ * Return: True if a normal or a valid pending registration has been completed,
+ *	   False otherwise
+ */
+static bool scmi_register_event_handler(struct scmi_notify_instance *ni,
+					struct scmi_event_handler *hndl)
+{
+	bool ret;
+
+	ret = scmi_bind_event_handler(ni, hndl);
+	if (ret) {
+		pr_info("SCMI Notifications: registered NEW handler - key:%X\n",
+			hndl->key);
+	} else {
+		ret = scmi_valid_pending_handler(ni, hndl);
+		if (ret)
+			pr_info("SCMI Notifications: registered PENDING handler - key:%X\n",
+				hndl->key);
+	}
+
+	return ret;
+}
+
+/**
+ * __scmi_event_handler_get_ops  - Utility to get or create an event handler
+ *
+ * Search for the desired handler matching the key in both the per-protocol
+ * registered table and the common pending table:
+ *  - if found adjust users refcount
+ *  - if not found and @create is true, create and register the new handler:
+ *    handler could end up being registered as pending if no matching event
+ *    could be found.
+ *
+ * An handler is guaranteed to reside in one and only one of the tables at
+ * any one time; to ensure this the whole search and create is performed
+ * holding the @pending_mtx lock, with @registered_mtx additionally acquired
+ * if needed.
+ * Note that when a nested acquisition of these mutexes is needed the locking
+ * order is always (same as in @init_work):
+ *	1. pending_mtx
+ *	2. registered_mtx
+ *
+ * Events generation is NOT enabled right after creation within this routine
+ * since at creation time we usually want to have all setup and ready before
+ * events really start flowing.
+ *
+ * @ni: A reference to the notification instance to use
+ * @evt_key: The event key to use
+ * @create: A boolean flag to specify if a handler must be created when
+ *	    not already existent
+ *
+ * Return: A properly refcounted handler on Success, NULL on Failure
+ */
+static inline struct scmi_event_handler *
+__scmi_event_handler_get_ops(struct scmi_notify_instance *ni,
+			     u32 evt_key, bool create)
+{
+	struct scmi_registered_event *r_evt;
+	struct scmi_event_handler *hndl = NULL;
+
+	r_evt = SCMI_GET_REVT(ni, KEY_XTRACT_PROTO_ID(evt_key),
+			      KEY_XTRACT_EVT_ID(evt_key));
+
+	mutex_lock(&ni->pending_mtx);
+	/* Search registered events at first ... if possible at all */
+	if (likely(r_evt)) {
+		mutex_lock(&r_evt->proto->registered_mtx);
+		hndl = KEY_FIND(r_evt->proto->registered_events_handlers,
+				hndl, evt_key);
+		if (likely(hndl))
+			refcount_inc(&hndl->users);
+		mutex_unlock(&r_evt->proto->registered_mtx);
+	}
+
+	/* ...then amongst pending. */
+	if (unlikely(!hndl)) {
+		hndl = KEY_FIND(ni->pending_events_handlers, hndl, evt_key);
+		if (likely(hndl))
+			refcount_inc(&hndl->users);
+	}
+
+	/* Create if still not found and required */
+	if (!hndl && create) {
+		hndl = scmi_allocate_event_handler(ni, evt_key);
+		if (!IS_ERR_OR_NULL(hndl)) {
+			if (!scmi_register_event_handler(ni, hndl)) {
+				pr_info("SCMI Notifications: purging UNKNOWN handler - key:%X\n",
+					hndl->key);
+				/* this hndl can be only a pending one */
+				scmi_put_handler_unlocked(ni, hndl);
+				hndl = NULL;
+			}
+		}
+	}
+	mutex_unlock(&ni->pending_mtx);
+
+	return hndl;
+}
+
+static struct scmi_event_handler *
+scmi_get_handler(struct scmi_notify_instance *ni, u32 evt_key)
+{
+	return __scmi_event_handler_get_ops(ni, evt_key, false);
+}
+
+static struct scmi_event_handler *
+scmi_get_or_create_handler(struct scmi_notify_instance *ni, u32 evt_key)
+{
+	return __scmi_event_handler_get_ops(ni, evt_key, true);
+}
+
+/**
+ * __scmi_enable_evt  - Enable/disable events generation
+ *
+ * Takes care of proper refcounting while performing enable/disable: handles
+ * the special case of ALL sources requests by itself.
+ *
+ * @r_evt: The registered event to act upon
+ * @src_id: The src_id to act upon
+ * @enable: The action to perform: true->Enable, false->Disable
+ *
+ * Return: True when the required @action has been successfully executed
+ */
+static inline bool __scmi_enable_evt(struct scmi_registered_event *r_evt,
+				     u32 src_id, bool enable)
+{
+	int ret = 0;
+	u32 num_sources;
+	refcount_t *sid;
+
+	if (src_id == SCMI_ALL_SRC_IDS) {
+		src_id = 0;
+		num_sources = r_evt->num_sources;
+	} else if (src_id < r_evt->num_sources) {
+		num_sources = 1;
+	} else {
+		return ret;
+	}
+
+	mutex_lock(&r_evt->sources_mtx);
+	if (enable) {
+		for (; num_sources; src_id++, num_sources--) {
+			bool r;
+
+			sid = &r_evt->sources[src_id];
+			if (refcount_read(sid) == 0) {
+				r = REVT_NOTIFY_ENABLE(r_evt,
+						       r_evt->evt->id,
+						       src_id, enable);
+				if (r)
+					refcount_set(sid, 1);
+			} else {
+				refcount_inc(sid);
+				r = true;
+			}
+			ret += r;
+		}
+	} else {
+		for (; num_sources; src_id++, num_sources--) {
+			sid = &r_evt->sources[src_id];
+			if (refcount_dec_and_test(sid))
+				REVT_NOTIFY_ENABLE(r_evt,
+						   r_evt->evt->id,
+						   src_id, enable);
+		}
+		ret = 1;
+	}
+	mutex_unlock(&r_evt->sources_mtx);
+
+	return ret;
+}
+
+static bool scmi_enable_events(struct scmi_event_handler *hndl)
+{
+	if (!hndl->enabled)
+		hndl->enabled = __scmi_enable_evt(hndl->r_evt,
+						  KEY_XTRACT_SRC_ID(hndl->key),
+						  true);
+	return hndl->enabled;
+}
+
+static bool scmi_disable_events(struct scmi_event_handler *hndl)
+{
+	if (hndl->enabled)
+		hndl->enabled = !__scmi_enable_evt(hndl->r_evt,
+						   KEY_XTRACT_SRC_ID(hndl->key),
+						   false);
+	return !hndl->enabled;
+}
+
+/**
+ * scmi_put_handler_unlocked  - Put an event handler
+ *
+ * After having got exclusive access to the registered handlers hashtable,
+ * update the refcount and if @hndl is no more in use by anyone:
+ *
+ *  - ask for events' generation disabling
+ *  - unregister and free the handler itself
+ *
+ *  Assumes all the proper locking has been managed by the caller.
+ *
+ * @ni: A reference to the notification instance to use
+ * @hndl: The event handler to act upon
+ */
+
+static void
+scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
+				struct scmi_event_handler *hndl)
+{
+	if (refcount_dec_and_test(&hndl->users)) {
+		if (likely(!IS_HNDL_PENDING(hndl)))
+			scmi_disable_events(hndl);
+		scmi_free_event_handler(hndl);
+	}
+}
+
+static void scmi_put_handler(struct scmi_notify_instance *ni,
+			     struct scmi_event_handler *hndl)
+{
+	struct scmi_registered_event *r_evt = hndl->r_evt;
+
+	mutex_lock(&ni->pending_mtx);
+	if (r_evt)
+		mutex_lock(&r_evt->proto->registered_mtx);
+
+	scmi_put_handler_unlocked(ni, hndl);
+
+	if (r_evt)
+		mutex_unlock(&r_evt->proto->registered_mtx);
+	mutex_unlock(&ni->pending_mtx);
+}
+
+/**
+ * scmi_event_handler_enable_events  - Enable events associated to an handler
+ *
+ * @hndl: The Event handler to act upon
+ *
+ * Return: True on success
+ */
+static bool scmi_event_handler_enable_events(struct scmi_event_handler *hndl)
+{
+	if (!scmi_enable_events(hndl)) {
+		pr_err("SCMI Notifications: Failed to ENABLE events for key:%X !\n",
+		       hndl->key);
+		return false;
+	}
+
+	return true;
+}
+
+/**
+ * scmi_register_notifier  - Register a notifier_block for an event
+ *
+ * Generic helper to register a notifier_block against a protocol event.
+ *
+ * A notifier_block @nb will be registered for each distinct event identified
+ * by the tuple (proto_id, evt_id, src_id) on a dedicated notification chain
+ * so that:
+ *
+ *	(proto_X, evt_Y, src_Z) --> chain_X_Y_Z
+ *
+ * @src_id meaning is protocol specific and identifies the origin of the event
+ * (like domain_id, sensor_id and so forth).
+ *
+ * @src_id can be NULL to signify that the caller is interested in receiving
+ * notifications from ALL the available sources for that protocol OR simply that
+ * the protocol does not support distinct sources.
+ *
+ * As soon as one user for the specified tuple appears, an handler is created,
+ * and that specific event's generation is enabled at the platform level, unless
+ * an associated registered event is found missing, meaning that the needed
+ * protocol is still to be initialized and the handler has just been registered
+ * as still pending.
+ *
+ * @handle: The handle identifying the platform instance against which the
+ *	    callback is registered
+ * @proto_id: Protocol ID
+ * @evt_id: Event ID
+ * @src_id: Source ID, when NULL register for events coming form ALL possible
+ *	    sources
+ * @nb: A standard notifier block to register for the specified event
+ *
+ * Return: Return 0 on Success
+ */
+static int scmi_register_notifier(const struct scmi_handle *handle,
+				  u8 proto_id, u8 evt_id, u32 *src_id,
+				  struct notifier_block *nb)
+{
+	int ret = 0;
+	u32 evt_key;
+	struct scmi_event_handler *hndl;
+	struct scmi_notify_instance *ni = handle->notify_priv;
+
+	if (unlikely(!ni || !atomic_read(&ni->initialized)))
+		return 0;
+
+	evt_key = MAKE_HASH_KEY(proto_id, evt_id,
+				src_id ? *src_id : SCMI_ALL_SRC_IDS);
+	hndl = scmi_get_or_create_handler(ni, evt_key);
+	if (IS_ERR_OR_NULL(hndl))
+		return PTR_ERR(hndl);
+
+	blocking_notifier_chain_register(&hndl->chain, nb);
+
+	/* Enable events for not pending handlers */
+	if (likely(!IS_HNDL_PENDING(hndl))) {
+		if (!scmi_event_handler_enable_events(hndl)) {
+			scmi_put_handler(ni, hndl);
+			ret = -EINVAL;
+		}
+	}
+
+	return ret;
+}
+
+/**
+ * scmi_unregister_notifier  - Unregister a notifier_block for an event
+ *
+ * Takes care to unregister the provided @nb from the notification chain
+ * associated to the specified event and, if there are no more users for the
+ * event handler, frees also the associated event handler structures.
+ * (this could possibly cause disabling of event's generation at platform level)
+ *
+ * @handle: The handle identifying the platform instance against which the
+ *	    callback is unregistered
+ * @proto_id: Protocol ID
+ * @evt_id: Event ID
+ * @src_id: Source ID
+ * @nb: The notifier_block to unregister
+ *
+ * Return: 0 on Success
+ */
+static int scmi_unregister_notifier(const struct scmi_handle *handle,
+				    u8 proto_id, u8 evt_id, u32 *src_id,
+				    struct notifier_block *nb)
+{
+	u32 evt_key;
+	struct scmi_event_handler *hndl;
+	struct scmi_notify_instance *ni = handle->notify_priv;
+
+	if (unlikely(!ni || !atomic_read(&ni->initialized)))
+		return 0;
+
+	evt_key = MAKE_HASH_KEY(proto_id, evt_id,
+				src_id ? *src_id : SCMI_ALL_SRC_IDS);
+	hndl = scmi_get_handler(ni, evt_key);
+	if (IS_ERR_OR_NULL(hndl))
+		return -EINVAL;
+
+	blocking_notifier_chain_unregister(&hndl->chain, nb);
+	scmi_put_handler(ni, hndl);
+
+	/*
+	 * Free the handler (and stop events) if this happens to be the last
+	 * known user callback for this handler; a possible concurrently ongoing
+	 * run of @scmi_lookup_and_call_event_chain will cause this to happen
+	 * in that context safely instead.
+	 */
+	scmi_put_handler(ni, hndl);
+
+	return 0;
+}
+
+/**
+ * scmi_protocols_late_init  - Worker for late initialization
+ *
+ * This kicks in whenever a new protocol has completed its own registration via
+ * scmi_register_protocol_events(): it is in charge of scanning the table of
+ * pending handlers (registered by users while the related protocol was still
+ * not initialized) and finalizing their initialization whenever possible;
+ * invalid pending handlers are purged at this point in time.
+ *
+ * @work: The work item to use associated to the proper SCMI instance
+ */
+static void scmi_protocols_late_init(struct work_struct *work)
+{
+	int bkt;
+	struct scmi_event_handler *hndl;
+	struct scmi_notify_instance *ni;
+	struct hlist_node *tmp;
+
+	ni = container_of(work, struct scmi_notify_instance, init_work);
+
+	mutex_lock(&ni->pending_mtx);
+	hash_for_each_safe(ni->pending_events_handlers, bkt, tmp, hndl, hash) {
+		bool ret;
+
+		ret = scmi_bind_event_handler(ni, hndl);
+		if (ret) {
+			pr_info("SCMI Notifications: finalized PENDING handler - key:%X\n",
+				hndl->key);
+			ret = scmi_event_handler_enable_events(hndl);
+		} else {
+			ret = scmi_valid_pending_handler(ni, hndl);
+		}
+		if (!ret) {
+			pr_info("SCMI Notifications: purging PENDING handler - key:%X\n",
+				hndl->key);
+			/* this hndl can be only a pending one */
+			scmi_put_handler_unlocked(ni, hndl);
+		}
+	}
+	mutex_unlock(&ni->pending_mtx);
+}
+
+/*
+ * notify_ops are attached to the handle so that can be accessed
+ * directly from an scmi_driver to register its own notifiers.
+ */
+static struct scmi_notify_ops notify_ops = {
+	.register_event_notifier = scmi_register_notifier,
+	.unregister_event_notifier = scmi_unregister_notifier,
+};
+
 /**
  * scmi_notification_init  - Initializes Notification Core Support
  *
@@ -398,7 +1092,13 @@ int scmi_notification_init(struct scmi_handle *handle)
 	if (!ni->registered_protocols)
 		goto err;
 
+	mutex_init(&ni->pending_mtx);
+	hash_init(ni->pending_events_handlers);
+
+	INIT_WORK(&ni->init_work, scmi_protocols_late_init);
+
 	handle->notify_priv = ni;
+	handle->notify_ops = &notify_ops;
 
 	atomic_set(&ni->initialized, 1);
 	atomic_set(&ni->enabled, 1);
diff --git a/drivers/firmware/arm_scmi/notify.h b/drivers/firmware/arm_scmi/notify.h
index a7ece64e8842..f765acda2311 100644
--- a/drivers/firmware/arm_scmi/notify.h
+++ b/drivers/firmware/arm_scmi/notify.h
@@ -9,9 +9,21 @@
 #ifndef _SCMI_NOTIFY_H
 #define _SCMI_NOTIFY_H
 
+#include <linux/bug.h>
 #include <linux/device.h>
 #include <linux/types.h>
 
+#define MAP_EVT_TO_ENABLE_CMD(id, map)			\
+({							\
+	int ret = -1;					\
+							\
+	if (likely((id) < ARRAY_SIZE((map))))		\
+		ret = (map)[(id)];			\
+	else						\
+		WARN(1, "UN-KNOWN evt_id:%d\n", (id));	\
+	ret;						\
+})
+
 /**
  * scmi_event  - Describes an event to be supported
  *
diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
index 0679f10ab05e..797e1e03ae52 100644
--- a/include/linux/scmi_protocol.h
+++ b/include/linux/scmi_protocol.h
@@ -9,6 +9,8 @@
 #define _LINUX_SCMI_PROTOCOL_H
 
 #include <linux/device.h>
+#include <linux/ktime.h>
+#include <linux/notifier.h>
 #include <linux/types.h>
 
 #define SCMI_MAX_STR_SIZE	16
@@ -211,6 +213,52 @@ struct scmi_reset_ops {
 	int (*deassert)(const struct scmi_handle *handle, u32 domain);
 };
 
+/**
+ * scmi_notify_ops  - represents notifications' operations provided by SCMI core
+ *
+ * A user can register/unregister its own notifier_block against the wanted
+ * platform instance regarding the desired event identified by the
+ * tuple: (proto_id, evt_id, src_id)
+ *
+ * @register_event_notifier: Register a notifier_block for the requested event
+ * @unregister_event_notifier: Unregister a notifier_block for the requested
+ *			       event
+ *
+ * where:
+ *
+ * @handle: The handle identifying the platform instance to use
+ * @proto_id: The protocol ID as in SCMI Specification
+ * @evt_id: The message ID of the desired event as in SCMI Specification
+ * @src_id: A pointer to the desired source ID if different sources are
+ *	    possible for the protocol (like domain_id, sensor_id...etc)
+ *
+ * @src_id can be provided as NULL if it simply does NOT make sense for
+ * the protocol at hand, OR if the user is explicitly interested in
+ * receiving notifications from ANY existent source associated to the
+ * specified proto_id / evt_id.
+ *
+ * Received notifications are finally delivered to the registered users,
+ * invoking the callback provided with the notifier_block *nb as follows:
+ *
+ *	int user_cb(nb, evt_id, report)
+ *
+ * with:
+ *
+ * @nb: The notifier block provided by the user
+ * @evt_id: The message ID of the delivered event
+ * @report: A custom struct describing the specific event delivered
+ *
+ * Events' customized report structs are detailed in the following.
+ */
+struct scmi_notify_ops {
+	int (*register_event_notifier)(const struct scmi_handle *handle,
+				       u8 proto_id, u8 evt_id, u32 *src_id,
+				       struct notifier_block *nb);
+	int (*unregister_event_notifier)(const struct scmi_handle *handle,
+					 u8 proto_id, u8 evt_id, u32 *src_id,
+					 struct notifier_block *nb);
+};
+
 /**
  * struct scmi_handle - Handle returned to ARM SCMI clients for usage.
  *
@@ -221,6 +269,7 @@ struct scmi_reset_ops {
  * @clk_ops: pointer to set of clock protocol operations
  * @sensor_ops: pointer to set of sensor protocol operations
  * @reset_ops: pointer to set of reset protocol operations
+ * @notify_ops: pointer to set of notifications related operations
  * @perf_priv: pointer to private data structure specific to performance
  *	protocol(for internal use only)
  * @clk_priv: pointer to private data structure specific to clock
@@ -242,6 +291,7 @@ struct scmi_handle {
 	struct scmi_power_ops *power_ops;
 	struct scmi_sensor_ops *sensor_ops;
 	struct scmi_reset_ops *reset_ops;
+	struct scmi_notify_ops *notify_ops;
 	/* for protocol internal use */
 	void *perf_priv;
 	void *clk_priv;
-- 
2.17.1


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
  2020-03-04 16:25 ` Cristian Marussi
@ 2020-03-04 16:25   ` Cristian Marussi
  -1 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: sudeep.holla, lukasz.luba, james.quinlan, Jonathan.Cameron,
	cristian.marussi

Add core SCMI Notifications dispatch and delivery support logic which is
able, at first, to dispatch well-known received events from the RX ISR to
the dedicated deferred worker, and then, from there, to final deliver the
events to the registered users' callbacks.

Dispatch and delivery is just added here, still not enabled.

Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
V3 --> V4
- dispatcher now handles dequeuing of events in chunks (header+payload):
  handling of these in_flight events let us remove one unneeded memcpy
  on RX interrupt path (scmi_notify)
- deferred dispatcher now access their own per-protocol handlers' table
  reducing locking contention on the RX path
V2 --> V3
- exposing wq in sysfs via WQ_SYSFS
V1 --> V2
- splitted out of V1 patch 04
- moved from IDR maps to real HashTables to store event_handlers
- simplified delivery logic
---
 drivers/firmware/arm_scmi/notify.c | 334 ++++++++++++++++++++++++++++-
 drivers/firmware/arm_scmi/notify.h |   9 +
 2 files changed, 342 insertions(+), 1 deletion(-)

diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
index d6c08cce3c63..0854d48d5886 100644
--- a/drivers/firmware/arm_scmi/notify.c
+++ b/drivers/firmware/arm_scmi/notify.c
@@ -44,6 +44,27 @@
  * as described in the SCMI Protocol specification, while src_id represents an
  * optional, protocol dependent, source identifier (like domain_id, perf_id
  * or sensor_id and so forth).
+ *
+ * Upon reception of a notification message from the platform the SCMI RX ISR
+ * passes the received message payload and some ancillary information (including
+ * an arrival timestamp in nanoseconds) to the core via @scmi_notify() which
+ * pushes the event-data itself on a protocol-dedicated kfifo queue for further
+ * deferred processing as specified in @scmi_events_dispatcher().
+ *
+ * Each protocol has it own dedicated work_struct and worker which, once kicked
+ * by the ISR, takes care to empty its own dedicated queue, deliverying the
+ * queued items into the proper notification-chain: notifications processing can
+ * proceed concurrently on distinct workers only between events belonging to
+ * different protocols while delivery of events within the same protocol is
+ * still strictly sequentially ordered by time of arrival.
+ *
+ * Events' information is then extracted from the SCMI Notification messages and
+ * conveyed, converted into a custom per-event report struct, as the void *data
+ * param to the user callback provided by the registered notifier_block, so that
+ * from the user perspective his callback will look invoked like:
+ *
+ * int user_cb(struct notifier_block *nb, unsigned long event_id, void *report)
+ *
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -64,6 +85,7 @@
 #include <linux/scmi_protocol.h>
 #include <linux/slab.h>
 #include <linux/types.h>
+#include <linux/workqueue.h>
 
 #include "notify.h"
 
@@ -143,6 +165,8 @@
 #define REVT_NOTIFY_ENABLE(revt, ...)	\
 	((revt)->proto->ops->set_notify_enabled((revt)->proto->ni->handle,     \
 						__VA_ARGS__))
+#define REVT_FILL_REPORT(revt, ...)	\
+	((revt)->proto->ops->fill_custom_report(__VA_ARGS__))
 
 struct scmi_registered_protocol_events_desc;
 
@@ -158,6 +182,7 @@ struct scmi_registered_protocol_events_desc;
  *		 and protocols are allowed to register their supported events
  * @enabled: A flag to indicate events can be enabled and start flowing
  * @init_work: A work item to perform final initializations of pending handlers
+ * @notify_wq: A reference to the allocated Kernel cmwq
  * @pending_mtx: A mutex to protect @pending_events_handlers
  * @registered_protocols: An statically allocated array containing pointers to
  *			  all the registered protocol-level specific information
@@ -173,6 +198,8 @@ struct scmi_notify_instance {
 
 	struct work_struct				init_work;
 
+	struct workqueue_struct				*notify_wq;
+
 	struct mutex					pending_mtx;
 	struct scmi_registered_protocol_events_desc	**registered_protocols;
 	DECLARE_HASHTABLE(pending_events_handlers, 8);
@@ -186,11 +213,15 @@ struct scmi_notify_instance {
  * @sz: Size in bytes of the related kfifo
  * @qbuf: Pre-allocated buffer of @sz bytes to be used by the kfifo
  * @kfifo: A dedicated Kernel kfifo descriptor
+ * @notify_work: A custom work item bound to this queue
+ * @wq: A reference to the associated workqueue
  */
 struct events_queue {
 	size_t				sz;
 	u8				*qbuf;
 	struct kfifo			kfifo;
+	struct work_struct		notify_work;
+	struct workqueue_struct		*wq;
 };
 
 /**
@@ -316,8 +347,249 @@ struct scmi_event_handler {
 
 #define IS_HNDL_PENDING(hndl)	((hndl)->r_evt == NULL)
 
+static struct scmi_event_handler *
+scmi_get_active_handler(struct scmi_notify_instance *ni, u32 evt_key);
+static void scmi_put_active_handler(struct scmi_notify_instance *ni,
+				    struct scmi_event_handler *hndl);
 static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
 				      struct scmi_event_handler *hndl);
+
+/**
+ * scmi_lookup_and_call_event_chain  - Lookup the proper chain and call it
+ *
+ * @ni: A reference to the notification instance to use
+ * @evt_key: The key to use to lookup the related notification chain
+ * @report: The customized event-specific report to pass down to the callbacks
+ *	    as their *data parameter.
+ */
+static inline void
+scmi_lookup_and_call_event_chain(struct scmi_notify_instance *ni,
+				 u32 evt_key, void *report)
+{
+	int ret;
+	struct scmi_event_handler *hndl;
+
+	/* Here ensure the event handler cannot vanish while using it */
+	hndl = scmi_get_active_handler(ni, evt_key);
+	if (IS_ERR_OR_NULL(hndl))
+		return;
+
+	ret = blocking_notifier_call_chain(&hndl->chain,
+					   KEY_XTRACT_EVT_ID(evt_key),
+					   report);
+	/* Notifiers are NOT supposed to cut the chain ... */
+	WARN_ON_ONCE(ret & NOTIFY_STOP_MASK);
+
+	scmi_put_active_handler(ni, hndl);
+}
+
+/**
+ * scmi_process_event_header  - Dequeue and process an event header
+ *
+ * Read an event header from the protocol queue into the dedicated scratch
+ * buffer and looks for a matching registered event; in case an anomalously
+ * sized read is detected just flush the queue.
+ *
+ * @eq: The queue to use
+ * @pd: The protocol descriptor to use
+ *
+ * Returns:
+ *  - a reference to the matching registered event when found
+ *  - ERR_PTR(-EINVAL) when NO registered event could be found
+ *  - NULL when the queue is empty
+ */
+static inline struct scmi_registered_event *
+scmi_process_event_header(struct events_queue *eq,
+			  struct scmi_registered_protocol_events_desc *pd)
+{
+	unsigned int outs;
+	struct scmi_registered_event *r_evt;
+
+	outs = kfifo_out(&eq->kfifo, pd->eh,
+			 sizeof(struct scmi_event_header));
+	if (!outs)
+		return NULL;
+	if (outs != sizeof(struct scmi_event_header)) {
+		pr_err("SCMI Notifications: corrupted EVT header. Flush.\n");
+		kfifo_reset_out(&eq->kfifo);
+		return NULL;
+	}
+
+	r_evt = SCMI_GET_REVT_FROM_PD(pd, pd->eh->evt_id);
+	if (!r_evt)
+		r_evt = ERR_PTR(-EINVAL);
+
+	return r_evt;
+}
+
+/**
+ * scmi_process_event_payload  - Dequeue and process an event payload
+ *
+ * Read an event payload from the protocol queue into the dedicated scratch
+ * buffer, fills a custom report and then look for matching event handlers and
+ * call them; skip any unknown event (as marked by scmi_process_event_header())
+ * and in case an anomalously sized read is detected just flush the queue.
+ *
+ * @eq: The queue to use
+ * @pd: The protocol descriptor to use
+ * @r_evt: The registered event descriptor to use
+ *
+ * Return: False when the queue is empty
+ */
+static inline bool
+scmi_process_event_payload(struct events_queue *eq,
+			   struct scmi_registered_protocol_events_desc *pd,
+			   struct scmi_registered_event *r_evt)
+{
+	u32 src_id, key;
+	unsigned int outs;
+	void *report = NULL;
+
+	outs = kfifo_out(&eq->kfifo, pd->eh->payld, pd->eh->payld_sz);
+	if (unlikely(!outs))
+		return false;
+
+	/* Any in-flight event has now been officially processed */
+	pd->in_flight = NULL;
+
+	if (unlikely(outs != pd->eh->payld_sz)) {
+		pr_err("SCMI Notifications: corrupted EVT Payload. Flush.\n");
+		kfifo_reset_out(&eq->kfifo);
+		return false;
+	}
+
+	if (IS_ERR(r_evt)) {
+		pr_warn("SCMI Notifications: SKIP UNKNOWN EVT - proto:%X  evt:%d\n",
+			pd->id, pd->eh->evt_id);
+		return true;
+	}
+
+	report = REVT_FILL_REPORT(r_evt, pd->eh->evt_id, pd->eh->timestamp,
+				  pd->eh->payld, pd->eh->payld_sz,
+				  r_evt->report, &src_id);
+	if (!report) {
+		pr_err("SCMI Notifications: Report not available - proto:%X  evt:%d\n",
+		       pd->id, pd->eh->evt_id);
+		return true;
+	}
+
+	/* At first search for a generic ALL src_ids handler... */
+	key = MAKE_ALL_SRCS_KEY(pd->id, pd->eh->evt_id);
+	scmi_lookup_and_call_event_chain(pd->ni, key, report);
+
+	/* ...then search for any specific src_id */
+	key = MAKE_HASH_KEY(pd->id, pd->eh->evt_id, src_id);
+	scmi_lookup_and_call_event_chain(pd->ni, key, report);
+
+	return true;
+}
+
+/**
+ * scmi_events_dispatcher  - Common worker logic for all work items.
+ *
+ *  1. dequeue one pending RX notification (queued in SCMI RX ISR context)
+ *  2. generate a custom event report from the received event message
+ *  3. lookup for any registered ALL_SRC_IDs handler
+ *     - > call the related notification chain passing in the report
+ *  4. lookup for any registered specific SRC_ID handler
+ *     - > call the related notification chain passing in the report
+ *
+ * Note that:
+ * - a dedicated per-protocol kfifo queue is used: in this way an anomalous
+ *   flood of events cannot saturate other protocols' queues.
+ *
+ * - each per-protocol queue is associated to a distinct work_item, which
+ *   means, in turn, that:
+ *   + all protocols can process their dedicated queues concurrently
+ *     (since notify_wq:max_active != 1)
+ *   + anyway at most one worker instance is allowed to run on the same queue
+ *     concurrently: this ensures that we can have only one concurrent
+ *     reader/writer on the associated kfifo, so that we can use it lock-less
+ *
+ * @work: The work item to use, which is associated to a dedicated events_queue
+ */
+static void scmi_events_dispatcher(struct work_struct *work)
+{
+	struct events_queue *eq;
+	struct scmi_registered_protocol_events_desc *pd;
+	struct scmi_registered_event *r_evt;
+
+	eq = container_of(work, struct events_queue, notify_work);
+	pd = container_of(eq, struct scmi_registered_protocol_events_desc,
+			  equeue);
+	/*
+	 * In order to keep the queue lock-less and the number of memcopies
+	 * to the bare minimum needed, the dispatcher accounts for the
+	 * possibility of per-protocol in-flight events: i.e. an event whose
+	 * reception could end up being split across two subsequent runs of this
+	 * worker, first the header, then the payload.
+	 */
+	do {
+		if (likely(!pd->in_flight)) {
+			r_evt = scmi_process_event_header(eq, pd);
+			if (!r_evt)
+				break;
+			pd->in_flight = r_evt;
+		} else {
+			r_evt = pd->in_flight;
+		}
+	} while (scmi_process_event_payload(eq, pd, r_evt));
+}
+
+/**
+ * scmi_notify  - Queues a notification for further deferred processing
+ *
+ * This is called in interrupt context to queue a received event for
+ * deferred processing.
+ *
+ * @handle: The handle identifying the platform instance from which the
+ *	    dispatched event is generated
+ * @proto_id: Protocol ID
+ * @evt_id: Event ID (msgID)
+ * @buf: Event Message Payload (without the header)
+ * @len: Event Message Payload size
+ * @ts: RX Timestamp in nanoseconds (boottime)
+ *
+ * Return: 0 on Success
+ */
+int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
+		const void *buf, size_t len, u64 ts)
+{
+	struct scmi_registered_event *r_evt;
+	struct scmi_event_header eh;
+	struct scmi_notify_instance *ni = handle->notify_priv;
+
+	/* Ensure atomic value is updated */
+	smp_mb__before_atomic();
+	if (unlikely(!atomic_read(&ni->enabled)))
+		return 0;
+
+	r_evt = SCMI_GET_REVT(ni, proto_id, evt_id);
+	if (unlikely(!r_evt))
+		return -EINVAL;
+
+	if (unlikely(len > r_evt->evt->max_payld_sz)) {
+		pr_err("SCMI Notifications: discard badly sized message\n");
+		return -EINVAL;
+	}
+	if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) <
+		     sizeof(eh) + len)) {
+		pr_warn("SCMI Notifications: queue full dropping proto_id:%d  evt_id:%d  ts:%lld\n",
+			proto_id, evt_id, ts);
+		return -ENOMEM;
+	}
+
+	eh.timestamp = ts;
+	eh.evt_id = evt_id;
+	eh.payld_sz = len;
+	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
+	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
+	queue_work(r_evt->proto->equeue.wq,
+		   &r_evt->proto->equeue.notify_work);
+
+	return 0;
+}
+
 /**
  * scmi_initialize_events_queue  - Allocate/Initialize a kfifo buffer
  *
@@ -332,12 +604,21 @@ static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
 static int scmi_initialize_events_queue(struct scmi_notify_instance *ni,
 					struct events_queue *equeue, size_t sz)
 {
+	int ret = 0;
+
 	equeue->qbuf = devm_kzalloc(ni->handle->dev, sz, GFP_KERNEL);
 	if (!equeue->qbuf)
 		return -ENOMEM;
 	equeue->sz = sz;
 
-	return kfifo_init(&equeue->kfifo, equeue->qbuf, equeue->sz);
+	ret = kfifo_init(&equeue->kfifo, equeue->qbuf, equeue->sz);
+	if (ret)
+		return ret;
+
+	INIT_WORK(&equeue->notify_work, scmi_events_dispatcher);
+	equeue->wq = ni->notify_wq;
+
+	return ret;
 }
 
 /**
@@ -740,6 +1021,38 @@ scmi_get_or_create_handler(struct scmi_notify_instance *ni, u32 evt_key)
 	return __scmi_event_handler_get_ops(ni, evt_key, true);
 }
 
+/**
+ * scmi_get_active_handler  - Helper to get active handlers only
+ *
+ * Search for the desired handler matching the key only in the per-protocol
+ * table of registered handlers: this is called only from the dispatching path
+ * so want to be as quick as possible and do not care about pending.
+ *
+ * @ni: A reference to the notification instance to use
+ * @evt_key: The event key to use
+ *
+ * Return: A properly refcounted active handler
+ */
+static struct scmi_event_handler *
+scmi_get_active_handler(struct scmi_notify_instance *ni, u32 evt_key)
+{
+	struct scmi_registered_event *r_evt;
+	struct scmi_event_handler *hndl = NULL;
+
+	r_evt = SCMI_GET_REVT(ni, KEY_XTRACT_PROTO_ID(evt_key),
+			      KEY_XTRACT_EVT_ID(evt_key));
+	if (likely(r_evt)) {
+		mutex_lock(&r_evt->proto->registered_mtx);
+		hndl = KEY_FIND(r_evt->proto->registered_events_handlers,
+				hndl, evt_key);
+		if (likely(hndl))
+			refcount_inc(&hndl->users);
+		mutex_unlock(&r_evt->proto->registered_mtx);
+	}
+
+	return hndl;
+}
+
 /**
  * __scmi_enable_evt  - Enable/disable events generation
  *
@@ -861,6 +1174,16 @@ static void scmi_put_handler(struct scmi_notify_instance *ni,
 	mutex_unlock(&ni->pending_mtx);
 }
 
+static void scmi_put_active_handler(struct scmi_notify_instance *ni,
+					  struct scmi_event_handler *hndl)
+{
+	struct scmi_registered_event *r_evt = hndl->r_evt;
+
+	mutex_lock(&r_evt->proto->registered_mtx);
+	scmi_put_handler_unlocked(ni, hndl);
+	mutex_unlock(&r_evt->proto->registered_mtx);
+}
+
 /**
  * scmi_event_handler_enable_events  - Enable events associated to an handler
  *
@@ -1087,6 +1410,12 @@ int scmi_notification_init(struct scmi_handle *handle)
 	ni->gid = gid;
 	ni->handle = handle;
 
+	ni->notify_wq = alloc_workqueue("scmi_notify",
+					WQ_UNBOUND | WQ_FREEZABLE | WQ_SYSFS,
+					0);
+	if (!ni->notify_wq)
+		goto err;
+
 	ni->registered_protocols = devm_kcalloc(handle->dev, SCMI_MAX_PROTO,
 						sizeof(char *), GFP_KERNEL);
 	if (!ni->registered_protocols)
@@ -1133,6 +1462,9 @@ void scmi_notification_exit(struct scmi_handle *handle)
 	/* Ensure atomic values are updated */
 	smp_mb__after_atomic();
 
+	/* Destroy while letting pending work complete */
+	destroy_workqueue(ni->notify_wq);
+
 	devres_release_group(ni->handle->dev, ni->gid);
 
 	pr_info("SCMI Notifications Core Shutdown.\n");
diff --git a/drivers/firmware/arm_scmi/notify.h b/drivers/firmware/arm_scmi/notify.h
index f765acda2311..6cd386649d5a 100644
--- a/drivers/firmware/arm_scmi/notify.h
+++ b/drivers/firmware/arm_scmi/notify.h
@@ -51,10 +51,17 @@ struct scmi_event {
  *			using the proper custom protocol commands.
  *			Return true if at least one the required src_id
  *			has been successfully enabled/disabled
+ * @fill_custom_report: fills a custom event report from the provided
+ *			event message payld identifying the event
+ *			specific src_id.
+ *			Return NULL on failure otherwise @report now fully
+ *			populated
  */
 struct scmi_protocol_event_ops {
 	bool (*set_notify_enabled)(const struct scmi_handle *handle,
 				   u8 evt_id, u32 src_id, bool enabled);
+	void *(*fill_custom_report)(u8 evt_id, u64 timestamp, const void *payld,
+				    size_t payld_sz, void *report, u32 *src_id);
 };
 
 int scmi_notification_init(struct scmi_handle *handle);
@@ -65,5 +72,7 @@ int scmi_register_protocol_events(const struct scmi_handle *handle,
 				  const struct scmi_protocol_event_ops *ops,
 				  const struct scmi_event *evt, int num_events,
 				  int num_sources);
+int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
+		const void *buf, size_t len, u64 ts);
 
 #endif /* _SCMI_NOTIFY_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
@ 2020-03-04 16:25   ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: Jonathan.Cameron, cristian.marussi, james.quinlan, lukasz.luba,
	sudeep.holla

Add core SCMI Notifications dispatch and delivery support logic which is
able, at first, to dispatch well-known received events from the RX ISR to
the dedicated deferred worker, and then, from there, to final deliver the
events to the registered users' callbacks.

Dispatch and delivery is just added here, still not enabled.

Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
V3 --> V4
- dispatcher now handles dequeuing of events in chunks (header+payload):
  handling of these in_flight events let us remove one unneeded memcpy
  on RX interrupt path (scmi_notify)
- deferred dispatcher now access their own per-protocol handlers' table
  reducing locking contention on the RX path
V2 --> V3
- exposing wq in sysfs via WQ_SYSFS
V1 --> V2
- splitted out of V1 patch 04
- moved from IDR maps to real HashTables to store event_handlers
- simplified delivery logic
---
 drivers/firmware/arm_scmi/notify.c | 334 ++++++++++++++++++++++++++++-
 drivers/firmware/arm_scmi/notify.h |   9 +
 2 files changed, 342 insertions(+), 1 deletion(-)

diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
index d6c08cce3c63..0854d48d5886 100644
--- a/drivers/firmware/arm_scmi/notify.c
+++ b/drivers/firmware/arm_scmi/notify.c
@@ -44,6 +44,27 @@
  * as described in the SCMI Protocol specification, while src_id represents an
  * optional, protocol dependent, source identifier (like domain_id, perf_id
  * or sensor_id and so forth).
+ *
+ * Upon reception of a notification message from the platform the SCMI RX ISR
+ * passes the received message payload and some ancillary information (including
+ * an arrival timestamp in nanoseconds) to the core via @scmi_notify() which
+ * pushes the event-data itself on a protocol-dedicated kfifo queue for further
+ * deferred processing as specified in @scmi_events_dispatcher().
+ *
+ * Each protocol has it own dedicated work_struct and worker which, once kicked
+ * by the ISR, takes care to empty its own dedicated queue, deliverying the
+ * queued items into the proper notification-chain: notifications processing can
+ * proceed concurrently on distinct workers only between events belonging to
+ * different protocols while delivery of events within the same protocol is
+ * still strictly sequentially ordered by time of arrival.
+ *
+ * Events' information is then extracted from the SCMI Notification messages and
+ * conveyed, converted into a custom per-event report struct, as the void *data
+ * param to the user callback provided by the registered notifier_block, so that
+ * from the user perspective his callback will look invoked like:
+ *
+ * int user_cb(struct notifier_block *nb, unsigned long event_id, void *report)
+ *
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -64,6 +85,7 @@
 #include <linux/scmi_protocol.h>
 #include <linux/slab.h>
 #include <linux/types.h>
+#include <linux/workqueue.h>
 
 #include "notify.h"
 
@@ -143,6 +165,8 @@
 #define REVT_NOTIFY_ENABLE(revt, ...)	\
 	((revt)->proto->ops->set_notify_enabled((revt)->proto->ni->handle,     \
 						__VA_ARGS__))
+#define REVT_FILL_REPORT(revt, ...)	\
+	((revt)->proto->ops->fill_custom_report(__VA_ARGS__))
 
 struct scmi_registered_protocol_events_desc;
 
@@ -158,6 +182,7 @@ struct scmi_registered_protocol_events_desc;
  *		 and protocols are allowed to register their supported events
  * @enabled: A flag to indicate events can be enabled and start flowing
  * @init_work: A work item to perform final initializations of pending handlers
+ * @notify_wq: A reference to the allocated Kernel cmwq
  * @pending_mtx: A mutex to protect @pending_events_handlers
  * @registered_protocols: An statically allocated array containing pointers to
  *			  all the registered protocol-level specific information
@@ -173,6 +198,8 @@ struct scmi_notify_instance {
 
 	struct work_struct				init_work;
 
+	struct workqueue_struct				*notify_wq;
+
 	struct mutex					pending_mtx;
 	struct scmi_registered_protocol_events_desc	**registered_protocols;
 	DECLARE_HASHTABLE(pending_events_handlers, 8);
@@ -186,11 +213,15 @@ struct scmi_notify_instance {
  * @sz: Size in bytes of the related kfifo
  * @qbuf: Pre-allocated buffer of @sz bytes to be used by the kfifo
  * @kfifo: A dedicated Kernel kfifo descriptor
+ * @notify_work: A custom work item bound to this queue
+ * @wq: A reference to the associated workqueue
  */
 struct events_queue {
 	size_t				sz;
 	u8				*qbuf;
 	struct kfifo			kfifo;
+	struct work_struct		notify_work;
+	struct workqueue_struct		*wq;
 };
 
 /**
@@ -316,8 +347,249 @@ struct scmi_event_handler {
 
 #define IS_HNDL_PENDING(hndl)	((hndl)->r_evt == NULL)
 
+static struct scmi_event_handler *
+scmi_get_active_handler(struct scmi_notify_instance *ni, u32 evt_key);
+static void scmi_put_active_handler(struct scmi_notify_instance *ni,
+				    struct scmi_event_handler *hndl);
 static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
 				      struct scmi_event_handler *hndl);
+
+/**
+ * scmi_lookup_and_call_event_chain  - Lookup the proper chain and call it
+ *
+ * @ni: A reference to the notification instance to use
+ * @evt_key: The key to use to lookup the related notification chain
+ * @report: The customized event-specific report to pass down to the callbacks
+ *	    as their *data parameter.
+ */
+static inline void
+scmi_lookup_and_call_event_chain(struct scmi_notify_instance *ni,
+				 u32 evt_key, void *report)
+{
+	int ret;
+	struct scmi_event_handler *hndl;
+
+	/* Here ensure the event handler cannot vanish while using it */
+	hndl = scmi_get_active_handler(ni, evt_key);
+	if (IS_ERR_OR_NULL(hndl))
+		return;
+
+	ret = blocking_notifier_call_chain(&hndl->chain,
+					   KEY_XTRACT_EVT_ID(evt_key),
+					   report);
+	/* Notifiers are NOT supposed to cut the chain ... */
+	WARN_ON_ONCE(ret & NOTIFY_STOP_MASK);
+
+	scmi_put_active_handler(ni, hndl);
+}
+
+/**
+ * scmi_process_event_header  - Dequeue and process an event header
+ *
+ * Read an event header from the protocol queue into the dedicated scratch
+ * buffer and looks for a matching registered event; in case an anomalously
+ * sized read is detected just flush the queue.
+ *
+ * @eq: The queue to use
+ * @pd: The protocol descriptor to use
+ *
+ * Returns:
+ *  - a reference to the matching registered event when found
+ *  - ERR_PTR(-EINVAL) when NO registered event could be found
+ *  - NULL when the queue is empty
+ */
+static inline struct scmi_registered_event *
+scmi_process_event_header(struct events_queue *eq,
+			  struct scmi_registered_protocol_events_desc *pd)
+{
+	unsigned int outs;
+	struct scmi_registered_event *r_evt;
+
+	outs = kfifo_out(&eq->kfifo, pd->eh,
+			 sizeof(struct scmi_event_header));
+	if (!outs)
+		return NULL;
+	if (outs != sizeof(struct scmi_event_header)) {
+		pr_err("SCMI Notifications: corrupted EVT header. Flush.\n");
+		kfifo_reset_out(&eq->kfifo);
+		return NULL;
+	}
+
+	r_evt = SCMI_GET_REVT_FROM_PD(pd, pd->eh->evt_id);
+	if (!r_evt)
+		r_evt = ERR_PTR(-EINVAL);
+
+	return r_evt;
+}
+
+/**
+ * scmi_process_event_payload  - Dequeue and process an event payload
+ *
+ * Read an event payload from the protocol queue into the dedicated scratch
+ * buffer, fills a custom report and then look for matching event handlers and
+ * call them; skip any unknown event (as marked by scmi_process_event_header())
+ * and in case an anomalously sized read is detected just flush the queue.
+ *
+ * @eq: The queue to use
+ * @pd: The protocol descriptor to use
+ * @r_evt: The registered event descriptor to use
+ *
+ * Return: False when the queue is empty
+ */
+static inline bool
+scmi_process_event_payload(struct events_queue *eq,
+			   struct scmi_registered_protocol_events_desc *pd,
+			   struct scmi_registered_event *r_evt)
+{
+	u32 src_id, key;
+	unsigned int outs;
+	void *report = NULL;
+
+	outs = kfifo_out(&eq->kfifo, pd->eh->payld, pd->eh->payld_sz);
+	if (unlikely(!outs))
+		return false;
+
+	/* Any in-flight event has now been officially processed */
+	pd->in_flight = NULL;
+
+	if (unlikely(outs != pd->eh->payld_sz)) {
+		pr_err("SCMI Notifications: corrupted EVT Payload. Flush.\n");
+		kfifo_reset_out(&eq->kfifo);
+		return false;
+	}
+
+	if (IS_ERR(r_evt)) {
+		pr_warn("SCMI Notifications: SKIP UNKNOWN EVT - proto:%X  evt:%d\n",
+			pd->id, pd->eh->evt_id);
+		return true;
+	}
+
+	report = REVT_FILL_REPORT(r_evt, pd->eh->evt_id, pd->eh->timestamp,
+				  pd->eh->payld, pd->eh->payld_sz,
+				  r_evt->report, &src_id);
+	if (!report) {
+		pr_err("SCMI Notifications: Report not available - proto:%X  evt:%d\n",
+		       pd->id, pd->eh->evt_id);
+		return true;
+	}
+
+	/* At first search for a generic ALL src_ids handler... */
+	key = MAKE_ALL_SRCS_KEY(pd->id, pd->eh->evt_id);
+	scmi_lookup_and_call_event_chain(pd->ni, key, report);
+
+	/* ...then search for any specific src_id */
+	key = MAKE_HASH_KEY(pd->id, pd->eh->evt_id, src_id);
+	scmi_lookup_and_call_event_chain(pd->ni, key, report);
+
+	return true;
+}
+
+/**
+ * scmi_events_dispatcher  - Common worker logic for all work items.
+ *
+ *  1. dequeue one pending RX notification (queued in SCMI RX ISR context)
+ *  2. generate a custom event report from the received event message
+ *  3. lookup for any registered ALL_SRC_IDs handler
+ *     - > call the related notification chain passing in the report
+ *  4. lookup for any registered specific SRC_ID handler
+ *     - > call the related notification chain passing in the report
+ *
+ * Note that:
+ * - a dedicated per-protocol kfifo queue is used: in this way an anomalous
+ *   flood of events cannot saturate other protocols' queues.
+ *
+ * - each per-protocol queue is associated to a distinct work_item, which
+ *   means, in turn, that:
+ *   + all protocols can process their dedicated queues concurrently
+ *     (since notify_wq:max_active != 1)
+ *   + anyway at most one worker instance is allowed to run on the same queue
+ *     concurrently: this ensures that we can have only one concurrent
+ *     reader/writer on the associated kfifo, so that we can use it lock-less
+ *
+ * @work: The work item to use, which is associated to a dedicated events_queue
+ */
+static void scmi_events_dispatcher(struct work_struct *work)
+{
+	struct events_queue *eq;
+	struct scmi_registered_protocol_events_desc *pd;
+	struct scmi_registered_event *r_evt;
+
+	eq = container_of(work, struct events_queue, notify_work);
+	pd = container_of(eq, struct scmi_registered_protocol_events_desc,
+			  equeue);
+	/*
+	 * In order to keep the queue lock-less and the number of memcopies
+	 * to the bare minimum needed, the dispatcher accounts for the
+	 * possibility of per-protocol in-flight events: i.e. an event whose
+	 * reception could end up being split across two subsequent runs of this
+	 * worker, first the header, then the payload.
+	 */
+	do {
+		if (likely(!pd->in_flight)) {
+			r_evt = scmi_process_event_header(eq, pd);
+			if (!r_evt)
+				break;
+			pd->in_flight = r_evt;
+		} else {
+			r_evt = pd->in_flight;
+		}
+	} while (scmi_process_event_payload(eq, pd, r_evt));
+}
+
+/**
+ * scmi_notify  - Queues a notification for further deferred processing
+ *
+ * This is called in interrupt context to queue a received event for
+ * deferred processing.
+ *
+ * @handle: The handle identifying the platform instance from which the
+ *	    dispatched event is generated
+ * @proto_id: Protocol ID
+ * @evt_id: Event ID (msgID)
+ * @buf: Event Message Payload (without the header)
+ * @len: Event Message Payload size
+ * @ts: RX Timestamp in nanoseconds (boottime)
+ *
+ * Return: 0 on Success
+ */
+int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
+		const void *buf, size_t len, u64 ts)
+{
+	struct scmi_registered_event *r_evt;
+	struct scmi_event_header eh;
+	struct scmi_notify_instance *ni = handle->notify_priv;
+
+	/* Ensure atomic value is updated */
+	smp_mb__before_atomic();
+	if (unlikely(!atomic_read(&ni->enabled)))
+		return 0;
+
+	r_evt = SCMI_GET_REVT(ni, proto_id, evt_id);
+	if (unlikely(!r_evt))
+		return -EINVAL;
+
+	if (unlikely(len > r_evt->evt->max_payld_sz)) {
+		pr_err("SCMI Notifications: discard badly sized message\n");
+		return -EINVAL;
+	}
+	if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) <
+		     sizeof(eh) + len)) {
+		pr_warn("SCMI Notifications: queue full dropping proto_id:%d  evt_id:%d  ts:%lld\n",
+			proto_id, evt_id, ts);
+		return -ENOMEM;
+	}
+
+	eh.timestamp = ts;
+	eh.evt_id = evt_id;
+	eh.payld_sz = len;
+	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
+	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
+	queue_work(r_evt->proto->equeue.wq,
+		   &r_evt->proto->equeue.notify_work);
+
+	return 0;
+}
+
 /**
  * scmi_initialize_events_queue  - Allocate/Initialize a kfifo buffer
  *
@@ -332,12 +604,21 @@ static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
 static int scmi_initialize_events_queue(struct scmi_notify_instance *ni,
 					struct events_queue *equeue, size_t sz)
 {
+	int ret = 0;
+
 	equeue->qbuf = devm_kzalloc(ni->handle->dev, sz, GFP_KERNEL);
 	if (!equeue->qbuf)
 		return -ENOMEM;
 	equeue->sz = sz;
 
-	return kfifo_init(&equeue->kfifo, equeue->qbuf, equeue->sz);
+	ret = kfifo_init(&equeue->kfifo, equeue->qbuf, equeue->sz);
+	if (ret)
+		return ret;
+
+	INIT_WORK(&equeue->notify_work, scmi_events_dispatcher);
+	equeue->wq = ni->notify_wq;
+
+	return ret;
 }
 
 /**
@@ -740,6 +1021,38 @@ scmi_get_or_create_handler(struct scmi_notify_instance *ni, u32 evt_key)
 	return __scmi_event_handler_get_ops(ni, evt_key, true);
 }
 
+/**
+ * scmi_get_active_handler  - Helper to get active handlers only
+ *
+ * Search for the desired handler matching the key only in the per-protocol
+ * table of registered handlers: this is called only from the dispatching path
+ * so want to be as quick as possible and do not care about pending.
+ *
+ * @ni: A reference to the notification instance to use
+ * @evt_key: The event key to use
+ *
+ * Return: A properly refcounted active handler
+ */
+static struct scmi_event_handler *
+scmi_get_active_handler(struct scmi_notify_instance *ni, u32 evt_key)
+{
+	struct scmi_registered_event *r_evt;
+	struct scmi_event_handler *hndl = NULL;
+
+	r_evt = SCMI_GET_REVT(ni, KEY_XTRACT_PROTO_ID(evt_key),
+			      KEY_XTRACT_EVT_ID(evt_key));
+	if (likely(r_evt)) {
+		mutex_lock(&r_evt->proto->registered_mtx);
+		hndl = KEY_FIND(r_evt->proto->registered_events_handlers,
+				hndl, evt_key);
+		if (likely(hndl))
+			refcount_inc(&hndl->users);
+		mutex_unlock(&r_evt->proto->registered_mtx);
+	}
+
+	return hndl;
+}
+
 /**
  * __scmi_enable_evt  - Enable/disable events generation
  *
@@ -861,6 +1174,16 @@ static void scmi_put_handler(struct scmi_notify_instance *ni,
 	mutex_unlock(&ni->pending_mtx);
 }
 
+static void scmi_put_active_handler(struct scmi_notify_instance *ni,
+					  struct scmi_event_handler *hndl)
+{
+	struct scmi_registered_event *r_evt = hndl->r_evt;
+
+	mutex_lock(&r_evt->proto->registered_mtx);
+	scmi_put_handler_unlocked(ni, hndl);
+	mutex_unlock(&r_evt->proto->registered_mtx);
+}
+
 /**
  * scmi_event_handler_enable_events  - Enable events associated to an handler
  *
@@ -1087,6 +1410,12 @@ int scmi_notification_init(struct scmi_handle *handle)
 	ni->gid = gid;
 	ni->handle = handle;
 
+	ni->notify_wq = alloc_workqueue("scmi_notify",
+					WQ_UNBOUND | WQ_FREEZABLE | WQ_SYSFS,
+					0);
+	if (!ni->notify_wq)
+		goto err;
+
 	ni->registered_protocols = devm_kcalloc(handle->dev, SCMI_MAX_PROTO,
 						sizeof(char *), GFP_KERNEL);
 	if (!ni->registered_protocols)
@@ -1133,6 +1462,9 @@ void scmi_notification_exit(struct scmi_handle *handle)
 	/* Ensure atomic values are updated */
 	smp_mb__after_atomic();
 
+	/* Destroy while letting pending work complete */
+	destroy_workqueue(ni->notify_wq);
+
 	devres_release_group(ni->handle->dev, ni->gid);
 
 	pr_info("SCMI Notifications Core Shutdown.\n");
diff --git a/drivers/firmware/arm_scmi/notify.h b/drivers/firmware/arm_scmi/notify.h
index f765acda2311..6cd386649d5a 100644
--- a/drivers/firmware/arm_scmi/notify.h
+++ b/drivers/firmware/arm_scmi/notify.h
@@ -51,10 +51,17 @@ struct scmi_event {
  *			using the proper custom protocol commands.
  *			Return true if at least one the required src_id
  *			has been successfully enabled/disabled
+ * @fill_custom_report: fills a custom event report from the provided
+ *			event message payld identifying the event
+ *			specific src_id.
+ *			Return NULL on failure otherwise @report now fully
+ *			populated
  */
 struct scmi_protocol_event_ops {
 	bool (*set_notify_enabled)(const struct scmi_handle *handle,
 				   u8 evt_id, u32 src_id, bool enabled);
+	void *(*fill_custom_report)(u8 evt_id, u64 timestamp, const void *payld,
+				    size_t payld_sz, void *report, u32 *src_id);
 };
 
 int scmi_notification_init(struct scmi_handle *handle);
@@ -65,5 +72,7 @@ int scmi_register_protocol_events(const struct scmi_handle *handle,
 				  const struct scmi_protocol_event_ops *ops,
 				  const struct scmi_event *evt, int num_events,
 				  int num_sources);
+int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
+		const void *buf, size_t len, u64 ts);
 
 #endif /* _SCMI_NOTIFY_H */
-- 
2.17.1


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 08/13] firmware: arm_scmi: Enable notification core
  2020-03-04 16:25 ` Cristian Marussi
@ 2020-03-04 16:25   ` Cristian Marussi
  -1 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: sudeep.holla, lukasz.luba, james.quinlan, Jonathan.Cameron,
	cristian.marussi

Initialize and enable SCMI Notifications core support during bus/driver
probe phase, so that protocols can start registering their supported
events during their initialization.

Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
V3 --> V4
- simplified core initialization: protocols events' registrations is now
  disjoint from users' callback registrations, so that events' generation
  can be enabled earlier for registered events and delayed for pending
  ones in order to support deferred (or missing) protocol initialization
V2 --> V3
- reviewed core initialization: all implemented protocols must complete
  their protocol-events registration phases before notifications can be
  enabled as a whole; in the meantime any user's callback registration
  requests possibly issued while the notifications were not enabled
  remain pending: a dedicated worker completes the handlers registration
  once all protocols have been initialized.
  NOTE THAT this can lead to ISSUES with late inserted or missing SCMI
  modules (i.e. for protocols defined in the DT and implemented by the
  platform but lazily loaded or not loaded at all.), since in these
  scenarios notifications dispatching will be enabled later or never.
- reviewed core exit: protocol users (devices) are accounted on probe/
  remove, and protocols' events are unregisteredonce last user go
  (can happen only at shutdown)
V1 --> V2
- added timestamping
- moved notification init/exit and using devres
---
 drivers/firmware/arm_scmi/driver.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
index 868cc36a07c9..5c43d82e3260 100644
--- a/drivers/firmware/arm_scmi/driver.c
+++ b/drivers/firmware/arm_scmi/driver.c
@@ -26,6 +26,7 @@
 #include <linux/slab.h>
 
 #include "common.h"
+#include "notify.h"
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/scmi.h>
@@ -204,11 +205,13 @@ __scmi_xfer_put(struct scmi_xfers_info *minfo, struct scmi_xfer *xfer)
 
 static void scmi_handle_notification(struct scmi_chan_info *cinfo, u32 msg_hdr)
 {
+	u64 ts;
 	struct scmi_xfer *xfer;
 	struct device *dev = cinfo->dev;
 	struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
 	struct scmi_xfers_info *minfo = &info->rx_minfo;
 
+	ts = ktime_get_boottime_ns();
 	xfer = scmi_xfer_get(cinfo->handle, minfo);
 	if (IS_ERR(xfer)) {
 		dev_err(dev, "failed to get free message slot (%ld)\n",
@@ -221,6 +224,8 @@ static void scmi_handle_notification(struct scmi_chan_info *cinfo, u32 msg_hdr)
 	scmi_dump_header_dbg(dev, &xfer->hdr);
 	info->desc->ops->fetch_notification(cinfo, info->desc->max_msg_size,
 					    xfer);
+	scmi_notify(cinfo->handle, xfer->hdr.protocol_id,
+		    xfer->hdr.id, xfer->rx.buf, xfer->rx.len, ts);
 
 	trace_scmi_rx_done(xfer->transfer_id, xfer->hdr.id,
 			   xfer->hdr.protocol_id, xfer->hdr.seq,
@@ -771,6 +776,9 @@ static int scmi_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;
 
+	if (scmi_notification_init(handle))
+		dev_err(dev, "SCMI Notifications NOT available.\n");
+
 	ret = scmi_base_protocol_init(handle);
 	if (ret) {
 		dev_err(dev, "unable to communicate with SCMI(%d)\n", ret);
@@ -813,6 +821,8 @@ static int scmi_remove(struct platform_device *pdev)
 	struct scmi_info *info = platform_get_drvdata(pdev);
 	struct idr *idr = &info->tx_idr;
 
+	scmi_notification_exit(&info->handle);
+
 	mutex_lock(&scmi_list_mutex);
 	if (info->users)
 		ret = -EBUSY;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 08/13] firmware: arm_scmi: Enable notification core
@ 2020-03-04 16:25   ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: Jonathan.Cameron, cristian.marussi, james.quinlan, lukasz.luba,
	sudeep.holla

Initialize and enable SCMI Notifications core support during bus/driver
probe phase, so that protocols can start registering their supported
events during their initialization.

Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
V3 --> V4
- simplified core initialization: protocols events' registrations is now
  disjoint from users' callback registrations, so that events' generation
  can be enabled earlier for registered events and delayed for pending
  ones in order to support deferred (or missing) protocol initialization
V2 --> V3
- reviewed core initialization: all implemented protocols must complete
  their protocol-events registration phases before notifications can be
  enabled as a whole; in the meantime any user's callback registration
  requests possibly issued while the notifications were not enabled
  remain pending: a dedicated worker completes the handlers registration
  once all protocols have been initialized.
  NOTE THAT this can lead to ISSUES with late inserted or missing SCMI
  modules (i.e. for protocols defined in the DT and implemented by the
  platform but lazily loaded or not loaded at all.), since in these
  scenarios notifications dispatching will be enabled later or never.
- reviewed core exit: protocol users (devices) are accounted on probe/
  remove, and protocols' events are unregisteredonce last user go
  (can happen only at shutdown)
V1 --> V2
- added timestamping
- moved notification init/exit and using devres
---
 drivers/firmware/arm_scmi/driver.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
index 868cc36a07c9..5c43d82e3260 100644
--- a/drivers/firmware/arm_scmi/driver.c
+++ b/drivers/firmware/arm_scmi/driver.c
@@ -26,6 +26,7 @@
 #include <linux/slab.h>
 
 #include "common.h"
+#include "notify.h"
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/scmi.h>
@@ -204,11 +205,13 @@ __scmi_xfer_put(struct scmi_xfers_info *minfo, struct scmi_xfer *xfer)
 
 static void scmi_handle_notification(struct scmi_chan_info *cinfo, u32 msg_hdr)
 {
+	u64 ts;
 	struct scmi_xfer *xfer;
 	struct device *dev = cinfo->dev;
 	struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
 	struct scmi_xfers_info *minfo = &info->rx_minfo;
 
+	ts = ktime_get_boottime_ns();
 	xfer = scmi_xfer_get(cinfo->handle, minfo);
 	if (IS_ERR(xfer)) {
 		dev_err(dev, "failed to get free message slot (%ld)\n",
@@ -221,6 +224,8 @@ static void scmi_handle_notification(struct scmi_chan_info *cinfo, u32 msg_hdr)
 	scmi_dump_header_dbg(dev, &xfer->hdr);
 	info->desc->ops->fetch_notification(cinfo, info->desc->max_msg_size,
 					    xfer);
+	scmi_notify(cinfo->handle, xfer->hdr.protocol_id,
+		    xfer->hdr.id, xfer->rx.buf, xfer->rx.len, ts);
 
 	trace_scmi_rx_done(xfer->transfer_id, xfer->hdr.id,
 			   xfer->hdr.protocol_id, xfer->hdr.seq,
@@ -771,6 +776,9 @@ static int scmi_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;
 
+	if (scmi_notification_init(handle))
+		dev_err(dev, "SCMI Notifications NOT available.\n");
+
 	ret = scmi_base_protocol_init(handle);
 	if (ret) {
 		dev_err(dev, "unable to communicate with SCMI(%d)\n", ret);
@@ -813,6 +821,8 @@ static int scmi_remove(struct platform_device *pdev)
 	struct scmi_info *info = platform_get_drvdata(pdev);
 	struct idr *idr = &info->tx_idr;
 
+	scmi_notification_exit(&info->handle);
+
 	mutex_lock(&scmi_list_mutex);
 	if (info->users)
 		ret = -EBUSY;
-- 
2.17.1


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 09/13] firmware: arm_scmi: Add Power notifications support
  2020-03-04 16:25 ` Cristian Marussi
@ 2020-03-04 16:25   ` Cristian Marussi
  -1 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: sudeep.holla, lukasz.luba, james.quinlan, Jonathan.Cameron,
	cristian.marussi

Make SCMI Power protocol register with the notification core.

Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
V3 --> V4
- scmi_event field renamed
V2 --> V3
- added handle awareness
V1 --> V2
- simplified .set_notify_enabled() implementation moving the ALL_SRCIDs
  logic out of protocol. ALL_SRCIDs logic is now in charge of the
  notification core, together with proper reference counting of enables
- switched to devres protocol-registration
---
 drivers/firmware/arm_scmi/power.c | 123 ++++++++++++++++++++++++++++++
 include/linux/scmi_protocol.h     |  15 ++++
 2 files changed, 138 insertions(+)

diff --git a/drivers/firmware/arm_scmi/power.c b/drivers/firmware/arm_scmi/power.c
index cf7f0312381b..281da7e7e33a 100644
--- a/drivers/firmware/arm_scmi/power.c
+++ b/drivers/firmware/arm_scmi/power.c
@@ -6,6 +6,7 @@
  */
 
 #include "common.h"
+#include "notify.h"
 
 enum scmi_power_protocol_cmd {
 	POWER_DOMAIN_ATTRIBUTES = 0x3,
@@ -48,6 +49,12 @@ struct scmi_power_state_notify {
 	__le32 notify_enable;
 };
 
+struct scmi_power_state_notify_payld {
+	__le32 agent_id;
+	__le32 domain_id;
+	__le32 power_state;
+};
+
 struct power_dom_info {
 	bool state_set_sync;
 	bool state_set_async;
@@ -63,6 +70,11 @@ struct scmi_power_info {
 	struct power_dom_info *dom_info;
 };
 
+static enum scmi_power_protocol_cmd evt_2_cmd[] = {
+	POWER_STATE_NOTIFY,
+	POWER_STATE_CHANGE_REQUESTED_NOTIFY,
+};
+
 static int scmi_power_attributes_get(const struct scmi_handle *handle,
 				     struct scmi_power_info *pi)
 {
@@ -186,6 +198,111 @@ static struct scmi_power_ops power_ops = {
 	.state_get = scmi_power_state_get,
 };
 
+static int scmi_power_request_notify(const struct scmi_handle *handle,
+				     u32 domain, int message_id, bool enable)
+{
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_power_state_notify *notify;
+
+	ret = scmi_xfer_get_init(handle, message_id, SCMI_PROTOCOL_POWER,
+				 sizeof(*notify), 0, &t);
+	if (ret)
+		return ret;
+
+	notify = t->tx.buf;
+	notify->domain = cpu_to_le32(domain);
+	notify->notify_enable = enable ? cpu_to_le32(BIT(0)) : 0;
+
+	ret = scmi_do_xfer(handle, t);
+
+	scmi_xfer_put(handle, t);
+	return ret;
+}
+
+static bool scmi_power_set_notify_enabled(const struct scmi_handle *handle,
+					  u8 evt_id, u32 src_id, bool enable)
+{
+	int ret, cmd_id;
+
+	cmd_id = MAP_EVT_TO_ENABLE_CMD(evt_id, evt_2_cmd);
+	if (cmd_id < 0)
+		return false;
+
+	ret = scmi_power_request_notify(handle, src_id, cmd_id, enable);
+	if (ret)
+		pr_warn("SCMI Notifications - Proto:%X - FAIL_ENABLE - evt[%X] dom[%d] - ret:%d\n",
+				SCMI_PROTOCOL_POWER, evt_id, src_id, ret);
+
+	return !ret ? true : false;
+}
+
+static void *scmi_power_fill_custom_report(u8 evt_id, u64 timestamp,
+					   const void *payld, size_t payld_sz,
+					   void *report, u32 *src_id)
+{
+	void *rep = NULL;
+
+	switch (evt_id) {
+	case POWER_STATE_CHANGED:
+	{
+		const struct scmi_power_state_notify_payld *p = payld;
+		struct scmi_power_state_changed_report *r = report;
+
+		if (sizeof(*p) != payld_sz)
+			break;
+
+		r->timestamp = timestamp;
+		r->agent_id = le32_to_cpu(p->agent_id);
+		r->domain_id = le32_to_cpu(p->domain_id);
+		r->power_state = le32_to_cpu(p->power_state);
+		*src_id = r->domain_id;
+		rep = r;
+		break;
+	}
+	case POWER_STATE_CHANGE_REQUESTED:
+	{
+		const struct scmi_power_state_notify_payld *p = payld;
+		struct scmi_power_state_change_requested_report *r = report;
+
+		if (sizeof(*p) != payld_sz)
+			break;
+
+		r->timestamp = timestamp;
+		r->agent_id = le32_to_cpu(p->agent_id);
+		r->domain_id = le32_to_cpu(p->domain_id);
+		r->power_state = le32_to_cpu(p->power_state);
+		*src_id = r->domain_id;
+		rep = r;
+		break;
+	}
+	default:
+		break;
+	}
+
+	return rep;
+}
+
+static const struct scmi_event power_events[] = {
+	{
+		.id = POWER_STATE_CHANGED,
+		.max_payld_sz = 12,
+		.max_report_sz =
+			sizeof(struct scmi_power_state_changed_report),
+	},
+	{
+		.id = POWER_STATE_CHANGE_REQUESTED,
+		.max_payld_sz = 12,
+		.max_report_sz =
+			sizeof(struct scmi_power_state_change_requested_report),
+	},
+};
+
+static const struct scmi_protocol_event_ops power_event_ops = {
+	.set_notify_enabled = scmi_power_set_notify_enabled,
+	.fill_custom_report = scmi_power_fill_custom_report,
+};
+
 static int scmi_power_protocol_init(struct scmi_handle *handle)
 {
 	int domain;
@@ -214,6 +331,12 @@ static int scmi_power_protocol_init(struct scmi_handle *handle)
 		scmi_power_domain_attributes_get(handle, domain, dom);
 	}
 
+	scmi_register_protocol_events(handle,
+				      SCMI_PROTOCOL_POWER, PAGE_SIZE,
+				      &power_event_ops, power_events,
+				      ARRAY_SIZE(power_events),
+				      pinfo->num_domains);
+
 	pinfo->version = version;
 	handle->power_ops = &power_ops;
 	handle->power_priv = pinfo;
diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
index 797e1e03ae52..baa117f9eda3 100644
--- a/include/linux/scmi_protocol.h
+++ b/include/linux/scmi_protocol.h
@@ -377,4 +377,19 @@ typedef int (*scmi_prot_init_fn_t)(struct scmi_handle *);
 int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn);
 void scmi_protocol_unregister(int protocol_id);
 
+/* SCMI Notification API - Custom Event Reports */
+struct scmi_power_state_changed_report {
+	ktime_t	timestamp;
+	u32	agent_id;
+	u32	domain_id;
+	u32	power_state;
+};
+
+struct scmi_power_state_change_requested_report {
+	ktime_t	timestamp;
+	u32	agent_id;
+	u32	domain_id;
+	u32	power_state;
+};
+
 #endif /* _LINUX_SCMI_PROTOCOL_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 09/13] firmware: arm_scmi: Add Power notifications support
@ 2020-03-04 16:25   ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: Jonathan.Cameron, cristian.marussi, james.quinlan, lukasz.luba,
	sudeep.holla

Make SCMI Power protocol register with the notification core.

Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
V3 --> V4
- scmi_event field renamed
V2 --> V3
- added handle awareness
V1 --> V2
- simplified .set_notify_enabled() implementation moving the ALL_SRCIDs
  logic out of protocol. ALL_SRCIDs logic is now in charge of the
  notification core, together with proper reference counting of enables
- switched to devres protocol-registration
---
 drivers/firmware/arm_scmi/power.c | 123 ++++++++++++++++++++++++++++++
 include/linux/scmi_protocol.h     |  15 ++++
 2 files changed, 138 insertions(+)

diff --git a/drivers/firmware/arm_scmi/power.c b/drivers/firmware/arm_scmi/power.c
index cf7f0312381b..281da7e7e33a 100644
--- a/drivers/firmware/arm_scmi/power.c
+++ b/drivers/firmware/arm_scmi/power.c
@@ -6,6 +6,7 @@
  */
 
 #include "common.h"
+#include "notify.h"
 
 enum scmi_power_protocol_cmd {
 	POWER_DOMAIN_ATTRIBUTES = 0x3,
@@ -48,6 +49,12 @@ struct scmi_power_state_notify {
 	__le32 notify_enable;
 };
 
+struct scmi_power_state_notify_payld {
+	__le32 agent_id;
+	__le32 domain_id;
+	__le32 power_state;
+};
+
 struct power_dom_info {
 	bool state_set_sync;
 	bool state_set_async;
@@ -63,6 +70,11 @@ struct scmi_power_info {
 	struct power_dom_info *dom_info;
 };
 
+static enum scmi_power_protocol_cmd evt_2_cmd[] = {
+	POWER_STATE_NOTIFY,
+	POWER_STATE_CHANGE_REQUESTED_NOTIFY,
+};
+
 static int scmi_power_attributes_get(const struct scmi_handle *handle,
 				     struct scmi_power_info *pi)
 {
@@ -186,6 +198,111 @@ static struct scmi_power_ops power_ops = {
 	.state_get = scmi_power_state_get,
 };
 
+static int scmi_power_request_notify(const struct scmi_handle *handle,
+				     u32 domain, int message_id, bool enable)
+{
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_power_state_notify *notify;
+
+	ret = scmi_xfer_get_init(handle, message_id, SCMI_PROTOCOL_POWER,
+				 sizeof(*notify), 0, &t);
+	if (ret)
+		return ret;
+
+	notify = t->tx.buf;
+	notify->domain = cpu_to_le32(domain);
+	notify->notify_enable = enable ? cpu_to_le32(BIT(0)) : 0;
+
+	ret = scmi_do_xfer(handle, t);
+
+	scmi_xfer_put(handle, t);
+	return ret;
+}
+
+static bool scmi_power_set_notify_enabled(const struct scmi_handle *handle,
+					  u8 evt_id, u32 src_id, bool enable)
+{
+	int ret, cmd_id;
+
+	cmd_id = MAP_EVT_TO_ENABLE_CMD(evt_id, evt_2_cmd);
+	if (cmd_id < 0)
+		return false;
+
+	ret = scmi_power_request_notify(handle, src_id, cmd_id, enable);
+	if (ret)
+		pr_warn("SCMI Notifications - Proto:%X - FAIL_ENABLE - evt[%X] dom[%d] - ret:%d\n",
+				SCMI_PROTOCOL_POWER, evt_id, src_id, ret);
+
+	return !ret ? true : false;
+}
+
+static void *scmi_power_fill_custom_report(u8 evt_id, u64 timestamp,
+					   const void *payld, size_t payld_sz,
+					   void *report, u32 *src_id)
+{
+	void *rep = NULL;
+
+	switch (evt_id) {
+	case POWER_STATE_CHANGED:
+	{
+		const struct scmi_power_state_notify_payld *p = payld;
+		struct scmi_power_state_changed_report *r = report;
+
+		if (sizeof(*p) != payld_sz)
+			break;
+
+		r->timestamp = timestamp;
+		r->agent_id = le32_to_cpu(p->agent_id);
+		r->domain_id = le32_to_cpu(p->domain_id);
+		r->power_state = le32_to_cpu(p->power_state);
+		*src_id = r->domain_id;
+		rep = r;
+		break;
+	}
+	case POWER_STATE_CHANGE_REQUESTED:
+	{
+		const struct scmi_power_state_notify_payld *p = payld;
+		struct scmi_power_state_change_requested_report *r = report;
+
+		if (sizeof(*p) != payld_sz)
+			break;
+
+		r->timestamp = timestamp;
+		r->agent_id = le32_to_cpu(p->agent_id);
+		r->domain_id = le32_to_cpu(p->domain_id);
+		r->power_state = le32_to_cpu(p->power_state);
+		*src_id = r->domain_id;
+		rep = r;
+		break;
+	}
+	default:
+		break;
+	}
+
+	return rep;
+}
+
+static const struct scmi_event power_events[] = {
+	{
+		.id = POWER_STATE_CHANGED,
+		.max_payld_sz = 12,
+		.max_report_sz =
+			sizeof(struct scmi_power_state_changed_report),
+	},
+	{
+		.id = POWER_STATE_CHANGE_REQUESTED,
+		.max_payld_sz = 12,
+		.max_report_sz =
+			sizeof(struct scmi_power_state_change_requested_report),
+	},
+};
+
+static const struct scmi_protocol_event_ops power_event_ops = {
+	.set_notify_enabled = scmi_power_set_notify_enabled,
+	.fill_custom_report = scmi_power_fill_custom_report,
+};
+
 static int scmi_power_protocol_init(struct scmi_handle *handle)
 {
 	int domain;
@@ -214,6 +331,12 @@ static int scmi_power_protocol_init(struct scmi_handle *handle)
 		scmi_power_domain_attributes_get(handle, domain, dom);
 	}
 
+	scmi_register_protocol_events(handle,
+				      SCMI_PROTOCOL_POWER, PAGE_SIZE,
+				      &power_event_ops, power_events,
+				      ARRAY_SIZE(power_events),
+				      pinfo->num_domains);
+
 	pinfo->version = version;
 	handle->power_ops = &power_ops;
 	handle->power_priv = pinfo;
diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
index 797e1e03ae52..baa117f9eda3 100644
--- a/include/linux/scmi_protocol.h
+++ b/include/linux/scmi_protocol.h
@@ -377,4 +377,19 @@ typedef int (*scmi_prot_init_fn_t)(struct scmi_handle *);
 int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn);
 void scmi_protocol_unregister(int protocol_id);
 
+/* SCMI Notification API - Custom Event Reports */
+struct scmi_power_state_changed_report {
+	ktime_t	timestamp;
+	u32	agent_id;
+	u32	domain_id;
+	u32	power_state;
+};
+
+struct scmi_power_state_change_requested_report {
+	ktime_t	timestamp;
+	u32	agent_id;
+	u32	domain_id;
+	u32	power_state;
+};
+
 #endif /* _LINUX_SCMI_PROTOCOL_H */
-- 
2.17.1


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 10/13] firmware: arm_scmi: Add Perf notifications support
  2020-03-04 16:25 ` Cristian Marussi
@ 2020-03-04 16:25   ` Cristian Marussi
  -1 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: sudeep.holla, lukasz.luba, james.quinlan, Jonathan.Cameron,
	cristian.marussi

Make SCMI Perf protocol register with the notification core.

Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
V3 --> V4
- scmi_event field renamed
V2 --> V3
- added handle awareness
V1 --> V2
- simplified .set_notify_enabled() implementation moving the ALL_SRCIDs
  logic out of protocol. ALL_SRCIDs logic is now in charge of the
  notification core, together with proper reference counting of enables
- switched to devres protocol-registration
---
 drivers/firmware/arm_scmi/perf.c | 130 +++++++++++++++++++++++++++++++
 include/linux/scmi_protocol.h    |  15 ++++
 2 files changed, 145 insertions(+)

diff --git a/drivers/firmware/arm_scmi/perf.c b/drivers/firmware/arm_scmi/perf.c
index 88509ec637d0..1187cff7ed16 100644
--- a/drivers/firmware/arm_scmi/perf.c
+++ b/drivers/firmware/arm_scmi/perf.c
@@ -14,6 +14,7 @@
 #include <linux/sort.h>
 
 #include "common.h"
+#include "notify.h"
 
 enum scmi_performance_protocol_cmd {
 	PERF_DOMAIN_ATTRIBUTES = 0x3,
@@ -86,6 +87,19 @@ struct scmi_perf_notify_level_or_limits {
 	__le32 notify_enable;
 };
 
+struct scmi_perf_limits_notify_payld {
+	__le32 agent_id;
+	__le32 domain_id;
+	__le32 range_max;
+	__le32 range_min;
+};
+
+struct scmi_perf_level_notify_payld {
+	__le32 agent_id;
+	__le32 domain_id;
+	__le32 performance_level;
+};
+
 struct scmi_msg_resp_perf_describe_levels {
 	__le16 num_returned;
 	__le16 num_remaining;
@@ -158,6 +172,11 @@ struct scmi_perf_info {
 	struct perf_dom_info *dom_info;
 };
 
+static enum scmi_performance_protocol_cmd evt_2_cmd[] = {
+	PERF_NOTIFY_LIMITS,
+	PERF_NOTIFY_LEVEL,
+};
+
 static int scmi_perf_attributes_get(const struct scmi_handle *handle,
 				    struct scmi_perf_info *pi)
 {
@@ -488,6 +507,29 @@ static int scmi_perf_level_get(const struct scmi_handle *handle, u32 domain,
 	return scmi_perf_mb_level_get(handle, domain, level, poll);
 }
 
+static int scmi_perf_level_limits_notify(const struct scmi_handle *handle,
+					 u32 domain, int message_id,
+					 bool enable)
+{
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_perf_notify_level_or_limits *notify;
+
+	ret = scmi_xfer_get_init(handle, message_id, SCMI_PROTOCOL_PERF,
+				 sizeof(*notify), 0, &t);
+	if (ret)
+		return ret;
+
+	notify = t->tx.buf;
+	notify->domain = cpu_to_le32(domain);
+	notify->notify_enable = enable ? cpu_to_le32(BIT(0)) : 0;
+
+	ret = scmi_do_xfer(handle, t);
+
+	scmi_xfer_put(handle, t);
+	return ret;
+}
+
 static bool scmi_perf_fc_size_is_valid(u32 msg, u32 size)
 {
 	if ((msg == PERF_LEVEL_GET || msg == PERF_LEVEL_SET) && size == 4)
@@ -710,6 +752,88 @@ static struct scmi_perf_ops perf_ops = {
 	.est_power_get = scmi_dvfs_est_power_get,
 };
 
+static bool scmi_perf_set_notify_enabled(const struct scmi_handle *handle,
+					 u8 evt_id, u32 src_id, bool enable)
+{
+	int ret, cmd_id;
+
+	cmd_id = MAP_EVT_TO_ENABLE_CMD(evt_id, evt_2_cmd);
+	if (cmd_id < 0)
+		return false;
+
+	ret = scmi_perf_level_limits_notify(handle, src_id, cmd_id, enable);
+	if (ret)
+		pr_warn("SCMI Notifications - Proto:%X - FAIL_ENABLED - evt[%X] dom[%d] - ret:%d\n",
+				SCMI_PROTOCOL_PERF, evt_id, src_id, ret);
+
+	return !ret ? true : false;
+}
+
+static void *scmi_perf_fill_custom_report(u8 evt_id, u64 timestamp,
+					  const void *payld, size_t payld_sz,
+					  void *report, u32 *src_id)
+{
+	void *rep = NULL;
+
+	switch (evt_id) {
+	case PERFORMANCE_LIMITS_CHANGED:
+	{
+		const struct scmi_perf_limits_notify_payld *p = payld;
+		struct scmi_perf_limits_report *r = report;
+
+		if (sizeof(*p) != payld_sz)
+			break;
+
+		r->timestamp = timestamp;
+		r->agent_id = le32_to_cpu(p->agent_id);
+		r->domain_id = le32_to_cpu(p->domain_id);
+		r->range_max = le32_to_cpu(p->range_max);
+		r->range_min = le32_to_cpu(p->range_min);
+		*src_id = r->domain_id;
+		rep = r;
+		break;
+	}
+	case PERFORMANCE_LEVEL_CHANGED:
+	{
+		const struct scmi_perf_level_notify_payld *p = payld;
+		struct scmi_perf_level_report *r = report;
+
+		if (sizeof(*p) != payld_sz)
+			break;
+
+		r->timestamp = timestamp;
+		r->agent_id = le32_to_cpu(p->agent_id);
+		r->domain_id = le32_to_cpu(p->domain_id);
+		r->performance_level = le32_to_cpu(p->performance_level);
+		*src_id = r->domain_id;
+		rep = r;
+		break;
+	}
+	default:
+		break;
+	}
+
+	return rep;
+}
+
+static const struct scmi_event perf_events[] = {
+	{
+		.id = PERFORMANCE_LIMITS_CHANGED,
+		.max_payld_sz = 16,
+		.max_report_sz = sizeof(struct scmi_perf_limits_report),
+	},
+	{
+		.id = PERFORMANCE_LEVEL_CHANGED,
+		.max_payld_sz = 12,
+		.max_report_sz = sizeof(struct scmi_perf_level_report),
+	},
+};
+
+static const struct scmi_protocol_event_ops perf_event_ops = {
+	.set_notify_enabled = scmi_perf_set_notify_enabled,
+	.fill_custom_report = scmi_perf_fill_custom_report,
+};
+
 static int scmi_perf_protocol_init(struct scmi_handle *handle)
 {
 	int domain;
@@ -742,6 +866,12 @@ static int scmi_perf_protocol_init(struct scmi_handle *handle)
 			scmi_perf_domain_init_fc(handle, domain, &dom->fc_info);
 	}
 
+	scmi_register_protocol_events(handle,
+				      SCMI_PROTOCOL_PERF, PAGE_SIZE,
+				      &perf_event_ops, perf_events,
+				      ARRAY_SIZE(perf_events),
+				      pinfo->num_domains);
+
 	pinfo->version = version;
 	handle->perf_ops = &perf_ops;
 	handle->perf_priv = pinfo;
diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
index baa117f9eda3..5e7c28c8bcac 100644
--- a/include/linux/scmi_protocol.h
+++ b/include/linux/scmi_protocol.h
@@ -392,4 +392,19 @@ struct scmi_power_state_change_requested_report {
 	u32	power_state;
 };
 
+struct scmi_perf_limits_report {
+	ktime_t	timestamp;
+	u32	agent_id;
+	u32	domain_id;
+	u32	range_max;
+	u32	range_min;
+};
+
+struct scmi_perf_level_report {
+	ktime_t	timestamp;
+	u32	agent_id;
+	u32	domain_id;
+	u32	performance_level;
+};
+
 #endif /* _LINUX_SCMI_PROTOCOL_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 10/13] firmware: arm_scmi: Add Perf notifications support
@ 2020-03-04 16:25   ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: Jonathan.Cameron, cristian.marussi, james.quinlan, lukasz.luba,
	sudeep.holla

Make SCMI Perf protocol register with the notification core.

Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
V3 --> V4
- scmi_event field renamed
V2 --> V3
- added handle awareness
V1 --> V2
- simplified .set_notify_enabled() implementation moving the ALL_SRCIDs
  logic out of protocol. ALL_SRCIDs logic is now in charge of the
  notification core, together with proper reference counting of enables
- switched to devres protocol-registration
---
 drivers/firmware/arm_scmi/perf.c | 130 +++++++++++++++++++++++++++++++
 include/linux/scmi_protocol.h    |  15 ++++
 2 files changed, 145 insertions(+)

diff --git a/drivers/firmware/arm_scmi/perf.c b/drivers/firmware/arm_scmi/perf.c
index 88509ec637d0..1187cff7ed16 100644
--- a/drivers/firmware/arm_scmi/perf.c
+++ b/drivers/firmware/arm_scmi/perf.c
@@ -14,6 +14,7 @@
 #include <linux/sort.h>
 
 #include "common.h"
+#include "notify.h"
 
 enum scmi_performance_protocol_cmd {
 	PERF_DOMAIN_ATTRIBUTES = 0x3,
@@ -86,6 +87,19 @@ struct scmi_perf_notify_level_or_limits {
 	__le32 notify_enable;
 };
 
+struct scmi_perf_limits_notify_payld {
+	__le32 agent_id;
+	__le32 domain_id;
+	__le32 range_max;
+	__le32 range_min;
+};
+
+struct scmi_perf_level_notify_payld {
+	__le32 agent_id;
+	__le32 domain_id;
+	__le32 performance_level;
+};
+
 struct scmi_msg_resp_perf_describe_levels {
 	__le16 num_returned;
 	__le16 num_remaining;
@@ -158,6 +172,11 @@ struct scmi_perf_info {
 	struct perf_dom_info *dom_info;
 };
 
+static enum scmi_performance_protocol_cmd evt_2_cmd[] = {
+	PERF_NOTIFY_LIMITS,
+	PERF_NOTIFY_LEVEL,
+};
+
 static int scmi_perf_attributes_get(const struct scmi_handle *handle,
 				    struct scmi_perf_info *pi)
 {
@@ -488,6 +507,29 @@ static int scmi_perf_level_get(const struct scmi_handle *handle, u32 domain,
 	return scmi_perf_mb_level_get(handle, domain, level, poll);
 }
 
+static int scmi_perf_level_limits_notify(const struct scmi_handle *handle,
+					 u32 domain, int message_id,
+					 bool enable)
+{
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_perf_notify_level_or_limits *notify;
+
+	ret = scmi_xfer_get_init(handle, message_id, SCMI_PROTOCOL_PERF,
+				 sizeof(*notify), 0, &t);
+	if (ret)
+		return ret;
+
+	notify = t->tx.buf;
+	notify->domain = cpu_to_le32(domain);
+	notify->notify_enable = enable ? cpu_to_le32(BIT(0)) : 0;
+
+	ret = scmi_do_xfer(handle, t);
+
+	scmi_xfer_put(handle, t);
+	return ret;
+}
+
 static bool scmi_perf_fc_size_is_valid(u32 msg, u32 size)
 {
 	if ((msg == PERF_LEVEL_GET || msg == PERF_LEVEL_SET) && size == 4)
@@ -710,6 +752,88 @@ static struct scmi_perf_ops perf_ops = {
 	.est_power_get = scmi_dvfs_est_power_get,
 };
 
+static bool scmi_perf_set_notify_enabled(const struct scmi_handle *handle,
+					 u8 evt_id, u32 src_id, bool enable)
+{
+	int ret, cmd_id;
+
+	cmd_id = MAP_EVT_TO_ENABLE_CMD(evt_id, evt_2_cmd);
+	if (cmd_id < 0)
+		return false;
+
+	ret = scmi_perf_level_limits_notify(handle, src_id, cmd_id, enable);
+	if (ret)
+		pr_warn("SCMI Notifications - Proto:%X - FAIL_ENABLED - evt[%X] dom[%d] - ret:%d\n",
+				SCMI_PROTOCOL_PERF, evt_id, src_id, ret);
+
+	return !ret ? true : false;
+}
+
+static void *scmi_perf_fill_custom_report(u8 evt_id, u64 timestamp,
+					  const void *payld, size_t payld_sz,
+					  void *report, u32 *src_id)
+{
+	void *rep = NULL;
+
+	switch (evt_id) {
+	case PERFORMANCE_LIMITS_CHANGED:
+	{
+		const struct scmi_perf_limits_notify_payld *p = payld;
+		struct scmi_perf_limits_report *r = report;
+
+		if (sizeof(*p) != payld_sz)
+			break;
+
+		r->timestamp = timestamp;
+		r->agent_id = le32_to_cpu(p->agent_id);
+		r->domain_id = le32_to_cpu(p->domain_id);
+		r->range_max = le32_to_cpu(p->range_max);
+		r->range_min = le32_to_cpu(p->range_min);
+		*src_id = r->domain_id;
+		rep = r;
+		break;
+	}
+	case PERFORMANCE_LEVEL_CHANGED:
+	{
+		const struct scmi_perf_level_notify_payld *p = payld;
+		struct scmi_perf_level_report *r = report;
+
+		if (sizeof(*p) != payld_sz)
+			break;
+
+		r->timestamp = timestamp;
+		r->agent_id = le32_to_cpu(p->agent_id);
+		r->domain_id = le32_to_cpu(p->domain_id);
+		r->performance_level = le32_to_cpu(p->performance_level);
+		*src_id = r->domain_id;
+		rep = r;
+		break;
+	}
+	default:
+		break;
+	}
+
+	return rep;
+}
+
+static const struct scmi_event perf_events[] = {
+	{
+		.id = PERFORMANCE_LIMITS_CHANGED,
+		.max_payld_sz = 16,
+		.max_report_sz = sizeof(struct scmi_perf_limits_report),
+	},
+	{
+		.id = PERFORMANCE_LEVEL_CHANGED,
+		.max_payld_sz = 12,
+		.max_report_sz = sizeof(struct scmi_perf_level_report),
+	},
+};
+
+static const struct scmi_protocol_event_ops perf_event_ops = {
+	.set_notify_enabled = scmi_perf_set_notify_enabled,
+	.fill_custom_report = scmi_perf_fill_custom_report,
+};
+
 static int scmi_perf_protocol_init(struct scmi_handle *handle)
 {
 	int domain;
@@ -742,6 +866,12 @@ static int scmi_perf_protocol_init(struct scmi_handle *handle)
 			scmi_perf_domain_init_fc(handle, domain, &dom->fc_info);
 	}
 
+	scmi_register_protocol_events(handle,
+				      SCMI_PROTOCOL_PERF, PAGE_SIZE,
+				      &perf_event_ops, perf_events,
+				      ARRAY_SIZE(perf_events),
+				      pinfo->num_domains);
+
 	pinfo->version = version;
 	handle->perf_ops = &perf_ops;
 	handle->perf_priv = pinfo;
diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
index baa117f9eda3..5e7c28c8bcac 100644
--- a/include/linux/scmi_protocol.h
+++ b/include/linux/scmi_protocol.h
@@ -392,4 +392,19 @@ struct scmi_power_state_change_requested_report {
 	u32	power_state;
 };
 
+struct scmi_perf_limits_report {
+	ktime_t	timestamp;
+	u32	agent_id;
+	u32	domain_id;
+	u32	range_max;
+	u32	range_min;
+};
+
+struct scmi_perf_level_report {
+	ktime_t	timestamp;
+	u32	agent_id;
+	u32	domain_id;
+	u32	performance_level;
+};
+
 #endif /* _LINUX_SCMI_PROTOCOL_H */
-- 
2.17.1


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 11/13] firmware: arm_scmi: Add Sensor notifications support
  2020-03-04 16:25 ` Cristian Marussi
@ 2020-03-04 16:25   ` Cristian Marussi
  -1 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: sudeep.holla, lukasz.luba, james.quinlan, Jonathan.Cameron,
	cristian.marussi

Make SCMI Sensor protocol register with the notification core.

Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
V3 --> V4
- scmi_event field renamed
V2 --> V3
- added handle awareness
V1 --> V2
- simplified .set_notify_enabled() implementation moving the ALL_SRCIDs
  logic out of protocol. ALL_SRCIDs logic is now in charge of the
  notification core, together with proper reference counting of enables
- switched to devres protocol-registration
---
 drivers/firmware/arm_scmi/sensors.c | 69 +++++++++++++++++++++++++++++
 include/linux/scmi_protocol.h       |  7 +++
 2 files changed, 76 insertions(+)

diff --git a/drivers/firmware/arm_scmi/sensors.c b/drivers/firmware/arm_scmi/sensors.c
index db1b1ab303da..aa7e8e017125 100644
--- a/drivers/firmware/arm_scmi/sensors.c
+++ b/drivers/firmware/arm_scmi/sensors.c
@@ -6,6 +6,7 @@
  */
 
 #include "common.h"
+#include "notify.h"
 
 enum scmi_sensor_protocol_cmd {
 	SENSOR_DESCRIPTION_GET = 0x3,
@@ -71,6 +72,12 @@ struct scmi_msg_sensor_reading_get {
 #define SENSOR_READ_ASYNC	BIT(0)
 };
 
+struct scmi_sensor_trip_notify_payld {
+	__le32 agent_id;
+	__le32 sensor_id;
+	__le32 trip_point_desc;
+};
+
 struct sensors_info {
 	u32 version;
 	int num_sensors;
@@ -276,6 +283,62 @@ static struct scmi_sensor_ops sensor_ops = {
 	.reading_get = scmi_sensor_reading_get,
 };
 
+static bool scmi_sensor_set_notify_enabled(const struct scmi_handle *handle,
+					   u8 evt_id, u32 src_id, bool enable)
+{
+	int ret;
+
+	ret = scmi_sensor_trip_point_notify(handle, src_id, enable);
+	if (ret)
+		pr_warn("SCMI Notifications - Proto:%X - FAIL_ENABLED - evt[%X] dom[%d] - ret:%d\n",
+			SCMI_PROTOCOL_SENSOR, evt_id, src_id, ret);
+
+	return !ret ? true : false;
+}
+
+static void *scmi_sensor_fill_custom_report(u8 evt_id, u64 timestamp,
+					   const void *payld, size_t payld_sz,
+					   void *report, u32 *src_id)
+{
+	void *rep = NULL;
+
+	switch (evt_id) {
+	case SENSOR_TRIP_POINT_EVENT:
+	{
+		const struct scmi_sensor_trip_notify_payld *p = payld;
+		struct scmi_sensor_trip_point_report *r = report;
+
+		if (sizeof(*p) != payld_sz)
+			break;
+
+		r->timestamp = timestamp;
+		r->agent_id = le32_to_cpu(p->agent_id);
+		r->sensor_id = le32_to_cpu(p->sensor_id);
+		r->trip_point_desc = le32_to_cpu(p->trip_point_desc);
+		*src_id = r->sensor_id;
+		rep = r;
+		break;
+	}
+	default:
+		break;
+	}
+
+	return rep;
+}
+
+static const struct scmi_event sensor_events[] = {
+	{
+		.id = SENSOR_TRIP_POINT_EVENT,
+		.max_payld_sz = 12,
+		.max_report_sz = sizeof(struct scmi_sensor_trip_point_report),
+	},
+};
+
+static const struct scmi_protocol_event_ops sensor_event_ops = {
+	.set_notify_enabled = scmi_sensor_set_notify_enabled,
+	.fill_custom_report = scmi_sensor_fill_custom_report,
+};
+
 static int scmi_sensors_protocol_init(struct scmi_handle *handle)
 {
 	u32 version;
@@ -299,6 +362,12 @@ static int scmi_sensors_protocol_init(struct scmi_handle *handle)
 
 	scmi_sensor_description_get(handle, sinfo);
 
+	scmi_register_protocol_events(handle,
+				      SCMI_PROTOCOL_SENSOR, PAGE_SIZE,
+				      &sensor_event_ops, sensor_events,
+				      ARRAY_SIZE(sensor_events),
+				      sinfo->num_sensors);
+
 	sinfo->version = version;
 	handle->sensor_ops = &sensor_ops;
 	handle->sensor_priv = sinfo;
diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
index 5e7c28c8bcac..23408dacc69d 100644
--- a/include/linux/scmi_protocol.h
+++ b/include/linux/scmi_protocol.h
@@ -407,4 +407,11 @@ struct scmi_perf_level_report {
 	u32	performance_level;
 };
 
+struct scmi_sensor_trip_point_report {
+	ktime_t	timestamp;
+	u32	agent_id;
+	u32	sensor_id;
+	u32	trip_point_desc;
+};
+
 #endif /* _LINUX_SCMI_PROTOCOL_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 11/13] firmware: arm_scmi: Add Sensor notifications support
@ 2020-03-04 16:25   ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: Jonathan.Cameron, cristian.marussi, james.quinlan, lukasz.luba,
	sudeep.holla

Make SCMI Sensor protocol register with the notification core.

Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
V3 --> V4
- scmi_event field renamed
V2 --> V3
- added handle awareness
V1 --> V2
- simplified .set_notify_enabled() implementation moving the ALL_SRCIDs
  logic out of protocol. ALL_SRCIDs logic is now in charge of the
  notification core, together with proper reference counting of enables
- switched to devres protocol-registration
---
 drivers/firmware/arm_scmi/sensors.c | 69 +++++++++++++++++++++++++++++
 include/linux/scmi_protocol.h       |  7 +++
 2 files changed, 76 insertions(+)

diff --git a/drivers/firmware/arm_scmi/sensors.c b/drivers/firmware/arm_scmi/sensors.c
index db1b1ab303da..aa7e8e017125 100644
--- a/drivers/firmware/arm_scmi/sensors.c
+++ b/drivers/firmware/arm_scmi/sensors.c
@@ -6,6 +6,7 @@
  */
 
 #include "common.h"
+#include "notify.h"
 
 enum scmi_sensor_protocol_cmd {
 	SENSOR_DESCRIPTION_GET = 0x3,
@@ -71,6 +72,12 @@ struct scmi_msg_sensor_reading_get {
 #define SENSOR_READ_ASYNC	BIT(0)
 };
 
+struct scmi_sensor_trip_notify_payld {
+	__le32 agent_id;
+	__le32 sensor_id;
+	__le32 trip_point_desc;
+};
+
 struct sensors_info {
 	u32 version;
 	int num_sensors;
@@ -276,6 +283,62 @@ static struct scmi_sensor_ops sensor_ops = {
 	.reading_get = scmi_sensor_reading_get,
 };
 
+static bool scmi_sensor_set_notify_enabled(const struct scmi_handle *handle,
+					   u8 evt_id, u32 src_id, bool enable)
+{
+	int ret;
+
+	ret = scmi_sensor_trip_point_notify(handle, src_id, enable);
+	if (ret)
+		pr_warn("SCMI Notifications - Proto:%X - FAIL_ENABLED - evt[%X] dom[%d] - ret:%d\n",
+			SCMI_PROTOCOL_SENSOR, evt_id, src_id, ret);
+
+	return !ret ? true : false;
+}
+
+static void *scmi_sensor_fill_custom_report(u8 evt_id, u64 timestamp,
+					   const void *payld, size_t payld_sz,
+					   void *report, u32 *src_id)
+{
+	void *rep = NULL;
+
+	switch (evt_id) {
+	case SENSOR_TRIP_POINT_EVENT:
+	{
+		const struct scmi_sensor_trip_notify_payld *p = payld;
+		struct scmi_sensor_trip_point_report *r = report;
+
+		if (sizeof(*p) != payld_sz)
+			break;
+
+		r->timestamp = timestamp;
+		r->agent_id = le32_to_cpu(p->agent_id);
+		r->sensor_id = le32_to_cpu(p->sensor_id);
+		r->trip_point_desc = le32_to_cpu(p->trip_point_desc);
+		*src_id = r->sensor_id;
+		rep = r;
+		break;
+	}
+	default:
+		break;
+	}
+
+	return rep;
+}
+
+static const struct scmi_event sensor_events[] = {
+	{
+		.id = SENSOR_TRIP_POINT_EVENT,
+		.max_payld_sz = 12,
+		.max_report_sz = sizeof(struct scmi_sensor_trip_point_report),
+	},
+};
+
+static const struct scmi_protocol_event_ops sensor_event_ops = {
+	.set_notify_enabled = scmi_sensor_set_notify_enabled,
+	.fill_custom_report = scmi_sensor_fill_custom_report,
+};
+
 static int scmi_sensors_protocol_init(struct scmi_handle *handle)
 {
 	u32 version;
@@ -299,6 +362,12 @@ static int scmi_sensors_protocol_init(struct scmi_handle *handle)
 
 	scmi_sensor_description_get(handle, sinfo);
 
+	scmi_register_protocol_events(handle,
+				      SCMI_PROTOCOL_SENSOR, PAGE_SIZE,
+				      &sensor_event_ops, sensor_events,
+				      ARRAY_SIZE(sensor_events),
+				      sinfo->num_sensors);
+
 	sinfo->version = version;
 	handle->sensor_ops = &sensor_ops;
 	handle->sensor_priv = sinfo;
diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
index 5e7c28c8bcac..23408dacc69d 100644
--- a/include/linux/scmi_protocol.h
+++ b/include/linux/scmi_protocol.h
@@ -407,4 +407,11 @@ struct scmi_perf_level_report {
 	u32	performance_level;
 };
 
+struct scmi_sensor_trip_point_report {
+	ktime_t	timestamp;
+	u32	agent_id;
+	u32	sensor_id;
+	u32	trip_point_desc;
+};
+
 #endif /* _LINUX_SCMI_PROTOCOL_H */
-- 
2.17.1


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 12/13] firmware: arm_scmi: Add Reset notifications support
  2020-03-04 16:25 ` Cristian Marussi
@ 2020-03-04 16:25   ` Cristian Marussi
  -1 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: sudeep.holla, lukasz.luba, james.quinlan, Jonathan.Cameron,
	cristian.marussi

Make SCMI Reset protocol register with the notification core.

Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
V3 --> V4
- scmi_event field renamed
V2 --> V3
- added handle awareness
V1 --> V2
- simplified .set_notify_enabled() implementation moving the ALL_SRCIDs
  logic out of protocol. ALL_SRCIDs logic is now in charge of the
  notification core, together with proper reference counting of enables
- switched to devres protocol-registration
---
 drivers/firmware/arm_scmi/reset.c | 96 +++++++++++++++++++++++++++++++
 include/linux/scmi_protocol.h     |  6 ++
 2 files changed, 102 insertions(+)

diff --git a/drivers/firmware/arm_scmi/reset.c b/drivers/firmware/arm_scmi/reset.c
index de73054554f3..4d6987920617 100644
--- a/drivers/firmware/arm_scmi/reset.c
+++ b/drivers/firmware/arm_scmi/reset.c
@@ -6,6 +6,7 @@
  */
 
 #include "common.h"
+#include "notify.h"
 
 enum scmi_reset_protocol_cmd {
 	RESET_DOMAIN_ATTRIBUTES = 0x3,
@@ -40,6 +41,17 @@ struct scmi_msg_reset_domain_reset {
 #define ARCH_COLD_RESET		(ARCH_RESET_TYPE | COLD_RESET_STATE)
 };
 
+struct scmi_msg_reset_notify {
+	__le32 id;
+	__le32 event_control;
+#define RESET_TP_NOTIFY_ALL	BIT(0)
+};
+
+struct scmi_reset_issued_notify_payld {
+	__le32 domain_id;
+	__le32 reset_state;
+};
+
 struct reset_dom_info {
 	bool async_reset;
 	bool reset_notify;
@@ -190,6 +202,84 @@ static struct scmi_reset_ops reset_ops = {
 	.deassert = scmi_reset_domain_deassert,
 };
 
+static int scmi_reset_notify(const struct scmi_handle *handle, u32 domain_id,
+			     bool enable)
+{
+	int ret;
+	u32 evt_cntl = enable ? RESET_TP_NOTIFY_ALL : 0;
+	struct scmi_xfer *t;
+	struct scmi_msg_reset_notify *cfg;
+
+	ret = scmi_xfer_get_init(handle, RESET_NOTIFY,
+				 SCMI_PROTOCOL_RESET, sizeof(*cfg), 0, &t);
+	if (ret)
+		return ret;
+
+	cfg = t->tx.buf;
+	cfg->id = cpu_to_le32(domain_id);
+	cfg->event_control = cpu_to_le32(evt_cntl);
+
+	ret = scmi_do_xfer(handle, t);
+
+	scmi_xfer_put(handle, t);
+	return ret;
+}
+
+static bool scmi_reset_set_notify_enabled(const struct scmi_handle *handle,
+					  u8 evt_id, u32 src_id, bool enable)
+{
+	int ret;
+
+	ret = scmi_reset_notify(handle, src_id, enable);
+	if (ret)
+		pr_warn("SCMI Notifications - Proto:%X - FAIL_ENABLED - evt[%X] dom[%d] - ret:%d\n",
+			SCMI_PROTOCOL_RESET, evt_id, src_id, ret);
+
+	return !ret ? true : false;
+}
+
+static void *scmi_reset_fill_custom_report(u8 evt_id, u64 timestamp,
+					   const void *payld, size_t payld_sz,
+					   void *report, u32 *src_id)
+{
+	void *rep = NULL;
+
+	switch (evt_id) {
+	case RESET_ISSUED:
+	{
+		const struct scmi_reset_issued_notify_payld *p = payld;
+		struct scmi_reset_issued_report *r = report;
+
+		if (sizeof(*p) != payld_sz)
+			break;
+
+		r->timestamp = timestamp;
+		r->domain_id = le32_to_cpu(p->domain_id);
+		r->reset_state = le32_to_cpu(p->reset_state);
+		*src_id = r->domain_id;
+		rep = r;
+		break;
+	}
+	default:
+		break;
+	}
+
+	return rep;
+}
+
+static const struct scmi_event reset_events[] = {
+	{
+		.id = RESET_NOTIFY,
+		.max_payld_sz = 8,
+		.max_report_sz = sizeof(struct scmi_reset_issued_report),
+	},
+};
+
+static const struct scmi_protocol_event_ops reset_event_ops = {
+	.set_notify_enabled = scmi_reset_set_notify_enabled,
+	.fill_custom_report = scmi_reset_fill_custom_report,
+};
+
 static int scmi_reset_protocol_init(struct scmi_handle *handle)
 {
 	int domain;
@@ -218,6 +308,12 @@ static int scmi_reset_protocol_init(struct scmi_handle *handle)
 		scmi_reset_domain_attributes_get(handle, domain, dom);
 	}
 
+	scmi_register_protocol_events(handle,
+				      SCMI_PROTOCOL_RESET, PAGE_SIZE,
+				      &reset_event_ops, reset_events,
+				      ARRAY_SIZE(reset_events),
+				      pinfo->num_domains);
+
 	pinfo->version = version;
 	handle->reset_ops = &reset_ops;
 	handle->reset_priv = pinfo;
diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
index 23408dacc69d..91c5fdf567d5 100644
--- a/include/linux/scmi_protocol.h
+++ b/include/linux/scmi_protocol.h
@@ -414,4 +414,10 @@ struct scmi_sensor_trip_point_report {
 	u32	trip_point_desc;
 };
 
+struct scmi_reset_issued_report {
+	ktime_t	timestamp;
+	u32	domain_id;
+	u32	reset_state;
+};
+
 #endif /* _LINUX_SCMI_PROTOCOL_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 12/13] firmware: arm_scmi: Add Reset notifications support
@ 2020-03-04 16:25   ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: Jonathan.Cameron, cristian.marussi, james.quinlan, lukasz.luba,
	sudeep.holla

Make SCMI Reset protocol register with the notification core.

Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
V3 --> V4
- scmi_event field renamed
V2 --> V3
- added handle awareness
V1 --> V2
- simplified .set_notify_enabled() implementation moving the ALL_SRCIDs
  logic out of protocol. ALL_SRCIDs logic is now in charge of the
  notification core, together with proper reference counting of enables
- switched to devres protocol-registration
---
 drivers/firmware/arm_scmi/reset.c | 96 +++++++++++++++++++++++++++++++
 include/linux/scmi_protocol.h     |  6 ++
 2 files changed, 102 insertions(+)

diff --git a/drivers/firmware/arm_scmi/reset.c b/drivers/firmware/arm_scmi/reset.c
index de73054554f3..4d6987920617 100644
--- a/drivers/firmware/arm_scmi/reset.c
+++ b/drivers/firmware/arm_scmi/reset.c
@@ -6,6 +6,7 @@
  */
 
 #include "common.h"
+#include "notify.h"
 
 enum scmi_reset_protocol_cmd {
 	RESET_DOMAIN_ATTRIBUTES = 0x3,
@@ -40,6 +41,17 @@ struct scmi_msg_reset_domain_reset {
 #define ARCH_COLD_RESET		(ARCH_RESET_TYPE | COLD_RESET_STATE)
 };
 
+struct scmi_msg_reset_notify {
+	__le32 id;
+	__le32 event_control;
+#define RESET_TP_NOTIFY_ALL	BIT(0)
+};
+
+struct scmi_reset_issued_notify_payld {
+	__le32 domain_id;
+	__le32 reset_state;
+};
+
 struct reset_dom_info {
 	bool async_reset;
 	bool reset_notify;
@@ -190,6 +202,84 @@ static struct scmi_reset_ops reset_ops = {
 	.deassert = scmi_reset_domain_deassert,
 };
 
+static int scmi_reset_notify(const struct scmi_handle *handle, u32 domain_id,
+			     bool enable)
+{
+	int ret;
+	u32 evt_cntl = enable ? RESET_TP_NOTIFY_ALL : 0;
+	struct scmi_xfer *t;
+	struct scmi_msg_reset_notify *cfg;
+
+	ret = scmi_xfer_get_init(handle, RESET_NOTIFY,
+				 SCMI_PROTOCOL_RESET, sizeof(*cfg), 0, &t);
+	if (ret)
+		return ret;
+
+	cfg = t->tx.buf;
+	cfg->id = cpu_to_le32(domain_id);
+	cfg->event_control = cpu_to_le32(evt_cntl);
+
+	ret = scmi_do_xfer(handle, t);
+
+	scmi_xfer_put(handle, t);
+	return ret;
+}
+
+static bool scmi_reset_set_notify_enabled(const struct scmi_handle *handle,
+					  u8 evt_id, u32 src_id, bool enable)
+{
+	int ret;
+
+	ret = scmi_reset_notify(handle, src_id, enable);
+	if (ret)
+		pr_warn("SCMI Notifications - Proto:%X - FAIL_ENABLED - evt[%X] dom[%d] - ret:%d\n",
+			SCMI_PROTOCOL_RESET, evt_id, src_id, ret);
+
+	return !ret ? true : false;
+}
+
+static void *scmi_reset_fill_custom_report(u8 evt_id, u64 timestamp,
+					   const void *payld, size_t payld_sz,
+					   void *report, u32 *src_id)
+{
+	void *rep = NULL;
+
+	switch (evt_id) {
+	case RESET_ISSUED:
+	{
+		const struct scmi_reset_issued_notify_payld *p = payld;
+		struct scmi_reset_issued_report *r = report;
+
+		if (sizeof(*p) != payld_sz)
+			break;
+
+		r->timestamp = timestamp;
+		r->domain_id = le32_to_cpu(p->domain_id);
+		r->reset_state = le32_to_cpu(p->reset_state);
+		*src_id = r->domain_id;
+		rep = r;
+		break;
+	}
+	default:
+		break;
+	}
+
+	return rep;
+}
+
+static const struct scmi_event reset_events[] = {
+	{
+		.id = RESET_NOTIFY,
+		.max_payld_sz = 8,
+		.max_report_sz = sizeof(struct scmi_reset_issued_report),
+	},
+};
+
+static const struct scmi_protocol_event_ops reset_event_ops = {
+	.set_notify_enabled = scmi_reset_set_notify_enabled,
+	.fill_custom_report = scmi_reset_fill_custom_report,
+};
+
 static int scmi_reset_protocol_init(struct scmi_handle *handle)
 {
 	int domain;
@@ -218,6 +308,12 @@ static int scmi_reset_protocol_init(struct scmi_handle *handle)
 		scmi_reset_domain_attributes_get(handle, domain, dom);
 	}
 
+	scmi_register_protocol_events(handle,
+				      SCMI_PROTOCOL_RESET, PAGE_SIZE,
+				      &reset_event_ops, reset_events,
+				      ARRAY_SIZE(reset_events),
+				      pinfo->num_domains);
+
 	pinfo->version = version;
 	handle->reset_ops = &reset_ops;
 	handle->reset_priv = pinfo;
diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
index 23408dacc69d..91c5fdf567d5 100644
--- a/include/linux/scmi_protocol.h
+++ b/include/linux/scmi_protocol.h
@@ -414,4 +414,10 @@ struct scmi_sensor_trip_point_report {
 	u32	trip_point_desc;
 };
 
+struct scmi_reset_issued_report {
+	ktime_t	timestamp;
+	u32	domain_id;
+	u32	reset_state;
+};
+
 #endif /* _LINUX_SCMI_PROTOCOL_H */
-- 
2.17.1


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 13/13] firmware: arm_scmi: Add Base notifications support
  2020-03-04 16:25 ` Cristian Marussi
@ 2020-03-04 16:25   ` Cristian Marussi
  -1 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: sudeep.holla, lukasz.luba, james.quinlan, Jonathan.Cameron,
	cristian.marussi

Make SCMI Base protocol register with the notification core.

Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
V3 --> V4
- scmi_event field renamed
V2 --> V3
- added handle awareness
V1 --> V2
- simplified .set_notify_enabled() implementation moving the ALL_SRCIDs
  logic out of protocol. ALL_SRCIDs logic is now in charge of the
  notification core, together with proper reference counting of enables
- switched to devres protocol-registration
---
 drivers/firmware/arm_scmi/base.c | 109 +++++++++++++++++++++++++++++++
 include/linux/scmi_protocol.h    |   8 +++
 2 files changed, 117 insertions(+)

diff --git a/drivers/firmware/arm_scmi/base.c b/drivers/firmware/arm_scmi/base.c
index ce7d9203e41b..0582df9e0512 100644
--- a/drivers/firmware/arm_scmi/base.c
+++ b/drivers/firmware/arm_scmi/base.c
@@ -6,6 +6,9 @@
  */
 
 #include "common.h"
+#include "notify.h"
+
+#define SCMI_BASE_NUM_SOURCES	1
 
 enum scmi_base_protocol_cmd {
 	BASE_DISCOVER_VENDOR = 0x3,
@@ -29,6 +32,19 @@ struct scmi_msg_resp_base_attributes {
 	__le16 reserved;
 };
 
+struct scmi_msg_base_error_notify {
+	__le32 event_control;
+#define BASE_TP_NOTIFY_ALL	BIT(0)
+};
+
+struct scmi_base_error_notify_payld {
+	__le32 agent_id;
+	__le32 error_status;
+#define IS_FATAL_ERROR(x)	((x) & BIT(31))
+#define ERROR_CMD_COUNT(x)	FIELD_GET(GENMASK(9, 0), (x))
+	__le64 msg_reports[8192];
+};
+
 /**
  * scmi_base_attributes_get() - gets the implementation details
  *	that are associated with the base protocol.
@@ -222,6 +238,93 @@ static int scmi_base_discover_agent_get(const struct scmi_handle *handle,
 	return ret;
 }
 
+static int scmi_base_error_notify(const struct scmi_handle *handle, bool enable)
+{
+	int ret;
+	u32 evt_cntl = enable ? BASE_TP_NOTIFY_ALL : 0;
+	struct scmi_xfer *t;
+	struct scmi_msg_base_error_notify *cfg;
+
+	ret = scmi_xfer_get_init(handle, BASE_NOTIFY_ERRORS,
+				 SCMI_PROTOCOL_BASE, sizeof(*cfg), 0, &t);
+	if (ret)
+		return ret;
+
+	cfg = t->tx.buf;
+	cfg->event_control = cpu_to_le32(evt_cntl);
+
+	ret = scmi_do_xfer(handle, t);
+
+	scmi_xfer_put(handle, t);
+	return ret;
+}
+
+static bool scmi_base_set_notify_enabled(const struct scmi_handle *handle,
+					 u8 evt_id, u32 src_id, bool enable)
+{
+	int ret;
+
+	ret = scmi_base_error_notify(handle, enable);
+	if (ret)
+		pr_warn("SCMI Notifications - Proto:%X - FAIL_ENABLED - evt[%X] ret:%d\n",
+			SCMI_PROTOCOL_BASE, evt_id, ret);
+
+	return !ret ? true : false;
+}
+
+static void *scmi_base_fill_custom_report(u8 evt_id, u64 timestamp,
+					  const void *payld, size_t payld_sz,
+					  void *report, u32 *src_id)
+{
+	void *rep = NULL;
+
+	switch (evt_id) {
+	case BASE_ERROR_EVENT:
+	{
+		int i;
+		const struct scmi_base_error_notify_payld *p = payld;
+		struct scmi_base_error_report *r = report;
+
+		/*
+		 * BaseError notification payload is variable in size but
+		 * up to a maximum length determined by the struct ponted by p.
+		 * Instead payld_sz is the effective length of this notification
+		 * payload so cannot be greater of the maximum allowed size as
+		 * pointed by p.
+		 */
+		if (sizeof(*p) < payld_sz)
+			break;
+
+		r->timestamp = timestamp;
+		r->agent_id = le32_to_cpu(p->agent_id);
+		r->fatal = IS_FATAL_ERROR(le32_to_cpu(p->error_status));
+		r->cmd_count = ERROR_CMD_COUNT(le32_to_cpu(p->error_status));
+		for (i = 0; i < r->cmd_count; i++)
+			r->reports[i] = le64_to_cpu(p->msg_reports[i]);
+		*src_id = 0;
+		rep = r;
+		break;
+	}
+	default:
+		break;
+	}
+
+	return rep;
+}
+
+static const struct scmi_event base_events[] = {
+	{
+		.id = BASE_ERROR_EVENT,
+		.max_payld_sz = 8192,
+		.max_report_sz = sizeof(struct scmi_base_error_report),
+	},
+};
+
+static const struct scmi_protocol_event_ops base_event_ops = {
+	.set_notify_enabled = scmi_base_set_notify_enabled,
+	.fill_custom_report = scmi_base_fill_custom_report,
+};
+
 int scmi_base_protocol_init(struct scmi_handle *h)
 {
 	int id, ret;
@@ -256,6 +359,12 @@ int scmi_base_protocol_init(struct scmi_handle *h)
 	dev_dbg(dev, "Found %d protocol(s) %d agent(s)\n", rev->num_protocols,
 		rev->num_agents);
 
+	scmi_register_protocol_events(handle,
+				      SCMI_PROTOCOL_BASE, (4 * PAGE_SIZE),
+				      &base_event_ops, base_events,
+				      ARRAY_SIZE(base_events),
+				      SCMI_BASE_NUM_SOURCES);
+
 	for (id = 0; id < rev->num_agents; id++) {
 		scmi_base_discover_agent_get(handle, id, name);
 		dev_dbg(dev, "Agent %d: %s\n", id, name);
diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
index 91c5fdf567d5..3a1bc8014f51 100644
--- a/include/linux/scmi_protocol.h
+++ b/include/linux/scmi_protocol.h
@@ -420,4 +420,12 @@ struct scmi_reset_issued_report {
 	u32	reset_state;
 };
 
+struct scmi_base_error_report {
+	ktime_t	timestamp;
+	u32	agent_id;
+	bool	fatal;
+	u16	cmd_count;
+	u64	reports[8192];
+};
+
 #endif /* _LINUX_SCMI_PROTOCOL_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v4 13/13] firmware: arm_scmi: Add Base notifications support
@ 2020-03-04 16:25   ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-04 16:25 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel
  Cc: Jonathan.Cameron, cristian.marussi, james.quinlan, lukasz.luba,
	sudeep.holla

Make SCMI Base protocol register with the notification core.

Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
V3 --> V4
- scmi_event field renamed
V2 --> V3
- added handle awareness
V1 --> V2
- simplified .set_notify_enabled() implementation moving the ALL_SRCIDs
  logic out of protocol. ALL_SRCIDs logic is now in charge of the
  notification core, together with proper reference counting of enables
- switched to devres protocol-registration
---
 drivers/firmware/arm_scmi/base.c | 109 +++++++++++++++++++++++++++++++
 include/linux/scmi_protocol.h    |   8 +++
 2 files changed, 117 insertions(+)

diff --git a/drivers/firmware/arm_scmi/base.c b/drivers/firmware/arm_scmi/base.c
index ce7d9203e41b..0582df9e0512 100644
--- a/drivers/firmware/arm_scmi/base.c
+++ b/drivers/firmware/arm_scmi/base.c
@@ -6,6 +6,9 @@
  */
 
 #include "common.h"
+#include "notify.h"
+
+#define SCMI_BASE_NUM_SOURCES	1
 
 enum scmi_base_protocol_cmd {
 	BASE_DISCOVER_VENDOR = 0x3,
@@ -29,6 +32,19 @@ struct scmi_msg_resp_base_attributes {
 	__le16 reserved;
 };
 
+struct scmi_msg_base_error_notify {
+	__le32 event_control;
+#define BASE_TP_NOTIFY_ALL	BIT(0)
+};
+
+struct scmi_base_error_notify_payld {
+	__le32 agent_id;
+	__le32 error_status;
+#define IS_FATAL_ERROR(x)	((x) & BIT(31))
+#define ERROR_CMD_COUNT(x)	FIELD_GET(GENMASK(9, 0), (x))
+	__le64 msg_reports[8192];
+};
+
 /**
  * scmi_base_attributes_get() - gets the implementation details
  *	that are associated with the base protocol.
@@ -222,6 +238,93 @@ static int scmi_base_discover_agent_get(const struct scmi_handle *handle,
 	return ret;
 }
 
+static int scmi_base_error_notify(const struct scmi_handle *handle, bool enable)
+{
+	int ret;
+	u32 evt_cntl = enable ? BASE_TP_NOTIFY_ALL : 0;
+	struct scmi_xfer *t;
+	struct scmi_msg_base_error_notify *cfg;
+
+	ret = scmi_xfer_get_init(handle, BASE_NOTIFY_ERRORS,
+				 SCMI_PROTOCOL_BASE, sizeof(*cfg), 0, &t);
+	if (ret)
+		return ret;
+
+	cfg = t->tx.buf;
+	cfg->event_control = cpu_to_le32(evt_cntl);
+
+	ret = scmi_do_xfer(handle, t);
+
+	scmi_xfer_put(handle, t);
+	return ret;
+}
+
+static bool scmi_base_set_notify_enabled(const struct scmi_handle *handle,
+					 u8 evt_id, u32 src_id, bool enable)
+{
+	int ret;
+
+	ret = scmi_base_error_notify(handle, enable);
+	if (ret)
+		pr_warn("SCMI Notifications - Proto:%X - FAIL_ENABLED - evt[%X] ret:%d\n",
+			SCMI_PROTOCOL_BASE, evt_id, ret);
+
+	return !ret ? true : false;
+}
+
+static void *scmi_base_fill_custom_report(u8 evt_id, u64 timestamp,
+					  const void *payld, size_t payld_sz,
+					  void *report, u32 *src_id)
+{
+	void *rep = NULL;
+
+	switch (evt_id) {
+	case BASE_ERROR_EVENT:
+	{
+		int i;
+		const struct scmi_base_error_notify_payld *p = payld;
+		struct scmi_base_error_report *r = report;
+
+		/*
+		 * BaseError notification payload is variable in size but
+		 * up to a maximum length determined by the struct ponted by p.
+		 * Instead payld_sz is the effective length of this notification
+		 * payload so cannot be greater of the maximum allowed size as
+		 * pointed by p.
+		 */
+		if (sizeof(*p) < payld_sz)
+			break;
+
+		r->timestamp = timestamp;
+		r->agent_id = le32_to_cpu(p->agent_id);
+		r->fatal = IS_FATAL_ERROR(le32_to_cpu(p->error_status));
+		r->cmd_count = ERROR_CMD_COUNT(le32_to_cpu(p->error_status));
+		for (i = 0; i < r->cmd_count; i++)
+			r->reports[i] = le64_to_cpu(p->msg_reports[i]);
+		*src_id = 0;
+		rep = r;
+		break;
+	}
+	default:
+		break;
+	}
+
+	return rep;
+}
+
+static const struct scmi_event base_events[] = {
+	{
+		.id = BASE_ERROR_EVENT,
+		.max_payld_sz = 8192,
+		.max_report_sz = sizeof(struct scmi_base_error_report),
+	},
+};
+
+static const struct scmi_protocol_event_ops base_event_ops = {
+	.set_notify_enabled = scmi_base_set_notify_enabled,
+	.fill_custom_report = scmi_base_fill_custom_report,
+};
+
 int scmi_base_protocol_init(struct scmi_handle *h)
 {
 	int id, ret;
@@ -256,6 +359,12 @@ int scmi_base_protocol_init(struct scmi_handle *h)
 	dev_dbg(dev, "Found %d protocol(s) %d agent(s)\n", rev->num_protocols,
 		rev->num_agents);
 
+	scmi_register_protocol_events(handle,
+				      SCMI_PROTOCOL_BASE, (4 * PAGE_SIZE),
+				      &base_event_ops, base_events,
+				      ARRAY_SIZE(base_events),
+				      SCMI_BASE_NUM_SOURCES);
+
 	for (id = 0; id < rev->num_agents; id++) {
 		scmi_base_discover_agent_get(handle, id, name);
 		dev_dbg(dev, "Agent %d: %s\n", id, name);
diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
index 91c5fdf567d5..3a1bc8014f51 100644
--- a/include/linux/scmi_protocol.h
+++ b/include/linux/scmi_protocol.h
@@ -420,4 +420,12 @@ struct scmi_reset_issued_report {
 	u32	reset_state;
 };
 
+struct scmi_base_error_report {
+	ktime_t	timestamp;
+	u32	agent_id;
+	bool	fatal;
+	u16	cmd_count;
+	u64	reports[8192];
+};
+
 #endif /* _LINUX_SCMI_PROTOCOL_H */
-- 
2.17.1


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 05/13] firmware: arm_scmi: Add notification protocol-registration
  2020-03-04 16:25   ` Cristian Marussi
@ 2020-03-09 11:33     ` Jonathan Cameron
  -1 siblings, 0 replies; 70+ messages in thread
From: Jonathan Cameron @ 2020-03-09 11:33 UTC (permalink / raw)
  To: Cristian Marussi
  Cc: linux-kernel, linux-arm-kernel, sudeep.holla, lukasz.luba, james.quinlan

On Wed, 4 Mar 2020 16:25:50 +0000
Cristian Marussi <cristian.marussi@arm.com> wrote:

> Add core SCMI Notifications protocol-registration support: allow protocols
> to register their own set of supported events, during their initialization
> phase. Notification core can track multiple platform instances by their
> handles.
> 
> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>

Hi.

A few minor things inline.  Fairly sure kernel-doc needs
struct before the heading for each structure comment block.

Also, the events queue init looks like it could just be done with
a kfifo_alloc call.  Perhaps that makes sense given later patches...

Thanks,

Jonathan

> ---
> V3 --> V4
> - removed scratch ISR buffer, move scratch BH buffer into protocol
>   descriptor
> - converted registered_protocols and registered_events from hashtables
>   into bare fixed-sized arrays
> - removed unregister protocols' routines (never called really)
> V2 --> V3
> - added scmi_notify_instance to track target platform instance
> V1 --> V2
> - splitted out of V1 patch 04
> - moved from IDR maps to real HashTables to store events
> - scmi_notifications_initialized is now an atomic_t
> - reviewed protocol registration/unregistration to use devres
> - fixed:
>   drivers/firmware/arm_scmi/notify.c:483:18-23: ERROR:
>   	reference preceded by free on line 482
> 
> Reported-by: kbuild test robot <lkp@intel.com>
> Reported-by: Julia Lawall <julia.lawall@lip6.fr>
> ---
>  drivers/firmware/arm_scmi/Makefile |   2 +-
>  drivers/firmware/arm_scmi/common.h |   4 +
>  drivers/firmware/arm_scmi/notify.c | 439 +++++++++++++++++++++++++++++
>  drivers/firmware/arm_scmi/notify.h |  57 ++++
>  include/linux/scmi_protocol.h      |   9 +
>  5 files changed, 510 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/firmware/arm_scmi/notify.c
>  create mode 100644 drivers/firmware/arm_scmi/notify.h
> 
> diff --git a/drivers/firmware/arm_scmi/Makefile b/drivers/firmware/arm_scmi/Makefile
> index 6694d0d908d6..24a03a36aee4 100644
> --- a/drivers/firmware/arm_scmi/Makefile
> +++ b/drivers/firmware/arm_scmi/Makefile
> @@ -1,7 +1,7 @@
>  # SPDX-License-Identifier: GPL-2.0-only
>  obj-y	= scmi-bus.o scmi-driver.o scmi-protocols.o scmi-transport.o
>  scmi-bus-y = bus.o
> -scmi-driver-y = driver.o
> +scmi-driver-y = driver.o notify.o
>  scmi-transport-y = mailbox.o shmem.o
>  scmi-protocols-y = base.o clock.o perf.o power.o reset.o sensors.o
>  obj-$(CONFIG_ARM_SCMI_POWER_DOMAIN) += scmi_pm_domain.o
> diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
> index 3c2e5d0d7b68..2106c35195ce 100644
> --- a/drivers/firmware/arm_scmi/common.h
> +++ b/drivers/firmware/arm_scmi/common.h
> @@ -6,6 +6,8 @@
>   *
>   * Copyright (C) 2018 ARM Ltd.
>   */
> +#ifndef _SCMI_COMMON_H
> +#define _SCMI_COMMON_H
>  
>  #include <linux/bitfield.h>
>  #include <linux/completion.h>
> @@ -232,3 +234,5 @@ void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem,
>  void shmem_clear_notification(struct scmi_shared_mem __iomem *shmem);
>  bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem,
>  		     struct scmi_xfer *xfer);
> +
> +#endif /* _SCMI_COMMON_H */
> diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
> new file mode 100644
> index 000000000000..31e49cb7d88e
> --- /dev/null
> +++ b/drivers/firmware/arm_scmi/notify.c
> @@ -0,0 +1,439 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * System Control and Management Interface (SCMI) Notification support
> + *
> + * Copyright (C) 2020 ARM Ltd.
> + *
> + * SCMI Protocol specification allows the platform to signal events to
> + * interested agents via notification messages: this is an implementation
> + * of the dispatch and delivery of such notifications to the interested users
> + * inside the Linux kernel.
> + *
> + * An SCMI Notification core instance is initialized for each active platform
> + * instance identified by the means of the usual @scmi_handle.
> + *
> + * Each SCMI Protocol implementation, during its initialization, registers with
> + * this core its set of supported events using @scmi_register_protocol_events():
> + * all the needed descriptors are stored in the @registered_protocols and
> + * @registered_events arrays.
> + */
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include <linux/atomic.h>
> +#include <linux/bug.h>
> +#include <linux/compiler.h>
> +#include <linux/device.h>
> +#include <linux/err.h>
> +#include <linux/kernel.h>
> +#include <linux/kfifo.h>
> +#include <linux/mutex.h>
> +#include <linux/refcount.h>
> +#include <linux/scmi_protocol.h>
> +#include <linux/slab.h>
> +#include <linux/types.h>
> +
> +#include "notify.h"
> +
> +#define	SCMI_MAX_PROTO			256
> +#define	SCMI_ALL_SRC_IDS		0xffffUL
> +/*
> + * Builds an unsigned 32bit key from the given input tuple to be used
> + * as a key in hashtables.
> + */
> +#define MAKE_HASH_KEY(p, e, s)			\
> +	((u32)(((p) << 24) | ((e) << 16) | ((s) & SCMI_ALL_SRC_IDS)))
> +
> +#define MAKE_ALL_SRCS_KEY(p, e)			\
> +	MAKE_HASH_KEY((p), (e), SCMI_ALL_SRC_IDS)
> +
> +struct scmi_registered_protocol_events_desc;
> +
> +/**
> + * scmi_notify_instance  - Represents an instance of the notification core
> + *
> + * Each platform instance, represented by a handle, has its own instance of
> + * the notification subsystem represented by this structure.
> + *
> + * @gid: GroupID used for devres
> + * @handle: A reference to the platform instance
> + * @initialized: A flag that indicates if the core resources have been allocated
> + *		 and protocols are allowed to register their supported events
> + * @enabled: A flag to indicate events can be enabled and start flowing
> + * @registered_protocols: An statically allocated array containing pointers to
> + *			  all the registered protocol-level specific information
> + *			  related to events' handling
> + */
> +struct scmi_notify_instance {
> +	void						*gid;
> +	struct scmi_handle				*handle;
> +	atomic_t					initialized;
> +	atomic_t					enabled;
> +	struct scmi_registered_protocol_events_desc	**registered_protocols;
> +};
> +
> +/**
> + * events_queue  - Describes a queue and its associated worker

I guess this might become clear later, but right now this just looks like
we are open code what could be handled automatically by just using
kfifo_alloc

> + *
> + * Each protocol has its own dedicated events_queue descriptor.
> + *
> + * @sz: Size in bytes of the related kfifo
> + * @qbuf: Pre-allocated buffer of @sz bytes to be used by the kfifo
> + * @kfifo: A dedicated Kernel kfifo descriptor
> + */
> +struct events_queue {
> +	size_t				sz;
> +	u8				*qbuf;
> +	struct kfifo			kfifo;
> +};
> +
> +/**
> + * scmi_event_header  - A utility header

struct scmi...

> + *
> + * This header is prepended to each received event message payload before
> + * queueing it on the related events_queue.
> + *
> + * @timestamp: The timestamp, in nanoseconds (boottime), which was associated
> + *	       to this event as soon as it entered the SCMI RX ISR
> + * @evt_id: Event ID (corresponds to the Event MsgID for this Protocol)
> + * @payld_sz: Effective size of the embedded message payload which follows
> + * @payld: A reference to the embedded event payload
> + */
> +struct scmi_event_header {
> +	u64	timestamp;
> +	u8	evt_id;
> +	size_t	payld_sz;
> +	u8	payld[];
> +} __packed;
> +
> +struct scmi_registered_event;
> +
> +/**
> + * scmi_registered_protocol_events_desc  - Protocol Specific information
> + *
> + * All protocols that registers at least one event have their protocol-specific
> + * information stored here, together with the embedded allocated events_queue.
> + * These descriptors are stored in the @registered_protocols array at protocol
> + * registration time.
> + *
> + * Once these descriptors are successfully registered, they are NEVER again
> + * removed or modified since protocols do not unregister ever, so that once we
> + * safely grab a NON-NULL reference from the array we can keep it and use it.
> + *
> + * @id: Protocol ID
> + * @ops: Protocol specific and event-related operations
> + * @equeue: The embedded per-protocol events_queue
> + * @ni: A reference to the initialized instance descriptor
> + * @eh: A reference to pre-allocated buffer to be used as a scratch area by the
> + *	deferred worker when fetching data from the kfifo
> + * @eh_sz: Size of the pre-allocated buffer @eh
> + * @in_flight: A reference to an in flight @scmi_registered_event
> + * @num_events: Number of events in @registered_events
> + * @registered_events: A dynamically allocated array holding all the registered
> + *		       events' descriptors, whose fixed-size is determined at
> + *		       compile time.
> + */
> +struct scmi_registered_protocol_events_desc {
> +	u8					id;
> +	const struct scmi_protocol_event_ops	*ops;
> +	struct events_queue			equeue;
> +	struct scmi_notify_instance		*ni;
> +	struct scmi_event_header		*eh;
> +	size_t					eh_sz;
> +	void					*in_flight;
> +	int					num_events;
> +	struct scmi_registered_event		**registered_events;
> +};
> +
> +/**
> + * scmi_registered_event  - Event Specific Information

struct scmi_registered_event - Event...

> + *
> + * All registered events are represented by one of these structures that are
> + * stored in the @registered_events array at protocol registration time.
> + *
> + * Once these descriptors are successfully registered, they are NEVER again
> + * removed or modified since protocols do not unregister ever, so that once we
> + * safely grab a NON-NULL reference from the table we can keep it and use it.
> + *
> + * @proto: A reference to the associated protocol descriptor
> + * @evt: A reference to the associated event descriptor (as provided at
> + *       registration time)
> + * @report: A pre-allocated buffer used by the deferred worker to fill a
> + *	    customized event report
> + * @num_sources: The number of possible sources for this event as stated at
> + *		 events' registration time
> + * @sources: A reference to a dynamically allocated array used to refcount the
> + *	     events' enable requests for all the existing sources
> + * @sources_mtx: A mutex to serialize the access to @sources
> + */
> +struct scmi_registered_event {
> +	struct scmi_registered_protocol_events_desc	*proto;
> +	const struct scmi_event				*evt;
> +	void						*report;
> +	u32						num_sources;
> +	refcount_t					*sources;
> +	struct mutex					sources_mtx;
> +};
> +
> +/**
> + * scmi_initialize_events_queue  - Allocate/Initialize a kfifo buffer
> + *
> + * Allocate a buffer for the kfifo and initialize it.
> + *
> + * @ni: A reference to the notification instance to use
> + * @equeue: The events_queue to initialize
> + * @sz: Size of the kfifo buffer to allocate
> + *
> + * Return: 0 on Success
> + */
> +static int scmi_initialize_events_queue(struct scmi_notify_instance *ni,
> +					struct events_queue *equeue, size_t sz)
> +{
> +	equeue->qbuf = devm_kzalloc(ni->handle->dev, sz, GFP_KERNEL);
> +	if (!equeue->qbuf)
> +		return -ENOMEM;
> +	equeue->sz = sz;
> +
> +	return kfifo_init(&equeue->kfifo, equeue->qbuf, equeue->sz);

This seems like a slightly odd dance.  Why not use kfifo_alloc?

If it's because of the lack of devm_kfifo_alloc, maybe use a devm_add_action_or_reset
to handle that.

> +}
> +
> +/**
> + * scmi_allocate_registered_protocol_desc  - Allocate a registered protocol
> + * events' descriptor
> + *
> + * It is supposed to be called only once for each protocol at protocol
> + * initialization time, so it warns if the requested protocol is found
> + * already registered.
> + *
> + * @ni: A reference to the notification instance to use
> + * @proto_id: Protocol ID
> + * @queue_sz: Size of the associated queue to allocate
> + * @eh_sz: Size of the event header scratch area to pre-allocate
> + * @num_events: Number of events to support (size of @registered_events)
> + * @ops: Pointer to a struct holding references to protocol specific helpers
> + *	 needed during events handling
> + *
> + * Returns the allocated and registered descriptor on Success
> + */
> +static struct scmi_registered_protocol_events_desc *
> +scmi_allocate_registered_protocol_desc(struct scmi_notify_instance *ni,
> +				       u8 proto_id, size_t queue_sz,
> +				       size_t eh_sz, int num_events,
> +				const struct scmi_protocol_event_ops *ops)
> +{
> +	int ret;
> +	struct scmi_registered_protocol_events_desc *pd;
> +
> +	pd = READ_ONCE(ni->registered_protocols[proto_id]);
> +	if (pd) {
> +		WARN_ON(1);
> +		return ERR_PTR(-EINVAL);
> +	}
> +
> +	pd = devm_kzalloc(ni->handle->dev, sizeof(*pd), GFP_KERNEL);
> +	if (!pd)
> +		return ERR_PTR(-ENOMEM);
> +	pd->id = proto_id;
> +	pd->ops = ops;
> +	pd->ni = ni;
> +
> +	ret = scmi_initialize_events_queue(ni, &pd->equeue, queue_sz);
> +	if (ret)
> +		return ERR_PTR(ret);
> +
> +	pd->eh = devm_kzalloc(ni->handle->dev, eh_sz, GFP_KERNEL);
> +	if (!pd->eh)
> +		return ERR_PTR(-ENOMEM);
> +	pd->eh_sz = eh_sz;
> +
> +	pd->registered_events = devm_kcalloc(ni->handle->dev, num_events,
> +					     sizeof(char *), GFP_KERNEL);
> +	if (!pd->registered_events)
> +		return ERR_PTR(-ENOMEM);
> +	pd->num_events = num_events;
> +
> +	return pd;
> +}
> +
> +/**
> + * scmi_register_protocol_events  - Register Protocol Events with the core
> + *
> + * Used by SCMI Protocols initialization code to register with the notification
> + * core the list of supported events and their descriptors: takes care to
> + * pre-allocate and store all needed descriptors, scratch buffers and event
> + * queues.
> + *
> + * @handle: The handle identifying the platform instance against which the
> + *	    the protocol's events are registered
> + * @proto_id: Protocol ID
> + * @queue_sz: Size in bytes of the associated queue to be allocated
> + * @ops: Protocol specific event-related operations
> + * @evt: Event descriptor array
> + * @num_events: Number of events in @evt array
> + * @num_sources: Number of possible sources for this protocol on this
> + *		 platform.
> + *
> + * Return: 0 on Success
> + */
> +int scmi_register_protocol_events(const struct scmi_handle *handle,
> +				  u8 proto_id, size_t queue_sz,
> +				  const struct scmi_protocol_event_ops *ops,
> +				  const struct scmi_event *evt, int num_events,
> +				  int num_sources)
> +{
> +	int i;
> +	size_t payld_sz = 0;
> +	struct scmi_registered_protocol_events_desc *pd;
> +	struct scmi_notify_instance *ni = handle->notify_priv;
> +
> +	if (!ops || !evt || proto_id >= SCMI_MAX_PROTO)
> +		return -EINVAL;
> +
> +	/* Ensure atomic value is updated */
> +	smp_mb__before_atomic();
> +	if (unlikely(!ni || !atomic_read(&ni->initialized)))
> +		return -EAGAIN;
> +
> +	/* Attach to the notification main devres group */
> +	if (!devres_open_group(ni->handle->dev, ni->gid, GFP_KERNEL))
> +		return -ENOMEM;
> +
> +	for (i = 0; i < num_events; i++)
> +		payld_sz = max_t(size_t, payld_sz, evt[i].max_payld_sz);
> +	pd = scmi_allocate_registered_protocol_desc(ni, proto_id, queue_sz,
> +				    sizeof(struct scmi_event_header) + payld_sz,
> +						    num_events, ops);
> +	if (IS_ERR(pd))
> +		goto err;
> +
> +	for (i = 0; i < num_events; i++, evt++) {
> +		struct scmi_registered_event *r_evt;
> +
> +		r_evt = devm_kzalloc(ni->handle->dev, sizeof(*r_evt),
> +				     GFP_KERNEL);
> +		if (!r_evt)
> +			goto err;
> +		r_evt->proto = pd;
> +		r_evt->evt = evt;
> +
> +		r_evt->sources = devm_kcalloc(ni->handle->dev, num_sources,
> +					      sizeof(refcount_t), GFP_KERNEL);
> +		if (!r_evt->sources)
> +			goto err;
> +		r_evt->num_sources = num_sources;
> +		mutex_init(&r_evt->sources_mtx);
> +
> +		r_evt->report = devm_kzalloc(ni->handle->dev,
> +					     evt->max_report_sz, GFP_KERNEL);
> +		if (!r_evt->report)
> +			goto err;
> +
> +		WRITE_ONCE(pd->registered_events[i], r_evt);
> +		pr_info("SCMI Notifications: registered event - %X\n",
> +			MAKE_ALL_SRCS_KEY(r_evt->proto->id, r_evt->evt->id));
> +	}
> +
> +	/* Register protocol and events...it will never be removed */
> +	WRITE_ONCE(ni->registered_protocols[proto_id], pd);
> +
> +	devres_close_group(ni->handle->dev, ni->gid);
> +
> +	return 0;
> +
> +err:
> +	pr_warn("SCMI Notifications - Proto:%X - Registration Failed !\n",
> +		proto_id);
> +	/* A failing protocol registration does not trigger full failure */
> +	devres_close_group(ni->handle->dev, ni->gid);
> +
> +	return -ENOMEM;
> +}
> +
> +/**
> + * scmi_notification_init  - Initializes Notification Core Support
> + *
> + * This function lays out all the basic resources needed by the notification
> + * core instance identified by the provided handle: once done, all of the
> + * SCMI Protocols can register their events with the core during their own
> + * initializations.
> + *
> + * Note that failing to initialize the core notifications support does not
> + * cause the whole SCMI Protocols stack to fail its initialization.
> + *
> + * SCMI Notification Initialization happens in 2 steps:
> + *
> + *  - initialization: basic common allocations (this function) -> .initialized
> + *  - registration: protocols asynchronously come into life and registers their
> + *		    own supported list of events with the core; this causes
> + *		    further per-protocol allocations.
> + *
> + * Any user's callback registration attempt, referring a still not registered
> + * event, will be registered as pending and finalized later (if possible)
> + * by @scmi_protocols_late_init work.
> + * This allows for lazy initialization of SCMI Protocols due to late (or
> + * missing) SCMI drivers' modules loading.
> + *
> + * @handle: The handle identifying the platform instance to initialize
> + *
> + * Return: 0 on Success
> + */
> +int scmi_notification_init(struct scmi_handle *handle)
> +{
> +	void *gid;
> +	struct scmi_notify_instance *ni;
> +
> +	gid = devres_open_group(handle->dev, NULL, GFP_KERNEL);
> +	if (!gid)
> +		return -ENOMEM;
> +
> +	ni = devm_kzalloc(handle->dev, sizeof(*ni), GFP_KERNEL);
> +	if (!ni)
> +		goto err;
> +
> +	ni->gid = gid;
> +	ni->handle = handle;
> +
> +	ni->registered_protocols = devm_kcalloc(handle->dev, SCMI_MAX_PROTO,
> +						sizeof(char *), GFP_KERNEL);
> +	if (!ni->registered_protocols)
> +		goto err;
> +
> +	handle->notify_priv = ni;
> +
> +	atomic_set(&ni->initialized, 1);
> +	atomic_set(&ni->enabled, 1);
> +	/* Ensure atomic values are updated */
> +	smp_mb__after_atomic();
> +
> +	pr_info("SCMI Notifications Core Initialized.\n");
> +
> +	devres_close_group(handle->dev, ni->gid);
> +
> +	return 0;
> +
> +err:
> +	pr_warn("SCMI Notifications - Initialization Failed.\n");
> +	devres_release_group(handle->dev, NULL);
> +	return -ENOMEM;
> +}
> +
> +/**
> + * scmi_notification_exit  - Shutdown and clean Notification core
> + *
> + * @handle: The handle identifying the platform instance to shutdown
> + */
> +void scmi_notification_exit(struct scmi_handle *handle)
> +{
> +	struct scmi_notify_instance *ni = handle->notify_priv;
> +
> +	if (unlikely(!ni || !atomic_read(&ni->initialized)))
> +		return;
> +
> +	atomic_set(&ni->enabled, 0);
> +	/* Ensure atomic values are updated */
> +	smp_mb__after_atomic();
> +
> +	devres_release_group(ni->handle->dev, ni->gid);
> +
> +	pr_info("SCMI Notifications Core Shutdown.\n");

Is this actually useful?  Seems like noise to me, maybe pr_debug is more appopriate.

> +}
> diff --git a/drivers/firmware/arm_scmi/notify.h b/drivers/firmware/arm_scmi/notify.h
> new file mode 100644
> index 000000000000..a7ece64e8842
> --- /dev/null
> +++ b/drivers/firmware/arm_scmi/notify.h
> @@ -0,0 +1,57 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * System Control and Management Interface (SCMI) Message Protocol
> + * notification header file containing some definitions, structures
> + * and function prototypes related to SCMI Notification handling.
> + *
> + * Copyright (C) 2019 ARM Ltd.

Update the dates given you are still changing this stuff?

> + */
> +#ifndef _SCMI_NOTIFY_H
> +#define _SCMI_NOTIFY_H
> +
> +#include <linux/device.h>
> +#include <linux/types.h>
> +
> +/**
> + * scmi_event  - Describes an event to be supported

Fairly sure this isn't valid kernel-doc.

   * struct scmi_event - ...

Make sure to run the kernel-doc scripts over any files you've added kernel-doc to
and tidy up the warnings.

> + *
> + * Each SCMI protocol, during its initialization phase, can describe the events
> + * it wishes to support in a few struct scmi_event and pass them to the core
> + * using scmi_register_protocol_events().
> + *
> + * @id: Event ID
> + * @max_payld_sz: Max possible size for the payload of a notif msg of this kind
> + * @max_report_sz: Max possible size for the report of a notif msg of this kind
> + */
> +struct scmi_event {
> +	u8	id;
> +	size_t	max_payld_sz;
> +	size_t	max_report_sz;
> +

Nitpick: Blank line isn't adding anything

> +};
> +
> +/**
> + * scmi_protocol_event_ops  - Helpers called by notification core.
> + *
> + * These are called only in process context.
> + *
> + * @set_notify_enabled: Enable/disable the required evt_id/src_id notifications
> + *			using the proper custom protocol commands.
> + *			Return true if at least one the required src_id
> + *			has been successfully enabled/disabled
> + */
> +struct scmi_protocol_event_ops {
> +	bool (*set_notify_enabled)(const struct scmi_handle *handle,
> +				   u8 evt_id, u32 src_id, bool enabled);
> +};
> +
> +int scmi_notification_init(struct scmi_handle *handle);
> +void scmi_notification_exit(struct scmi_handle *handle);
> +
> +int scmi_register_protocol_events(const struct scmi_handle *handle,
> +				  u8 proto_id, size_t queue_sz,
> +				  const struct scmi_protocol_event_ops *ops,
> +				  const struct scmi_event *evt, int num_events,
> +				  int num_sources);
> +
> +#endif /* _SCMI_NOTIFY_H */
> diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
> index 5c873a59b387..0679f10ab05e 100644
> --- a/include/linux/scmi_protocol.h
> +++ b/include/linux/scmi_protocol.h
> @@ -4,6 +4,10 @@
>   *
>   * Copyright (C) 2018 ARM Ltd.
>   */
> +
> +#ifndef _LINUX_SCMI_PROTOCOL_H
> +#define _LINUX_SCMI_PROTOCOL_H
> +
>  #include <linux/device.h>
>  #include <linux/types.h>
>  
> @@ -227,6 +231,8 @@ struct scmi_reset_ops {
>   *	protocol(for internal use only)
>   * @reset_priv: pointer to private data structure specific to reset
>   *	protocol(for internal use only)
> + * @notify_priv: pointer to private data structure specific to notifications
> + *	(for internal use only)
>   */
>  struct scmi_handle {
>  	struct device *dev;
> @@ -242,6 +248,7 @@ struct scmi_handle {
>  	void *power_priv;
>  	void *sensor_priv;
>  	void *reset_priv;
> +	void *notify_priv;
>  };
>  
>  enum scmi_std_protocol {
> @@ -319,3 +326,5 @@ static inline void scmi_driver_unregister(struct scmi_driver *driver) {}
>  typedef int (*scmi_prot_init_fn_t)(struct scmi_handle *);
>  int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn);
>  void scmi_protocol_unregister(int protocol_id);
> +
> +#endif /* _LINUX_SCMI_PROTOCOL_H */



^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 05/13] firmware: arm_scmi: Add notification protocol-registration
@ 2020-03-09 11:33     ` Jonathan Cameron
  0 siblings, 0 replies; 70+ messages in thread
From: Jonathan Cameron @ 2020-03-09 11:33 UTC (permalink / raw)
  To: Cristian Marussi
  Cc: james.quinlan, lukasz.luba, linux-kernel, linux-arm-kernel, sudeep.holla

On Wed, 4 Mar 2020 16:25:50 +0000
Cristian Marussi <cristian.marussi@arm.com> wrote:

> Add core SCMI Notifications protocol-registration support: allow protocols
> to register their own set of supported events, during their initialization
> phase. Notification core can track multiple platform instances by their
> handles.
> 
> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>

Hi.

A few minor things inline.  Fairly sure kernel-doc needs
struct before the heading for each structure comment block.

Also, the events queue init looks like it could just be done with
a kfifo_alloc call.  Perhaps that makes sense given later patches...

Thanks,

Jonathan

> ---
> V3 --> V4
> - removed scratch ISR buffer, move scratch BH buffer into protocol
>   descriptor
> - converted registered_protocols and registered_events from hashtables
>   into bare fixed-sized arrays
> - removed unregister protocols' routines (never called really)
> V2 --> V3
> - added scmi_notify_instance to track target platform instance
> V1 --> V2
> - splitted out of V1 patch 04
> - moved from IDR maps to real HashTables to store events
> - scmi_notifications_initialized is now an atomic_t
> - reviewed protocol registration/unregistration to use devres
> - fixed:
>   drivers/firmware/arm_scmi/notify.c:483:18-23: ERROR:
>   	reference preceded by free on line 482
> 
> Reported-by: kbuild test robot <lkp@intel.com>
> Reported-by: Julia Lawall <julia.lawall@lip6.fr>
> ---
>  drivers/firmware/arm_scmi/Makefile |   2 +-
>  drivers/firmware/arm_scmi/common.h |   4 +
>  drivers/firmware/arm_scmi/notify.c | 439 +++++++++++++++++++++++++++++
>  drivers/firmware/arm_scmi/notify.h |  57 ++++
>  include/linux/scmi_protocol.h      |   9 +
>  5 files changed, 510 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/firmware/arm_scmi/notify.c
>  create mode 100644 drivers/firmware/arm_scmi/notify.h
> 
> diff --git a/drivers/firmware/arm_scmi/Makefile b/drivers/firmware/arm_scmi/Makefile
> index 6694d0d908d6..24a03a36aee4 100644
> --- a/drivers/firmware/arm_scmi/Makefile
> +++ b/drivers/firmware/arm_scmi/Makefile
> @@ -1,7 +1,7 @@
>  # SPDX-License-Identifier: GPL-2.0-only
>  obj-y	= scmi-bus.o scmi-driver.o scmi-protocols.o scmi-transport.o
>  scmi-bus-y = bus.o
> -scmi-driver-y = driver.o
> +scmi-driver-y = driver.o notify.o
>  scmi-transport-y = mailbox.o shmem.o
>  scmi-protocols-y = base.o clock.o perf.o power.o reset.o sensors.o
>  obj-$(CONFIG_ARM_SCMI_POWER_DOMAIN) += scmi_pm_domain.o
> diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
> index 3c2e5d0d7b68..2106c35195ce 100644
> --- a/drivers/firmware/arm_scmi/common.h
> +++ b/drivers/firmware/arm_scmi/common.h
> @@ -6,6 +6,8 @@
>   *
>   * Copyright (C) 2018 ARM Ltd.
>   */
> +#ifndef _SCMI_COMMON_H
> +#define _SCMI_COMMON_H
>  
>  #include <linux/bitfield.h>
>  #include <linux/completion.h>
> @@ -232,3 +234,5 @@ void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem,
>  void shmem_clear_notification(struct scmi_shared_mem __iomem *shmem);
>  bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem,
>  		     struct scmi_xfer *xfer);
> +
> +#endif /* _SCMI_COMMON_H */
> diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
> new file mode 100644
> index 000000000000..31e49cb7d88e
> --- /dev/null
> +++ b/drivers/firmware/arm_scmi/notify.c
> @@ -0,0 +1,439 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * System Control and Management Interface (SCMI) Notification support
> + *
> + * Copyright (C) 2020 ARM Ltd.
> + *
> + * SCMI Protocol specification allows the platform to signal events to
> + * interested agents via notification messages: this is an implementation
> + * of the dispatch and delivery of such notifications to the interested users
> + * inside the Linux kernel.
> + *
> + * An SCMI Notification core instance is initialized for each active platform
> + * instance identified by the means of the usual @scmi_handle.
> + *
> + * Each SCMI Protocol implementation, during its initialization, registers with
> + * this core its set of supported events using @scmi_register_protocol_events():
> + * all the needed descriptors are stored in the @registered_protocols and
> + * @registered_events arrays.
> + */
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include <linux/atomic.h>
> +#include <linux/bug.h>
> +#include <linux/compiler.h>
> +#include <linux/device.h>
> +#include <linux/err.h>
> +#include <linux/kernel.h>
> +#include <linux/kfifo.h>
> +#include <linux/mutex.h>
> +#include <linux/refcount.h>
> +#include <linux/scmi_protocol.h>
> +#include <linux/slab.h>
> +#include <linux/types.h>
> +
> +#include "notify.h"
> +
> +#define	SCMI_MAX_PROTO			256
> +#define	SCMI_ALL_SRC_IDS		0xffffUL
> +/*
> + * Builds an unsigned 32bit key from the given input tuple to be used
> + * as a key in hashtables.
> + */
> +#define MAKE_HASH_KEY(p, e, s)			\
> +	((u32)(((p) << 24) | ((e) << 16) | ((s) & SCMI_ALL_SRC_IDS)))
> +
> +#define MAKE_ALL_SRCS_KEY(p, e)			\
> +	MAKE_HASH_KEY((p), (e), SCMI_ALL_SRC_IDS)
> +
> +struct scmi_registered_protocol_events_desc;
> +
> +/**
> + * scmi_notify_instance  - Represents an instance of the notification core
> + *
> + * Each platform instance, represented by a handle, has its own instance of
> + * the notification subsystem represented by this structure.
> + *
> + * @gid: GroupID used for devres
> + * @handle: A reference to the platform instance
> + * @initialized: A flag that indicates if the core resources have been allocated
> + *		 and protocols are allowed to register their supported events
> + * @enabled: A flag to indicate events can be enabled and start flowing
> + * @registered_protocols: An statically allocated array containing pointers to
> + *			  all the registered protocol-level specific information
> + *			  related to events' handling
> + */
> +struct scmi_notify_instance {
> +	void						*gid;
> +	struct scmi_handle				*handle;
> +	atomic_t					initialized;
> +	atomic_t					enabled;
> +	struct scmi_registered_protocol_events_desc	**registered_protocols;
> +};
> +
> +/**
> + * events_queue  - Describes a queue and its associated worker

I guess this might become clear later, but right now this just looks like
we are open code what could be handled automatically by just using
kfifo_alloc

> + *
> + * Each protocol has its own dedicated events_queue descriptor.
> + *
> + * @sz: Size in bytes of the related kfifo
> + * @qbuf: Pre-allocated buffer of @sz bytes to be used by the kfifo
> + * @kfifo: A dedicated Kernel kfifo descriptor
> + */
> +struct events_queue {
> +	size_t				sz;
> +	u8				*qbuf;
> +	struct kfifo			kfifo;
> +};
> +
> +/**
> + * scmi_event_header  - A utility header

struct scmi...

> + *
> + * This header is prepended to each received event message payload before
> + * queueing it on the related events_queue.
> + *
> + * @timestamp: The timestamp, in nanoseconds (boottime), which was associated
> + *	       to this event as soon as it entered the SCMI RX ISR
> + * @evt_id: Event ID (corresponds to the Event MsgID for this Protocol)
> + * @payld_sz: Effective size of the embedded message payload which follows
> + * @payld: A reference to the embedded event payload
> + */
> +struct scmi_event_header {
> +	u64	timestamp;
> +	u8	evt_id;
> +	size_t	payld_sz;
> +	u8	payld[];
> +} __packed;
> +
> +struct scmi_registered_event;
> +
> +/**
> + * scmi_registered_protocol_events_desc  - Protocol Specific information
> + *
> + * All protocols that registers at least one event have their protocol-specific
> + * information stored here, together with the embedded allocated events_queue.
> + * These descriptors are stored in the @registered_protocols array at protocol
> + * registration time.
> + *
> + * Once these descriptors are successfully registered, they are NEVER again
> + * removed or modified since protocols do not unregister ever, so that once we
> + * safely grab a NON-NULL reference from the array we can keep it and use it.
> + *
> + * @id: Protocol ID
> + * @ops: Protocol specific and event-related operations
> + * @equeue: The embedded per-protocol events_queue
> + * @ni: A reference to the initialized instance descriptor
> + * @eh: A reference to pre-allocated buffer to be used as a scratch area by the
> + *	deferred worker when fetching data from the kfifo
> + * @eh_sz: Size of the pre-allocated buffer @eh
> + * @in_flight: A reference to an in flight @scmi_registered_event
> + * @num_events: Number of events in @registered_events
> + * @registered_events: A dynamically allocated array holding all the registered
> + *		       events' descriptors, whose fixed-size is determined at
> + *		       compile time.
> + */
> +struct scmi_registered_protocol_events_desc {
> +	u8					id;
> +	const struct scmi_protocol_event_ops	*ops;
> +	struct events_queue			equeue;
> +	struct scmi_notify_instance		*ni;
> +	struct scmi_event_header		*eh;
> +	size_t					eh_sz;
> +	void					*in_flight;
> +	int					num_events;
> +	struct scmi_registered_event		**registered_events;
> +};
> +
> +/**
> + * scmi_registered_event  - Event Specific Information

struct scmi_registered_event - Event...

> + *
> + * All registered events are represented by one of these structures that are
> + * stored in the @registered_events array at protocol registration time.
> + *
> + * Once these descriptors are successfully registered, they are NEVER again
> + * removed or modified since protocols do not unregister ever, so that once we
> + * safely grab a NON-NULL reference from the table we can keep it and use it.
> + *
> + * @proto: A reference to the associated protocol descriptor
> + * @evt: A reference to the associated event descriptor (as provided at
> + *       registration time)
> + * @report: A pre-allocated buffer used by the deferred worker to fill a
> + *	    customized event report
> + * @num_sources: The number of possible sources for this event as stated at
> + *		 events' registration time
> + * @sources: A reference to a dynamically allocated array used to refcount the
> + *	     events' enable requests for all the existing sources
> + * @sources_mtx: A mutex to serialize the access to @sources
> + */
> +struct scmi_registered_event {
> +	struct scmi_registered_protocol_events_desc	*proto;
> +	const struct scmi_event				*evt;
> +	void						*report;
> +	u32						num_sources;
> +	refcount_t					*sources;
> +	struct mutex					sources_mtx;
> +};
> +
> +/**
> + * scmi_initialize_events_queue  - Allocate/Initialize a kfifo buffer
> + *
> + * Allocate a buffer for the kfifo and initialize it.
> + *
> + * @ni: A reference to the notification instance to use
> + * @equeue: The events_queue to initialize
> + * @sz: Size of the kfifo buffer to allocate
> + *
> + * Return: 0 on Success
> + */
> +static int scmi_initialize_events_queue(struct scmi_notify_instance *ni,
> +					struct events_queue *equeue, size_t sz)
> +{
> +	equeue->qbuf = devm_kzalloc(ni->handle->dev, sz, GFP_KERNEL);
> +	if (!equeue->qbuf)
> +		return -ENOMEM;
> +	equeue->sz = sz;
> +
> +	return kfifo_init(&equeue->kfifo, equeue->qbuf, equeue->sz);

This seems like a slightly odd dance.  Why not use kfifo_alloc?

If it's because of the lack of devm_kfifo_alloc, maybe use a devm_add_action_or_reset
to handle that.

> +}
> +
> +/**
> + * scmi_allocate_registered_protocol_desc  - Allocate a registered protocol
> + * events' descriptor
> + *
> + * It is supposed to be called only once for each protocol at protocol
> + * initialization time, so it warns if the requested protocol is found
> + * already registered.
> + *
> + * @ni: A reference to the notification instance to use
> + * @proto_id: Protocol ID
> + * @queue_sz: Size of the associated queue to allocate
> + * @eh_sz: Size of the event header scratch area to pre-allocate
> + * @num_events: Number of events to support (size of @registered_events)
> + * @ops: Pointer to a struct holding references to protocol specific helpers
> + *	 needed during events handling
> + *
> + * Returns the allocated and registered descriptor on Success
> + */
> +static struct scmi_registered_protocol_events_desc *
> +scmi_allocate_registered_protocol_desc(struct scmi_notify_instance *ni,
> +				       u8 proto_id, size_t queue_sz,
> +				       size_t eh_sz, int num_events,
> +				const struct scmi_protocol_event_ops *ops)
> +{
> +	int ret;
> +	struct scmi_registered_protocol_events_desc *pd;
> +
> +	pd = READ_ONCE(ni->registered_protocols[proto_id]);
> +	if (pd) {
> +		WARN_ON(1);
> +		return ERR_PTR(-EINVAL);
> +	}
> +
> +	pd = devm_kzalloc(ni->handle->dev, sizeof(*pd), GFP_KERNEL);
> +	if (!pd)
> +		return ERR_PTR(-ENOMEM);
> +	pd->id = proto_id;
> +	pd->ops = ops;
> +	pd->ni = ni;
> +
> +	ret = scmi_initialize_events_queue(ni, &pd->equeue, queue_sz);
> +	if (ret)
> +		return ERR_PTR(ret);
> +
> +	pd->eh = devm_kzalloc(ni->handle->dev, eh_sz, GFP_KERNEL);
> +	if (!pd->eh)
> +		return ERR_PTR(-ENOMEM);
> +	pd->eh_sz = eh_sz;
> +
> +	pd->registered_events = devm_kcalloc(ni->handle->dev, num_events,
> +					     sizeof(char *), GFP_KERNEL);
> +	if (!pd->registered_events)
> +		return ERR_PTR(-ENOMEM);
> +	pd->num_events = num_events;
> +
> +	return pd;
> +}
> +
> +/**
> + * scmi_register_protocol_events  - Register Protocol Events with the core
> + *
> + * Used by SCMI Protocols initialization code to register with the notification
> + * core the list of supported events and their descriptors: takes care to
> + * pre-allocate and store all needed descriptors, scratch buffers and event
> + * queues.
> + *
> + * @handle: The handle identifying the platform instance against which the
> + *	    the protocol's events are registered
> + * @proto_id: Protocol ID
> + * @queue_sz: Size in bytes of the associated queue to be allocated
> + * @ops: Protocol specific event-related operations
> + * @evt: Event descriptor array
> + * @num_events: Number of events in @evt array
> + * @num_sources: Number of possible sources for this protocol on this
> + *		 platform.
> + *
> + * Return: 0 on Success
> + */
> +int scmi_register_protocol_events(const struct scmi_handle *handle,
> +				  u8 proto_id, size_t queue_sz,
> +				  const struct scmi_protocol_event_ops *ops,
> +				  const struct scmi_event *evt, int num_events,
> +				  int num_sources)
> +{
> +	int i;
> +	size_t payld_sz = 0;
> +	struct scmi_registered_protocol_events_desc *pd;
> +	struct scmi_notify_instance *ni = handle->notify_priv;
> +
> +	if (!ops || !evt || proto_id >= SCMI_MAX_PROTO)
> +		return -EINVAL;
> +
> +	/* Ensure atomic value is updated */
> +	smp_mb__before_atomic();
> +	if (unlikely(!ni || !atomic_read(&ni->initialized)))
> +		return -EAGAIN;
> +
> +	/* Attach to the notification main devres group */
> +	if (!devres_open_group(ni->handle->dev, ni->gid, GFP_KERNEL))
> +		return -ENOMEM;
> +
> +	for (i = 0; i < num_events; i++)
> +		payld_sz = max_t(size_t, payld_sz, evt[i].max_payld_sz);
> +	pd = scmi_allocate_registered_protocol_desc(ni, proto_id, queue_sz,
> +				    sizeof(struct scmi_event_header) + payld_sz,
> +						    num_events, ops);
> +	if (IS_ERR(pd))
> +		goto err;
> +
> +	for (i = 0; i < num_events; i++, evt++) {
> +		struct scmi_registered_event *r_evt;
> +
> +		r_evt = devm_kzalloc(ni->handle->dev, sizeof(*r_evt),
> +				     GFP_KERNEL);
> +		if (!r_evt)
> +			goto err;
> +		r_evt->proto = pd;
> +		r_evt->evt = evt;
> +
> +		r_evt->sources = devm_kcalloc(ni->handle->dev, num_sources,
> +					      sizeof(refcount_t), GFP_KERNEL);
> +		if (!r_evt->sources)
> +			goto err;
> +		r_evt->num_sources = num_sources;
> +		mutex_init(&r_evt->sources_mtx);
> +
> +		r_evt->report = devm_kzalloc(ni->handle->dev,
> +					     evt->max_report_sz, GFP_KERNEL);
> +		if (!r_evt->report)
> +			goto err;
> +
> +		WRITE_ONCE(pd->registered_events[i], r_evt);
> +		pr_info("SCMI Notifications: registered event - %X\n",
> +			MAKE_ALL_SRCS_KEY(r_evt->proto->id, r_evt->evt->id));
> +	}
> +
> +	/* Register protocol and events...it will never be removed */
> +	WRITE_ONCE(ni->registered_protocols[proto_id], pd);
> +
> +	devres_close_group(ni->handle->dev, ni->gid);
> +
> +	return 0;
> +
> +err:
> +	pr_warn("SCMI Notifications - Proto:%X - Registration Failed !\n",
> +		proto_id);
> +	/* A failing protocol registration does not trigger full failure */
> +	devres_close_group(ni->handle->dev, ni->gid);
> +
> +	return -ENOMEM;
> +}
> +
> +/**
> + * scmi_notification_init  - Initializes Notification Core Support
> + *
> + * This function lays out all the basic resources needed by the notification
> + * core instance identified by the provided handle: once done, all of the
> + * SCMI Protocols can register their events with the core during their own
> + * initializations.
> + *
> + * Note that failing to initialize the core notifications support does not
> + * cause the whole SCMI Protocols stack to fail its initialization.
> + *
> + * SCMI Notification Initialization happens in 2 steps:
> + *
> + *  - initialization: basic common allocations (this function) -> .initialized
> + *  - registration: protocols asynchronously come into life and registers their
> + *		    own supported list of events with the core; this causes
> + *		    further per-protocol allocations.
> + *
> + * Any user's callback registration attempt, referring a still not registered
> + * event, will be registered as pending and finalized later (if possible)
> + * by @scmi_protocols_late_init work.
> + * This allows for lazy initialization of SCMI Protocols due to late (or
> + * missing) SCMI drivers' modules loading.
> + *
> + * @handle: The handle identifying the platform instance to initialize
> + *
> + * Return: 0 on Success
> + */
> +int scmi_notification_init(struct scmi_handle *handle)
> +{
> +	void *gid;
> +	struct scmi_notify_instance *ni;
> +
> +	gid = devres_open_group(handle->dev, NULL, GFP_KERNEL);
> +	if (!gid)
> +		return -ENOMEM;
> +
> +	ni = devm_kzalloc(handle->dev, sizeof(*ni), GFP_KERNEL);
> +	if (!ni)
> +		goto err;
> +
> +	ni->gid = gid;
> +	ni->handle = handle;
> +
> +	ni->registered_protocols = devm_kcalloc(handle->dev, SCMI_MAX_PROTO,
> +						sizeof(char *), GFP_KERNEL);
> +	if (!ni->registered_protocols)
> +		goto err;
> +
> +	handle->notify_priv = ni;
> +
> +	atomic_set(&ni->initialized, 1);
> +	atomic_set(&ni->enabled, 1);
> +	/* Ensure atomic values are updated */
> +	smp_mb__after_atomic();
> +
> +	pr_info("SCMI Notifications Core Initialized.\n");
> +
> +	devres_close_group(handle->dev, ni->gid);
> +
> +	return 0;
> +
> +err:
> +	pr_warn("SCMI Notifications - Initialization Failed.\n");
> +	devres_release_group(handle->dev, NULL);
> +	return -ENOMEM;
> +}
> +
> +/**
> + * scmi_notification_exit  - Shutdown and clean Notification core
> + *
> + * @handle: The handle identifying the platform instance to shutdown
> + */
> +void scmi_notification_exit(struct scmi_handle *handle)
> +{
> +	struct scmi_notify_instance *ni = handle->notify_priv;
> +
> +	if (unlikely(!ni || !atomic_read(&ni->initialized)))
> +		return;
> +
> +	atomic_set(&ni->enabled, 0);
> +	/* Ensure atomic values are updated */
> +	smp_mb__after_atomic();
> +
> +	devres_release_group(ni->handle->dev, ni->gid);
> +
> +	pr_info("SCMI Notifications Core Shutdown.\n");

Is this actually useful?  Seems like noise to me, maybe pr_debug is more appopriate.

> +}
> diff --git a/drivers/firmware/arm_scmi/notify.h b/drivers/firmware/arm_scmi/notify.h
> new file mode 100644
> index 000000000000..a7ece64e8842
> --- /dev/null
> +++ b/drivers/firmware/arm_scmi/notify.h
> @@ -0,0 +1,57 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * System Control and Management Interface (SCMI) Message Protocol
> + * notification header file containing some definitions, structures
> + * and function prototypes related to SCMI Notification handling.
> + *
> + * Copyright (C) 2019 ARM Ltd.

Update the dates given you are still changing this stuff?

> + */
> +#ifndef _SCMI_NOTIFY_H
> +#define _SCMI_NOTIFY_H
> +
> +#include <linux/device.h>
> +#include <linux/types.h>
> +
> +/**
> + * scmi_event  - Describes an event to be supported

Fairly sure this isn't valid kernel-doc.

   * struct scmi_event - ...

Make sure to run the kernel-doc scripts over any files you've added kernel-doc to
and tidy up the warnings.

> + *
> + * Each SCMI protocol, during its initialization phase, can describe the events
> + * it wishes to support in a few struct scmi_event and pass them to the core
> + * using scmi_register_protocol_events().
> + *
> + * @id: Event ID
> + * @max_payld_sz: Max possible size for the payload of a notif msg of this kind
> + * @max_report_sz: Max possible size for the report of a notif msg of this kind
> + */
> +struct scmi_event {
> +	u8	id;
> +	size_t	max_payld_sz;
> +	size_t	max_report_sz;
> +

Nitpick: Blank line isn't adding anything

> +};
> +
> +/**
> + * scmi_protocol_event_ops  - Helpers called by notification core.
> + *
> + * These are called only in process context.
> + *
> + * @set_notify_enabled: Enable/disable the required evt_id/src_id notifications
> + *			using the proper custom protocol commands.
> + *			Return true if at least one the required src_id
> + *			has been successfully enabled/disabled
> + */
> +struct scmi_protocol_event_ops {
> +	bool (*set_notify_enabled)(const struct scmi_handle *handle,
> +				   u8 evt_id, u32 src_id, bool enabled);
> +};
> +
> +int scmi_notification_init(struct scmi_handle *handle);
> +void scmi_notification_exit(struct scmi_handle *handle);
> +
> +int scmi_register_protocol_events(const struct scmi_handle *handle,
> +				  u8 proto_id, size_t queue_sz,
> +				  const struct scmi_protocol_event_ops *ops,
> +				  const struct scmi_event *evt, int num_events,
> +				  int num_sources);
> +
> +#endif /* _SCMI_NOTIFY_H */
> diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
> index 5c873a59b387..0679f10ab05e 100644
> --- a/include/linux/scmi_protocol.h
> +++ b/include/linux/scmi_protocol.h
> @@ -4,6 +4,10 @@
>   *
>   * Copyright (C) 2018 ARM Ltd.
>   */
> +
> +#ifndef _LINUX_SCMI_PROTOCOL_H
> +#define _LINUX_SCMI_PROTOCOL_H
> +
>  #include <linux/device.h>
>  #include <linux/types.h>
>  
> @@ -227,6 +231,8 @@ struct scmi_reset_ops {
>   *	protocol(for internal use only)
>   * @reset_priv: pointer to private data structure specific to reset
>   *	protocol(for internal use only)
> + * @notify_priv: pointer to private data structure specific to notifications
> + *	(for internal use only)
>   */
>  struct scmi_handle {
>  	struct device *dev;
> @@ -242,6 +248,7 @@ struct scmi_handle {
>  	void *power_priv;
>  	void *sensor_priv;
>  	void *reset_priv;
> +	void *notify_priv;
>  };
>  
>  enum scmi_std_protocol {
> @@ -319,3 +326,5 @@ static inline void scmi_driver_unregister(struct scmi_driver *driver) {}
>  typedef int (*scmi_prot_init_fn_t)(struct scmi_handle *);
>  int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn);
>  void scmi_protocol_unregister(int protocol_id);
> +
> +#endif /* _LINUX_SCMI_PROTOCOL_H */



_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 06/13] firmware: arm_scmi: Add notification callbacks-registration
  2020-03-04 16:25   ` Cristian Marussi
@ 2020-03-09 11:50     ` Jonathan Cameron
  -1 siblings, 0 replies; 70+ messages in thread
From: Jonathan Cameron @ 2020-03-09 11:50 UTC (permalink / raw)
  To: Cristian Marussi
  Cc: linux-kernel, linux-arm-kernel, sudeep.holla, lukasz.luba, james.quinlan

On Wed, 4 Mar 2020 16:25:51 +0000
Cristian Marussi <cristian.marussi@arm.com> wrote:

> Add core SCMI Notifications callbacks-registration support: allow users
> to register their own callbacks against the desired events.
> Whenever a registration request is issued against a still non existent
> event, mark such request as pending for later processing, in order to
> account for possible late initializations of SCMI Protocols associated
> to loadable drivers.
> 
> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
Another one that you should run the kernel-doc scripts over. I haven't checked
but fairly sure they won't like some of this...

Otherwise a few trivial things inline.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

Thanks,

Jonathan

> ---
> V3 --> V4
> - split registered_handlers hashtable on a per-protocol basis to reduce
>   unneeded contention
> - introduced pending_handlers table and related late_init worker to finalize
>   handlers registration upon effective protocols' registrations
> - introduced further safe accessors macros for registered_protocols
>   and registered_events arrays
> V2 --> V3
> - refactored get/put event_handler
> - removed generic non-handle-based API
> V1 --> V2
> - splitted out of V1 patch 04
> - moved from IDR maps to real HashTables to store event_handlers
> - added proper enable_events refcounting via __scmi_enable_evt()
>   [was broken in V1 when using ALL_SRCIDs notification chains]
> - reviewed hashtable cleanup strategy in scmi_notifications_exit()
> - added scmi_register_event_notifier()/scmi_unregister_event_notifier()
>   to include/linux/scmi_protocol.h as a candidate user API
>   [no EXPORTs still]
> - added notify_ops to handle during initialization as an additional
>   internal API for scmi_drivers
> ---
>  drivers/firmware/arm_scmi/notify.c | 700 +++++++++++++++++++++++++++++
>  drivers/firmware/arm_scmi/notify.h |  12 +
>  include/linux/scmi_protocol.h      |  50 +++
>  3 files changed, 762 insertions(+)
> 
> diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
> index 31e49cb7d88e..d6c08cce3c63 100644
> --- a/drivers/firmware/arm_scmi/notify.c
> +++ b/drivers/firmware/arm_scmi/notify.c
> @@ -16,18 +16,50 @@
>   * this core its set of supported events using @scmi_register_protocol_events():
>   * all the needed descriptors are stored in the @registered_protocols and
>   * @registered_events arrays.
> + *
> + * Kernel users interested in some specific event can register their callbacks
> + * providing the usual notifier_block descriptor, since this core implements
> + * events' delivery using the standard Kernel notification chains machinery.
> + *
> + * Given the number of possible events defined by SCMI and the extensibility
> + * of the SCMI Protocol itself, the underlying notification chains are created
> + * and destroyed dynamically on demand depending on the number of users
> + * effectively registered for an event, so that no support structures or chains
> + * are allocated until at least one user has registered a notifier_block for
> + * such event. Similarly, events' generation itself is enabled at the platform
> + * level only after at least one user has registered, and it is shutdown after
> + * the last user for that event has gone.
> + *
> + * All users provided callbacks and allocated notification-chains are stored in
> + * the @registered_events_handlers hashtable. Callbacks' registration requests
> + * for still to be registered events are instead kept in the dedicated common
> + * hashtable @pending_events_handlers.
> + *
> + * An event is identified univocally by the tuple (proto_id, evt_id, src_id)
> + * and is served by its own dedicated notification chain; information contained
> + * in such tuples is used, in a few different ways, to generate the needed
> + * hash-keys.
> + *
> + * Here proto_id and evt_id are simply the protocol_id and message_id numbers
> + * as described in the SCMI Protocol specification, while src_id represents an
> + * optional, protocol dependent, source identifier (like domain_id, perf_id
> + * or sensor_id and so forth).
>   */
>  
>  #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
>  
>  #include <linux/atomic.h>
> +#include <linux/bitfield.h>
>  #include <linux/bug.h>
>  #include <linux/compiler.h>
>  #include <linux/device.h>
>  #include <linux/err.h>
> +#include <linux/hashtable.h>
>  #include <linux/kernel.h>
>  #include <linux/kfifo.h>
> +#include <linux/list.h>
>  #include <linux/mutex.h>
> +#include <linux/notifier.h>
>  #include <linux/refcount.h>
>  #include <linux/scmi_protocol.h>
>  #include <linux/slab.h>
> @@ -47,6 +79,71 @@
>  #define MAKE_ALL_SRCS_KEY(p, e)			\
>  	MAKE_HASH_KEY((p), (e), SCMI_ALL_SRC_IDS)
>  
> +/**
> + * Assumes that the stored obj includes its own hash-key in a field named 'key':
> + * with this simplification this macro can be equally used for all the objects'
> + * types hashed by this implementation.
> + *
> + * @__ht: The hashtable name
> + * @__obj: A pointer to the object type to be retrieved from the hashtable;
> + *	   it will be used as a cursor while scanning the hastable and it will
> + *	   be possibly left as NULL when @__k is not found
> + * @__k: The key to search for
> + */
> +#define KEY_FIND(__ht, __obj, __k)				\
> +({								\
> +	hash_for_each_possible((__ht), (__obj), hash, (__k))	\
> +		if (likely((__obj)->key == (__k)))		\
> +			break;					\
> +	__obj;							\
> +})
> +
> +#define PROTO_ID_MASK			GENMASK(31, 24)
> +#define EVT_ID_MASK			GENMASK(23, 16)
> +#define SRC_ID_MASK			GENMASK(15, 0)
> +#define KEY_XTRACT_PROTO_ID(key)	FIELD_GET(PROTO_ID_MASK, (key))
> +#define KEY_XTRACT_EVT_ID(key)		FIELD_GET(EVT_ID_MASK, (key))
> +#define KEY_XTRACT_SRC_ID(key)		FIELD_GET(SRC_ID_MASK, (key))
> +
> +/**
> + * A set of macros used to access safely @registered_protocols and
> + * @registered_events arrays; these are fixed in size and each entry is possibly
> + * populated at protocols' registration time and then only read but NEVER
> + * modified or removed.
> + */
> +#define SCMI_GET_PROTO(__ni, __pid)					\
> +({									\
> +	struct scmi_registered_protocol_events_desc *__pd = NULL;	\
> +									\
> +	if ((__ni) && (__pid) < SCMI_MAX_PROTO)				\
> +		__pd = READ_ONCE((__ni)->registered_protocols[(__pid)]);\
> +	__pd;								\
> +})
> +
> +#define SCMI_GET_REVT_FROM_PD(__pd, __eid)				\
> +({									\
> +	struct scmi_registered_event *__revt = NULL;			\
> +									\
> +	if ((__pd) && (__eid) < (__pd)->num_events)			\
> +		__revt = READ_ONCE((__pd)->registered_events[(__eid)]);	\
> +	__revt;								\
> +})
> +
> +#define SCMI_GET_REVT(__ni, __pid, __eid)				\
> +({									\
> +	struct scmi_registered_event *__revt = NULL;			\
> +	struct scmi_registered_protocol_events_desc *__pd = NULL;	\
> +									\
> +	__pd = SCMI_GET_PROTO((__ni), (__pid));				\
> +	__revt = SCMI_GET_REVT_FROM_PD(__pd, (__eid));			\
> +	__revt;								\
> +})
> +
> +/* A couple of utility macros to limit cruft when calling protocols' helpers */
> +#define REVT_NOTIFY_ENABLE(revt, ...)	\
> +	((revt)->proto->ops->set_notify_enabled((revt)->proto->ni->handle,     \
> +						__VA_ARGS__))
> +
>  struct scmi_registered_protocol_events_desc;
>  
>  /**
> @@ -60,16 +157,25 @@ struct scmi_registered_protocol_events_desc;
>   * @initialized: A flag that indicates if the core resources have been allocated
>   *		 and protocols are allowed to register their supported events
>   * @enabled: A flag to indicate events can be enabled and start flowing
> + * @init_work: A work item to perform final initializations of pending handlers
> + * @pending_mtx: A mutex to protect @pending_events_handlers
>   * @registered_protocols: An statically allocated array containing pointers to
>   *			  all the registered protocol-level specific information
>   *			  related to events' handling
> + * @pending_events_handlers: An hashtable containing all pending events'
> + *			     handlers descriptors
>   */
>  struct scmi_notify_instance {
>  	void						*gid;
>  	struct scmi_handle				*handle;
>  	atomic_t					initialized;
>  	atomic_t					enabled;
> +
> +	struct work_struct				init_work;
> +
> +	struct mutex					pending_mtx;
>  	struct scmi_registered_protocol_events_desc	**registered_protocols;
> +	DECLARE_HASHTABLE(pending_events_handlers, 8);
>  };
>  
>  /**
> @@ -132,6 +238,9 @@ struct scmi_registered_event;
>   * @registered_events: A dynamically allocated array holding all the registered
>   *		       events' descriptors, whose fixed-size is determined at
>   *		       compile time.
> + * @registered_mtx: A mutex to protect @registered_events_handlers
> + * @registered_events_handlers: An hashtable containing all events' handlers
> + *				descriptors registered for this protocol
>   */
>  struct scmi_registered_protocol_events_desc {
>  	u8					id;
> @@ -143,6 +252,8 @@ struct scmi_registered_protocol_events_desc {
>  	void					*in_flight;
>  	int					num_events;
>  	struct scmi_registered_event		**registered_events;
> +	struct mutex				registered_mtx;
> +	DECLARE_HASHTABLE(registered_events_handlers, 8);
>  };
>  
>  /**
> @@ -175,6 +286,38 @@ struct scmi_registered_event {
>  	struct mutex					sources_mtx;
>  };
>  
> +/**
> + * scmi_event_handler  - Event handler information
> + *
> + * This structure collects all the information needed to process a received
> + * event identified by the tuple (proto_id, evt_id, src_id).
> + * These descriptors are stored in a per-protocol @registered_events_handlers
> + * table using as a key a value derived from that tuple.
> + *
> + * @key: The used hashkey
> + * @users: A reference count for number of active users for this handler
> + * @r_evt: A reference to the associated registered event; when this is NULL
> + *	   this handler is pending, which means that identifies a set of
> + *	   callbacks intended to be attached to an event which is still not
> + *	   known nor registered by any protocol at that point in time
> + * @chain: The notification chain dedicated to this specific event tuple
> + * @hash: The hlist_node used for collision handling
> + * @enabled: A boolean which records if event's generation has been already
> + *	     enabled for this handler as a whole
> + */
> +struct scmi_event_handler {
> +	u32				key;
> +	refcount_t			users;
> +	struct scmi_registered_event	*r_evt;
> +	struct blocking_notifier_head	chain;
> +	struct hlist_node		hash;
> +	bool				enabled;
> +};
> +
> +#define IS_HNDL_PENDING(hndl)	((hndl)->r_evt == NULL)
> +
> +static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
> +				      struct scmi_event_handler *hndl);
>  /**
>   * scmi_initialize_events_queue  - Allocate/Initialize a kfifo buffer
>   *
> @@ -252,6 +395,10 @@ scmi_allocate_registered_protocol_desc(struct scmi_notify_instance *ni,
>  		return ERR_PTR(-ENOMEM);
>  	pd->num_events = num_events;
>  
> +	/* Initialize per protocol handlers table */
> +	mutex_init(&pd->registered_mtx);
> +	hash_init(pd->registered_events_handlers);
> +
>  	return pd;
>  }
>  
> @@ -338,6 +485,12 @@ int scmi_register_protocol_events(const struct scmi_handle *handle,
>  
>  	devres_close_group(ni->handle->dev, ni->gid);
>  
> +	/*
> +	 * Finalize any pending events' handler which could have been waiting
> +	 * for this protocol's events registration.
> +	 */
> +	schedule_work(&ni->init_work);
> +
>  	return 0;
>  
>  err:
> @@ -349,6 +502,547 @@ int scmi_register_protocol_events(const struct scmi_handle *handle,
>  	return -ENOMEM;
>  }
>  
> +/**
> + * scmi_allocate_event_handler  - Allocate Event handler
> + *
> + * Allocate an event handler and related notification chain associated with
> + * the provided event handler key.
> + * Note that, at this point, a related registered_event is still to be
> + * associated to this handler descriptor (hndl->r_evt == NULL), so the handler
> + * is initialized as pending.
> + *
> + * Assumes to be called with @pending_mtx already acquired.
> + *
> + * @ni: A reference to the notification instance to use
> + * @evt_key: 32bit key uniquely bind to the event identified by the tuple
> + *	     (proto_id, evt_id, src_id)
> + *
> + * Return: the freshly allocated structure on Success
> + */
> +static struct scmi_event_handler *
> +scmi_allocate_event_handler(struct scmi_notify_instance *ni, u32 evt_key)
> +{
> +	struct scmi_event_handler *hndl;
> +
> +	hndl = kzalloc(sizeof(*hndl), GFP_KERNEL);
> +	if (!hndl)
> +		return ERR_PTR(-ENOMEM);
> +	hndl->key = evt_key;
> +	BLOCKING_INIT_NOTIFIER_HEAD(&hndl->chain);
> +	refcount_set(&hndl->users, 1);
> +	/* New handlers are created pending */
> +	hash_add(ni->pending_events_handlers, &hndl->hash, hndl->key);
> +
> +	return hndl;
> +}
> +
> +/**
> + * scmi_free_event_handler  - Free the provided Event handler
> + *
> + * Assumes to be called with proper locking acquired depending on the situation.
> + *
> + * @hndl: The event handler structure to free
> + */
> +static void scmi_free_event_handler(struct scmi_event_handler *hndl)
> +{
> +	hash_del(&hndl->hash);
> +	kfree(hndl);
> +}
> +
> +/**
> + * scmi_bind_event_handler  - Helper to attempt binding an handler to an event
> + *
> + * If an associated registered event is found, move the handler from the pending
> + * into the registered table.
> + *
> + * Assumes to be called with @pending_mtx already acquired.
> + *
> + * @ni: A reference to the notification instance to use
> + * @hndl: The event handler to bind
> + *
> + * Return: True if bind was successful, False otherwise
> + */
> +static inline bool scmi_bind_event_handler(struct scmi_notify_instance *ni,
> +					   struct scmi_event_handler *hndl)
> +{
> +	struct scmi_registered_event *r_evt;
> +
> +
> +	r_evt = SCMI_GET_REVT(ni, KEY_XTRACT_PROTO_ID(hndl->key),
> +			      KEY_XTRACT_EVT_ID(hndl->key));
> +	if (unlikely(!r_evt))
> +		return false;
> +
> +	/* Remove from pending and insert into registered */
> +	hash_del(&hndl->hash);
> +	hndl->r_evt = r_evt;
> +	mutex_lock(&r_evt->proto->registered_mtx);
> +	hash_add(r_evt->proto->registered_events_handlers,
> +		 &hndl->hash, hndl->key);
> +	mutex_unlock(&r_evt->proto->registered_mtx);
> +
> +	return true;
> +}
> +
> +/**
> + * scmi_valid_pending_handler  - Helper to check pending status of handlers
> + *
> + * An handler is considered pending when its r_evt == NULL, because the related
> + * event was still unknown at handler's registration time; anyway, since all
> + * protocols register their supported events once for all at protocols'
> + * initialization time, a pending handler cannot be considered valid anymore if
> + * the underlying event (which it is waiting for), belongs to an already
> + * initialized and registered protocol.
> + *
> + * @ni: A reference to the notification instance to use
> + * @hndl: The event handler to check
> + *
> + * Return: True if pending registration is still valid, False otherwise.
> + */
> +static inline bool scmi_valid_pending_handler(struct scmi_notify_instance *ni,
> +					      struct scmi_event_handler *hndl)
> +{
> +	struct scmi_registered_protocol_events_desc *pd;
> +
> +	if (unlikely(!IS_HNDL_PENDING(hndl)))
> +		return false;
> +
> +	pd = SCMI_GET_PROTO(ni, KEY_XTRACT_PROTO_ID(hndl->key));
> +	if (pd)
> +		return false;
> +
> +	return true;
> +}
> +
> +/**
> + * scmi_register_event_handler  - Register whenever possible an Event handler
> + *
> + * At first try to bind an event handler to its associated event, then check if
> + * it was at least a valid pending handler: if it was not bound nor valid return
> + * false.
> + *
> + * Valid pending incomplete bindings will be periodically retried by a dedicated
> + * worker which is kicked each time a new protocol completes its own
> + * registration phase.
> + *
> + * Assumes to be called with @pending_mtx acquired.
> + *
> + * @ni: A reference to the notification instance to use
> + * @hndl: The event handler to register
> + *
> + * Return: True if a normal or a valid pending registration has been completed,
> + *	   False otherwise
> + */
> +static bool scmi_register_event_handler(struct scmi_notify_instance *ni,
> +					struct scmi_event_handler *hndl)
> +{
> +	bool ret;
> +
> +	ret = scmi_bind_event_handler(ni, hndl);
> +	if (ret) {
> +		pr_info("SCMI Notifications: registered NEW handler - key:%X\n",
> +			hndl->key);
> +	} else {
> +		ret = scmi_valid_pending_handler(ni, hndl);
> +		if (ret)
> +			pr_info("SCMI Notifications: registered PENDING handler - key:%X\n",
> +				hndl->key);
> +	}
> +
> +	return ret;
> +}
> +
> +/**
> + * __scmi_event_handler_get_ops  - Utility to get or create an event handler
> + *
> + * Search for the desired handler matching the key in both the per-protocol
> + * registered table and the common pending table:
> + *  - if found adjust users refcount
> + *  - if not found and @create is true, create and register the new handler:
> + *    handler could end up being registered as pending if no matching event
> + *    could be found.
> + *
> + * An handler is guaranteed to reside in one and only one of the tables at
> + * any one time; to ensure this the whole search and create is performed
> + * holding the @pending_mtx lock, with @registered_mtx additionally acquired
> + * if needed.
> + * Note that when a nested acquisition of these mutexes is needed the locking
> + * order is always (same as in @init_work):
> + *	1. pending_mtx
> + *	2. registered_mtx
> + *
> + * Events generation is NOT enabled right after creation within this routine
> + * since at creation time we usually want to have all setup and ready before
> + * events really start flowing.
> + *
> + * @ni: A reference to the notification instance to use
> + * @evt_key: The event key to use
> + * @create: A boolean flag to specify if a handler must be created when
> + *	    not already existent
> + *
> + * Return: A properly refcounted handler on Success, NULL on Failure
> + */
> +static inline struct scmi_event_handler *
> +__scmi_event_handler_get_ops(struct scmi_notify_instance *ni,
> +			     u32 evt_key, bool create)
> +{
> +	struct scmi_registered_event *r_evt;
> +	struct scmi_event_handler *hndl = NULL;
> +
> +	r_evt = SCMI_GET_REVT(ni, KEY_XTRACT_PROTO_ID(evt_key),
> +			      KEY_XTRACT_EVT_ID(evt_key));
> +
> +	mutex_lock(&ni->pending_mtx);
> +	/* Search registered events at first ... if possible at all */
> +	if (likely(r_evt)) {
> +		mutex_lock(&r_evt->proto->registered_mtx);
> +		hndl = KEY_FIND(r_evt->proto->registered_events_handlers,
> +				hndl, evt_key);
> +		if (likely(hndl))
> +			refcount_inc(&hndl->users);
> +		mutex_unlock(&r_evt->proto->registered_mtx);
> +	}
> +
> +	/* ...then amongst pending. */
> +	if (unlikely(!hndl)) {
> +		hndl = KEY_FIND(ni->pending_events_handlers, hndl, evt_key);
> +		if (likely(hndl))
> +			refcount_inc(&hndl->users);
> +	}
> +
> +	/* Create if still not found and required */
> +	if (!hndl && create) {
> +		hndl = scmi_allocate_event_handler(ni, evt_key);
> +		if (!IS_ERR_OR_NULL(hndl)) {
> +			if (!scmi_register_event_handler(ni, hndl)) {
> +				pr_info("SCMI Notifications: purging UNKNOWN handler - key:%X\n",
> +					hndl->key);
> +				/* this hndl can be only a pending one */
> +				scmi_put_handler_unlocked(ni, hndl);
> +				hndl = NULL;
> +			}
> +		}
> +	}
> +	mutex_unlock(&ni->pending_mtx);
> +
> +	return hndl;
> +}
> +
> +static struct scmi_event_handler *
> +scmi_get_handler(struct scmi_notify_instance *ni, u32 evt_key)
> +{
> +	return __scmi_event_handler_get_ops(ni, evt_key, false);
> +}
> +
> +static struct scmi_event_handler *
> +scmi_get_or_create_handler(struct scmi_notify_instance *ni, u32 evt_key)
> +{
> +	return __scmi_event_handler_get_ops(ni, evt_key, true);
> +}
> +
> +/**
> + * __scmi_enable_evt  - Enable/disable events generation
> + *
> + * Takes care of proper refcounting while performing enable/disable: handles
> + * the special case of ALL sources requests by itself.
> + *
> + * @r_evt: The registered event to act upon
> + * @src_id: The src_id to act upon
> + * @enable: The action to perform: true->Enable, false->Disable
> + *
> + * Return: True when the required @action has been successfully executed
> + */
> +static inline bool __scmi_enable_evt(struct scmi_registered_event *r_evt,
> +				     u32 src_id, bool enable)
> +{
> +	int ret = 0;
> +	u32 num_sources;
> +	refcount_t *sid;
> +
> +	if (src_id == SCMI_ALL_SRC_IDS) {
> +		src_id = 0;
> +		num_sources = r_evt->num_sources;
> +	} else if (src_id < r_evt->num_sources) {
> +		num_sources = 1;
> +	} else {
> +		return ret;
> +	}
> +
> +	mutex_lock(&r_evt->sources_mtx);
> +	if (enable) {
> +		for (; num_sources; src_id++, num_sources--) {
> +			bool r;
> +
> +			sid = &r_evt->sources[src_id];
> +			if (refcount_read(sid) == 0) {
> +				r = REVT_NOTIFY_ENABLE(r_evt,
> +						       r_evt->evt->id,
> +						       src_id, enable);

I would make the enable explicit in this call so it is obvious we are
in the enable path rather than disable.

> +				if (r)
> +					refcount_set(sid, 1);
> +			} else {
> +				refcount_inc(sid);
> +				r = true;
> +			}
> +			ret += r;
> +		}
> +	} else {
> +		for (; num_sources; src_id++, num_sources--) {
> +			sid = &r_evt->sources[src_id];
> +			if (refcount_dec_and_test(sid))
> +				REVT_NOTIFY_ENABLE(r_evt,
> +						   r_evt->evt->id,
> +						   src_id, enable);

As above, make the enable value explicit.

> +		}
> +		ret = 1;
> +	}
> +	mutex_unlock(&r_evt->sources_mtx);
> +
> +	return ret;
> +}
> +
> +static bool scmi_enable_events(struct scmi_event_handler *hndl)
> +{
> +	if (!hndl->enabled)
> +		hndl->enabled = __scmi_enable_evt(hndl->r_evt,
> +						  KEY_XTRACT_SRC_ID(hndl->key),
> +						  true);
> +	return hndl->enabled;
> +}
> +
> +static bool scmi_disable_events(struct scmi_event_handler *hndl)
> +{
> +	if (hndl->enabled)
> +		hndl->enabled = !__scmi_enable_evt(hndl->r_evt,
> +						   KEY_XTRACT_SRC_ID(hndl->key),
> +						   false);
> +	return !hndl->enabled;
> +}
> +
> +/**
> + * scmi_put_handler_unlocked  - Put an event handler
> + *
> + * After having got exclusive access to the registered handlers hashtable,
> + * update the refcount and if @hndl is no more in use by anyone:
> + *
> + *  - ask for events' generation disabling
> + *  - unregister and free the handler itself
> + *
> + *  Assumes all the proper locking has been managed by the caller.
> + *
> + * @ni: A reference to the notification instance to use
> + * @hndl: The event handler to act upon
> + */
> +
> +static void
> +scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
> +				struct scmi_event_handler *hndl)
> +{
> +	if (refcount_dec_and_test(&hndl->users)) {
> +		if (likely(!IS_HNDL_PENDING(hndl)))
> +			scmi_disable_events(hndl);
> +		scmi_free_event_handler(hndl);
> +	}
> +}
> +
> +static void scmi_put_handler(struct scmi_notify_instance *ni,
> +			     struct scmi_event_handler *hndl)
> +{
> +	struct scmi_registered_event *r_evt = hndl->r_evt;
> +
> +	mutex_lock(&ni->pending_mtx);
> +	if (r_evt)
> +		mutex_lock(&r_evt->proto->registered_mtx);
> +
> +	scmi_put_handler_unlocked(ni, hndl);
> +
> +	if (r_evt)
> +		mutex_unlock(&r_evt->proto->registered_mtx);
> +	mutex_unlock(&ni->pending_mtx);
> +}
> +
> +/**
> + * scmi_event_handler_enable_events  - Enable events associated to an handler
> + *
> + * @hndl: The Event handler to act upon
> + *
> + * Return: True on success
> + */
> +static bool scmi_event_handler_enable_events(struct scmi_event_handler *hndl)
> +{
> +	if (!scmi_enable_events(hndl)) {
> +		pr_err("SCMI Notifications: Failed to ENABLE events for key:%X !\n",
> +		       hndl->key);
> +		return false;
> +	}
> +
> +	return true;
> +}
> +
> +/**
> + * scmi_register_notifier  - Register a notifier_block for an event
> + *
> + * Generic helper to register a notifier_block against a protocol event.
> + *
> + * A notifier_block @nb will be registered for each distinct event identified
> + * by the tuple (proto_id, evt_id, src_id) on a dedicated notification chain
> + * so that:
> + *
> + *	(proto_X, evt_Y, src_Z) --> chain_X_Y_Z
> + *
> + * @src_id meaning is protocol specific and identifies the origin of the event
> + * (like domain_id, sensor_id and so forth).
> + *
> + * @src_id can be NULL to signify that the caller is interested in receiving
> + * notifications from ALL the available sources for that protocol OR simply that
> + * the protocol does not support distinct sources.
> + *
> + * As soon as one user for the specified tuple appears, an handler is created,
> + * and that specific event's generation is enabled at the platform level, unless
> + * an associated registered event is found missing, meaning that the needed
> + * protocol is still to be initialized and the handler has just been registered
> + * as still pending.
> + *
> + * @handle: The handle identifying the platform instance against which the
> + *	    callback is registered
> + * @proto_id: Protocol ID
> + * @evt_id: Event ID
> + * @src_id: Source ID, when NULL register for events coming form ALL possible
> + *	    sources
> + * @nb: A standard notifier block to register for the specified event
> + *
> + * Return: Return 0 on Success
> + */
> +static int scmi_register_notifier(const struct scmi_handle *handle,
> +				  u8 proto_id, u8 evt_id, u32 *src_id,
> +				  struct notifier_block *nb)
> +{
> +	int ret = 0;
> +	u32 evt_key;
> +	struct scmi_event_handler *hndl;
> +	struct scmi_notify_instance *ni = handle->notify_priv;
> +
> +	if (unlikely(!ni || !atomic_read(&ni->initialized)))
> +		return 0;
> +
> +	evt_key = MAKE_HASH_KEY(proto_id, evt_id,
> +				src_id ? *src_id : SCMI_ALL_SRC_IDS);
> +	hndl = scmi_get_or_create_handler(ni, evt_key);
> +	if (IS_ERR_OR_NULL(hndl))
> +		return PTR_ERR(hndl);
> +
> +	blocking_notifier_chain_register(&hndl->chain, nb);
> +
> +	/* Enable events for not pending handlers */
> +	if (likely(!IS_HNDL_PENDING(hndl))) {
> +		if (!scmi_event_handler_enable_events(hndl)) {
> +			scmi_put_handler(ni, hndl);
> +			ret = -EINVAL;
> +		}
> +	}
> +
> +	return ret;
> +}
> +
> +/**
> + * scmi_unregister_notifier  - Unregister a notifier_block for an event
> + *
> + * Takes care to unregister the provided @nb from the notification chain
> + * associated to the specified event and, if there are no more users for the
> + * event handler, frees also the associated event handler structures.
> + * (this could possibly cause disabling of event's generation at platform level)
> + *
> + * @handle: The handle identifying the platform instance against which the
> + *	    callback is unregistered
> + * @proto_id: Protocol ID
> + * @evt_id: Event ID
> + * @src_id: Source ID
> + * @nb: The notifier_block to unregister
> + *
> + * Return: 0 on Success
> + */
> +static int scmi_unregister_notifier(const struct scmi_handle *handle,
> +				    u8 proto_id, u8 evt_id, u32 *src_id,
> +				    struct notifier_block *nb)
> +{
> +	u32 evt_key;
> +	struct scmi_event_handler *hndl;
> +	struct scmi_notify_instance *ni = handle->notify_priv;
> +
> +	if (unlikely(!ni || !atomic_read(&ni->initialized)))
> +		return 0;
> +
> +	evt_key = MAKE_HASH_KEY(proto_id, evt_id,
> +				src_id ? *src_id : SCMI_ALL_SRC_IDS);
> +	hndl = scmi_get_handler(ni, evt_key);
> +	if (IS_ERR_OR_NULL(hndl))
> +		return -EINVAL;
> +
> +	blocking_notifier_chain_unregister(&hndl->chain, nb);
> +	scmi_put_handler(ni, hndl);
> +
> +	/*
> +	 * Free the handler (and stop events) if this happens to be the last
> +	 * known user callback for this handler; a possible concurrently ongoing
> +	 * run of @scmi_lookup_and_call_event_chain will cause this to happen
> +	 * in that context safely instead.
> +	 */
> +	scmi_put_handler(ni, hndl);
> +
> +	return 0;
> +}
> +
> +/**
> + * scmi_protocols_late_init  - Worker for late initialization
> + *
> + * This kicks in whenever a new protocol has completed its own registration via
> + * scmi_register_protocol_events(): it is in charge of scanning the table of
> + * pending handlers (registered by users while the related protocol was still
> + * not initialized) and finalizing their initialization whenever possible;
> + * invalid pending handlers are purged at this point in time.
> + *
> + * @work: The work item to use associated to the proper SCMI instance
> + */
> +static void scmi_protocols_late_init(struct work_struct *work)
> +{
> +	int bkt;
> +	struct scmi_event_handler *hndl;
> +	struct scmi_notify_instance *ni;
> +	struct hlist_node *tmp;
> +
> +	ni = container_of(work, struct scmi_notify_instance, init_work);
> +
> +	mutex_lock(&ni->pending_mtx);
> +	hash_for_each_safe(ni->pending_events_handlers, bkt, tmp, hndl, hash) {
> +		bool ret;
> +
> +		ret = scmi_bind_event_handler(ni, hndl);
> +		if (ret) {
> +			pr_info("SCMI Notifications: finalized PENDING handler - key:%X\n",
> +				hndl->key);
> +			ret = scmi_event_handler_enable_events(hndl);
> +		} else {
> +			ret = scmi_valid_pending_handler(ni, hndl);
> +		}
> +		if (!ret) {
> +			pr_info("SCMI Notifications: purging PENDING handler - key:%X\n",
> +				hndl->key);
> +			/* this hndl can be only a pending one */
> +			scmi_put_handler_unlocked(ni, hndl);
> +		}
> +	}
> +	mutex_unlock(&ni->pending_mtx);
> +}
> +
> +/*
> + * notify_ops are attached to the handle so that can be accessed
> + * directly from an scmi_driver to register its own notifiers.
> + */
> +static struct scmi_notify_ops notify_ops = {
> +	.register_event_notifier = scmi_register_notifier,
> +	.unregister_event_notifier = scmi_unregister_notifier,
> +};
> +
>  /**
>   * scmi_notification_init  - Initializes Notification Core Support
>   *
> @@ -398,7 +1092,13 @@ int scmi_notification_init(struct scmi_handle *handle)
>  	if (!ni->registered_protocols)
>  		goto err;
>  
> +	mutex_init(&ni->pending_mtx);
> +	hash_init(ni->pending_events_handlers);
> +
> +	INIT_WORK(&ni->init_work, scmi_protocols_late_init);
> +
>  	handle->notify_priv = ni;
> +	handle->notify_ops = &notify_ops;
>  
>  	atomic_set(&ni->initialized, 1);
>  	atomic_set(&ni->enabled, 1);
> diff --git a/drivers/firmware/arm_scmi/notify.h b/drivers/firmware/arm_scmi/notify.h
> index a7ece64e8842..f765acda2311 100644
> --- a/drivers/firmware/arm_scmi/notify.h
> +++ b/drivers/firmware/arm_scmi/notify.h
> @@ -9,9 +9,21 @@
>  #ifndef _SCMI_NOTIFY_H
>  #define _SCMI_NOTIFY_H
>  
> +#include <linux/bug.h>
>  #include <linux/device.h>
>  #include <linux/types.h>
>  
> +#define MAP_EVT_TO_ENABLE_CMD(id, map)			\
> +({							\
> +	int ret = -1;					\
> +							\
> +	if (likely((id) < ARRAY_SIZE((map))))		\
> +		ret = (map)[(id)];			\
> +	else						\
> +		WARN(1, "UN-KNOWN evt_id:%d\n", (id));	\
> +	ret;						\
> +})
> +
>  /**
>   * scmi_event  - Describes an event to be supported
>   *
> diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
> index 0679f10ab05e..797e1e03ae52 100644
> --- a/include/linux/scmi_protocol.h
> +++ b/include/linux/scmi_protocol.h
> @@ -9,6 +9,8 @@
>  #define _LINUX_SCMI_PROTOCOL_H
>  
>  #include <linux/device.h>
> +#include <linux/ktime.h>
> +#include <linux/notifier.h>
>  #include <linux/types.h>
>  
>  #define SCMI_MAX_STR_SIZE	16
> @@ -211,6 +213,52 @@ struct scmi_reset_ops {
>  	int (*deassert)(const struct scmi_handle *handle, u32 domain);
>  };
>  
> +/**
> + * scmi_notify_ops  - represents notifications' operations provided by SCMI core
> + *
> + * A user can register/unregister its own notifier_block against the wanted
> + * platform instance regarding the desired event identified by the
> + * tuple: (proto_id, evt_id, src_id)
> + *
> + * @register_event_notifier: Register a notifier_block for the requested event
> + * @unregister_event_notifier: Unregister a notifier_block for the requested
> + *			       event
> + *
> + * where:
> + *
> + * @handle: The handle identifying the platform instance to use
> + * @proto_id: The protocol ID as in SCMI Specification
> + * @evt_id: The message ID of the desired event as in SCMI Specification
> + * @src_id: A pointer to the desired source ID if different sources are
> + *	    possible for the protocol (like domain_id, sensor_id...etc)
> + *
> + * @src_id can be provided as NULL if it simply does NOT make sense for
> + * the protocol at hand, OR if the user is explicitly interested in
> + * receiving notifications from ANY existent source associated to the
> + * specified proto_id / evt_id.
> + *
> + * Received notifications are finally delivered to the registered users,
> + * invoking the callback provided with the notifier_block *nb as follows:
> + *
> + *	int user_cb(nb, evt_id, report)
> + *
> + * with:
> + *
> + * @nb: The notifier block provided by the user
> + * @evt_id: The message ID of the delivered event
> + * @report: A custom struct describing the specific event delivered
> + *
> + * Events' customized report structs are detailed in the following.
> + */
> +struct scmi_notify_ops {
> +	int (*register_event_notifier)(const struct scmi_handle *handle,
> +				       u8 proto_id, u8 evt_id, u32 *src_id,
> +				       struct notifier_block *nb);
> +	int (*unregister_event_notifier)(const struct scmi_handle *handle,
> +					 u8 proto_id, u8 evt_id, u32 *src_id,
> +					 struct notifier_block *nb);
> +};
> +
>  /**
>   * struct scmi_handle - Handle returned to ARM SCMI clients for usage.
>   *
> @@ -221,6 +269,7 @@ struct scmi_reset_ops {
>   * @clk_ops: pointer to set of clock protocol operations
>   * @sensor_ops: pointer to set of sensor protocol operations
>   * @reset_ops: pointer to set of reset protocol operations
> + * @notify_ops: pointer to set of notifications related operations
>   * @perf_priv: pointer to private data structure specific to performance
>   *	protocol(for internal use only)
>   * @clk_priv: pointer to private data structure specific to clock
> @@ -242,6 +291,7 @@ struct scmi_handle {
>  	struct scmi_power_ops *power_ops;
>  	struct scmi_sensor_ops *sensor_ops;
>  	struct scmi_reset_ops *reset_ops;
> +	struct scmi_notify_ops *notify_ops;
>  	/* for protocol internal use */
>  	void *perf_priv;
>  	void *clk_priv;



^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 06/13] firmware: arm_scmi: Add notification callbacks-registration
@ 2020-03-09 11:50     ` Jonathan Cameron
  0 siblings, 0 replies; 70+ messages in thread
From: Jonathan Cameron @ 2020-03-09 11:50 UTC (permalink / raw)
  To: Cristian Marussi
  Cc: james.quinlan, lukasz.luba, linux-kernel, linux-arm-kernel, sudeep.holla

On Wed, 4 Mar 2020 16:25:51 +0000
Cristian Marussi <cristian.marussi@arm.com> wrote:

> Add core SCMI Notifications callbacks-registration support: allow users
> to register their own callbacks against the desired events.
> Whenever a registration request is issued against a still non existent
> event, mark such request as pending for later processing, in order to
> account for possible late initializations of SCMI Protocols associated
> to loadable drivers.
> 
> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
Another one that you should run the kernel-doc scripts over. I haven't checked
but fairly sure they won't like some of this...

Otherwise a few trivial things inline.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

Thanks,

Jonathan

> ---
> V3 --> V4
> - split registered_handlers hashtable on a per-protocol basis to reduce
>   unneeded contention
> - introduced pending_handlers table and related late_init worker to finalize
>   handlers registration upon effective protocols' registrations
> - introduced further safe accessors macros for registered_protocols
>   and registered_events arrays
> V2 --> V3
> - refactored get/put event_handler
> - removed generic non-handle-based API
> V1 --> V2
> - splitted out of V1 patch 04
> - moved from IDR maps to real HashTables to store event_handlers
> - added proper enable_events refcounting via __scmi_enable_evt()
>   [was broken in V1 when using ALL_SRCIDs notification chains]
> - reviewed hashtable cleanup strategy in scmi_notifications_exit()
> - added scmi_register_event_notifier()/scmi_unregister_event_notifier()
>   to include/linux/scmi_protocol.h as a candidate user API
>   [no EXPORTs still]
> - added notify_ops to handle during initialization as an additional
>   internal API for scmi_drivers
> ---
>  drivers/firmware/arm_scmi/notify.c | 700 +++++++++++++++++++++++++++++
>  drivers/firmware/arm_scmi/notify.h |  12 +
>  include/linux/scmi_protocol.h      |  50 +++
>  3 files changed, 762 insertions(+)
> 
> diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
> index 31e49cb7d88e..d6c08cce3c63 100644
> --- a/drivers/firmware/arm_scmi/notify.c
> +++ b/drivers/firmware/arm_scmi/notify.c
> @@ -16,18 +16,50 @@
>   * this core its set of supported events using @scmi_register_protocol_events():
>   * all the needed descriptors are stored in the @registered_protocols and
>   * @registered_events arrays.
> + *
> + * Kernel users interested in some specific event can register their callbacks
> + * providing the usual notifier_block descriptor, since this core implements
> + * events' delivery using the standard Kernel notification chains machinery.
> + *
> + * Given the number of possible events defined by SCMI and the extensibility
> + * of the SCMI Protocol itself, the underlying notification chains are created
> + * and destroyed dynamically on demand depending on the number of users
> + * effectively registered for an event, so that no support structures or chains
> + * are allocated until at least one user has registered a notifier_block for
> + * such event. Similarly, events' generation itself is enabled at the platform
> + * level only after at least one user has registered, and it is shutdown after
> + * the last user for that event has gone.
> + *
> + * All users provided callbacks and allocated notification-chains are stored in
> + * the @registered_events_handlers hashtable. Callbacks' registration requests
> + * for still to be registered events are instead kept in the dedicated common
> + * hashtable @pending_events_handlers.
> + *
> + * An event is identified univocally by the tuple (proto_id, evt_id, src_id)
> + * and is served by its own dedicated notification chain; information contained
> + * in such tuples is used, in a few different ways, to generate the needed
> + * hash-keys.
> + *
> + * Here proto_id and evt_id are simply the protocol_id and message_id numbers
> + * as described in the SCMI Protocol specification, while src_id represents an
> + * optional, protocol dependent, source identifier (like domain_id, perf_id
> + * or sensor_id and so forth).
>   */
>  
>  #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
>  
>  #include <linux/atomic.h>
> +#include <linux/bitfield.h>
>  #include <linux/bug.h>
>  #include <linux/compiler.h>
>  #include <linux/device.h>
>  #include <linux/err.h>
> +#include <linux/hashtable.h>
>  #include <linux/kernel.h>
>  #include <linux/kfifo.h>
> +#include <linux/list.h>
>  #include <linux/mutex.h>
> +#include <linux/notifier.h>
>  #include <linux/refcount.h>
>  #include <linux/scmi_protocol.h>
>  #include <linux/slab.h>
> @@ -47,6 +79,71 @@
>  #define MAKE_ALL_SRCS_KEY(p, e)			\
>  	MAKE_HASH_KEY((p), (e), SCMI_ALL_SRC_IDS)
>  
> +/**
> + * Assumes that the stored obj includes its own hash-key in a field named 'key':
> + * with this simplification this macro can be equally used for all the objects'
> + * types hashed by this implementation.
> + *
> + * @__ht: The hashtable name
> + * @__obj: A pointer to the object type to be retrieved from the hashtable;
> + *	   it will be used as a cursor while scanning the hastable and it will
> + *	   be possibly left as NULL when @__k is not found
> + * @__k: The key to search for
> + */
> +#define KEY_FIND(__ht, __obj, __k)				\
> +({								\
> +	hash_for_each_possible((__ht), (__obj), hash, (__k))	\
> +		if (likely((__obj)->key == (__k)))		\
> +			break;					\
> +	__obj;							\
> +})
> +
> +#define PROTO_ID_MASK			GENMASK(31, 24)
> +#define EVT_ID_MASK			GENMASK(23, 16)
> +#define SRC_ID_MASK			GENMASK(15, 0)
> +#define KEY_XTRACT_PROTO_ID(key)	FIELD_GET(PROTO_ID_MASK, (key))
> +#define KEY_XTRACT_EVT_ID(key)		FIELD_GET(EVT_ID_MASK, (key))
> +#define KEY_XTRACT_SRC_ID(key)		FIELD_GET(SRC_ID_MASK, (key))
> +
> +/**
> + * A set of macros used to access safely @registered_protocols and
> + * @registered_events arrays; these are fixed in size and each entry is possibly
> + * populated at protocols' registration time and then only read but NEVER
> + * modified or removed.
> + */
> +#define SCMI_GET_PROTO(__ni, __pid)					\
> +({									\
> +	struct scmi_registered_protocol_events_desc *__pd = NULL;	\
> +									\
> +	if ((__ni) && (__pid) < SCMI_MAX_PROTO)				\
> +		__pd = READ_ONCE((__ni)->registered_protocols[(__pid)]);\
> +	__pd;								\
> +})
> +
> +#define SCMI_GET_REVT_FROM_PD(__pd, __eid)				\
> +({									\
> +	struct scmi_registered_event *__revt = NULL;			\
> +									\
> +	if ((__pd) && (__eid) < (__pd)->num_events)			\
> +		__revt = READ_ONCE((__pd)->registered_events[(__eid)]);	\
> +	__revt;								\
> +})
> +
> +#define SCMI_GET_REVT(__ni, __pid, __eid)				\
> +({									\
> +	struct scmi_registered_event *__revt = NULL;			\
> +	struct scmi_registered_protocol_events_desc *__pd = NULL;	\
> +									\
> +	__pd = SCMI_GET_PROTO((__ni), (__pid));				\
> +	__revt = SCMI_GET_REVT_FROM_PD(__pd, (__eid));			\
> +	__revt;								\
> +})
> +
> +/* A couple of utility macros to limit cruft when calling protocols' helpers */
> +#define REVT_NOTIFY_ENABLE(revt, ...)	\
> +	((revt)->proto->ops->set_notify_enabled((revt)->proto->ni->handle,     \
> +						__VA_ARGS__))
> +
>  struct scmi_registered_protocol_events_desc;
>  
>  /**
> @@ -60,16 +157,25 @@ struct scmi_registered_protocol_events_desc;
>   * @initialized: A flag that indicates if the core resources have been allocated
>   *		 and protocols are allowed to register their supported events
>   * @enabled: A flag to indicate events can be enabled and start flowing
> + * @init_work: A work item to perform final initializations of pending handlers
> + * @pending_mtx: A mutex to protect @pending_events_handlers
>   * @registered_protocols: An statically allocated array containing pointers to
>   *			  all the registered protocol-level specific information
>   *			  related to events' handling
> + * @pending_events_handlers: An hashtable containing all pending events'
> + *			     handlers descriptors
>   */
>  struct scmi_notify_instance {
>  	void						*gid;
>  	struct scmi_handle				*handle;
>  	atomic_t					initialized;
>  	atomic_t					enabled;
> +
> +	struct work_struct				init_work;
> +
> +	struct mutex					pending_mtx;
>  	struct scmi_registered_protocol_events_desc	**registered_protocols;
> +	DECLARE_HASHTABLE(pending_events_handlers, 8);
>  };
>  
>  /**
> @@ -132,6 +238,9 @@ struct scmi_registered_event;
>   * @registered_events: A dynamically allocated array holding all the registered
>   *		       events' descriptors, whose fixed-size is determined at
>   *		       compile time.
> + * @registered_mtx: A mutex to protect @registered_events_handlers
> + * @registered_events_handlers: An hashtable containing all events' handlers
> + *				descriptors registered for this protocol
>   */
>  struct scmi_registered_protocol_events_desc {
>  	u8					id;
> @@ -143,6 +252,8 @@ struct scmi_registered_protocol_events_desc {
>  	void					*in_flight;
>  	int					num_events;
>  	struct scmi_registered_event		**registered_events;
> +	struct mutex				registered_mtx;
> +	DECLARE_HASHTABLE(registered_events_handlers, 8);
>  };
>  
>  /**
> @@ -175,6 +286,38 @@ struct scmi_registered_event {
>  	struct mutex					sources_mtx;
>  };
>  
> +/**
> + * scmi_event_handler  - Event handler information
> + *
> + * This structure collects all the information needed to process a received
> + * event identified by the tuple (proto_id, evt_id, src_id).
> + * These descriptors are stored in a per-protocol @registered_events_handlers
> + * table using as a key a value derived from that tuple.
> + *
> + * @key: The used hashkey
> + * @users: A reference count for number of active users for this handler
> + * @r_evt: A reference to the associated registered event; when this is NULL
> + *	   this handler is pending, which means that identifies a set of
> + *	   callbacks intended to be attached to an event which is still not
> + *	   known nor registered by any protocol at that point in time
> + * @chain: The notification chain dedicated to this specific event tuple
> + * @hash: The hlist_node used for collision handling
> + * @enabled: A boolean which records if event's generation has been already
> + *	     enabled for this handler as a whole
> + */
> +struct scmi_event_handler {
> +	u32				key;
> +	refcount_t			users;
> +	struct scmi_registered_event	*r_evt;
> +	struct blocking_notifier_head	chain;
> +	struct hlist_node		hash;
> +	bool				enabled;
> +};
> +
> +#define IS_HNDL_PENDING(hndl)	((hndl)->r_evt == NULL)
> +
> +static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
> +				      struct scmi_event_handler *hndl);
>  /**
>   * scmi_initialize_events_queue  - Allocate/Initialize a kfifo buffer
>   *
> @@ -252,6 +395,10 @@ scmi_allocate_registered_protocol_desc(struct scmi_notify_instance *ni,
>  		return ERR_PTR(-ENOMEM);
>  	pd->num_events = num_events;
>  
> +	/* Initialize per protocol handlers table */
> +	mutex_init(&pd->registered_mtx);
> +	hash_init(pd->registered_events_handlers);
> +
>  	return pd;
>  }
>  
> @@ -338,6 +485,12 @@ int scmi_register_protocol_events(const struct scmi_handle *handle,
>  
>  	devres_close_group(ni->handle->dev, ni->gid);
>  
> +	/*
> +	 * Finalize any pending events' handler which could have been waiting
> +	 * for this protocol's events registration.
> +	 */
> +	schedule_work(&ni->init_work);
> +
>  	return 0;
>  
>  err:
> @@ -349,6 +502,547 @@ int scmi_register_protocol_events(const struct scmi_handle *handle,
>  	return -ENOMEM;
>  }
>  
> +/**
> + * scmi_allocate_event_handler  - Allocate Event handler
> + *
> + * Allocate an event handler and related notification chain associated with
> + * the provided event handler key.
> + * Note that, at this point, a related registered_event is still to be
> + * associated to this handler descriptor (hndl->r_evt == NULL), so the handler
> + * is initialized as pending.
> + *
> + * Assumes to be called with @pending_mtx already acquired.
> + *
> + * @ni: A reference to the notification instance to use
> + * @evt_key: 32bit key uniquely bind to the event identified by the tuple
> + *	     (proto_id, evt_id, src_id)
> + *
> + * Return: the freshly allocated structure on Success
> + */
> +static struct scmi_event_handler *
> +scmi_allocate_event_handler(struct scmi_notify_instance *ni, u32 evt_key)
> +{
> +	struct scmi_event_handler *hndl;
> +
> +	hndl = kzalloc(sizeof(*hndl), GFP_KERNEL);
> +	if (!hndl)
> +		return ERR_PTR(-ENOMEM);
> +	hndl->key = evt_key;
> +	BLOCKING_INIT_NOTIFIER_HEAD(&hndl->chain);
> +	refcount_set(&hndl->users, 1);
> +	/* New handlers are created pending */
> +	hash_add(ni->pending_events_handlers, &hndl->hash, hndl->key);
> +
> +	return hndl;
> +}
> +
> +/**
> + * scmi_free_event_handler  - Free the provided Event handler
> + *
> + * Assumes to be called with proper locking acquired depending on the situation.
> + *
> + * @hndl: The event handler structure to free
> + */
> +static void scmi_free_event_handler(struct scmi_event_handler *hndl)
> +{
> +	hash_del(&hndl->hash);
> +	kfree(hndl);
> +}
> +
> +/**
> + * scmi_bind_event_handler  - Helper to attempt binding an handler to an event
> + *
> + * If an associated registered event is found, move the handler from the pending
> + * into the registered table.
> + *
> + * Assumes to be called with @pending_mtx already acquired.
> + *
> + * @ni: A reference to the notification instance to use
> + * @hndl: The event handler to bind
> + *
> + * Return: True if bind was successful, False otherwise
> + */
> +static inline bool scmi_bind_event_handler(struct scmi_notify_instance *ni,
> +					   struct scmi_event_handler *hndl)
> +{
> +	struct scmi_registered_event *r_evt;
> +
> +
> +	r_evt = SCMI_GET_REVT(ni, KEY_XTRACT_PROTO_ID(hndl->key),
> +			      KEY_XTRACT_EVT_ID(hndl->key));
> +	if (unlikely(!r_evt))
> +		return false;
> +
> +	/* Remove from pending and insert into registered */
> +	hash_del(&hndl->hash);
> +	hndl->r_evt = r_evt;
> +	mutex_lock(&r_evt->proto->registered_mtx);
> +	hash_add(r_evt->proto->registered_events_handlers,
> +		 &hndl->hash, hndl->key);
> +	mutex_unlock(&r_evt->proto->registered_mtx);
> +
> +	return true;
> +}
> +
> +/**
> + * scmi_valid_pending_handler  - Helper to check pending status of handlers
> + *
> + * An handler is considered pending when its r_evt == NULL, because the related
> + * event was still unknown at handler's registration time; anyway, since all
> + * protocols register their supported events once for all at protocols'
> + * initialization time, a pending handler cannot be considered valid anymore if
> + * the underlying event (which it is waiting for), belongs to an already
> + * initialized and registered protocol.
> + *
> + * @ni: A reference to the notification instance to use
> + * @hndl: The event handler to check
> + *
> + * Return: True if pending registration is still valid, False otherwise.
> + */
> +static inline bool scmi_valid_pending_handler(struct scmi_notify_instance *ni,
> +					      struct scmi_event_handler *hndl)
> +{
> +	struct scmi_registered_protocol_events_desc *pd;
> +
> +	if (unlikely(!IS_HNDL_PENDING(hndl)))
> +		return false;
> +
> +	pd = SCMI_GET_PROTO(ni, KEY_XTRACT_PROTO_ID(hndl->key));
> +	if (pd)
> +		return false;
> +
> +	return true;
> +}
> +
> +/**
> + * scmi_register_event_handler  - Register whenever possible an Event handler
> + *
> + * At first try to bind an event handler to its associated event, then check if
> + * it was at least a valid pending handler: if it was not bound nor valid return
> + * false.
> + *
> + * Valid pending incomplete bindings will be periodically retried by a dedicated
> + * worker which is kicked each time a new protocol completes its own
> + * registration phase.
> + *
> + * Assumes to be called with @pending_mtx acquired.
> + *
> + * @ni: A reference to the notification instance to use
> + * @hndl: The event handler to register
> + *
> + * Return: True if a normal or a valid pending registration has been completed,
> + *	   False otherwise
> + */
> +static bool scmi_register_event_handler(struct scmi_notify_instance *ni,
> +					struct scmi_event_handler *hndl)
> +{
> +	bool ret;
> +
> +	ret = scmi_bind_event_handler(ni, hndl);
> +	if (ret) {
> +		pr_info("SCMI Notifications: registered NEW handler - key:%X\n",
> +			hndl->key);
> +	} else {
> +		ret = scmi_valid_pending_handler(ni, hndl);
> +		if (ret)
> +			pr_info("SCMI Notifications: registered PENDING handler - key:%X\n",
> +				hndl->key);
> +	}
> +
> +	return ret;
> +}
> +
> +/**
> + * __scmi_event_handler_get_ops  - Utility to get or create an event handler
> + *
> + * Search for the desired handler matching the key in both the per-protocol
> + * registered table and the common pending table:
> + *  - if found adjust users refcount
> + *  - if not found and @create is true, create and register the new handler:
> + *    handler could end up being registered as pending if no matching event
> + *    could be found.
> + *
> + * An handler is guaranteed to reside in one and only one of the tables at
> + * any one time; to ensure this the whole search and create is performed
> + * holding the @pending_mtx lock, with @registered_mtx additionally acquired
> + * if needed.
> + * Note that when a nested acquisition of these mutexes is needed the locking
> + * order is always (same as in @init_work):
> + *	1. pending_mtx
> + *	2. registered_mtx
> + *
> + * Events generation is NOT enabled right after creation within this routine
> + * since at creation time we usually want to have all setup and ready before
> + * events really start flowing.
> + *
> + * @ni: A reference to the notification instance to use
> + * @evt_key: The event key to use
> + * @create: A boolean flag to specify if a handler must be created when
> + *	    not already existent
> + *
> + * Return: A properly refcounted handler on Success, NULL on Failure
> + */
> +static inline struct scmi_event_handler *
> +__scmi_event_handler_get_ops(struct scmi_notify_instance *ni,
> +			     u32 evt_key, bool create)
> +{
> +	struct scmi_registered_event *r_evt;
> +	struct scmi_event_handler *hndl = NULL;
> +
> +	r_evt = SCMI_GET_REVT(ni, KEY_XTRACT_PROTO_ID(evt_key),
> +			      KEY_XTRACT_EVT_ID(evt_key));
> +
> +	mutex_lock(&ni->pending_mtx);
> +	/* Search registered events at first ... if possible at all */
> +	if (likely(r_evt)) {
> +		mutex_lock(&r_evt->proto->registered_mtx);
> +		hndl = KEY_FIND(r_evt->proto->registered_events_handlers,
> +				hndl, evt_key);
> +		if (likely(hndl))
> +			refcount_inc(&hndl->users);
> +		mutex_unlock(&r_evt->proto->registered_mtx);
> +	}
> +
> +	/* ...then amongst pending. */
> +	if (unlikely(!hndl)) {
> +		hndl = KEY_FIND(ni->pending_events_handlers, hndl, evt_key);
> +		if (likely(hndl))
> +			refcount_inc(&hndl->users);
> +	}
> +
> +	/* Create if still not found and required */
> +	if (!hndl && create) {
> +		hndl = scmi_allocate_event_handler(ni, evt_key);
> +		if (!IS_ERR_OR_NULL(hndl)) {
> +			if (!scmi_register_event_handler(ni, hndl)) {
> +				pr_info("SCMI Notifications: purging UNKNOWN handler - key:%X\n",
> +					hndl->key);
> +				/* this hndl can be only a pending one */
> +				scmi_put_handler_unlocked(ni, hndl);
> +				hndl = NULL;
> +			}
> +		}
> +	}
> +	mutex_unlock(&ni->pending_mtx);
> +
> +	return hndl;
> +}
> +
> +static struct scmi_event_handler *
> +scmi_get_handler(struct scmi_notify_instance *ni, u32 evt_key)
> +{
> +	return __scmi_event_handler_get_ops(ni, evt_key, false);
> +}
> +
> +static struct scmi_event_handler *
> +scmi_get_or_create_handler(struct scmi_notify_instance *ni, u32 evt_key)
> +{
> +	return __scmi_event_handler_get_ops(ni, evt_key, true);
> +}
> +
> +/**
> + * __scmi_enable_evt  - Enable/disable events generation
> + *
> + * Takes care of proper refcounting while performing enable/disable: handles
> + * the special case of ALL sources requests by itself.
> + *
> + * @r_evt: The registered event to act upon
> + * @src_id: The src_id to act upon
> + * @enable: The action to perform: true->Enable, false->Disable
> + *
> + * Return: True when the required @action has been successfully executed
> + */
> +static inline bool __scmi_enable_evt(struct scmi_registered_event *r_evt,
> +				     u32 src_id, bool enable)
> +{
> +	int ret = 0;
> +	u32 num_sources;
> +	refcount_t *sid;
> +
> +	if (src_id == SCMI_ALL_SRC_IDS) {
> +		src_id = 0;
> +		num_sources = r_evt->num_sources;
> +	} else if (src_id < r_evt->num_sources) {
> +		num_sources = 1;
> +	} else {
> +		return ret;
> +	}
> +
> +	mutex_lock(&r_evt->sources_mtx);
> +	if (enable) {
> +		for (; num_sources; src_id++, num_sources--) {
> +			bool r;
> +
> +			sid = &r_evt->sources[src_id];
> +			if (refcount_read(sid) == 0) {
> +				r = REVT_NOTIFY_ENABLE(r_evt,
> +						       r_evt->evt->id,
> +						       src_id, enable);

I would make the enable explicit in this call so it is obvious we are
in the enable path rather than disable.

> +				if (r)
> +					refcount_set(sid, 1);
> +			} else {
> +				refcount_inc(sid);
> +				r = true;
> +			}
> +			ret += r;
> +		}
> +	} else {
> +		for (; num_sources; src_id++, num_sources--) {
> +			sid = &r_evt->sources[src_id];
> +			if (refcount_dec_and_test(sid))
> +				REVT_NOTIFY_ENABLE(r_evt,
> +						   r_evt->evt->id,
> +						   src_id, enable);

As above, make the enable value explicit.

> +		}
> +		ret = 1;
> +	}
> +	mutex_unlock(&r_evt->sources_mtx);
> +
> +	return ret;
> +}
> +
> +static bool scmi_enable_events(struct scmi_event_handler *hndl)
> +{
> +	if (!hndl->enabled)
> +		hndl->enabled = __scmi_enable_evt(hndl->r_evt,
> +						  KEY_XTRACT_SRC_ID(hndl->key),
> +						  true);
> +	return hndl->enabled;
> +}
> +
> +static bool scmi_disable_events(struct scmi_event_handler *hndl)
> +{
> +	if (hndl->enabled)
> +		hndl->enabled = !__scmi_enable_evt(hndl->r_evt,
> +						   KEY_XTRACT_SRC_ID(hndl->key),
> +						   false);
> +	return !hndl->enabled;
> +}
> +
> +/**
> + * scmi_put_handler_unlocked  - Put an event handler
> + *
> + * After having got exclusive access to the registered handlers hashtable,
> + * update the refcount and if @hndl is no more in use by anyone:
> + *
> + *  - ask for events' generation disabling
> + *  - unregister and free the handler itself
> + *
> + *  Assumes all the proper locking has been managed by the caller.
> + *
> + * @ni: A reference to the notification instance to use
> + * @hndl: The event handler to act upon
> + */
> +
> +static void
> +scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
> +				struct scmi_event_handler *hndl)
> +{
> +	if (refcount_dec_and_test(&hndl->users)) {
> +		if (likely(!IS_HNDL_PENDING(hndl)))
> +			scmi_disable_events(hndl);
> +		scmi_free_event_handler(hndl);
> +	}
> +}
> +
> +static void scmi_put_handler(struct scmi_notify_instance *ni,
> +			     struct scmi_event_handler *hndl)
> +{
> +	struct scmi_registered_event *r_evt = hndl->r_evt;
> +
> +	mutex_lock(&ni->pending_mtx);
> +	if (r_evt)
> +		mutex_lock(&r_evt->proto->registered_mtx);
> +
> +	scmi_put_handler_unlocked(ni, hndl);
> +
> +	if (r_evt)
> +		mutex_unlock(&r_evt->proto->registered_mtx);
> +	mutex_unlock(&ni->pending_mtx);
> +}
> +
> +/**
> + * scmi_event_handler_enable_events  - Enable events associated to an handler
> + *
> + * @hndl: The Event handler to act upon
> + *
> + * Return: True on success
> + */
> +static bool scmi_event_handler_enable_events(struct scmi_event_handler *hndl)
> +{
> +	if (!scmi_enable_events(hndl)) {
> +		pr_err("SCMI Notifications: Failed to ENABLE events for key:%X !\n",
> +		       hndl->key);
> +		return false;
> +	}
> +
> +	return true;
> +}
> +
> +/**
> + * scmi_register_notifier  - Register a notifier_block for an event
> + *
> + * Generic helper to register a notifier_block against a protocol event.
> + *
> + * A notifier_block @nb will be registered for each distinct event identified
> + * by the tuple (proto_id, evt_id, src_id) on a dedicated notification chain
> + * so that:
> + *
> + *	(proto_X, evt_Y, src_Z) --> chain_X_Y_Z
> + *
> + * @src_id meaning is protocol specific and identifies the origin of the event
> + * (like domain_id, sensor_id and so forth).
> + *
> + * @src_id can be NULL to signify that the caller is interested in receiving
> + * notifications from ALL the available sources for that protocol OR simply that
> + * the protocol does not support distinct sources.
> + *
> + * As soon as one user for the specified tuple appears, an handler is created,
> + * and that specific event's generation is enabled at the platform level, unless
> + * an associated registered event is found missing, meaning that the needed
> + * protocol is still to be initialized and the handler has just been registered
> + * as still pending.
> + *
> + * @handle: The handle identifying the platform instance against which the
> + *	    callback is registered
> + * @proto_id: Protocol ID
> + * @evt_id: Event ID
> + * @src_id: Source ID, when NULL register for events coming form ALL possible
> + *	    sources
> + * @nb: A standard notifier block to register for the specified event
> + *
> + * Return: Return 0 on Success
> + */
> +static int scmi_register_notifier(const struct scmi_handle *handle,
> +				  u8 proto_id, u8 evt_id, u32 *src_id,
> +				  struct notifier_block *nb)
> +{
> +	int ret = 0;
> +	u32 evt_key;
> +	struct scmi_event_handler *hndl;
> +	struct scmi_notify_instance *ni = handle->notify_priv;
> +
> +	if (unlikely(!ni || !atomic_read(&ni->initialized)))
> +		return 0;
> +
> +	evt_key = MAKE_HASH_KEY(proto_id, evt_id,
> +				src_id ? *src_id : SCMI_ALL_SRC_IDS);
> +	hndl = scmi_get_or_create_handler(ni, evt_key);
> +	if (IS_ERR_OR_NULL(hndl))
> +		return PTR_ERR(hndl);
> +
> +	blocking_notifier_chain_register(&hndl->chain, nb);
> +
> +	/* Enable events for not pending handlers */
> +	if (likely(!IS_HNDL_PENDING(hndl))) {
> +		if (!scmi_event_handler_enable_events(hndl)) {
> +			scmi_put_handler(ni, hndl);
> +			ret = -EINVAL;
> +		}
> +	}
> +
> +	return ret;
> +}
> +
> +/**
> + * scmi_unregister_notifier  - Unregister a notifier_block for an event
> + *
> + * Takes care to unregister the provided @nb from the notification chain
> + * associated to the specified event and, if there are no more users for the
> + * event handler, frees also the associated event handler structures.
> + * (this could possibly cause disabling of event's generation at platform level)
> + *
> + * @handle: The handle identifying the platform instance against which the
> + *	    callback is unregistered
> + * @proto_id: Protocol ID
> + * @evt_id: Event ID
> + * @src_id: Source ID
> + * @nb: The notifier_block to unregister
> + *
> + * Return: 0 on Success
> + */
> +static int scmi_unregister_notifier(const struct scmi_handle *handle,
> +				    u8 proto_id, u8 evt_id, u32 *src_id,
> +				    struct notifier_block *nb)
> +{
> +	u32 evt_key;
> +	struct scmi_event_handler *hndl;
> +	struct scmi_notify_instance *ni = handle->notify_priv;
> +
> +	if (unlikely(!ni || !atomic_read(&ni->initialized)))
> +		return 0;
> +
> +	evt_key = MAKE_HASH_KEY(proto_id, evt_id,
> +				src_id ? *src_id : SCMI_ALL_SRC_IDS);
> +	hndl = scmi_get_handler(ni, evt_key);
> +	if (IS_ERR_OR_NULL(hndl))
> +		return -EINVAL;
> +
> +	blocking_notifier_chain_unregister(&hndl->chain, nb);
> +	scmi_put_handler(ni, hndl);
> +
> +	/*
> +	 * Free the handler (and stop events) if this happens to be the last
> +	 * known user callback for this handler; a possible concurrently ongoing
> +	 * run of @scmi_lookup_and_call_event_chain will cause this to happen
> +	 * in that context safely instead.
> +	 */
> +	scmi_put_handler(ni, hndl);
> +
> +	return 0;
> +}
> +
> +/**
> + * scmi_protocols_late_init  - Worker for late initialization
> + *
> + * This kicks in whenever a new protocol has completed its own registration via
> + * scmi_register_protocol_events(): it is in charge of scanning the table of
> + * pending handlers (registered by users while the related protocol was still
> + * not initialized) and finalizing their initialization whenever possible;
> + * invalid pending handlers are purged at this point in time.
> + *
> + * @work: The work item to use associated to the proper SCMI instance
> + */
> +static void scmi_protocols_late_init(struct work_struct *work)
> +{
> +	int bkt;
> +	struct scmi_event_handler *hndl;
> +	struct scmi_notify_instance *ni;
> +	struct hlist_node *tmp;
> +
> +	ni = container_of(work, struct scmi_notify_instance, init_work);
> +
> +	mutex_lock(&ni->pending_mtx);
> +	hash_for_each_safe(ni->pending_events_handlers, bkt, tmp, hndl, hash) {
> +		bool ret;
> +
> +		ret = scmi_bind_event_handler(ni, hndl);
> +		if (ret) {
> +			pr_info("SCMI Notifications: finalized PENDING handler - key:%X\n",
> +				hndl->key);
> +			ret = scmi_event_handler_enable_events(hndl);
> +		} else {
> +			ret = scmi_valid_pending_handler(ni, hndl);
> +		}
> +		if (!ret) {
> +			pr_info("SCMI Notifications: purging PENDING handler - key:%X\n",
> +				hndl->key);
> +			/* this hndl can be only a pending one */
> +			scmi_put_handler_unlocked(ni, hndl);
> +		}
> +	}
> +	mutex_unlock(&ni->pending_mtx);
> +}
> +
> +/*
> + * notify_ops are attached to the handle so that can be accessed
> + * directly from an scmi_driver to register its own notifiers.
> + */
> +static struct scmi_notify_ops notify_ops = {
> +	.register_event_notifier = scmi_register_notifier,
> +	.unregister_event_notifier = scmi_unregister_notifier,
> +};
> +
>  /**
>   * scmi_notification_init  - Initializes Notification Core Support
>   *
> @@ -398,7 +1092,13 @@ int scmi_notification_init(struct scmi_handle *handle)
>  	if (!ni->registered_protocols)
>  		goto err;
>  
> +	mutex_init(&ni->pending_mtx);
> +	hash_init(ni->pending_events_handlers);
> +
> +	INIT_WORK(&ni->init_work, scmi_protocols_late_init);
> +
>  	handle->notify_priv = ni;
> +	handle->notify_ops = &notify_ops;
>  
>  	atomic_set(&ni->initialized, 1);
>  	atomic_set(&ni->enabled, 1);
> diff --git a/drivers/firmware/arm_scmi/notify.h b/drivers/firmware/arm_scmi/notify.h
> index a7ece64e8842..f765acda2311 100644
> --- a/drivers/firmware/arm_scmi/notify.h
> +++ b/drivers/firmware/arm_scmi/notify.h
> @@ -9,9 +9,21 @@
>  #ifndef _SCMI_NOTIFY_H
>  #define _SCMI_NOTIFY_H
>  
> +#include <linux/bug.h>
>  #include <linux/device.h>
>  #include <linux/types.h>
>  
> +#define MAP_EVT_TO_ENABLE_CMD(id, map)			\
> +({							\
> +	int ret = -1;					\
> +							\
> +	if (likely((id) < ARRAY_SIZE((map))))		\
> +		ret = (map)[(id)];			\
> +	else						\
> +		WARN(1, "UN-KNOWN evt_id:%d\n", (id));	\
> +	ret;						\
> +})
> +
>  /**
>   * scmi_event  - Describes an event to be supported
>   *
> diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
> index 0679f10ab05e..797e1e03ae52 100644
> --- a/include/linux/scmi_protocol.h
> +++ b/include/linux/scmi_protocol.h
> @@ -9,6 +9,8 @@
>  #define _LINUX_SCMI_PROTOCOL_H
>  
>  #include <linux/device.h>
> +#include <linux/ktime.h>
> +#include <linux/notifier.h>
>  #include <linux/types.h>
>  
>  #define SCMI_MAX_STR_SIZE	16
> @@ -211,6 +213,52 @@ struct scmi_reset_ops {
>  	int (*deassert)(const struct scmi_handle *handle, u32 domain);
>  };
>  
> +/**
> + * scmi_notify_ops  - represents notifications' operations provided by SCMI core
> + *
> + * A user can register/unregister its own notifier_block against the wanted
> + * platform instance regarding the desired event identified by the
> + * tuple: (proto_id, evt_id, src_id)
> + *
> + * @register_event_notifier: Register a notifier_block for the requested event
> + * @unregister_event_notifier: Unregister a notifier_block for the requested
> + *			       event
> + *
> + * where:
> + *
> + * @handle: The handle identifying the platform instance to use
> + * @proto_id: The protocol ID as in SCMI Specification
> + * @evt_id: The message ID of the desired event as in SCMI Specification
> + * @src_id: A pointer to the desired source ID if different sources are
> + *	    possible for the protocol (like domain_id, sensor_id...etc)
> + *
> + * @src_id can be provided as NULL if it simply does NOT make sense for
> + * the protocol at hand, OR if the user is explicitly interested in
> + * receiving notifications from ANY existent source associated to the
> + * specified proto_id / evt_id.
> + *
> + * Received notifications are finally delivered to the registered users,
> + * invoking the callback provided with the notifier_block *nb as follows:
> + *
> + *	int user_cb(nb, evt_id, report)
> + *
> + * with:
> + *
> + * @nb: The notifier block provided by the user
> + * @evt_id: The message ID of the delivered event
> + * @report: A custom struct describing the specific event delivered
> + *
> + * Events' customized report structs are detailed in the following.
> + */
> +struct scmi_notify_ops {
> +	int (*register_event_notifier)(const struct scmi_handle *handle,
> +				       u8 proto_id, u8 evt_id, u32 *src_id,
> +				       struct notifier_block *nb);
> +	int (*unregister_event_notifier)(const struct scmi_handle *handle,
> +					 u8 proto_id, u8 evt_id, u32 *src_id,
> +					 struct notifier_block *nb);
> +};
> +
>  /**
>   * struct scmi_handle - Handle returned to ARM SCMI clients for usage.
>   *
> @@ -221,6 +269,7 @@ struct scmi_reset_ops {
>   * @clk_ops: pointer to set of clock protocol operations
>   * @sensor_ops: pointer to set of sensor protocol operations
>   * @reset_ops: pointer to set of reset protocol operations
> + * @notify_ops: pointer to set of notifications related operations
>   * @perf_priv: pointer to private data structure specific to performance
>   *	protocol(for internal use only)
>   * @clk_priv: pointer to private data structure specific to clock
> @@ -242,6 +291,7 @@ struct scmi_handle {
>  	struct scmi_power_ops *power_ops;
>  	struct scmi_sensor_ops *sensor_ops;
>  	struct scmi_reset_ops *reset_ops;
> +	struct scmi_notify_ops *notify_ops;
>  	/* for protocol internal use */
>  	void *perf_priv;
>  	void *clk_priv;



_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 05/13] firmware: arm_scmi: Add notification protocol-registration
  2020-03-09 11:33     ` Jonathan Cameron
@ 2020-03-09 12:04       ` Cristian Marussi
  -1 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-09 12:04 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-kernel, linux-arm-kernel, sudeep.holla, lukasz.luba, james.quinlan

Hi

On 09/03/2020 11:33, Jonathan Cameron wrote:
> On Wed, 4 Mar 2020 16:25:50 +0000
> Cristian Marussi <cristian.marussi@arm.com> wrote:
> 
>> Add core SCMI Notifications protocol-registration support: allow protocols
>> to register their own set of supported events, during their initialization
>> phase. Notification core can track multiple platform instances by their
>> handles.
>>
>> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
> 
> Hi.
> 
> A few minor things inline.  Fairly sure kernel-doc needs
> struct before the heading for each structure comment block.
> 
> Also, the events queue init looks like it could just be done with
> a kfifo_alloc call.  Perhaps that makes sense given later patches...
> 
> Thanks,
> 
> Jonathan

Thanks for the review first of all !

> 
>> ---
>> V3 --> V4
>> - removed scratch ISR buffer, move scratch BH buffer into protocol
>>   descriptor
>> - converted registered_protocols and registered_events from hashtables
>>   into bare fixed-sized arrays
>> - removed unregister protocols' routines (never called really)
>> V2 --> V3
>> - added scmi_notify_instance to track target platform instance
>> V1 --> V2
>> - splitted out of V1 patch 04
>> - moved from IDR maps to real HashTables to store events
>> - scmi_notifications_initialized is now an atomic_t
>> - reviewed protocol registration/unregistration to use devres
>> - fixed:
>>   drivers/firmware/arm_scmi/notify.c:483:18-23: ERROR:
>>   	reference preceded by free on line 482
>>
>> Reported-by: kbuild test robot <lkp@intel.com>
>> Reported-by: Julia Lawall <julia.lawall@lip6.fr>
>> ---
[snip]
>> +
>> +/**
>> + * scmi_notify_instance  - Represents an instance of the notification core
>> + *
>> + * Each platform instance, represented by a handle, has its own instance of
>> + * the notification subsystem represented by this structure.
>> + *
>> + * @gid: GroupID used for devres
>> + * @handle: A reference to the platform instance
>> + * @initialized: A flag that indicates if the core resources have been allocated
>> + *		 and protocols are allowed to register their supported events
>> + * @enabled: A flag to indicate events can be enabled and start flowing
>> + * @registered_protocols: An statically allocated array containing pointers to
>> + *			  all the registered protocol-level specific information
>> + *			  related to events' handling
>> + */
>> +struct scmi_notify_instance {
>> +	void						*gid;
>> +	struct scmi_handle				*handle;
>> +	atomic_t					initialized;
>> +	atomic_t					enabled;
>> +	struct scmi_registered_protocol_events_desc	**registered_protocols;
>> +};
>> +
>> +/**
>> + * events_queue  - Describes a queue and its associated worker
> 
> I guess this might become clear later, but right now this just looks like
> we are open code what could be handled automatically by just using
> kfifo_alloc
> 

In fact I switched to this split alloc/init (as you guessed later) because of the lack
of devm_ flavour (and my ignorance about the usage of devm_add_action_or_reset ...)
I'll look into it.

>> + *
>> + * Each protocol has its own dedicated events_queue descriptor.
>> + *
>> + * @sz: Size in bytes of the related kfifo
>> + * @qbuf: Pre-allocated buffer of @sz bytes to be used by the kfifo
>> + * @kfifo: A dedicated Kernel kfifo descriptor
>> + */
>> +struct events_queue {
>> +	size_t				sz;
>> +	u8				*qbuf;
>> +	struct kfifo			kfifo;
>> +};
>> +
>> +/**
>> + * scmi_event_header  - A utility header
> 
> struct scmi...
> 

I'll fix all of these and test with kernel-doc.

>> + *
>> + * This header is prepended to each received event message payload before
>> + * queueing it on the related events_queue.
>> + *
>> + * @timestamp: The timestamp, in nanoseconds (boottime), which was associated
>> + *	       to this event as soon as it entered the SCMI RX ISR
>> + * @evt_id: Event ID (corresponds to the Event MsgID for this Protocol)
>> + * @payld_sz: Effective size of the embedded message payload which follows
>> + * @payld: A reference to the embedded event payload
>> + */
>> +struct scmi_event_header {
>> +	u64	timestamp;
>> +	u8	evt_id;
>> +	size_t	payld_sz;
>> +	u8	payld[];
>> +} __packed;
>> +
>> +struct scmi_registered_event;
>> +
>> +/**
>> + * scmi_registered_protocol_events_desc  - Protocol Specific information
>> + *
>> + * All protocols that registers at least one event have their protocol-specific
>> + * information stored here, together with the embedded allocated events_queue.
>> + * These descriptors are stored in the @registered_protocols array at protocol
>> + * registration time.
>> + *
>> + * Once these descriptors are successfully registered, they are NEVER again
>> + * removed or modified since protocols do not unregister ever, so that once we
>> + * safely grab a NON-NULL reference from the array we can keep it and use it.
>> + *
>> + * @id: Protocol ID
>> + * @ops: Protocol specific and event-related operations
>> + * @equeue: The embedded per-protocol events_queue
>> + * @ni: A reference to the initialized instance descriptor
>> + * @eh: A reference to pre-allocated buffer to be used as a scratch area by the
>> + *	deferred worker when fetching data from the kfifo
>> + * @eh_sz: Size of the pre-allocated buffer @eh
>> + * @in_flight: A reference to an in flight @scmi_registered_event
>> + * @num_events: Number of events in @registered_events
>> + * @registered_events: A dynamically allocated array holding all the registered
>> + *		       events' descriptors, whose fixed-size is determined at
>> + *		       compile time.
>> + */
>> +struct scmi_registered_protocol_events_desc {
>> +	u8					id;
>> +	const struct scmi_protocol_event_ops	*ops;
>> +	struct events_queue			equeue;
>> +	struct scmi_notify_instance		*ni;
>> +	struct scmi_event_header		*eh;
>> +	size_t					eh_sz;
>> +	void					*in_flight;
>> +	int					num_events;
>> +	struct scmi_registered_event		**registered_events;
>> +};
>> +
>> +/**
>> + * scmi_registered_event  - Event Specific Information
> 
> struct scmi_registered_event - Event...
> 
I'll fix
>> + *
>> + * All registered events are represented by one of these structures that are
>> + * stored in the @registered_events array at protocol registration time.
>> + *
>> + * Once these descriptors are successfully registered, they are NEVER again
>> + * removed or modified since protocols do not unregister ever, so that once we
>> + * safely grab a NON-NULL reference from the table we can keep it and use it.
>> + *
>> + * @proto: A reference to the associated protocol descriptor
>> + * @evt: A reference to the associated event descriptor (as provided at
>> + *       registration time)
>> + * @report: A pre-allocated buffer used by the deferred worker to fill a
>> + *	    customized event report
>> + * @num_sources: The number of possible sources for this event as stated at
>> + *		 events' registration time
>> + * @sources: A reference to a dynamically allocated array used to refcount the
>> + *	     events' enable requests for all the existing sources
>> + * @sources_mtx: A mutex to serialize the access to @sources
>> + */
>> +struct scmi_registered_event {
>> +	struct scmi_registered_protocol_events_desc	*proto;
>> +	const struct scmi_event				*evt;
>> +	void						*report;
>> +	u32						num_sources;
>> +	refcount_t					*sources;
>> +	struct mutex					sources_mtx;
>> +};
>> +
>> +/**
>> + * scmi_initialize_events_queue  - Allocate/Initialize a kfifo buffer
>> + *
>> + * Allocate a buffer for the kfifo and initialize it.
>> + *
>> + * @ni: A reference to the notification instance to use
>> + * @equeue: The events_queue to initialize
>> + * @sz: Size of the kfifo buffer to allocate
>> + *
>> + * Return: 0 on Success
>> + */
>> +static int scmi_initialize_events_queue(struct scmi_notify_instance *ni,
>> +					struct events_queue *equeue, size_t sz)
>> +{
>> +	equeue->qbuf = devm_kzalloc(ni->handle->dev, sz, GFP_KERNEL);
>> +	if (!equeue->qbuf)
>> +		return -ENOMEM;
>> +	equeue->sz = sz;
>> +
>> +	return kfifo_init(&equeue->kfifo, equeue->qbuf, equeue->sz);
> 
> This seems like a slightly odd dance.  Why not use kfifo_alloc?
> 
> If it's because of the lack of devm_kfifo_alloc, maybe use a devm_add_action_or_reset
> to handle that.
> 

As said above exactly for the lack of devm_ flavour
>> +}
>> +
>> +/**
>> + * scmi_allocate_registered_protocol_desc  - Allocate a registered protocol
>> + * events' descriptor
>> + *
>> + * It is supposed to be called only once for each protocol at protocol
>> + * initialization time, so it warns if the requested protocol is found
>> + * already registered.
>> + *
>> + * @ni: A reference to the notification instance to use
>> + * @proto_id: Protocol ID
>> + * @queue_sz: Size of the associated queue to allocate
>> + * @eh_sz: Size of the event header scratch area to pre-allocate
>> + * @num_events: Number of events to support (size of @registered_events)
>> + * @ops: Pointer to a struct holding references to protocol specific helpers
>> + *	 needed during events handling
>> + *
>> + * Returns the allocated and registered descriptor on Success
>> + */
>> +static struct scmi_registered_protocol_events_desc *

[snip]
>> + */
>> +void scmi_notification_exit(struct scmi_handle *handle)
>> +{
>> +	struct scmi_notify_instance *ni = handle->notify_priv;
>> +
>> +	if (unlikely(!ni || !atomic_read(&ni->initialized)))
>> +		return;
>> +
>> +	atomic_set(&ni->enabled, 0);
>> +	/* Ensure atomic values are updated */
>> +	smp_mb__after_atomic();
>> +
>> +	devres_release_group(ni->handle->dev, ni->gid);
>> +
>> +	pr_info("SCMI Notifications Core Shutdown.\n");
> 
> Is this actually useful?  Seems like noise to me, maybe pr_debug is more appopriate.
> 
No I think in general the verbosity of the printk is still to be 'tuned' in this series

>> +}
>> diff --git a/drivers/firmware/arm_scmi/notify.h b/drivers/firmware/arm_scmi/notify.h
>> new file mode 100644
>> index 000000000000..a7ece64e8842
>> --- /dev/null
>> +++ b/drivers/firmware/arm_scmi/notify.h
>> @@ -0,0 +1,57 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * System Control and Management Interface (SCMI) Message Protocol
>> + * notification header file containing some definitions, structures
>> + * and function prototypes related to SCMI Notification handling.
>> + *
>> + * Copyright (C) 2019 ARM Ltd.
> 
> Update the dates given you are still changing this stuff?
> 

Missed that. I'll fix.

>> + */
>> +#ifndef _SCMI_NOTIFY_H
>> +#define _SCMI_NOTIFY_H
>> +
>> +#include <linux/device.h>
>> +#include <linux/types.h>
>> +
>> +/**
>> + * scmi_event  - Describes an event to be supported
> 
> Fairly sure this isn't valid kernel-doc.
> 
>    * struct scmi_event - ...
> 
> Make sure to run the kernel-doc scripts over any files you've added kernel-doc to
> and tidy up the warnings.
> 
I'll do.
>> + *
>> + * Each SCMI protocol, during its initialization phase, can describe the events
>> + * it wishes to support in a few struct scmi_event and pass them to the core
>> + * using scmi_register_protocol_events().
>> + *
>> + * @id: Event ID
>> + * @max_payld_sz: Max possible size for the payload of a notif msg of this kind
>> + * @max_report_sz: Max possible size for the report of a notif msg of this kind
>> + */
>> +struct scmi_event {
>> +	u8	id;
>> +	size_t	max_payld_sz;
>> +	size_t	max_report_sz;
>> +
> 
> Nitpick: Blank line isn't adding anything
> 

Missed. I'll fix

As a general note, this morning I was going to reply to myself (O_o) on this patch
saying that I'm inclined to review a bit the current initialization phase of 
registered_protocols and registered_events in the sense of adding a few cpu barriers
which are probably lacking where I use mere compiler barriers (_ONCE).
I'll put those probably in v5

Thanks again

Cristian

>> +
>> +/**
>> + * scmi_protocol_event_ops  - Helpers called by notification core.
>> + *
>> + * These are called only in process context.
>> + *
>> + * @set_notify_enabled: Enable/disable the required evt_id/src_id notifications
>> + *			using the proper custom protocol commands.
>> + *			Return true if at least one the required src_id
>> + *			has been successfully enabled/disabled
>> + */
>> +struct scmi_protocol_event_ops {
>> +	bool (*set_notify_enabled)(const struct scmi_handle *handle,
>> +				   u8 evt_id, u32 src_id, bool enabled);
>> +};
>> +
>> +int scmi_notification_init(struct scmi_handle *handle);
>> +void scmi_notification_exit(struct scmi_handle *handle);
>> +
>> +int scmi_register_protocol_events(const struct scmi_handle *handle,
>> +				  u8 proto_id, size_t queue_sz,
>> +				  const struct scmi_protocol_event_ops *ops,
>> +				  const struct scmi_event *evt, int num_events,
>> +				  int num_sources);
>> +
>> +#endif /* _SCMI_NOTIFY_H */
>> diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
>> index 5c873a59b387..0679f10ab05e 100644
>> --- a/include/linux/scmi_protocol.h
>> +++ b/include/linux/scmi_protocol.h
>> @@ -4,6 +4,10 @@
>>   *
>>   * Copyright (C) 2018 ARM Ltd.
>>   */
>> +
>> +#ifndef _LINUX_SCMI_PROTOCOL_H
>> +#define _LINUX_SCMI_PROTOCOL_H
>> +
>>  #include <linux/device.h>
>>  #include <linux/types.h>
>>  
>> @@ -227,6 +231,8 @@ struct scmi_reset_ops {
>>   *	protocol(for internal use only)
>>   * @reset_priv: pointer to private data structure specific to reset
>>   *	protocol(for internal use only)
>> + * @notify_priv: pointer to private data structure specific to notifications
>> + *	(for internal use only)
>>   */
>>  struct scmi_handle {
>>  	struct device *dev;
>> @@ -242,6 +248,7 @@ struct scmi_handle {
>>  	void *power_priv;
>>  	void *sensor_priv;
>>  	void *reset_priv;
>> +	void *notify_priv;
>>  };
>>  
>>  enum scmi_std_protocol {
>> @@ -319,3 +326,5 @@ static inline void scmi_driver_unregister(struct scmi_driver *driver) {}
>>  typedef int (*scmi_prot_init_fn_t)(struct scmi_handle *);
>>  int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn);
>>  void scmi_protocol_unregister(int protocol_id);
>> +
>> +#endif /* _LINUX_SCMI_PROTOCOL_H */
> 
> 


^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 05/13] firmware: arm_scmi: Add notification protocol-registration
@ 2020-03-09 12:04       ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-09 12:04 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: james.quinlan, lukasz.luba, linux-kernel, linux-arm-kernel, sudeep.holla

Hi

On 09/03/2020 11:33, Jonathan Cameron wrote:
> On Wed, 4 Mar 2020 16:25:50 +0000
> Cristian Marussi <cristian.marussi@arm.com> wrote:
> 
>> Add core SCMI Notifications protocol-registration support: allow protocols
>> to register their own set of supported events, during their initialization
>> phase. Notification core can track multiple platform instances by their
>> handles.
>>
>> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
> 
> Hi.
> 
> A few minor things inline.  Fairly sure kernel-doc needs
> struct before the heading for each structure comment block.
> 
> Also, the events queue init looks like it could just be done with
> a kfifo_alloc call.  Perhaps that makes sense given later patches...
> 
> Thanks,
> 
> Jonathan

Thanks for the review first of all !

> 
>> ---
>> V3 --> V4
>> - removed scratch ISR buffer, move scratch BH buffer into protocol
>>   descriptor
>> - converted registered_protocols and registered_events from hashtables
>>   into bare fixed-sized arrays
>> - removed unregister protocols' routines (never called really)
>> V2 --> V3
>> - added scmi_notify_instance to track target platform instance
>> V1 --> V2
>> - splitted out of V1 patch 04
>> - moved from IDR maps to real HashTables to store events
>> - scmi_notifications_initialized is now an atomic_t
>> - reviewed protocol registration/unregistration to use devres
>> - fixed:
>>   drivers/firmware/arm_scmi/notify.c:483:18-23: ERROR:
>>   	reference preceded by free on line 482
>>
>> Reported-by: kbuild test robot <lkp@intel.com>
>> Reported-by: Julia Lawall <julia.lawall@lip6.fr>
>> ---
[snip]
>> +
>> +/**
>> + * scmi_notify_instance  - Represents an instance of the notification core
>> + *
>> + * Each platform instance, represented by a handle, has its own instance of
>> + * the notification subsystem represented by this structure.
>> + *
>> + * @gid: GroupID used for devres
>> + * @handle: A reference to the platform instance
>> + * @initialized: A flag that indicates if the core resources have been allocated
>> + *		 and protocols are allowed to register their supported events
>> + * @enabled: A flag to indicate events can be enabled and start flowing
>> + * @registered_protocols: An statically allocated array containing pointers to
>> + *			  all the registered protocol-level specific information
>> + *			  related to events' handling
>> + */
>> +struct scmi_notify_instance {
>> +	void						*gid;
>> +	struct scmi_handle				*handle;
>> +	atomic_t					initialized;
>> +	atomic_t					enabled;
>> +	struct scmi_registered_protocol_events_desc	**registered_protocols;
>> +};
>> +
>> +/**
>> + * events_queue  - Describes a queue and its associated worker
> 
> I guess this might become clear later, but right now this just looks like
> we are open code what could be handled automatically by just using
> kfifo_alloc
> 

In fact I switched to this split alloc/init (as you guessed later) because of the lack
of devm_ flavour (and my ignorance about the usage of devm_add_action_or_reset ...)
I'll look into it.

>> + *
>> + * Each protocol has its own dedicated events_queue descriptor.
>> + *
>> + * @sz: Size in bytes of the related kfifo
>> + * @qbuf: Pre-allocated buffer of @sz bytes to be used by the kfifo
>> + * @kfifo: A dedicated Kernel kfifo descriptor
>> + */
>> +struct events_queue {
>> +	size_t				sz;
>> +	u8				*qbuf;
>> +	struct kfifo			kfifo;
>> +};
>> +
>> +/**
>> + * scmi_event_header  - A utility header
> 
> struct scmi...
> 

I'll fix all of these and test with kernel-doc.

>> + *
>> + * This header is prepended to each received event message payload before
>> + * queueing it on the related events_queue.
>> + *
>> + * @timestamp: The timestamp, in nanoseconds (boottime), which was associated
>> + *	       to this event as soon as it entered the SCMI RX ISR
>> + * @evt_id: Event ID (corresponds to the Event MsgID for this Protocol)
>> + * @payld_sz: Effective size of the embedded message payload which follows
>> + * @payld: A reference to the embedded event payload
>> + */
>> +struct scmi_event_header {
>> +	u64	timestamp;
>> +	u8	evt_id;
>> +	size_t	payld_sz;
>> +	u8	payld[];
>> +} __packed;
>> +
>> +struct scmi_registered_event;
>> +
>> +/**
>> + * scmi_registered_protocol_events_desc  - Protocol Specific information
>> + *
>> + * All protocols that registers at least one event have their protocol-specific
>> + * information stored here, together with the embedded allocated events_queue.
>> + * These descriptors are stored in the @registered_protocols array at protocol
>> + * registration time.
>> + *
>> + * Once these descriptors are successfully registered, they are NEVER again
>> + * removed or modified since protocols do not unregister ever, so that once we
>> + * safely grab a NON-NULL reference from the array we can keep it and use it.
>> + *
>> + * @id: Protocol ID
>> + * @ops: Protocol specific and event-related operations
>> + * @equeue: The embedded per-protocol events_queue
>> + * @ni: A reference to the initialized instance descriptor
>> + * @eh: A reference to pre-allocated buffer to be used as a scratch area by the
>> + *	deferred worker when fetching data from the kfifo
>> + * @eh_sz: Size of the pre-allocated buffer @eh
>> + * @in_flight: A reference to an in flight @scmi_registered_event
>> + * @num_events: Number of events in @registered_events
>> + * @registered_events: A dynamically allocated array holding all the registered
>> + *		       events' descriptors, whose fixed-size is determined at
>> + *		       compile time.
>> + */
>> +struct scmi_registered_protocol_events_desc {
>> +	u8					id;
>> +	const struct scmi_protocol_event_ops	*ops;
>> +	struct events_queue			equeue;
>> +	struct scmi_notify_instance		*ni;
>> +	struct scmi_event_header		*eh;
>> +	size_t					eh_sz;
>> +	void					*in_flight;
>> +	int					num_events;
>> +	struct scmi_registered_event		**registered_events;
>> +};
>> +
>> +/**
>> + * scmi_registered_event  - Event Specific Information
> 
> struct scmi_registered_event - Event...
> 
I'll fix
>> + *
>> + * All registered events are represented by one of these structures that are
>> + * stored in the @registered_events array at protocol registration time.
>> + *
>> + * Once these descriptors are successfully registered, they are NEVER again
>> + * removed or modified since protocols do not unregister ever, so that once we
>> + * safely grab a NON-NULL reference from the table we can keep it and use it.
>> + *
>> + * @proto: A reference to the associated protocol descriptor
>> + * @evt: A reference to the associated event descriptor (as provided at
>> + *       registration time)
>> + * @report: A pre-allocated buffer used by the deferred worker to fill a
>> + *	    customized event report
>> + * @num_sources: The number of possible sources for this event as stated at
>> + *		 events' registration time
>> + * @sources: A reference to a dynamically allocated array used to refcount the
>> + *	     events' enable requests for all the existing sources
>> + * @sources_mtx: A mutex to serialize the access to @sources
>> + */
>> +struct scmi_registered_event {
>> +	struct scmi_registered_protocol_events_desc	*proto;
>> +	const struct scmi_event				*evt;
>> +	void						*report;
>> +	u32						num_sources;
>> +	refcount_t					*sources;
>> +	struct mutex					sources_mtx;
>> +};
>> +
>> +/**
>> + * scmi_initialize_events_queue  - Allocate/Initialize a kfifo buffer
>> + *
>> + * Allocate a buffer for the kfifo and initialize it.
>> + *
>> + * @ni: A reference to the notification instance to use
>> + * @equeue: The events_queue to initialize
>> + * @sz: Size of the kfifo buffer to allocate
>> + *
>> + * Return: 0 on Success
>> + */
>> +static int scmi_initialize_events_queue(struct scmi_notify_instance *ni,
>> +					struct events_queue *equeue, size_t sz)
>> +{
>> +	equeue->qbuf = devm_kzalloc(ni->handle->dev, sz, GFP_KERNEL);
>> +	if (!equeue->qbuf)
>> +		return -ENOMEM;
>> +	equeue->sz = sz;
>> +
>> +	return kfifo_init(&equeue->kfifo, equeue->qbuf, equeue->sz);
> 
> This seems like a slightly odd dance.  Why not use kfifo_alloc?
> 
> If it's because of the lack of devm_kfifo_alloc, maybe use a devm_add_action_or_reset
> to handle that.
> 

As said above exactly for the lack of devm_ flavour
>> +}
>> +
>> +/**
>> + * scmi_allocate_registered_protocol_desc  - Allocate a registered protocol
>> + * events' descriptor
>> + *
>> + * It is supposed to be called only once for each protocol at protocol
>> + * initialization time, so it warns if the requested protocol is found
>> + * already registered.
>> + *
>> + * @ni: A reference to the notification instance to use
>> + * @proto_id: Protocol ID
>> + * @queue_sz: Size of the associated queue to allocate
>> + * @eh_sz: Size of the event header scratch area to pre-allocate
>> + * @num_events: Number of events to support (size of @registered_events)
>> + * @ops: Pointer to a struct holding references to protocol specific helpers
>> + *	 needed during events handling
>> + *
>> + * Returns the allocated and registered descriptor on Success
>> + */
>> +static struct scmi_registered_protocol_events_desc *

[snip]
>> + */
>> +void scmi_notification_exit(struct scmi_handle *handle)
>> +{
>> +	struct scmi_notify_instance *ni = handle->notify_priv;
>> +
>> +	if (unlikely(!ni || !atomic_read(&ni->initialized)))
>> +		return;
>> +
>> +	atomic_set(&ni->enabled, 0);
>> +	/* Ensure atomic values are updated */
>> +	smp_mb__after_atomic();
>> +
>> +	devres_release_group(ni->handle->dev, ni->gid);
>> +
>> +	pr_info("SCMI Notifications Core Shutdown.\n");
> 
> Is this actually useful?  Seems like noise to me, maybe pr_debug is more appopriate.
> 
No I think in general the verbosity of the printk is still to be 'tuned' in this series

>> +}
>> diff --git a/drivers/firmware/arm_scmi/notify.h b/drivers/firmware/arm_scmi/notify.h
>> new file mode 100644
>> index 000000000000..a7ece64e8842
>> --- /dev/null
>> +++ b/drivers/firmware/arm_scmi/notify.h
>> @@ -0,0 +1,57 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * System Control and Management Interface (SCMI) Message Protocol
>> + * notification header file containing some definitions, structures
>> + * and function prototypes related to SCMI Notification handling.
>> + *
>> + * Copyright (C) 2019 ARM Ltd.
> 
> Update the dates given you are still changing this stuff?
> 

Missed that. I'll fix.

>> + */
>> +#ifndef _SCMI_NOTIFY_H
>> +#define _SCMI_NOTIFY_H
>> +
>> +#include <linux/device.h>
>> +#include <linux/types.h>
>> +
>> +/**
>> + * scmi_event  - Describes an event to be supported
> 
> Fairly sure this isn't valid kernel-doc.
> 
>    * struct scmi_event - ...
> 
> Make sure to run the kernel-doc scripts over any files you've added kernel-doc to
> and tidy up the warnings.
> 
I'll do.
>> + *
>> + * Each SCMI protocol, during its initialization phase, can describe the events
>> + * it wishes to support in a few struct scmi_event and pass them to the core
>> + * using scmi_register_protocol_events().
>> + *
>> + * @id: Event ID
>> + * @max_payld_sz: Max possible size for the payload of a notif msg of this kind
>> + * @max_report_sz: Max possible size for the report of a notif msg of this kind
>> + */
>> +struct scmi_event {
>> +	u8	id;
>> +	size_t	max_payld_sz;
>> +	size_t	max_report_sz;
>> +
> 
> Nitpick: Blank line isn't adding anything
> 

Missed. I'll fix

As a general note, this morning I was going to reply to myself (O_o) on this patch
saying that I'm inclined to review a bit the current initialization phase of 
registered_protocols and registered_events in the sense of adding a few cpu barriers
which are probably lacking where I use mere compiler barriers (_ONCE).
I'll put those probably in v5

Thanks again

Cristian

>> +
>> +/**
>> + * scmi_protocol_event_ops  - Helpers called by notification core.
>> + *
>> + * These are called only in process context.
>> + *
>> + * @set_notify_enabled: Enable/disable the required evt_id/src_id notifications
>> + *			using the proper custom protocol commands.
>> + *			Return true if at least one the required src_id
>> + *			has been successfully enabled/disabled
>> + */
>> +struct scmi_protocol_event_ops {
>> +	bool (*set_notify_enabled)(const struct scmi_handle *handle,
>> +				   u8 evt_id, u32 src_id, bool enabled);
>> +};
>> +
>> +int scmi_notification_init(struct scmi_handle *handle);
>> +void scmi_notification_exit(struct scmi_handle *handle);
>> +
>> +int scmi_register_protocol_events(const struct scmi_handle *handle,
>> +				  u8 proto_id, size_t queue_sz,
>> +				  const struct scmi_protocol_event_ops *ops,
>> +				  const struct scmi_event *evt, int num_events,
>> +				  int num_sources);
>> +
>> +#endif /* _SCMI_NOTIFY_H */
>> diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
>> index 5c873a59b387..0679f10ab05e 100644
>> --- a/include/linux/scmi_protocol.h
>> +++ b/include/linux/scmi_protocol.h
>> @@ -4,6 +4,10 @@
>>   *
>>   * Copyright (C) 2018 ARM Ltd.
>>   */
>> +
>> +#ifndef _LINUX_SCMI_PROTOCOL_H
>> +#define _LINUX_SCMI_PROTOCOL_H
>> +
>>  #include <linux/device.h>
>>  #include <linux/types.h>
>>  
>> @@ -227,6 +231,8 @@ struct scmi_reset_ops {
>>   *	protocol(for internal use only)
>>   * @reset_priv: pointer to private data structure specific to reset
>>   *	protocol(for internal use only)
>> + * @notify_priv: pointer to private data structure specific to notifications
>> + *	(for internal use only)
>>   */
>>  struct scmi_handle {
>>  	struct device *dev;
>> @@ -242,6 +248,7 @@ struct scmi_handle {
>>  	void *power_priv;
>>  	void *sensor_priv;
>>  	void *reset_priv;
>> +	void *notify_priv;
>>  };
>>  
>>  enum scmi_std_protocol {
>> @@ -319,3 +326,5 @@ static inline void scmi_driver_unregister(struct scmi_driver *driver) {}
>>  typedef int (*scmi_prot_init_fn_t)(struct scmi_handle *);
>>  int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn);
>>  void scmi_protocol_unregister(int protocol_id);
>> +
>> +#endif /* _LINUX_SCMI_PROTOCOL_H */
> 
> 


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 06/13] firmware: arm_scmi: Add notification callbacks-registration
  2020-03-09 11:50     ` Jonathan Cameron
@ 2020-03-09 12:25       ` Cristian Marussi
  -1 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-09 12:25 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-kernel, linux-arm-kernel, sudeep.holla, lukasz.luba, james.quinlan

On 09/03/2020 11:50, Jonathan Cameron wrote:
> On Wed, 4 Mar 2020 16:25:51 +0000
> Cristian Marussi <cristian.marussi@arm.com> wrote:
> 
>> Add core SCMI Notifications callbacks-registration support: allow users
>> to register their own callbacks against the desired events.
>> Whenever a registration request is issued against a still non existent
>> event, mark such request as pending for later processing, in order to
>> account for possible late initializations of SCMI Protocols associated
>> to loadable drivers.
>>
>> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
> Another one that you should run the kernel-doc scripts over. I haven't checked
> but fairly sure they won't like some of this...
> 

Sorry for that, I passed the series through cp sparse and lockdep but I completely
ignored kernel-doc building.


> Otherwise a few trivial things inline.
> 
> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> 
> Thanks,
> 
> Jonathan
> 


>> ---
>> V3 --> V4
>> - split registered_handlers hashtable on a per-protocol basis to reduce
>>   unneeded contention
>> - introduced pending_handlers table and related late_init worker to finalize
>>   handlers registration upon effective protocols' registrations
>> - introduced further safe accessors macros for registered_protocols
>>   and registered_events arrays
>> V2 --> V3
>> - refactored get/put event_handler
>> - removed generic non-handle-based API
>> V1 --> V2
>> - splitted out of V1 patch 04
>> - moved from IDR maps to real HashTables to store event_handlers
>> - added proper enable_events refcounting via __scmi_enable_evt()
>>   [was broken in V1 when using ALL_SRCIDs notification chains]
>> - reviewed hashtable cleanup strategy in scmi_notifications_exit()
>> - added scmi_register_event_notifier()/scmi_unregister_event_notifier()
>>   to include/linux/scmi_protocol.h as a candidate user API
>>   [no EXPORTs still]
>> - added notify_ops to handle during initialization as an additional
>>   internal API for scmi_drivers
>> ---
>>  drivers/firmware/arm_scmi/notify.c | 700 +++++++++++++++++++++++++++++
>>  drivers/firmware/arm_scmi/notify.h |  12 +
>>  include/linux/scmi_protocol.h      |  50 +++
>>  3 files changed, 762 insertions(+)
>> +

[snip]
>> +/**
>> + * __scmi_enable_evt  - Enable/disable events generation
>> + *
>> + * Takes care of proper refcounting while performing enable/disable: handles
>> + * the special case of ALL sources requests by itself.
>> + *
>> + * @r_evt: The registered event to act upon
>> + * @src_id: The src_id to act upon
>> + * @enable: The action to perform: true->Enable, false->Disable
>> + *
>> + * Return: True when the required @action has been successfully executed
>> + */
>> +static inline bool __scmi_enable_evt(struct scmi_registered_event *r_evt,
>> +				     u32 src_id, bool enable)
>> +{
>> +	int ret = 0;
>> +	u32 num_sources;
>> +	refcount_t *sid;
>> +
>> +	if (src_id == SCMI_ALL_SRC_IDS) {
>> +		src_id = 0;
>> +		num_sources = r_evt->num_sources;
>> +	} else if (src_id < r_evt->num_sources) {
>> +		num_sources = 1;
>> +	} else {
>> +		return ret;
>> +	}
>> +
>> +	mutex_lock(&r_evt->sources_mtx);
>> +	if (enable) {
>> +		for (; num_sources; src_id++, num_sources--) {
>> +			bool r;
>> +
>> +			sid = &r_evt->sources[src_id];
>> +			if (refcount_read(sid) == 0) {
>> +				r = REVT_NOTIFY_ENABLE(r_evt,
>> +						       r_evt->evt->id,
>> +						       src_id, enable);
> 
> I would make the enable explicit in this call so it is obvious we are
> in the enable path rather than disable.
> 

Right, I'll use an explicit macro naming like REVY_NOTIFY_ENABLE/DISABLE

>> +				if (r)
>> +					refcount_set(sid, 1);
>> +			} else {
>> +				refcount_inc(sid);
>> +				r = true;
>> +			}
>> +			ret += r;
>> +		}
>> +	} else {
>> +		for (; num_sources; src_id++, num_sources--) {
>> +			sid = &r_evt->sources[src_id];
>> +			if (refcount_dec_and_test(sid))
>> +				REVT_NOTIFY_ENABLE(r_evt,
>> +						   r_evt->evt->id,
>> +						   src_id, enable);
> 
> As above, make the enable value explicit.
> 

I'll do.

Thanks

Cristian

>> +		}
>> +		ret = 1;
>> +	}
>> +	mutex_unlock(&r_evt->sources_mtx);
>> +
>> +	return ret;
>> +}
>> +
>> +static bool scmi_enable_events(struct scmi_event_handler *hndl)
>> +{
>> +	if (!hndl->enabled)
>> +		hndl->enabled = __scmi_enable_evt(hndl->r_evt,
>> +						  KEY_XTRACT_SRC_ID(hndl->key),
>> +						  true);
>> +	return hndl->enabled;
>> +}
>> +
>> +static bool scmi_disable_events(struct scmi_event_handler *hndl)
>> +{
>> +	if (hndl->enabled)
>> +		hndl->enabled = !__scmi_enable_evt(hndl->r_evt,
>> +						   KEY_XTRACT_SRC_ID(hndl->key),
>> +						   false);
>> +	return !hndl->enabled;
>> +}
>> +
>> +/**
>> + * scmi_put_handler_unlocked  - Put an event handler
>> + *
>> + * After having got exclusive access to the registered handlers hashtable,
>> + * update the refcount and if @hndl is no more in use by anyone:
>> + *
>> + *  - ask for events' generation disabling
>> + *  - unregister and free the handler itself
>> + *
>> + *  Assumes all the proper locking has been managed by the caller.
>> + *
>> + * @ni: A reference to the notification instance to use
>> + * @hndl: The event handler to act upon
>> + */
>> +
>> +static void
>> +scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
>> +				struct scmi_event_handler *hndl)
>> +{
>> +	if (refcount_dec_and_test(&hndl->users)) {
>> +		if (likely(!IS_HNDL_PENDING(hndl)))
>> +			scmi_disable_events(hndl);
>> +		scmi_free_event_handler(hndl);
>> +	}
>> +}
>> +
>> +static void scmi_put_handler(struct scmi_notify_instance *ni,
>> +			     struct scmi_event_handler *hndl)
>> +{
>> +	struct scmi_registered_event *r_evt = hndl->r_evt;
>> +
>> +	mutex_lock(&ni->pending_mtx);
>> +	if (r_evt)
>> +		mutex_lock(&r_evt->proto->registered_mtx);
>> +
>> +	scmi_put_handler_unlocked(ni, hndl);
>> +
>> +	if (r_evt)
>> +		mutex_unlock(&r_evt->proto->registered_mtx);
>> +	mutex_unlock(&ni->pending_mtx);
>> +}
>> +
>> +/**
>> + * scmi_event_handler_enable_events  - Enable events associated to an handler
>> + *
>> + * @hndl: The Event handler to act upon
>> + *
>> + * Return: True on success
>> + */
>> +static bool scmi_event_handler_enable_events(struct scmi_event_handler *hndl)
>> +{
>> +	if (!scmi_enable_events(hndl)) {
>> +		pr_err("SCMI Notifications: Failed to ENABLE events for key:%X !\n",
>> +		       hndl->key);
>> +		return false;
>> +	}
>> +
>> +	return true;
>> +}
>> +
>> +/**
>> + * scmi_register_notifier  - Register a notifier_block for an event
>> + *
>> + * Generic helper to register a notifier_block against a protocol event.
>> + *
>> + * A notifier_block @nb will be registered for each distinct event identified
>> + * by the tuple (proto_id, evt_id, src_id) on a dedicated notification chain
>> + * so that:
>> + *
>> + *	(proto_X, evt_Y, src_Z) --> chain_X_Y_Z
>> + *
>> + * @src_id meaning is protocol specific and identifies the origin of the event
>> + * (like domain_id, sensor_id and so forth).
>> + *
>> + * @src_id can be NULL to signify that the caller is interested in receiving
>> + * notifications from ALL the available sources for that protocol OR simply that
>> + * the protocol does not support distinct sources.
>> + *
>> + * As soon as one user for the specified tuple appears, an handler is created,
>> + * and that specific event's generation is enabled at the platform level, unless
>> + * an associated registered event is found missing, meaning that the needed
>> + * protocol is still to be initialized and the handler has just been registered
>> + * as still pending.
>> + *
>> + * @handle: The handle identifying the platform instance against which the
>> + *	    callback is registered
>> + * @proto_id: Protocol ID
>> + * @evt_id: Event ID
>> + * @src_id: Source ID, when NULL register for events coming form ALL possible
>> + *	    sources
>> + * @nb: A standard notifier block to register for the specified event
>> + *
>> + * Return: Return 0 on Success
>> + */
>> +static int scmi_register_notifier(const struct scmi_handle *handle,
>> +				  u8 proto_id, u8 evt_id, u32 *src_id,
>> +				  struct notifier_block *nb)
>> +{
>> +	int ret = 0;
>> +	u32 evt_key;
>> +	struct scmi_event_handler *hndl;
>> +	struct scmi_notify_instance *ni = handle->notify_priv;
>> +
>> +	if (unlikely(!ni || !atomic_read(&ni->initialized)))
>> +		return 0;
>> +
>> +	evt_key = MAKE_HASH_KEY(proto_id, evt_id,
>> +				src_id ? *src_id : SCMI_ALL_SRC_IDS);
>> +	hndl = scmi_get_or_create_handler(ni, evt_key);
>> +	if (IS_ERR_OR_NULL(hndl))
>> +		return PTR_ERR(hndl);
>> +
>> +	blocking_notifier_chain_register(&hndl->chain, nb);
>> +
>> +	/* Enable events for not pending handlers */
>> +	if (likely(!IS_HNDL_PENDING(hndl))) {
>> +		if (!scmi_event_handler_enable_events(hndl)) {
>> +			scmi_put_handler(ni, hndl);
>> +			ret = -EINVAL;
>> +		}
>> +	}
>> +
>> +	return ret;
>> +}
>> +
>> +/**
>> + * scmi_unregister_notifier  - Unregister a notifier_block for an event
>> + *
>> + * Takes care to unregister the provided @nb from the notification chain
>> + * associated to the specified event and, if there are no more users for the
>> + * event handler, frees also the associated event handler structures.
>> + * (this could possibly cause disabling of event's generation at platform level)
>> + *
>> + * @handle: The handle identifying the platform instance against which the
>> + *	    callback is unregistered
>> + * @proto_id: Protocol ID
>> + * @evt_id: Event ID
>> + * @src_id: Source ID
>> + * @nb: The notifier_block to unregister
>> + *
>> + * Return: 0 on Success
>> + */
>> +static int scmi_unregister_notifier(const struct scmi_handle *handle,
>> +				    u8 proto_id, u8 evt_id, u32 *src_id,
>> +				    struct notifier_block *nb)
>> +{
>> +	u32 evt_key;
>> +	struct scmi_event_handler *hndl;
>> +	struct scmi_notify_instance *ni = handle->notify_priv;
>> +
>> +	if (unlikely(!ni || !atomic_read(&ni->initialized)))
>> +		return 0;
>> +
>> +	evt_key = MAKE_HASH_KEY(proto_id, evt_id,
>> +				src_id ? *src_id : SCMI_ALL_SRC_IDS);
>> +	hndl = scmi_get_handler(ni, evt_key);
>> +	if (IS_ERR_OR_NULL(hndl))
>> +		return -EINVAL;
>> +
>> +	blocking_notifier_chain_unregister(&hndl->chain, nb);
>> +	scmi_put_handler(ni, hndl);
>> +
>> +	/*
>> +	 * Free the handler (and stop events) if this happens to be the last
>> +	 * known user callback for this handler; a possible concurrently ongoing
>> +	 * run of @scmi_lookup_and_call_event_chain will cause this to happen
>> +	 * in that context safely instead.
>> +	 */
>> +	scmi_put_handler(ni, hndl);
>> +
>> +	return 0;
>> +}
>> +
>> +/**
>> + * scmi_protocols_late_init  - Worker for late initialization
>> + *
>> + * This kicks in whenever a new protocol has completed its own registration via
>> + * scmi_register_protocol_events(): it is in charge of scanning the table of
>> + * pending handlers (registered by users while the related protocol was still
>> + * not initialized) and finalizing their initialization whenever possible;
>> + * invalid pending handlers are purged at this point in time.
>> + *
>> + * @work: The work item to use associated to the proper SCMI instance
>> + */
>> +static void scmi_protocols_late_init(struct work_struct *work)
>> +{
>> +	int bkt;
>> +	struct scmi_event_handler *hndl;
>> +	struct scmi_notify_instance *ni;
>> +	struct hlist_node *tmp;
>> +
>> +	ni = container_of(work, struct scmi_notify_instance, init_work);
>> +
>> +	mutex_lock(&ni->pending_mtx);
>> +	hash_for_each_safe(ni->pending_events_handlers, bkt, tmp, hndl, hash) {
>> +		bool ret;
>> +
>> +		ret = scmi_bind_event_handler(ni, hndl);
>> +		if (ret) {
>> +			pr_info("SCMI Notifications: finalized PENDING handler - key:%X\n",
>> +				hndl->key);
>> +			ret = scmi_event_handler_enable_events(hndl);
>> +		} else {
>> +			ret = scmi_valid_pending_handler(ni, hndl);
>> +		}
>> +		if (!ret) {
>> +			pr_info("SCMI Notifications: purging PENDING handler - key:%X\n",
>> +				hndl->key);
>> +			/* this hndl can be only a pending one */
>> +			scmi_put_handler_unlocked(ni, hndl);
>> +		}
>> +	}
>> +	mutex_unlock(&ni->pending_mtx);
>> +}
>> +
>> +/*
>> + * notify_ops are attached to the handle so that can be accessed
>> + * directly from an scmi_driver to register its own notifiers.
>> + */
>> +static struct scmi_notify_ops notify_ops = {
>> +	.register_event_notifier = scmi_register_notifier,
>> +	.unregister_event_notifier = scmi_unregister_notifier,
>> +};
>> +
>>  /**
>>   * scmi_notification_init  - Initializes Notification Core Support
>>   *
>> @@ -398,7 +1092,13 @@ int scmi_notification_init(struct scmi_handle *handle)
>>  	if (!ni->registered_protocols)
>>  		goto err;
>>  
>> +	mutex_init(&ni->pending_mtx);
>> +	hash_init(ni->pending_events_handlers);
>> +
>> +	INIT_WORK(&ni->init_work, scmi_protocols_late_init);
>> +
>>  	handle->notify_priv = ni;
>> +	handle->notify_ops = &notify_ops;
>>  
>>  	atomic_set(&ni->initialized, 1);
>>  	atomic_set(&ni->enabled, 1);
>> diff --git a/drivers/firmware/arm_scmi/notify.h b/drivers/firmware/arm_scmi/notify.h
>> index a7ece64e8842..f765acda2311 100644
>> --- a/drivers/firmware/arm_scmi/notify.h
>> +++ b/drivers/firmware/arm_scmi/notify.h
>> @@ -9,9 +9,21 @@
>>  #ifndef _SCMI_NOTIFY_H
>>  #define _SCMI_NOTIFY_H
>>  
>> +#include <linux/bug.h>
>>  #include <linux/device.h>
>>  #include <linux/types.h>
>>  
>> +#define MAP_EVT_TO_ENABLE_CMD(id, map)			\
>> +({							\
>> +	int ret = -1;					\
>> +							\
>> +	if (likely((id) < ARRAY_SIZE((map))))		\
>> +		ret = (map)[(id)];			\
>> +	else						\
>> +		WARN(1, "UN-KNOWN evt_id:%d\n", (id));	\
>> +	ret;						\
>> +})
>> +
>>  /**
>>   * scmi_event  - Describes an event to be supported
>>   *
>> diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
>> index 0679f10ab05e..797e1e03ae52 100644
>> --- a/include/linux/scmi_protocol.h
>> +++ b/include/linux/scmi_protocol.h
>> @@ -9,6 +9,8 @@
>>  #define _LINUX_SCMI_PROTOCOL_H
>>  
>>  #include <linux/device.h>
>> +#include <linux/ktime.h>
>> +#include <linux/notifier.h>
>>  #include <linux/types.h>
>>  
>>  #define SCMI_MAX_STR_SIZE	16
>> @@ -211,6 +213,52 @@ struct scmi_reset_ops {
>>  	int (*deassert)(const struct scmi_handle *handle, u32 domain);
>>  };
>>  
>> +/**
>> + * scmi_notify_ops  - represents notifications' operations provided by SCMI core
>> + *
>> + * A user can register/unregister its own notifier_block against the wanted
>> + * platform instance regarding the desired event identified by the
>> + * tuple: (proto_id, evt_id, src_id)
>> + *
>> + * @register_event_notifier: Register a notifier_block for the requested event
>> + * @unregister_event_notifier: Unregister a notifier_block for the requested
>> + *			       event
>> + *
>> + * where:
>> + *
>> + * @handle: The handle identifying the platform instance to use
>> + * @proto_id: The protocol ID as in SCMI Specification
>> + * @evt_id: The message ID of the desired event as in SCMI Specification
>> + * @src_id: A pointer to the desired source ID if different sources are
>> + *	    possible for the protocol (like domain_id, sensor_id...etc)
>> + *
>> + * @src_id can be provided as NULL if it simply does NOT make sense for
>> + * the protocol at hand, OR if the user is explicitly interested in
>> + * receiving notifications from ANY existent source associated to the
>> + * specified proto_id / evt_id.
>> + *
>> + * Received notifications are finally delivered to the registered users,
>> + * invoking the callback provided with the notifier_block *nb as follows:
>> + *
>> + *	int user_cb(nb, evt_id, report)
>> + *
>> + * with:
>> + *
>> + * @nb: The notifier block provided by the user
>> + * @evt_id: The message ID of the delivered event
>> + * @report: A custom struct describing the specific event delivered
>> + *
>> + * Events' customized report structs are detailed in the following.
>> + */
>> +struct scmi_notify_ops {
>> +	int (*register_event_notifier)(const struct scmi_handle *handle,
>> +				       u8 proto_id, u8 evt_id, u32 *src_id,
>> +				       struct notifier_block *nb);
>> +	int (*unregister_event_notifier)(const struct scmi_handle *handle,
>> +					 u8 proto_id, u8 evt_id, u32 *src_id,
>> +					 struct notifier_block *nb);
>> +};
>> +
>>  /**
>>   * struct scmi_handle - Handle returned to ARM SCMI clients for usage.
>>   *
>> @@ -221,6 +269,7 @@ struct scmi_reset_ops {
>>   * @clk_ops: pointer to set of clock protocol operations
>>   * @sensor_ops: pointer to set of sensor protocol operations
>>   * @reset_ops: pointer to set of reset protocol operations
>> + * @notify_ops: pointer to set of notifications related operations
>>   * @perf_priv: pointer to private data structure specific to performance
>>   *	protocol(for internal use only)
>>   * @clk_priv: pointer to private data structure specific to clock
>> @@ -242,6 +291,7 @@ struct scmi_handle {
>>  	struct scmi_power_ops *power_ops;
>>  	struct scmi_sensor_ops *sensor_ops;
>>  	struct scmi_reset_ops *reset_ops;
>> +	struct scmi_notify_ops *notify_ops;
>>  	/* for protocol internal use */
>>  	void *perf_priv;
>>  	void *clk_priv;
> 
> 


^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 06/13] firmware: arm_scmi: Add notification callbacks-registration
@ 2020-03-09 12:25       ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-09 12:25 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: james.quinlan, lukasz.luba, linux-kernel, linux-arm-kernel, sudeep.holla

On 09/03/2020 11:50, Jonathan Cameron wrote:
> On Wed, 4 Mar 2020 16:25:51 +0000
> Cristian Marussi <cristian.marussi@arm.com> wrote:
> 
>> Add core SCMI Notifications callbacks-registration support: allow users
>> to register their own callbacks against the desired events.
>> Whenever a registration request is issued against a still non existent
>> event, mark such request as pending for later processing, in order to
>> account for possible late initializations of SCMI Protocols associated
>> to loadable drivers.
>>
>> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
> Another one that you should run the kernel-doc scripts over. I haven't checked
> but fairly sure they won't like some of this...
> 

Sorry for that, I passed the series through cp sparse and lockdep but I completely
ignored kernel-doc building.


> Otherwise a few trivial things inline.
> 
> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> 
> Thanks,
> 
> Jonathan
> 


>> ---
>> V3 --> V4
>> - split registered_handlers hashtable on a per-protocol basis to reduce
>>   unneeded contention
>> - introduced pending_handlers table and related late_init worker to finalize
>>   handlers registration upon effective protocols' registrations
>> - introduced further safe accessors macros for registered_protocols
>>   and registered_events arrays
>> V2 --> V3
>> - refactored get/put event_handler
>> - removed generic non-handle-based API
>> V1 --> V2
>> - splitted out of V1 patch 04
>> - moved from IDR maps to real HashTables to store event_handlers
>> - added proper enable_events refcounting via __scmi_enable_evt()
>>   [was broken in V1 when using ALL_SRCIDs notification chains]
>> - reviewed hashtable cleanup strategy in scmi_notifications_exit()
>> - added scmi_register_event_notifier()/scmi_unregister_event_notifier()
>>   to include/linux/scmi_protocol.h as a candidate user API
>>   [no EXPORTs still]
>> - added notify_ops to handle during initialization as an additional
>>   internal API for scmi_drivers
>> ---
>>  drivers/firmware/arm_scmi/notify.c | 700 +++++++++++++++++++++++++++++
>>  drivers/firmware/arm_scmi/notify.h |  12 +
>>  include/linux/scmi_protocol.h      |  50 +++
>>  3 files changed, 762 insertions(+)
>> +

[snip]
>> +/**
>> + * __scmi_enable_evt  - Enable/disable events generation
>> + *
>> + * Takes care of proper refcounting while performing enable/disable: handles
>> + * the special case of ALL sources requests by itself.
>> + *
>> + * @r_evt: The registered event to act upon
>> + * @src_id: The src_id to act upon
>> + * @enable: The action to perform: true->Enable, false->Disable
>> + *
>> + * Return: True when the required @action has been successfully executed
>> + */
>> +static inline bool __scmi_enable_evt(struct scmi_registered_event *r_evt,
>> +				     u32 src_id, bool enable)
>> +{
>> +	int ret = 0;
>> +	u32 num_sources;
>> +	refcount_t *sid;
>> +
>> +	if (src_id == SCMI_ALL_SRC_IDS) {
>> +		src_id = 0;
>> +		num_sources = r_evt->num_sources;
>> +	} else if (src_id < r_evt->num_sources) {
>> +		num_sources = 1;
>> +	} else {
>> +		return ret;
>> +	}
>> +
>> +	mutex_lock(&r_evt->sources_mtx);
>> +	if (enable) {
>> +		for (; num_sources; src_id++, num_sources--) {
>> +			bool r;
>> +
>> +			sid = &r_evt->sources[src_id];
>> +			if (refcount_read(sid) == 0) {
>> +				r = REVT_NOTIFY_ENABLE(r_evt,
>> +						       r_evt->evt->id,
>> +						       src_id, enable);
> 
> I would make the enable explicit in this call so it is obvious we are
> in the enable path rather than disable.
> 

Right, I'll use an explicit macro naming like REVY_NOTIFY_ENABLE/DISABLE

>> +				if (r)
>> +					refcount_set(sid, 1);
>> +			} else {
>> +				refcount_inc(sid);
>> +				r = true;
>> +			}
>> +			ret += r;
>> +		}
>> +	} else {
>> +		for (; num_sources; src_id++, num_sources--) {
>> +			sid = &r_evt->sources[src_id];
>> +			if (refcount_dec_and_test(sid))
>> +				REVT_NOTIFY_ENABLE(r_evt,
>> +						   r_evt->evt->id,
>> +						   src_id, enable);
> 
> As above, make the enable value explicit.
> 

I'll do.

Thanks

Cristian

>> +		}
>> +		ret = 1;
>> +	}
>> +	mutex_unlock(&r_evt->sources_mtx);
>> +
>> +	return ret;
>> +}
>> +
>> +static bool scmi_enable_events(struct scmi_event_handler *hndl)
>> +{
>> +	if (!hndl->enabled)
>> +		hndl->enabled = __scmi_enable_evt(hndl->r_evt,
>> +						  KEY_XTRACT_SRC_ID(hndl->key),
>> +						  true);
>> +	return hndl->enabled;
>> +}
>> +
>> +static bool scmi_disable_events(struct scmi_event_handler *hndl)
>> +{
>> +	if (hndl->enabled)
>> +		hndl->enabled = !__scmi_enable_evt(hndl->r_evt,
>> +						   KEY_XTRACT_SRC_ID(hndl->key),
>> +						   false);
>> +	return !hndl->enabled;
>> +}
>> +
>> +/**
>> + * scmi_put_handler_unlocked  - Put an event handler
>> + *
>> + * After having got exclusive access to the registered handlers hashtable,
>> + * update the refcount and if @hndl is no more in use by anyone:
>> + *
>> + *  - ask for events' generation disabling
>> + *  - unregister and free the handler itself
>> + *
>> + *  Assumes all the proper locking has been managed by the caller.
>> + *
>> + * @ni: A reference to the notification instance to use
>> + * @hndl: The event handler to act upon
>> + */
>> +
>> +static void
>> +scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
>> +				struct scmi_event_handler *hndl)
>> +{
>> +	if (refcount_dec_and_test(&hndl->users)) {
>> +		if (likely(!IS_HNDL_PENDING(hndl)))
>> +			scmi_disable_events(hndl);
>> +		scmi_free_event_handler(hndl);
>> +	}
>> +}
>> +
>> +static void scmi_put_handler(struct scmi_notify_instance *ni,
>> +			     struct scmi_event_handler *hndl)
>> +{
>> +	struct scmi_registered_event *r_evt = hndl->r_evt;
>> +
>> +	mutex_lock(&ni->pending_mtx);
>> +	if (r_evt)
>> +		mutex_lock(&r_evt->proto->registered_mtx);
>> +
>> +	scmi_put_handler_unlocked(ni, hndl);
>> +
>> +	if (r_evt)
>> +		mutex_unlock(&r_evt->proto->registered_mtx);
>> +	mutex_unlock(&ni->pending_mtx);
>> +}
>> +
>> +/**
>> + * scmi_event_handler_enable_events  - Enable events associated to an handler
>> + *
>> + * @hndl: The Event handler to act upon
>> + *
>> + * Return: True on success
>> + */
>> +static bool scmi_event_handler_enable_events(struct scmi_event_handler *hndl)
>> +{
>> +	if (!scmi_enable_events(hndl)) {
>> +		pr_err("SCMI Notifications: Failed to ENABLE events for key:%X !\n",
>> +		       hndl->key);
>> +		return false;
>> +	}
>> +
>> +	return true;
>> +}
>> +
>> +/**
>> + * scmi_register_notifier  - Register a notifier_block for an event
>> + *
>> + * Generic helper to register a notifier_block against a protocol event.
>> + *
>> + * A notifier_block @nb will be registered for each distinct event identified
>> + * by the tuple (proto_id, evt_id, src_id) on a dedicated notification chain
>> + * so that:
>> + *
>> + *	(proto_X, evt_Y, src_Z) --> chain_X_Y_Z
>> + *
>> + * @src_id meaning is protocol specific and identifies the origin of the event
>> + * (like domain_id, sensor_id and so forth).
>> + *
>> + * @src_id can be NULL to signify that the caller is interested in receiving
>> + * notifications from ALL the available sources for that protocol OR simply that
>> + * the protocol does not support distinct sources.
>> + *
>> + * As soon as one user for the specified tuple appears, an handler is created,
>> + * and that specific event's generation is enabled at the platform level, unless
>> + * an associated registered event is found missing, meaning that the needed
>> + * protocol is still to be initialized and the handler has just been registered
>> + * as still pending.
>> + *
>> + * @handle: The handle identifying the platform instance against which the
>> + *	    callback is registered
>> + * @proto_id: Protocol ID
>> + * @evt_id: Event ID
>> + * @src_id: Source ID, when NULL register for events coming form ALL possible
>> + *	    sources
>> + * @nb: A standard notifier block to register for the specified event
>> + *
>> + * Return: Return 0 on Success
>> + */
>> +static int scmi_register_notifier(const struct scmi_handle *handle,
>> +				  u8 proto_id, u8 evt_id, u32 *src_id,
>> +				  struct notifier_block *nb)
>> +{
>> +	int ret = 0;
>> +	u32 evt_key;
>> +	struct scmi_event_handler *hndl;
>> +	struct scmi_notify_instance *ni = handle->notify_priv;
>> +
>> +	if (unlikely(!ni || !atomic_read(&ni->initialized)))
>> +		return 0;
>> +
>> +	evt_key = MAKE_HASH_KEY(proto_id, evt_id,
>> +				src_id ? *src_id : SCMI_ALL_SRC_IDS);
>> +	hndl = scmi_get_or_create_handler(ni, evt_key);
>> +	if (IS_ERR_OR_NULL(hndl))
>> +		return PTR_ERR(hndl);
>> +
>> +	blocking_notifier_chain_register(&hndl->chain, nb);
>> +
>> +	/* Enable events for not pending handlers */
>> +	if (likely(!IS_HNDL_PENDING(hndl))) {
>> +		if (!scmi_event_handler_enable_events(hndl)) {
>> +			scmi_put_handler(ni, hndl);
>> +			ret = -EINVAL;
>> +		}
>> +	}
>> +
>> +	return ret;
>> +}
>> +
>> +/**
>> + * scmi_unregister_notifier  - Unregister a notifier_block for an event
>> + *
>> + * Takes care to unregister the provided @nb from the notification chain
>> + * associated to the specified event and, if there are no more users for the
>> + * event handler, frees also the associated event handler structures.
>> + * (this could possibly cause disabling of event's generation at platform level)
>> + *
>> + * @handle: The handle identifying the platform instance against which the
>> + *	    callback is unregistered
>> + * @proto_id: Protocol ID
>> + * @evt_id: Event ID
>> + * @src_id: Source ID
>> + * @nb: The notifier_block to unregister
>> + *
>> + * Return: 0 on Success
>> + */
>> +static int scmi_unregister_notifier(const struct scmi_handle *handle,
>> +				    u8 proto_id, u8 evt_id, u32 *src_id,
>> +				    struct notifier_block *nb)
>> +{
>> +	u32 evt_key;
>> +	struct scmi_event_handler *hndl;
>> +	struct scmi_notify_instance *ni = handle->notify_priv;
>> +
>> +	if (unlikely(!ni || !atomic_read(&ni->initialized)))
>> +		return 0;
>> +
>> +	evt_key = MAKE_HASH_KEY(proto_id, evt_id,
>> +				src_id ? *src_id : SCMI_ALL_SRC_IDS);
>> +	hndl = scmi_get_handler(ni, evt_key);
>> +	if (IS_ERR_OR_NULL(hndl))
>> +		return -EINVAL;
>> +
>> +	blocking_notifier_chain_unregister(&hndl->chain, nb);
>> +	scmi_put_handler(ni, hndl);
>> +
>> +	/*
>> +	 * Free the handler (and stop events) if this happens to be the last
>> +	 * known user callback for this handler; a possible concurrently ongoing
>> +	 * run of @scmi_lookup_and_call_event_chain will cause this to happen
>> +	 * in that context safely instead.
>> +	 */
>> +	scmi_put_handler(ni, hndl);
>> +
>> +	return 0;
>> +}
>> +
>> +/**
>> + * scmi_protocols_late_init  - Worker for late initialization
>> + *
>> + * This kicks in whenever a new protocol has completed its own registration via
>> + * scmi_register_protocol_events(): it is in charge of scanning the table of
>> + * pending handlers (registered by users while the related protocol was still
>> + * not initialized) and finalizing their initialization whenever possible;
>> + * invalid pending handlers are purged at this point in time.
>> + *
>> + * @work: The work item to use associated to the proper SCMI instance
>> + */
>> +static void scmi_protocols_late_init(struct work_struct *work)
>> +{
>> +	int bkt;
>> +	struct scmi_event_handler *hndl;
>> +	struct scmi_notify_instance *ni;
>> +	struct hlist_node *tmp;
>> +
>> +	ni = container_of(work, struct scmi_notify_instance, init_work);
>> +
>> +	mutex_lock(&ni->pending_mtx);
>> +	hash_for_each_safe(ni->pending_events_handlers, bkt, tmp, hndl, hash) {
>> +		bool ret;
>> +
>> +		ret = scmi_bind_event_handler(ni, hndl);
>> +		if (ret) {
>> +			pr_info("SCMI Notifications: finalized PENDING handler - key:%X\n",
>> +				hndl->key);
>> +			ret = scmi_event_handler_enable_events(hndl);
>> +		} else {
>> +			ret = scmi_valid_pending_handler(ni, hndl);
>> +		}
>> +		if (!ret) {
>> +			pr_info("SCMI Notifications: purging PENDING handler - key:%X\n",
>> +				hndl->key);
>> +			/* this hndl can be only a pending one */
>> +			scmi_put_handler_unlocked(ni, hndl);
>> +		}
>> +	}
>> +	mutex_unlock(&ni->pending_mtx);
>> +}
>> +
>> +/*
>> + * notify_ops are attached to the handle so that can be accessed
>> + * directly from an scmi_driver to register its own notifiers.
>> + */
>> +static struct scmi_notify_ops notify_ops = {
>> +	.register_event_notifier = scmi_register_notifier,
>> +	.unregister_event_notifier = scmi_unregister_notifier,
>> +};
>> +
>>  /**
>>   * scmi_notification_init  - Initializes Notification Core Support
>>   *
>> @@ -398,7 +1092,13 @@ int scmi_notification_init(struct scmi_handle *handle)
>>  	if (!ni->registered_protocols)
>>  		goto err;
>>  
>> +	mutex_init(&ni->pending_mtx);
>> +	hash_init(ni->pending_events_handlers);
>> +
>> +	INIT_WORK(&ni->init_work, scmi_protocols_late_init);
>> +
>>  	handle->notify_priv = ni;
>> +	handle->notify_ops = &notify_ops;
>>  
>>  	atomic_set(&ni->initialized, 1);
>>  	atomic_set(&ni->enabled, 1);
>> diff --git a/drivers/firmware/arm_scmi/notify.h b/drivers/firmware/arm_scmi/notify.h
>> index a7ece64e8842..f765acda2311 100644
>> --- a/drivers/firmware/arm_scmi/notify.h
>> +++ b/drivers/firmware/arm_scmi/notify.h
>> @@ -9,9 +9,21 @@
>>  #ifndef _SCMI_NOTIFY_H
>>  #define _SCMI_NOTIFY_H
>>  
>> +#include <linux/bug.h>
>>  #include <linux/device.h>
>>  #include <linux/types.h>
>>  
>> +#define MAP_EVT_TO_ENABLE_CMD(id, map)			\
>> +({							\
>> +	int ret = -1;					\
>> +							\
>> +	if (likely((id) < ARRAY_SIZE((map))))		\
>> +		ret = (map)[(id)];			\
>> +	else						\
>> +		WARN(1, "UN-KNOWN evt_id:%d\n", (id));	\
>> +	ret;						\
>> +})
>> +
>>  /**
>>   * scmi_event  - Describes an event to be supported
>>   *
>> diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
>> index 0679f10ab05e..797e1e03ae52 100644
>> --- a/include/linux/scmi_protocol.h
>> +++ b/include/linux/scmi_protocol.h
>> @@ -9,6 +9,8 @@
>>  #define _LINUX_SCMI_PROTOCOL_H
>>  
>>  #include <linux/device.h>
>> +#include <linux/ktime.h>
>> +#include <linux/notifier.h>
>>  #include <linux/types.h>
>>  
>>  #define SCMI_MAX_STR_SIZE	16
>> @@ -211,6 +213,52 @@ struct scmi_reset_ops {
>>  	int (*deassert)(const struct scmi_handle *handle, u32 domain);
>>  };
>>  
>> +/**
>> + * scmi_notify_ops  - represents notifications' operations provided by SCMI core
>> + *
>> + * A user can register/unregister its own notifier_block against the wanted
>> + * platform instance regarding the desired event identified by the
>> + * tuple: (proto_id, evt_id, src_id)
>> + *
>> + * @register_event_notifier: Register a notifier_block for the requested event
>> + * @unregister_event_notifier: Unregister a notifier_block for the requested
>> + *			       event
>> + *
>> + * where:
>> + *
>> + * @handle: The handle identifying the platform instance to use
>> + * @proto_id: The protocol ID as in SCMI Specification
>> + * @evt_id: The message ID of the desired event as in SCMI Specification
>> + * @src_id: A pointer to the desired source ID if different sources are
>> + *	    possible for the protocol (like domain_id, sensor_id...etc)
>> + *
>> + * @src_id can be provided as NULL if it simply does NOT make sense for
>> + * the protocol at hand, OR if the user is explicitly interested in
>> + * receiving notifications from ANY existent source associated to the
>> + * specified proto_id / evt_id.
>> + *
>> + * Received notifications are finally delivered to the registered users,
>> + * invoking the callback provided with the notifier_block *nb as follows:
>> + *
>> + *	int user_cb(nb, evt_id, report)
>> + *
>> + * with:
>> + *
>> + * @nb: The notifier block provided by the user
>> + * @evt_id: The message ID of the delivered event
>> + * @report: A custom struct describing the specific event delivered
>> + *
>> + * Events' customized report structs are detailed in the following.
>> + */
>> +struct scmi_notify_ops {
>> +	int (*register_event_notifier)(const struct scmi_handle *handle,
>> +				       u8 proto_id, u8 evt_id, u32 *src_id,
>> +				       struct notifier_block *nb);
>> +	int (*unregister_event_notifier)(const struct scmi_handle *handle,
>> +					 u8 proto_id, u8 evt_id, u32 *src_id,
>> +					 struct notifier_block *nb);
>> +};
>> +
>>  /**
>>   * struct scmi_handle - Handle returned to ARM SCMI clients for usage.
>>   *
>> @@ -221,6 +269,7 @@ struct scmi_reset_ops {
>>   * @clk_ops: pointer to set of clock protocol operations
>>   * @sensor_ops: pointer to set of sensor protocol operations
>>   * @reset_ops: pointer to set of reset protocol operations
>> + * @notify_ops: pointer to set of notifications related operations
>>   * @perf_priv: pointer to private data structure specific to performance
>>   *	protocol(for internal use only)
>>   * @clk_priv: pointer to private data structure specific to clock
>> @@ -242,6 +291,7 @@ struct scmi_handle {
>>  	struct scmi_power_ops *power_ops;
>>  	struct scmi_sensor_ops *sensor_ops;
>>  	struct scmi_reset_ops *reset_ops;
>> +	struct scmi_notify_ops *notify_ops;
>>  	/* for protocol internal use */
>>  	void *perf_priv;
>>  	void *clk_priv;
> 
> 


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
  2020-03-04 16:25   ` Cristian Marussi
@ 2020-03-09 12:26     ` Jonathan Cameron
  -1 siblings, 0 replies; 70+ messages in thread
From: Jonathan Cameron @ 2020-03-09 12:26 UTC (permalink / raw)
  To: Cristian Marussi
  Cc: linux-kernel, linux-arm-kernel, sudeep.holla, lukasz.luba, james.quinlan

On Wed, 4 Mar 2020 16:25:52 +0000
Cristian Marussi <cristian.marussi@arm.com> wrote:

> Add core SCMI Notifications dispatch and delivery support logic which is
> able, at first, to dispatch well-known received events from the RX ISR to
> the dedicated deferred worker, and then, from there, to final deliver the
> events to the registered users' callbacks.
> 
> Dispatch and delivery is just added here, still not enabled.
> 
> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>

Hmm.  Doing that magic in_flight stuff looks fine, but it feels like
the wrong way to approach a problem which is down to the lack of
atomicity of the kfifo_in pair.   Could we just make that atomic via
a bit of custom manipulation of the kfifo?

The snag is that stuff isn't exported from the innards of kfifo...

Maybe what you have here is the best option.

Jonathan

> ---
> V3 --> V4
> - dispatcher now handles dequeuing of events in chunks (header+payload):
>   handling of these in_flight events let us remove one unneeded memcpy
>   on RX interrupt path (scmi_notify)
> - deferred dispatcher now access their own per-protocol handlers' table
>   reducing locking contention on the RX path
> V2 --> V3
> - exposing wq in sysfs via WQ_SYSFS
> V1 --> V2
> - splitted out of V1 patch 04
> - moved from IDR maps to real HashTables to store event_handlers
> - simplified delivery logic
> ---
>  drivers/firmware/arm_scmi/notify.c | 334 ++++++++++++++++++++++++++++-
>  drivers/firmware/arm_scmi/notify.h |   9 +
>  2 files changed, 342 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
> index d6c08cce3c63..0854d48d5886 100644
> --- a/drivers/firmware/arm_scmi/notify.c
> +++ b/drivers/firmware/arm_scmi/notify.c
> @@ -44,6 +44,27 @@
>   * as described in the SCMI Protocol specification, while src_id represents an
>   * optional, protocol dependent, source identifier (like domain_id, perf_id
>   * or sensor_id and so forth).
> + *
> + * Upon reception of a notification message from the platform the SCMI RX ISR
> + * passes the received message payload and some ancillary information (including
> + * an arrival timestamp in nanoseconds) to the core via @scmi_notify() which
> + * pushes the event-data itself on a protocol-dedicated kfifo queue for further
> + * deferred processing as specified in @scmi_events_dispatcher().
> + *
> + * Each protocol has it own dedicated work_struct and worker which, once kicked
> + * by the ISR, takes care to empty its own dedicated queue, deliverying the
> + * queued items into the proper notification-chain: notifications processing can
> + * proceed concurrently on distinct workers only between events belonging to
> + * different protocols while delivery of events within the same protocol is
> + * still strictly sequentially ordered by time of arrival.
> + *
> + * Events' information is then extracted from the SCMI Notification messages and
> + * conveyed, converted into a custom per-event report struct, as the void *data
> + * param to the user callback provided by the registered notifier_block, so that
> + * from the user perspective his callback will look invoked like:
> + *
> + * int user_cb(struct notifier_block *nb, unsigned long event_id, void *report)
> + *
>   */
>  
>  #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> @@ -64,6 +85,7 @@
>  #include <linux/scmi_protocol.h>
>  #include <linux/slab.h>
>  #include <linux/types.h>
> +#include <linux/workqueue.h>
>  
>  #include "notify.h"
>  
> @@ -143,6 +165,8 @@
>  #define REVT_NOTIFY_ENABLE(revt, ...)	\
>  	((revt)->proto->ops->set_notify_enabled((revt)->proto->ni->handle,     \
>  						__VA_ARGS__))
> +#define REVT_FILL_REPORT(revt, ...)	\
> +	((revt)->proto->ops->fill_custom_report(__VA_ARGS__))
>  
>  struct scmi_registered_protocol_events_desc;
>  
> @@ -158,6 +182,7 @@ struct scmi_registered_protocol_events_desc;
>   *		 and protocols are allowed to register their supported events
>   * @enabled: A flag to indicate events can be enabled and start flowing
>   * @init_work: A work item to perform final initializations of pending handlers
> + * @notify_wq: A reference to the allocated Kernel cmwq
>   * @pending_mtx: A mutex to protect @pending_events_handlers
>   * @registered_protocols: An statically allocated array containing pointers to
>   *			  all the registered protocol-level specific information
> @@ -173,6 +198,8 @@ struct scmi_notify_instance {
>  
>  	struct work_struct				init_work;
>  
> +	struct workqueue_struct				*notify_wq;
> +
>  	struct mutex					pending_mtx;
>  	struct scmi_registered_protocol_events_desc	**registered_protocols;
>  	DECLARE_HASHTABLE(pending_events_handlers, 8);
> @@ -186,11 +213,15 @@ struct scmi_notify_instance {
>   * @sz: Size in bytes of the related kfifo
>   * @qbuf: Pre-allocated buffer of @sz bytes to be used by the kfifo
>   * @kfifo: A dedicated Kernel kfifo descriptor
> + * @notify_work: A custom work item bound to this queue
> + * @wq: A reference to the associated workqueue
>   */
>  struct events_queue {
>  	size_t				sz;
>  	u8				*qbuf;
>  	struct kfifo			kfifo;
> +	struct work_struct		notify_work;
> +	struct workqueue_struct		*wq;
>  };
>  
>  /**
> @@ -316,8 +347,249 @@ struct scmi_event_handler {
>  
>  #define IS_HNDL_PENDING(hndl)	((hndl)->r_evt == NULL)
>  
> +static struct scmi_event_handler *
> +scmi_get_active_handler(struct scmi_notify_instance *ni, u32 evt_key);
> +static void scmi_put_active_handler(struct scmi_notify_instance *ni,
> +				    struct scmi_event_handler *hndl);
>  static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
>  				      struct scmi_event_handler *hndl);
> +
> +/**
> + * scmi_lookup_and_call_event_chain  - Lookup the proper chain and call it
> + *
> + * @ni: A reference to the notification instance to use
> + * @evt_key: The key to use to lookup the related notification chain
> + * @report: The customized event-specific report to pass down to the callbacks
> + *	    as their *data parameter.
> + */
> +static inline void
> +scmi_lookup_and_call_event_chain(struct scmi_notify_instance *ni,
> +				 u32 evt_key, void *report)
> +{
> +	int ret;
> +	struct scmi_event_handler *hndl;
> +
> +	/* Here ensure the event handler cannot vanish while using it */
> +	hndl = scmi_get_active_handler(ni, evt_key);
> +	if (IS_ERR_OR_NULL(hndl))
> +		return;
> +
> +	ret = blocking_notifier_call_chain(&hndl->chain,
> +					   KEY_XTRACT_EVT_ID(evt_key),
> +					   report);
> +	/* Notifiers are NOT supposed to cut the chain ... */
> +	WARN_ON_ONCE(ret & NOTIFY_STOP_MASK);
> +
> +	scmi_put_active_handler(ni, hndl);
> +}
> +
> +/**
> + * scmi_process_event_header  - Dequeue and process an event header
> + *
> + * Read an event header from the protocol queue into the dedicated scratch
> + * buffer and looks for a matching registered event; in case an anomalously
> + * sized read is detected just flush the queue.
> + *
> + * @eq: The queue to use
> + * @pd: The protocol descriptor to use
> + *
> + * Returns:
> + *  - a reference to the matching registered event when found
> + *  - ERR_PTR(-EINVAL) when NO registered event could be found
> + *  - NULL when the queue is empty
> + */
> +static inline struct scmi_registered_event *
> +scmi_process_event_header(struct events_queue *eq,
> +			  struct scmi_registered_protocol_events_desc *pd)
> +{
> +	unsigned int outs;
> +	struct scmi_registered_event *r_evt;
> +
> +	outs = kfifo_out(&eq->kfifo, pd->eh,
> +			 sizeof(struct scmi_event_header));
> +	if (!outs)
> +		return NULL;
> +	if (outs != sizeof(struct scmi_event_header)) {
> +		pr_err("SCMI Notifications: corrupted EVT header. Flush.\n");
> +		kfifo_reset_out(&eq->kfifo);
> +		return NULL;
> +	}
> +
> +	r_evt = SCMI_GET_REVT_FROM_PD(pd, pd->eh->evt_id);
> +	if (!r_evt)
> +		r_evt = ERR_PTR(-EINVAL);
> +
> +	return r_evt;
> +}
> +
> +/**
> + * scmi_process_event_payload  - Dequeue and process an event payload
> + *
> + * Read an event payload from the protocol queue into the dedicated scratch
> + * buffer, fills a custom report and then look for matching event handlers and
> + * call them; skip any unknown event (as marked by scmi_process_event_header())
> + * and in case an anomalously sized read is detected just flush the queue.
> + *
> + * @eq: The queue to use
> + * @pd: The protocol descriptor to use
> + * @r_evt: The registered event descriptor to use
> + *
> + * Return: False when the queue is empty
> + */
> +static inline bool
> +scmi_process_event_payload(struct events_queue *eq,
> +			   struct scmi_registered_protocol_events_desc *pd,
> +			   struct scmi_registered_event *r_evt)
> +{
> +	u32 src_id, key;
> +	unsigned int outs;
> +	void *report = NULL;
> +
> +	outs = kfifo_out(&eq->kfifo, pd->eh->payld, pd->eh->payld_sz);
> +	if (unlikely(!outs))
> +		return false;
> +
> +	/* Any in-flight event has now been officially processed */
> +	pd->in_flight = NULL;
> +
> +	if (unlikely(outs != pd->eh->payld_sz)) {
> +		pr_err("SCMI Notifications: corrupted EVT Payload. Flush.\n");
> +		kfifo_reset_out(&eq->kfifo);
> +		return false;
> +	}
> +
> +	if (IS_ERR(r_evt)) {
> +		pr_warn("SCMI Notifications: SKIP UNKNOWN EVT - proto:%X  evt:%d\n",
> +			pd->id, pd->eh->evt_id);
> +		return true;
> +	}
> +
> +	report = REVT_FILL_REPORT(r_evt, pd->eh->evt_id, pd->eh->timestamp,
> +				  pd->eh->payld, pd->eh->payld_sz,
> +				  r_evt->report, &src_id);
> +	if (!report) {
> +		pr_err("SCMI Notifications: Report not available - proto:%X  evt:%d\n",
> +		       pd->id, pd->eh->evt_id);
> +		return true;
> +	}
> +
> +	/* At first search for a generic ALL src_ids handler... */
> +	key = MAKE_ALL_SRCS_KEY(pd->id, pd->eh->evt_id);
> +	scmi_lookup_and_call_event_chain(pd->ni, key, report);
> +
> +	/* ...then search for any specific src_id */
> +	key = MAKE_HASH_KEY(pd->id, pd->eh->evt_id, src_id);
> +	scmi_lookup_and_call_event_chain(pd->ni, key, report);
> +
> +	return true;
> +}
> +
> +/**
> + * scmi_events_dispatcher  - Common worker logic for all work items.
> + *
> + *  1. dequeue one pending RX notification (queued in SCMI RX ISR context)
> + *  2. generate a custom event report from the received event message
> + *  3. lookup for any registered ALL_SRC_IDs handler
> + *     - > call the related notification chain passing in the report
> + *  4. lookup for any registered specific SRC_ID handler
> + *     - > call the related notification chain passing in the report
> + *
> + * Note that:
> + * - a dedicated per-protocol kfifo queue is used: in this way an anomalous
> + *   flood of events cannot saturate other protocols' queues.
> + *
> + * - each per-protocol queue is associated to a distinct work_item, which
> + *   means, in turn, that:
> + *   + all protocols can process their dedicated queues concurrently
> + *     (since notify_wq:max_active != 1)
> + *   + anyway at most one worker instance is allowed to run on the same queue
> + *     concurrently: this ensures that we can have only one concurrent
> + *     reader/writer on the associated kfifo, so that we can use it lock-less
> + *
> + * @work: The work item to use, which is associated to a dedicated events_queue
> + */
> +static void scmi_events_dispatcher(struct work_struct *work)
> +{
> +	struct events_queue *eq;
> +	struct scmi_registered_protocol_events_desc *pd;
> +	struct scmi_registered_event *r_evt;
> +
> +	eq = container_of(work, struct events_queue, notify_work);
> +	pd = container_of(eq, struct scmi_registered_protocol_events_desc,
> +			  equeue);
> +	/*
> +	 * In order to keep the queue lock-less and the number of memcopies
> +	 * to the bare minimum needed, the dispatcher accounts for the
> +	 * possibility of per-protocol in-flight events: i.e. an event whose
> +	 * reception could end up being split across two subsequent runs of this
> +	 * worker, first the header, then the payload.
> +	 */
> +	do {
> +		if (likely(!pd->in_flight)) {
> +			r_evt = scmi_process_event_header(eq, pd);
> +			if (!r_evt)
> +				break;
> +			pd->in_flight = r_evt;
> +		} else {
> +			r_evt = pd->in_flight;
> +		}
> +	} while (scmi_process_event_payload(eq, pd, r_evt));
> +}
> +
> +/**
> + * scmi_notify  - Queues a notification for further deferred processing
> + *
> + * This is called in interrupt context to queue a received event for
> + * deferred processing.
> + *
> + * @handle: The handle identifying the platform instance from which the
> + *	    dispatched event is generated
> + * @proto_id: Protocol ID
> + * @evt_id: Event ID (msgID)
> + * @buf: Event Message Payload (without the header)
> + * @len: Event Message Payload size
> + * @ts: RX Timestamp in nanoseconds (boottime)
> + *
> + * Return: 0 on Success
> + */
> +int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
> +		const void *buf, size_t len, u64 ts)
> +{
> +	struct scmi_registered_event *r_evt;
> +	struct scmi_event_header eh;
> +	struct scmi_notify_instance *ni = handle->notify_priv;
> +
> +	/* Ensure atomic value is updated */
> +	smp_mb__before_atomic();
> +	if (unlikely(!atomic_read(&ni->enabled)))
> +		return 0;
> +
> +	r_evt = SCMI_GET_REVT(ni, proto_id, evt_id);
> +	if (unlikely(!r_evt))
> +		return -EINVAL;
> +
> +	if (unlikely(len > r_evt->evt->max_payld_sz)) {
> +		pr_err("SCMI Notifications: discard badly sized message\n");
> +		return -EINVAL;
> +	}
> +	if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) <
> +		     sizeof(eh) + len)) {
> +		pr_warn("SCMI Notifications: queue full dropping proto_id:%d  evt_id:%d  ts:%lld\n",
> +			proto_id, evt_id, ts);
> +		return -ENOMEM;
> +	}
> +
> +	eh.timestamp = ts;
> +	eh.evt_id = evt_id;
> +	eh.payld_sz = len;
> +	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));

I'd add a comment that this potential race here is the reason (I think) for all
the inflight handling above.

Either that or create a kfifo_in_pair_unsafe that just makes these atomic by only
updating the kfifo->in point after adding both parts.

It will be as simple as (I think, kfifo magic always give me a headache).
{
	struct __kfifo *__kfifo = &kfifo->kfifo;
	kfifo_copy_in(fifo, &eh, sizeof(eh), fifo->in);
	kfifo_copy_in(fifo, &buf, len, fifo->in + sizeof(eh));
	fifo->in += len + sizeof(eh);
}

It's unsafe because crazy things will happen if there isn't enough room, but you
can't get there in this code because of the check above and we are making
horrendous assumptions about the kfifo type.

> +	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
> +	queue_work(r_evt->proto->equeue.wq,
> +		   &r_evt->proto->equeue.notify_work);
> +
> +	return 0;
> +}
> +
>  /**
>   * scmi_initialize_events_queue  - Allocate/Initialize a kfifo buffer
>   *
> @@ -332,12 +604,21 @@ static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
>  static int scmi_initialize_events_queue(struct scmi_notify_instance *ni,
>  					struct events_queue *equeue, size_t sz)
>  {
> +	int ret = 0;

ret looks to be always initialized below.

> +
>  	equeue->qbuf = devm_kzalloc(ni->handle->dev, sz, GFP_KERNEL);
>  	if (!equeue->qbuf)
>  		return -ENOMEM;
>  	equeue->sz = sz;
>  
> -	return kfifo_init(&equeue->kfifo, equeue->qbuf, equeue->sz);
> +	ret = kfifo_init(&equeue->kfifo, equeue->qbuf, equeue->sz);
> +	if (ret)
> +		return ret;
> +
> +	INIT_WORK(&equeue->notify_work, scmi_events_dispatcher);
> +	equeue->wq = ni->notify_wq;
> +
> +	return ret;
>  }
>  
>  /**
> @@ -740,6 +1021,38 @@ scmi_get_or_create_handler(struct scmi_notify_instance *ni, u32 evt_key)
>  	return __scmi_event_handler_get_ops(ni, evt_key, true);
>  }
>  
> +/**
> + * scmi_get_active_handler  - Helper to get active handlers only
> + *
> + * Search for the desired handler matching the key only in the per-protocol
> + * table of registered handlers: this is called only from the dispatching path
> + * so want to be as quick as possible and do not care about pending.
> + *
> + * @ni: A reference to the notification instance to use
> + * @evt_key: The event key to use
> + *
> + * Return: A properly refcounted active handler
> + */
> +static struct scmi_event_handler *
> +scmi_get_active_handler(struct scmi_notify_instance *ni, u32 evt_key)
> +{
> +	struct scmi_registered_event *r_evt;
> +	struct scmi_event_handler *hndl = NULL;
> +
> +	r_evt = SCMI_GET_REVT(ni, KEY_XTRACT_PROTO_ID(evt_key),
> +			      KEY_XTRACT_EVT_ID(evt_key));
> +	if (likely(r_evt)) {
> +		mutex_lock(&r_evt->proto->registered_mtx);
> +		hndl = KEY_FIND(r_evt->proto->registered_events_handlers,
> +				hndl, evt_key);
> +		if (likely(hndl))
> +			refcount_inc(&hndl->users);
> +		mutex_unlock(&r_evt->proto->registered_mtx);
> +	}
> +
> +	return hndl;
> +}
> +
>  /**
>   * __scmi_enable_evt  - Enable/disable events generation
>   *
> @@ -861,6 +1174,16 @@ static void scmi_put_handler(struct scmi_notify_instance *ni,
>  	mutex_unlock(&ni->pending_mtx);
>  }
>  
> +static void scmi_put_active_handler(struct scmi_notify_instance *ni,
> +					  struct scmi_event_handler *hndl)
> +{
> +	struct scmi_registered_event *r_evt = hndl->r_evt;
> +
> +	mutex_lock(&r_evt->proto->registered_mtx);
> +	scmi_put_handler_unlocked(ni, hndl);
> +	mutex_unlock(&r_evt->proto->registered_mtx);
> +}
> +
>  /**
>   * scmi_event_handler_enable_events  - Enable events associated to an handler
>   *
> @@ -1087,6 +1410,12 @@ int scmi_notification_init(struct scmi_handle *handle)
>  	ni->gid = gid;
>  	ni->handle = handle;
>  
> +	ni->notify_wq = alloc_workqueue("scmi_notify",
> +					WQ_UNBOUND | WQ_FREEZABLE | WQ_SYSFS,
> +					0);
> +	if (!ni->notify_wq)
> +		goto err;
> +
>  	ni->registered_protocols = devm_kcalloc(handle->dev, SCMI_MAX_PROTO,
>  						sizeof(char *), GFP_KERNEL);
>  	if (!ni->registered_protocols)
> @@ -1133,6 +1462,9 @@ void scmi_notification_exit(struct scmi_handle *handle)
>  	/* Ensure atomic values are updated */
>  	smp_mb__after_atomic();
>  
> +	/* Destroy while letting pending work complete */
> +	destroy_workqueue(ni->notify_wq);
> +
>  	devres_release_group(ni->handle->dev, ni->gid);
>  
>  	pr_info("SCMI Notifications Core Shutdown.\n");
> diff --git a/drivers/firmware/arm_scmi/notify.h b/drivers/firmware/arm_scmi/notify.h
> index f765acda2311..6cd386649d5a 100644
> --- a/drivers/firmware/arm_scmi/notify.h
> +++ b/drivers/firmware/arm_scmi/notify.h
> @@ -51,10 +51,17 @@ struct scmi_event {
>   *			using the proper custom protocol commands.
>   *			Return true if at least one the required src_id
>   *			has been successfully enabled/disabled
> + * @fill_custom_report: fills a custom event report from the provided
> + *			event message payld identifying the event
> + *			specific src_id.
> + *			Return NULL on failure otherwise @report now fully
> + *			populated
>   */
>  struct scmi_protocol_event_ops {
>  	bool (*set_notify_enabled)(const struct scmi_handle *handle,
>  				   u8 evt_id, u32 src_id, bool enabled);
> +	void *(*fill_custom_report)(u8 evt_id, u64 timestamp, const void *payld,
> +				    size_t payld_sz, void *report, u32 *src_id);
>  };
>  
>  int scmi_notification_init(struct scmi_handle *handle);
> @@ -65,5 +72,7 @@ int scmi_register_protocol_events(const struct scmi_handle *handle,
>  				  const struct scmi_protocol_event_ops *ops,
>  				  const struct scmi_event *evt, int num_events,
>  				  int num_sources);
> +int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
> +		const void *buf, size_t len, u64 ts);
>  
>  #endif /* _SCMI_NOTIFY_H */



^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
@ 2020-03-09 12:26     ` Jonathan Cameron
  0 siblings, 0 replies; 70+ messages in thread
From: Jonathan Cameron @ 2020-03-09 12:26 UTC (permalink / raw)
  To: Cristian Marussi
  Cc: james.quinlan, lukasz.luba, linux-kernel, linux-arm-kernel, sudeep.holla

On Wed, 4 Mar 2020 16:25:52 +0000
Cristian Marussi <cristian.marussi@arm.com> wrote:

> Add core SCMI Notifications dispatch and delivery support logic which is
> able, at first, to dispatch well-known received events from the RX ISR to
> the dedicated deferred worker, and then, from there, to final deliver the
> events to the registered users' callbacks.
> 
> Dispatch and delivery is just added here, still not enabled.
> 
> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>

Hmm.  Doing that magic in_flight stuff looks fine, but it feels like
the wrong way to approach a problem which is down to the lack of
atomicity of the kfifo_in pair.   Could we just make that atomic via
a bit of custom manipulation of the kfifo?

The snag is that stuff isn't exported from the innards of kfifo...

Maybe what you have here is the best option.

Jonathan

> ---
> V3 --> V4
> - dispatcher now handles dequeuing of events in chunks (header+payload):
>   handling of these in_flight events let us remove one unneeded memcpy
>   on RX interrupt path (scmi_notify)
> - deferred dispatcher now access their own per-protocol handlers' table
>   reducing locking contention on the RX path
> V2 --> V3
> - exposing wq in sysfs via WQ_SYSFS
> V1 --> V2
> - splitted out of V1 patch 04
> - moved from IDR maps to real HashTables to store event_handlers
> - simplified delivery logic
> ---
>  drivers/firmware/arm_scmi/notify.c | 334 ++++++++++++++++++++++++++++-
>  drivers/firmware/arm_scmi/notify.h |   9 +
>  2 files changed, 342 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
> index d6c08cce3c63..0854d48d5886 100644
> --- a/drivers/firmware/arm_scmi/notify.c
> +++ b/drivers/firmware/arm_scmi/notify.c
> @@ -44,6 +44,27 @@
>   * as described in the SCMI Protocol specification, while src_id represents an
>   * optional, protocol dependent, source identifier (like domain_id, perf_id
>   * or sensor_id and so forth).
> + *
> + * Upon reception of a notification message from the platform the SCMI RX ISR
> + * passes the received message payload and some ancillary information (including
> + * an arrival timestamp in nanoseconds) to the core via @scmi_notify() which
> + * pushes the event-data itself on a protocol-dedicated kfifo queue for further
> + * deferred processing as specified in @scmi_events_dispatcher().
> + *
> + * Each protocol has it own dedicated work_struct and worker which, once kicked
> + * by the ISR, takes care to empty its own dedicated queue, deliverying the
> + * queued items into the proper notification-chain: notifications processing can
> + * proceed concurrently on distinct workers only between events belonging to
> + * different protocols while delivery of events within the same protocol is
> + * still strictly sequentially ordered by time of arrival.
> + *
> + * Events' information is then extracted from the SCMI Notification messages and
> + * conveyed, converted into a custom per-event report struct, as the void *data
> + * param to the user callback provided by the registered notifier_block, so that
> + * from the user perspective his callback will look invoked like:
> + *
> + * int user_cb(struct notifier_block *nb, unsigned long event_id, void *report)
> + *
>   */
>  
>  #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> @@ -64,6 +85,7 @@
>  #include <linux/scmi_protocol.h>
>  #include <linux/slab.h>
>  #include <linux/types.h>
> +#include <linux/workqueue.h>
>  
>  #include "notify.h"
>  
> @@ -143,6 +165,8 @@
>  #define REVT_NOTIFY_ENABLE(revt, ...)	\
>  	((revt)->proto->ops->set_notify_enabled((revt)->proto->ni->handle,     \
>  						__VA_ARGS__))
> +#define REVT_FILL_REPORT(revt, ...)	\
> +	((revt)->proto->ops->fill_custom_report(__VA_ARGS__))
>  
>  struct scmi_registered_protocol_events_desc;
>  
> @@ -158,6 +182,7 @@ struct scmi_registered_protocol_events_desc;
>   *		 and protocols are allowed to register their supported events
>   * @enabled: A flag to indicate events can be enabled and start flowing
>   * @init_work: A work item to perform final initializations of pending handlers
> + * @notify_wq: A reference to the allocated Kernel cmwq
>   * @pending_mtx: A mutex to protect @pending_events_handlers
>   * @registered_protocols: An statically allocated array containing pointers to
>   *			  all the registered protocol-level specific information
> @@ -173,6 +198,8 @@ struct scmi_notify_instance {
>  
>  	struct work_struct				init_work;
>  
> +	struct workqueue_struct				*notify_wq;
> +
>  	struct mutex					pending_mtx;
>  	struct scmi_registered_protocol_events_desc	**registered_protocols;
>  	DECLARE_HASHTABLE(pending_events_handlers, 8);
> @@ -186,11 +213,15 @@ struct scmi_notify_instance {
>   * @sz: Size in bytes of the related kfifo
>   * @qbuf: Pre-allocated buffer of @sz bytes to be used by the kfifo
>   * @kfifo: A dedicated Kernel kfifo descriptor
> + * @notify_work: A custom work item bound to this queue
> + * @wq: A reference to the associated workqueue
>   */
>  struct events_queue {
>  	size_t				sz;
>  	u8				*qbuf;
>  	struct kfifo			kfifo;
> +	struct work_struct		notify_work;
> +	struct workqueue_struct		*wq;
>  };
>  
>  /**
> @@ -316,8 +347,249 @@ struct scmi_event_handler {
>  
>  #define IS_HNDL_PENDING(hndl)	((hndl)->r_evt == NULL)
>  
> +static struct scmi_event_handler *
> +scmi_get_active_handler(struct scmi_notify_instance *ni, u32 evt_key);
> +static void scmi_put_active_handler(struct scmi_notify_instance *ni,
> +				    struct scmi_event_handler *hndl);
>  static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
>  				      struct scmi_event_handler *hndl);
> +
> +/**
> + * scmi_lookup_and_call_event_chain  - Lookup the proper chain and call it
> + *
> + * @ni: A reference to the notification instance to use
> + * @evt_key: The key to use to lookup the related notification chain
> + * @report: The customized event-specific report to pass down to the callbacks
> + *	    as their *data parameter.
> + */
> +static inline void
> +scmi_lookup_and_call_event_chain(struct scmi_notify_instance *ni,
> +				 u32 evt_key, void *report)
> +{
> +	int ret;
> +	struct scmi_event_handler *hndl;
> +
> +	/* Here ensure the event handler cannot vanish while using it */
> +	hndl = scmi_get_active_handler(ni, evt_key);
> +	if (IS_ERR_OR_NULL(hndl))
> +		return;
> +
> +	ret = blocking_notifier_call_chain(&hndl->chain,
> +					   KEY_XTRACT_EVT_ID(evt_key),
> +					   report);
> +	/* Notifiers are NOT supposed to cut the chain ... */
> +	WARN_ON_ONCE(ret & NOTIFY_STOP_MASK);
> +
> +	scmi_put_active_handler(ni, hndl);
> +}
> +
> +/**
> + * scmi_process_event_header  - Dequeue and process an event header
> + *
> + * Read an event header from the protocol queue into the dedicated scratch
> + * buffer and looks for a matching registered event; in case an anomalously
> + * sized read is detected just flush the queue.
> + *
> + * @eq: The queue to use
> + * @pd: The protocol descriptor to use
> + *
> + * Returns:
> + *  - a reference to the matching registered event when found
> + *  - ERR_PTR(-EINVAL) when NO registered event could be found
> + *  - NULL when the queue is empty
> + */
> +static inline struct scmi_registered_event *
> +scmi_process_event_header(struct events_queue *eq,
> +			  struct scmi_registered_protocol_events_desc *pd)
> +{
> +	unsigned int outs;
> +	struct scmi_registered_event *r_evt;
> +
> +	outs = kfifo_out(&eq->kfifo, pd->eh,
> +			 sizeof(struct scmi_event_header));
> +	if (!outs)
> +		return NULL;
> +	if (outs != sizeof(struct scmi_event_header)) {
> +		pr_err("SCMI Notifications: corrupted EVT header. Flush.\n");
> +		kfifo_reset_out(&eq->kfifo);
> +		return NULL;
> +	}
> +
> +	r_evt = SCMI_GET_REVT_FROM_PD(pd, pd->eh->evt_id);
> +	if (!r_evt)
> +		r_evt = ERR_PTR(-EINVAL);
> +
> +	return r_evt;
> +}
> +
> +/**
> + * scmi_process_event_payload  - Dequeue and process an event payload
> + *
> + * Read an event payload from the protocol queue into the dedicated scratch
> + * buffer, fills a custom report and then look for matching event handlers and
> + * call them; skip any unknown event (as marked by scmi_process_event_header())
> + * and in case an anomalously sized read is detected just flush the queue.
> + *
> + * @eq: The queue to use
> + * @pd: The protocol descriptor to use
> + * @r_evt: The registered event descriptor to use
> + *
> + * Return: False when the queue is empty
> + */
> +static inline bool
> +scmi_process_event_payload(struct events_queue *eq,
> +			   struct scmi_registered_protocol_events_desc *pd,
> +			   struct scmi_registered_event *r_evt)
> +{
> +	u32 src_id, key;
> +	unsigned int outs;
> +	void *report = NULL;
> +
> +	outs = kfifo_out(&eq->kfifo, pd->eh->payld, pd->eh->payld_sz);
> +	if (unlikely(!outs))
> +		return false;
> +
> +	/* Any in-flight event has now been officially processed */
> +	pd->in_flight = NULL;
> +
> +	if (unlikely(outs != pd->eh->payld_sz)) {
> +		pr_err("SCMI Notifications: corrupted EVT Payload. Flush.\n");
> +		kfifo_reset_out(&eq->kfifo);
> +		return false;
> +	}
> +
> +	if (IS_ERR(r_evt)) {
> +		pr_warn("SCMI Notifications: SKIP UNKNOWN EVT - proto:%X  evt:%d\n",
> +			pd->id, pd->eh->evt_id);
> +		return true;
> +	}
> +
> +	report = REVT_FILL_REPORT(r_evt, pd->eh->evt_id, pd->eh->timestamp,
> +				  pd->eh->payld, pd->eh->payld_sz,
> +				  r_evt->report, &src_id);
> +	if (!report) {
> +		pr_err("SCMI Notifications: Report not available - proto:%X  evt:%d\n",
> +		       pd->id, pd->eh->evt_id);
> +		return true;
> +	}
> +
> +	/* At first search for a generic ALL src_ids handler... */
> +	key = MAKE_ALL_SRCS_KEY(pd->id, pd->eh->evt_id);
> +	scmi_lookup_and_call_event_chain(pd->ni, key, report);
> +
> +	/* ...then search for any specific src_id */
> +	key = MAKE_HASH_KEY(pd->id, pd->eh->evt_id, src_id);
> +	scmi_lookup_and_call_event_chain(pd->ni, key, report);
> +
> +	return true;
> +}
> +
> +/**
> + * scmi_events_dispatcher  - Common worker logic for all work items.
> + *
> + *  1. dequeue one pending RX notification (queued in SCMI RX ISR context)
> + *  2. generate a custom event report from the received event message
> + *  3. lookup for any registered ALL_SRC_IDs handler
> + *     - > call the related notification chain passing in the report
> + *  4. lookup for any registered specific SRC_ID handler
> + *     - > call the related notification chain passing in the report
> + *
> + * Note that:
> + * - a dedicated per-protocol kfifo queue is used: in this way an anomalous
> + *   flood of events cannot saturate other protocols' queues.
> + *
> + * - each per-protocol queue is associated to a distinct work_item, which
> + *   means, in turn, that:
> + *   + all protocols can process their dedicated queues concurrently
> + *     (since notify_wq:max_active != 1)
> + *   + anyway at most one worker instance is allowed to run on the same queue
> + *     concurrently: this ensures that we can have only one concurrent
> + *     reader/writer on the associated kfifo, so that we can use it lock-less
> + *
> + * @work: The work item to use, which is associated to a dedicated events_queue
> + */
> +static void scmi_events_dispatcher(struct work_struct *work)
> +{
> +	struct events_queue *eq;
> +	struct scmi_registered_protocol_events_desc *pd;
> +	struct scmi_registered_event *r_evt;
> +
> +	eq = container_of(work, struct events_queue, notify_work);
> +	pd = container_of(eq, struct scmi_registered_protocol_events_desc,
> +			  equeue);
> +	/*
> +	 * In order to keep the queue lock-less and the number of memcopies
> +	 * to the bare minimum needed, the dispatcher accounts for the
> +	 * possibility of per-protocol in-flight events: i.e. an event whose
> +	 * reception could end up being split across two subsequent runs of this
> +	 * worker, first the header, then the payload.
> +	 */
> +	do {
> +		if (likely(!pd->in_flight)) {
> +			r_evt = scmi_process_event_header(eq, pd);
> +			if (!r_evt)
> +				break;
> +			pd->in_flight = r_evt;
> +		} else {
> +			r_evt = pd->in_flight;
> +		}
> +	} while (scmi_process_event_payload(eq, pd, r_evt));
> +}
> +
> +/**
> + * scmi_notify  - Queues a notification for further deferred processing
> + *
> + * This is called in interrupt context to queue a received event for
> + * deferred processing.
> + *
> + * @handle: The handle identifying the platform instance from which the
> + *	    dispatched event is generated
> + * @proto_id: Protocol ID
> + * @evt_id: Event ID (msgID)
> + * @buf: Event Message Payload (without the header)
> + * @len: Event Message Payload size
> + * @ts: RX Timestamp in nanoseconds (boottime)
> + *
> + * Return: 0 on Success
> + */
> +int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
> +		const void *buf, size_t len, u64 ts)
> +{
> +	struct scmi_registered_event *r_evt;
> +	struct scmi_event_header eh;
> +	struct scmi_notify_instance *ni = handle->notify_priv;
> +
> +	/* Ensure atomic value is updated */
> +	smp_mb__before_atomic();
> +	if (unlikely(!atomic_read(&ni->enabled)))
> +		return 0;
> +
> +	r_evt = SCMI_GET_REVT(ni, proto_id, evt_id);
> +	if (unlikely(!r_evt))
> +		return -EINVAL;
> +
> +	if (unlikely(len > r_evt->evt->max_payld_sz)) {
> +		pr_err("SCMI Notifications: discard badly sized message\n");
> +		return -EINVAL;
> +	}
> +	if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) <
> +		     sizeof(eh) + len)) {
> +		pr_warn("SCMI Notifications: queue full dropping proto_id:%d  evt_id:%d  ts:%lld\n",
> +			proto_id, evt_id, ts);
> +		return -ENOMEM;
> +	}
> +
> +	eh.timestamp = ts;
> +	eh.evt_id = evt_id;
> +	eh.payld_sz = len;
> +	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));

I'd add a comment that this potential race here is the reason (I think) for all
the inflight handling above.

Either that or create a kfifo_in_pair_unsafe that just makes these atomic by only
updating the kfifo->in point after adding both parts.

It will be as simple as (I think, kfifo magic always give me a headache).
{
	struct __kfifo *__kfifo = &kfifo->kfifo;
	kfifo_copy_in(fifo, &eh, sizeof(eh), fifo->in);
	kfifo_copy_in(fifo, &buf, len, fifo->in + sizeof(eh));
	fifo->in += len + sizeof(eh);
}

It's unsafe because crazy things will happen if there isn't enough room, but you
can't get there in this code because of the check above and we are making
horrendous assumptions about the kfifo type.

> +	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
> +	queue_work(r_evt->proto->equeue.wq,
> +		   &r_evt->proto->equeue.notify_work);
> +
> +	return 0;
> +}
> +
>  /**
>   * scmi_initialize_events_queue  - Allocate/Initialize a kfifo buffer
>   *
> @@ -332,12 +604,21 @@ static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
>  static int scmi_initialize_events_queue(struct scmi_notify_instance *ni,
>  					struct events_queue *equeue, size_t sz)
>  {
> +	int ret = 0;

ret looks to be always initialized below.

> +
>  	equeue->qbuf = devm_kzalloc(ni->handle->dev, sz, GFP_KERNEL);
>  	if (!equeue->qbuf)
>  		return -ENOMEM;
>  	equeue->sz = sz;
>  
> -	return kfifo_init(&equeue->kfifo, equeue->qbuf, equeue->sz);
> +	ret = kfifo_init(&equeue->kfifo, equeue->qbuf, equeue->sz);
> +	if (ret)
> +		return ret;
> +
> +	INIT_WORK(&equeue->notify_work, scmi_events_dispatcher);
> +	equeue->wq = ni->notify_wq;
> +
> +	return ret;
>  }
>  
>  /**
> @@ -740,6 +1021,38 @@ scmi_get_or_create_handler(struct scmi_notify_instance *ni, u32 evt_key)
>  	return __scmi_event_handler_get_ops(ni, evt_key, true);
>  }
>  
> +/**
> + * scmi_get_active_handler  - Helper to get active handlers only
> + *
> + * Search for the desired handler matching the key only in the per-protocol
> + * table of registered handlers: this is called only from the dispatching path
> + * so want to be as quick as possible and do not care about pending.
> + *
> + * @ni: A reference to the notification instance to use
> + * @evt_key: The event key to use
> + *
> + * Return: A properly refcounted active handler
> + */
> +static struct scmi_event_handler *
> +scmi_get_active_handler(struct scmi_notify_instance *ni, u32 evt_key)
> +{
> +	struct scmi_registered_event *r_evt;
> +	struct scmi_event_handler *hndl = NULL;
> +
> +	r_evt = SCMI_GET_REVT(ni, KEY_XTRACT_PROTO_ID(evt_key),
> +			      KEY_XTRACT_EVT_ID(evt_key));
> +	if (likely(r_evt)) {
> +		mutex_lock(&r_evt->proto->registered_mtx);
> +		hndl = KEY_FIND(r_evt->proto->registered_events_handlers,
> +				hndl, evt_key);
> +		if (likely(hndl))
> +			refcount_inc(&hndl->users);
> +		mutex_unlock(&r_evt->proto->registered_mtx);
> +	}
> +
> +	return hndl;
> +}
> +
>  /**
>   * __scmi_enable_evt  - Enable/disable events generation
>   *
> @@ -861,6 +1174,16 @@ static void scmi_put_handler(struct scmi_notify_instance *ni,
>  	mutex_unlock(&ni->pending_mtx);
>  }
>  
> +static void scmi_put_active_handler(struct scmi_notify_instance *ni,
> +					  struct scmi_event_handler *hndl)
> +{
> +	struct scmi_registered_event *r_evt = hndl->r_evt;
> +
> +	mutex_lock(&r_evt->proto->registered_mtx);
> +	scmi_put_handler_unlocked(ni, hndl);
> +	mutex_unlock(&r_evt->proto->registered_mtx);
> +}
> +
>  /**
>   * scmi_event_handler_enable_events  - Enable events associated to an handler
>   *
> @@ -1087,6 +1410,12 @@ int scmi_notification_init(struct scmi_handle *handle)
>  	ni->gid = gid;
>  	ni->handle = handle;
>  
> +	ni->notify_wq = alloc_workqueue("scmi_notify",
> +					WQ_UNBOUND | WQ_FREEZABLE | WQ_SYSFS,
> +					0);
> +	if (!ni->notify_wq)
> +		goto err;
> +
>  	ni->registered_protocols = devm_kcalloc(handle->dev, SCMI_MAX_PROTO,
>  						sizeof(char *), GFP_KERNEL);
>  	if (!ni->registered_protocols)
> @@ -1133,6 +1462,9 @@ void scmi_notification_exit(struct scmi_handle *handle)
>  	/* Ensure atomic values are updated */
>  	smp_mb__after_atomic();
>  
> +	/* Destroy while letting pending work complete */
> +	destroy_workqueue(ni->notify_wq);
> +
>  	devres_release_group(ni->handle->dev, ni->gid);
>  
>  	pr_info("SCMI Notifications Core Shutdown.\n");
> diff --git a/drivers/firmware/arm_scmi/notify.h b/drivers/firmware/arm_scmi/notify.h
> index f765acda2311..6cd386649d5a 100644
> --- a/drivers/firmware/arm_scmi/notify.h
> +++ b/drivers/firmware/arm_scmi/notify.h
> @@ -51,10 +51,17 @@ struct scmi_event {
>   *			using the proper custom protocol commands.
>   *			Return true if at least one the required src_id
>   *			has been successfully enabled/disabled
> + * @fill_custom_report: fills a custom event report from the provided
> + *			event message payld identifying the event
> + *			specific src_id.
> + *			Return NULL on failure otherwise @report now fully
> + *			populated
>   */
>  struct scmi_protocol_event_ops {
>  	bool (*set_notify_enabled)(const struct scmi_handle *handle,
>  				   u8 evt_id, u32 src_id, bool enabled);
> +	void *(*fill_custom_report)(u8 evt_id, u64 timestamp, const void *payld,
> +				    size_t payld_sz, void *report, u32 *src_id);
>  };
>  
>  int scmi_notification_init(struct scmi_handle *handle);
> @@ -65,5 +72,7 @@ int scmi_register_protocol_events(const struct scmi_handle *handle,
>  				  const struct scmi_protocol_event_ops *ops,
>  				  const struct scmi_event *evt, int num_events,
>  				  int num_sources);
> +int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
> +		const void *buf, size_t len, u64 ts);
>  
>  #endif /* _SCMI_NOTIFY_H */



_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 09/13] firmware: arm_scmi: Add Power notifications support
  2020-03-04 16:25   ` Cristian Marussi
@ 2020-03-09 12:28     ` Jonathan Cameron
  -1 siblings, 0 replies; 70+ messages in thread
From: Jonathan Cameron @ 2020-03-09 12:28 UTC (permalink / raw)
  To: Cristian Marussi
  Cc: linux-kernel, linux-arm-kernel, sudeep.holla, lukasz.luba, james.quinlan

On Wed, 4 Mar 2020 16:25:54 +0000
Cristian Marussi <cristian.marussi@arm.com> wrote:

> Make SCMI Power protocol register with the notification core.
> 
> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>

One comment inline on an unusual code construct, otherwise fine.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

> ---
> V3 --> V4
> - scmi_event field renamed
> V2 --> V3
> - added handle awareness
> V1 --> V2
> - simplified .set_notify_enabled() implementation moving the ALL_SRCIDs
>   logic out of protocol. ALL_SRCIDs logic is now in charge of the
>   notification core, together with proper reference counting of enables
> - switched to devres protocol-registration
> ---
>  drivers/firmware/arm_scmi/power.c | 123 ++++++++++++++++++++++++++++++
>  include/linux/scmi_protocol.h     |  15 ++++
>  2 files changed, 138 insertions(+)
> 
> diff --git a/drivers/firmware/arm_scmi/power.c b/drivers/firmware/arm_scmi/power.c
> index cf7f0312381b..281da7e7e33a 100644
> --- a/drivers/firmware/arm_scmi/power.c
> +++ b/drivers/firmware/arm_scmi/power.c
> @@ -6,6 +6,7 @@
>   */
>  
>  #include "common.h"
> +#include "notify.h"
>  
>  enum scmi_power_protocol_cmd {
>  	POWER_DOMAIN_ATTRIBUTES = 0x3,
> @@ -48,6 +49,12 @@ struct scmi_power_state_notify {
>  	__le32 notify_enable;
>  };
>  
> +struct scmi_power_state_notify_payld {
> +	__le32 agent_id;
> +	__le32 domain_id;
> +	__le32 power_state;
> +};
> +
>  struct power_dom_info {
>  	bool state_set_sync;
>  	bool state_set_async;
> @@ -63,6 +70,11 @@ struct scmi_power_info {
>  	struct power_dom_info *dom_info;
>  };
>  
> +static enum scmi_power_protocol_cmd evt_2_cmd[] = {
> +	POWER_STATE_NOTIFY,
> +	POWER_STATE_CHANGE_REQUESTED_NOTIFY,
> +};
> +
>  static int scmi_power_attributes_get(const struct scmi_handle *handle,
>  				     struct scmi_power_info *pi)
>  {
> @@ -186,6 +198,111 @@ static struct scmi_power_ops power_ops = {
>  	.state_get = scmi_power_state_get,
>  };
>  
> +static int scmi_power_request_notify(const struct scmi_handle *handle,
> +				     u32 domain, int message_id, bool enable)
> +{
> +	int ret;
> +	struct scmi_xfer *t;
> +	struct scmi_power_state_notify *notify;
> +
> +	ret = scmi_xfer_get_init(handle, message_id, SCMI_PROTOCOL_POWER,
> +				 sizeof(*notify), 0, &t);
> +	if (ret)
> +		return ret;
> +
> +	notify = t->tx.buf;
> +	notify->domain = cpu_to_le32(domain);
> +	notify->notify_enable = enable ? cpu_to_le32(BIT(0)) : 0;
> +
> +	ret = scmi_do_xfer(handle, t);
> +
> +	scmi_xfer_put(handle, t);
> +	return ret;
> +}
> +
> +static bool scmi_power_set_notify_enabled(const struct scmi_handle *handle,
> +					  u8 evt_id, u32 src_id, bool enable)
> +{
> +	int ret, cmd_id;
> +
> +	cmd_id = MAP_EVT_TO_ENABLE_CMD(evt_id, evt_2_cmd);
> +	if (cmd_id < 0)
> +		return false;
> +
> +	ret = scmi_power_request_notify(handle, src_id, cmd_id, enable);
> +	if (ret)
> +		pr_warn("SCMI Notifications - Proto:%X - FAIL_ENABLE - evt[%X] dom[%d] - ret:%d\n",
> +				SCMI_PROTOCOL_POWER, evt_id, src_id, ret);
> +
> +	return !ret ? true : false;

	return !ret;

	Is the same thing...

> +}
> +
> +static void *scmi_power_fill_custom_report(u8 evt_id, u64 timestamp,
> +					   const void *payld, size_t payld_sz,
> +					   void *report, u32 *src_id)
> +{
> +	void *rep = NULL;
> +
> +	switch (evt_id) {
> +	case POWER_STATE_CHANGED:
> +	{
> +		const struct scmi_power_state_notify_payld *p = payld;
> +		struct scmi_power_state_changed_report *r = report;
> +
> +		if (sizeof(*p) != payld_sz)
> +			break;
> +
> +		r->timestamp = timestamp;
> +		r->agent_id = le32_to_cpu(p->agent_id);
> +		r->domain_id = le32_to_cpu(p->domain_id);
> +		r->power_state = le32_to_cpu(p->power_state);
> +		*src_id = r->domain_id;
> +		rep = r;
> +		break;
> +	}
> +	case POWER_STATE_CHANGE_REQUESTED:
> +	{
> +		const struct scmi_power_state_notify_payld *p = payld;
> +		struct scmi_power_state_change_requested_report *r = report;
> +
> +		if (sizeof(*p) != payld_sz)
> +			break;
> +
> +		r->timestamp = timestamp;
> +		r->agent_id = le32_to_cpu(p->agent_id);
> +		r->domain_id = le32_to_cpu(p->domain_id);
> +		r->power_state = le32_to_cpu(p->power_state);
> +		*src_id = r->domain_id;
> +		rep = r;
> +		break;
> +	}
> +	default:
> +		break;
> +	}
> +
> +	return rep;
> +}
> +
> +static const struct scmi_event power_events[] = {
> +	{
> +		.id = POWER_STATE_CHANGED,
> +		.max_payld_sz = 12,
> +		.max_report_sz =
> +			sizeof(struct scmi_power_state_changed_report),
> +	},
> +	{
> +		.id = POWER_STATE_CHANGE_REQUESTED,
> +		.max_payld_sz = 12,
> +		.max_report_sz =
> +			sizeof(struct scmi_power_state_change_requested_report),
> +	},
> +};
> +
> +static const struct scmi_protocol_event_ops power_event_ops = {
> +	.set_notify_enabled = scmi_power_set_notify_enabled,
> +	.fill_custom_report = scmi_power_fill_custom_report,
> +};
> +
>  static int scmi_power_protocol_init(struct scmi_handle *handle)
>  {
>  	int domain;
> @@ -214,6 +331,12 @@ static int scmi_power_protocol_init(struct scmi_handle *handle)
>  		scmi_power_domain_attributes_get(handle, domain, dom);
>  	}
>  
> +	scmi_register_protocol_events(handle,
> +				      SCMI_PROTOCOL_POWER, PAGE_SIZE,
> +				      &power_event_ops, power_events,
> +				      ARRAY_SIZE(power_events),
> +				      pinfo->num_domains);
> +
>  	pinfo->version = version;
>  	handle->power_ops = &power_ops;
>  	handle->power_priv = pinfo;
> diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
> index 797e1e03ae52..baa117f9eda3 100644
> --- a/include/linux/scmi_protocol.h
> +++ b/include/linux/scmi_protocol.h
> @@ -377,4 +377,19 @@ typedef int (*scmi_prot_init_fn_t)(struct scmi_handle *);
>  int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn);
>  void scmi_protocol_unregister(int protocol_id);
>  
> +/* SCMI Notification API - Custom Event Reports */
> +struct scmi_power_state_changed_report {
> +	ktime_t	timestamp;
> +	u32	agent_id;
> +	u32	domain_id;
> +	u32	power_state;
> +};
> +
> +struct scmi_power_state_change_requested_report {
> +	ktime_t	timestamp;
> +	u32	agent_id;
> +	u32	domain_id;
> +	u32	power_state;
> +};
> +
>  #endif /* _LINUX_SCMI_PROTOCOL_H */



^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 09/13] firmware: arm_scmi: Add Power notifications support
@ 2020-03-09 12:28     ` Jonathan Cameron
  0 siblings, 0 replies; 70+ messages in thread
From: Jonathan Cameron @ 2020-03-09 12:28 UTC (permalink / raw)
  To: Cristian Marussi
  Cc: james.quinlan, lukasz.luba, linux-kernel, linux-arm-kernel, sudeep.holla

On Wed, 4 Mar 2020 16:25:54 +0000
Cristian Marussi <cristian.marussi@arm.com> wrote:

> Make SCMI Power protocol register with the notification core.
> 
> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>

One comment inline on an unusual code construct, otherwise fine.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

> ---
> V3 --> V4
> - scmi_event field renamed
> V2 --> V3
> - added handle awareness
> V1 --> V2
> - simplified .set_notify_enabled() implementation moving the ALL_SRCIDs
>   logic out of protocol. ALL_SRCIDs logic is now in charge of the
>   notification core, together with proper reference counting of enables
> - switched to devres protocol-registration
> ---
>  drivers/firmware/arm_scmi/power.c | 123 ++++++++++++++++++++++++++++++
>  include/linux/scmi_protocol.h     |  15 ++++
>  2 files changed, 138 insertions(+)
> 
> diff --git a/drivers/firmware/arm_scmi/power.c b/drivers/firmware/arm_scmi/power.c
> index cf7f0312381b..281da7e7e33a 100644
> --- a/drivers/firmware/arm_scmi/power.c
> +++ b/drivers/firmware/arm_scmi/power.c
> @@ -6,6 +6,7 @@
>   */
>  
>  #include "common.h"
> +#include "notify.h"
>  
>  enum scmi_power_protocol_cmd {
>  	POWER_DOMAIN_ATTRIBUTES = 0x3,
> @@ -48,6 +49,12 @@ struct scmi_power_state_notify {
>  	__le32 notify_enable;
>  };
>  
> +struct scmi_power_state_notify_payld {
> +	__le32 agent_id;
> +	__le32 domain_id;
> +	__le32 power_state;
> +};
> +
>  struct power_dom_info {
>  	bool state_set_sync;
>  	bool state_set_async;
> @@ -63,6 +70,11 @@ struct scmi_power_info {
>  	struct power_dom_info *dom_info;
>  };
>  
> +static enum scmi_power_protocol_cmd evt_2_cmd[] = {
> +	POWER_STATE_NOTIFY,
> +	POWER_STATE_CHANGE_REQUESTED_NOTIFY,
> +};
> +
>  static int scmi_power_attributes_get(const struct scmi_handle *handle,
>  				     struct scmi_power_info *pi)
>  {
> @@ -186,6 +198,111 @@ static struct scmi_power_ops power_ops = {
>  	.state_get = scmi_power_state_get,
>  };
>  
> +static int scmi_power_request_notify(const struct scmi_handle *handle,
> +				     u32 domain, int message_id, bool enable)
> +{
> +	int ret;
> +	struct scmi_xfer *t;
> +	struct scmi_power_state_notify *notify;
> +
> +	ret = scmi_xfer_get_init(handle, message_id, SCMI_PROTOCOL_POWER,
> +				 sizeof(*notify), 0, &t);
> +	if (ret)
> +		return ret;
> +
> +	notify = t->tx.buf;
> +	notify->domain = cpu_to_le32(domain);
> +	notify->notify_enable = enable ? cpu_to_le32(BIT(0)) : 0;
> +
> +	ret = scmi_do_xfer(handle, t);
> +
> +	scmi_xfer_put(handle, t);
> +	return ret;
> +}
> +
> +static bool scmi_power_set_notify_enabled(const struct scmi_handle *handle,
> +					  u8 evt_id, u32 src_id, bool enable)
> +{
> +	int ret, cmd_id;
> +
> +	cmd_id = MAP_EVT_TO_ENABLE_CMD(evt_id, evt_2_cmd);
> +	if (cmd_id < 0)
> +		return false;
> +
> +	ret = scmi_power_request_notify(handle, src_id, cmd_id, enable);
> +	if (ret)
> +		pr_warn("SCMI Notifications - Proto:%X - FAIL_ENABLE - evt[%X] dom[%d] - ret:%d\n",
> +				SCMI_PROTOCOL_POWER, evt_id, src_id, ret);
> +
> +	return !ret ? true : false;

	return !ret;

	Is the same thing...

> +}
> +
> +static void *scmi_power_fill_custom_report(u8 evt_id, u64 timestamp,
> +					   const void *payld, size_t payld_sz,
> +					   void *report, u32 *src_id)
> +{
> +	void *rep = NULL;
> +
> +	switch (evt_id) {
> +	case POWER_STATE_CHANGED:
> +	{
> +		const struct scmi_power_state_notify_payld *p = payld;
> +		struct scmi_power_state_changed_report *r = report;
> +
> +		if (sizeof(*p) != payld_sz)
> +			break;
> +
> +		r->timestamp = timestamp;
> +		r->agent_id = le32_to_cpu(p->agent_id);
> +		r->domain_id = le32_to_cpu(p->domain_id);
> +		r->power_state = le32_to_cpu(p->power_state);
> +		*src_id = r->domain_id;
> +		rep = r;
> +		break;
> +	}
> +	case POWER_STATE_CHANGE_REQUESTED:
> +	{
> +		const struct scmi_power_state_notify_payld *p = payld;
> +		struct scmi_power_state_change_requested_report *r = report;
> +
> +		if (sizeof(*p) != payld_sz)
> +			break;
> +
> +		r->timestamp = timestamp;
> +		r->agent_id = le32_to_cpu(p->agent_id);
> +		r->domain_id = le32_to_cpu(p->domain_id);
> +		r->power_state = le32_to_cpu(p->power_state);
> +		*src_id = r->domain_id;
> +		rep = r;
> +		break;
> +	}
> +	default:
> +		break;
> +	}
> +
> +	return rep;
> +}
> +
> +static const struct scmi_event power_events[] = {
> +	{
> +		.id = POWER_STATE_CHANGED,
> +		.max_payld_sz = 12,
> +		.max_report_sz =
> +			sizeof(struct scmi_power_state_changed_report),
> +	},
> +	{
> +		.id = POWER_STATE_CHANGE_REQUESTED,
> +		.max_payld_sz = 12,
> +		.max_report_sz =
> +			sizeof(struct scmi_power_state_change_requested_report),
> +	},
> +};
> +
> +static const struct scmi_protocol_event_ops power_event_ops = {
> +	.set_notify_enabled = scmi_power_set_notify_enabled,
> +	.fill_custom_report = scmi_power_fill_custom_report,
> +};
> +
>  static int scmi_power_protocol_init(struct scmi_handle *handle)
>  {
>  	int domain;
> @@ -214,6 +331,12 @@ static int scmi_power_protocol_init(struct scmi_handle *handle)
>  		scmi_power_domain_attributes_get(handle, domain, dom);
>  	}
>  
> +	scmi_register_protocol_events(handle,
> +				      SCMI_PROTOCOL_POWER, PAGE_SIZE,
> +				      &power_event_ops, power_events,
> +				      ARRAY_SIZE(power_events),
> +				      pinfo->num_domains);
> +
>  	pinfo->version = version;
>  	handle->power_ops = &power_ops;
>  	handle->power_priv = pinfo;
> diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
> index 797e1e03ae52..baa117f9eda3 100644
> --- a/include/linux/scmi_protocol.h
> +++ b/include/linux/scmi_protocol.h
> @@ -377,4 +377,19 @@ typedef int (*scmi_prot_init_fn_t)(struct scmi_handle *);
>  int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn);
>  void scmi_protocol_unregister(int protocol_id);
>  
> +/* SCMI Notification API - Custom Event Reports */
> +struct scmi_power_state_changed_report {
> +	ktime_t	timestamp;
> +	u32	agent_id;
> +	u32	domain_id;
> +	u32	power_state;
> +};
> +
> +struct scmi_power_state_change_requested_report {
> +	ktime_t	timestamp;
> +	u32	agent_id;
> +	u32	domain_id;
> +	u32	power_state;
> +};
> +
>  #endif /* _LINUX_SCMI_PROTOCOL_H */



_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 00/13] SCMI Notifications Core Support
  2020-03-04 16:25 ` Cristian Marussi
@ 2020-03-09 12:33   ` Jonathan Cameron
  -1 siblings, 0 replies; 70+ messages in thread
From: Jonathan Cameron @ 2020-03-09 12:33 UTC (permalink / raw)
  To: Cristian Marussi
  Cc: linux-kernel, linux-arm-kernel, sudeep.holla, lukasz.luba, james.quinlan

On Wed, 4 Mar 2020 16:25:45 +0000
Cristian Marussi <cristian.marussi@arm.com> wrote:

> Hi all,
> 
> this series wants to introduce SCMI Notification Support, built on top of
> the standard Kernel notification chain subsystem.
> 
> At initialization time each SCMI Protocol takes care to register with the
> new SCMI notification core the set of its own events which it intends to
> support.
> 
> Using the API exposed via scmi_handle.notify_ops a Kernel user can register
> its own notifier_t callback (via a notifier_block as usual) against any
> registered event as identified by the tuple:
> 
> 		(proto_id, event_id, src_id)
> 
> where src_id represents a generic source identifier which is protocol
> dependent like domain_id, performance_id, sensor_id and so forth.
> (users can anyway do NOT provide any src_id, and subscribe instead to ALL
>  the existing (if any) src_id sources for that proto_id/evt_id combination)
> 
> Each of the above tuple-specified event will be served on its own dedicated
> blocking notification chain, dynamically allocated on-demand when at least
> one user has shown interest on that event.
> 
> Upon a notification delivery all the users' registered notifier_t callbacks
> will be in turn invoked and fed with the event_id as @action param and a
> generated custom per-event struct _report as @data param.
> (as in include/linux/scmi_protocol.h)
> 
> The final step of notification delivery via users' callback invocation is
> instead delegated to a pool of deferred workers (Kernel cmwq): each
> SCMI protocol has its own dedicated worker and dedicated queue to push
> events from the rx ISR to the worker.
> 
> Based on scmi-next 5.6 [1], on top of:
> 
> commit 5c8a47a5a91d ("firmware: arm_scmi: Make scmi core independent of
> 		      the transport type")
> 
> This series has been tested on JUNO with an experimental firmware only
> supporting Perf Notifications.

I've looked through all the patches.  A few of the comments go across
multiple patches, but once resolved feel free to add.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
to the ones I haven't specifically commented on.

Thanks,

Jonathan

> 
> Thanks
> 
> Cristian
> ----
> 
> v3 --> v4:
> - dropped RFC tag
> - avoid one unneeded evt payload memcpy on the ISR RC code path by
>   redesigning dispatcher to handle partial queue-reads (in_flight events,
>   only header)
> - fixed the initialization issue exposed by late SCMI modules loading by
>   reviewing the init process to support possible late events registrations
>   by protocols and early callbacks registrations by users (pending)
> - cleanup/simplification of exit path: SCMI protocols are generally never
>   de-initialized after the initial device creation, so do not deinit
>   notification core either (we do halt the delivery, stop the wq and empty
>   the queues though)
> - reduced contention on regustered_events_handler to the minimum during
>   delivery by splitting the common registered_events_handlers hashtable
>   into a number of per-protocol tables
> - converted registered_protocols and registered_events hastable to
>   fixed size arrays: simpler and lockless in our usage scenario
> 
> v2 --> v3:
> - added platform instance awareness to the notification core: a
>   notification instance is created for each known handle
> - reviewed notification core initialization and shutdown process
> - removed generic non-handle-rooted registration API
> - added WQ_SYSFS flag to workqueue instance
> 
> v1 --> v2:
> - dropped anti-tampering patch
> - rebased on top of scmi-for-next-5.6, which includes Viresh series that
>   make SCMI core independent of transport (5c8a47a5a91d)
> - add a few new SCMI transport methods on top of Viresh patch to address
>   needs of SCMI Notifications
> - reviewed/renamed scmi_handle_xfer_delayed_resp()
> - split main SCMI Notification core patch (~1k lines) into three chunks:
>   protocol-registration / callbacks-registration / dispatch-and-delivery
> - removed awkward usage of IDR maps in favour of pure hashtables
> - added enable/disable refcounting in notification core (was broken in v1)
> - removed per-protocol candidate API: a single generic API is now proposed
>   instead of scmi_register_<proto>_event_notifier(evt_id, *src_id, *nb)
> - added handle->notify_ops as an alternative notification API
>   for scmi_driver
> - moved ALL_SRCIDs enabled handling from protocol code to core code
> - reviewed protocol registration/unregistration logic to use devres
> - reviewed cleanup phase on shutdown
> - fixed  ERROR: reference preceded by free as reported by kbuild test robot
> 
> [1] git://git.kernel.org/pub/scm/linux/kernel/git/sudeep.holla/linux.git
> 
> 
> Cristian Marussi (10):
>   firmware: arm_scmi: Add notifications support in transport layer
>   firmware: arm_scmi: Add notification protocol-registration
>   firmware: arm_scmi: Add notification callbacks-registration
>   firmware: arm_scmi: Add notification dispatch and delivery
>   firmware: arm_scmi: Enable notification core
>   firmware: arm_scmi: Add Power notifications support
>   firmware: arm_scmi: Add Perf notifications support
>   firmware: arm_scmi: Add Sensor notifications support
>   firmware: arm_scmi: Add Reset notifications support
>   firmware: arm_scmi: Add Base notifications support
> 
> Sudeep Holla (3):
>   firmware: arm_scmi: Add receive buffer support for notifications
>   firmware: arm_scmi: Update protocol commands and notification list
>   firmware: arm_scmi: Add support for notifications message processing
> 
>  drivers/firmware/arm_scmi/Makefile  |    2 +-
>  drivers/firmware/arm_scmi/base.c    |  116 +++
>  drivers/firmware/arm_scmi/common.h  |   12 +
>  drivers/firmware/arm_scmi/driver.c  |  118 ++-
>  drivers/firmware/arm_scmi/mailbox.c |   17 +
>  drivers/firmware/arm_scmi/notify.c  | 1471 +++++++++++++++++++++++++++
>  drivers/firmware/arm_scmi/notify.h  |   78 ++
>  drivers/firmware/arm_scmi/perf.c    |  135 +++
>  drivers/firmware/arm_scmi/power.c   |  129 +++
>  drivers/firmware/arm_scmi/reset.c   |   96 ++
>  drivers/firmware/arm_scmi/sensors.c |   73 ++
>  drivers/firmware/arm_scmi/shmem.c   |   15 +
>  include/linux/scmi_protocol.h       |  110 ++
>  13 files changed, 2345 insertions(+), 27 deletions(-)
>  create mode 100644 drivers/firmware/arm_scmi/notify.c
>  create mode 100644 drivers/firmware/arm_scmi/notify.h
> 



^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 00/13] SCMI Notifications Core Support
@ 2020-03-09 12:33   ` Jonathan Cameron
  0 siblings, 0 replies; 70+ messages in thread
From: Jonathan Cameron @ 2020-03-09 12:33 UTC (permalink / raw)
  To: Cristian Marussi
  Cc: james.quinlan, lukasz.luba, linux-kernel, linux-arm-kernel, sudeep.holla

On Wed, 4 Mar 2020 16:25:45 +0000
Cristian Marussi <cristian.marussi@arm.com> wrote:

> Hi all,
> 
> this series wants to introduce SCMI Notification Support, built on top of
> the standard Kernel notification chain subsystem.
> 
> At initialization time each SCMI Protocol takes care to register with the
> new SCMI notification core the set of its own events which it intends to
> support.
> 
> Using the API exposed via scmi_handle.notify_ops a Kernel user can register
> its own notifier_t callback (via a notifier_block as usual) against any
> registered event as identified by the tuple:
> 
> 		(proto_id, event_id, src_id)
> 
> where src_id represents a generic source identifier which is protocol
> dependent like domain_id, performance_id, sensor_id and so forth.
> (users can anyway do NOT provide any src_id, and subscribe instead to ALL
>  the existing (if any) src_id sources for that proto_id/evt_id combination)
> 
> Each of the above tuple-specified event will be served on its own dedicated
> blocking notification chain, dynamically allocated on-demand when at least
> one user has shown interest on that event.
> 
> Upon a notification delivery all the users' registered notifier_t callbacks
> will be in turn invoked and fed with the event_id as @action param and a
> generated custom per-event struct _report as @data param.
> (as in include/linux/scmi_protocol.h)
> 
> The final step of notification delivery via users' callback invocation is
> instead delegated to a pool of deferred workers (Kernel cmwq): each
> SCMI protocol has its own dedicated worker and dedicated queue to push
> events from the rx ISR to the worker.
> 
> Based on scmi-next 5.6 [1], on top of:
> 
> commit 5c8a47a5a91d ("firmware: arm_scmi: Make scmi core independent of
> 		      the transport type")
> 
> This series has been tested on JUNO with an experimental firmware only
> supporting Perf Notifications.

I've looked through all the patches.  A few of the comments go across
multiple patches, but once resolved feel free to add.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
to the ones I haven't specifically commented on.

Thanks,

Jonathan

> 
> Thanks
> 
> Cristian
> ----
> 
> v3 --> v4:
> - dropped RFC tag
> - avoid one unneeded evt payload memcpy on the ISR RC code path by
>   redesigning dispatcher to handle partial queue-reads (in_flight events,
>   only header)
> - fixed the initialization issue exposed by late SCMI modules loading by
>   reviewing the init process to support possible late events registrations
>   by protocols and early callbacks registrations by users (pending)
> - cleanup/simplification of exit path: SCMI protocols are generally never
>   de-initialized after the initial device creation, so do not deinit
>   notification core either (we do halt the delivery, stop the wq and empty
>   the queues though)
> - reduced contention on regustered_events_handler to the minimum during
>   delivery by splitting the common registered_events_handlers hashtable
>   into a number of per-protocol tables
> - converted registered_protocols and registered_events hastable to
>   fixed size arrays: simpler and lockless in our usage scenario
> 
> v2 --> v3:
> - added platform instance awareness to the notification core: a
>   notification instance is created for each known handle
> - reviewed notification core initialization and shutdown process
> - removed generic non-handle-rooted registration API
> - added WQ_SYSFS flag to workqueue instance
> 
> v1 --> v2:
> - dropped anti-tampering patch
> - rebased on top of scmi-for-next-5.6, which includes Viresh series that
>   make SCMI core independent of transport (5c8a47a5a91d)
> - add a few new SCMI transport methods on top of Viresh patch to address
>   needs of SCMI Notifications
> - reviewed/renamed scmi_handle_xfer_delayed_resp()
> - split main SCMI Notification core patch (~1k lines) into three chunks:
>   protocol-registration / callbacks-registration / dispatch-and-delivery
> - removed awkward usage of IDR maps in favour of pure hashtables
> - added enable/disable refcounting in notification core (was broken in v1)
> - removed per-protocol candidate API: a single generic API is now proposed
>   instead of scmi_register_<proto>_event_notifier(evt_id, *src_id, *nb)
> - added handle->notify_ops as an alternative notification API
>   for scmi_driver
> - moved ALL_SRCIDs enabled handling from protocol code to core code
> - reviewed protocol registration/unregistration logic to use devres
> - reviewed cleanup phase on shutdown
> - fixed  ERROR: reference preceded by free as reported by kbuild test robot
> 
> [1] git://git.kernel.org/pub/scm/linux/kernel/git/sudeep.holla/linux.git
> 
> 
> Cristian Marussi (10):
>   firmware: arm_scmi: Add notifications support in transport layer
>   firmware: arm_scmi: Add notification protocol-registration
>   firmware: arm_scmi: Add notification callbacks-registration
>   firmware: arm_scmi: Add notification dispatch and delivery
>   firmware: arm_scmi: Enable notification core
>   firmware: arm_scmi: Add Power notifications support
>   firmware: arm_scmi: Add Perf notifications support
>   firmware: arm_scmi: Add Sensor notifications support
>   firmware: arm_scmi: Add Reset notifications support
>   firmware: arm_scmi: Add Base notifications support
> 
> Sudeep Holla (3):
>   firmware: arm_scmi: Add receive buffer support for notifications
>   firmware: arm_scmi: Update protocol commands and notification list
>   firmware: arm_scmi: Add support for notifications message processing
> 
>  drivers/firmware/arm_scmi/Makefile  |    2 +-
>  drivers/firmware/arm_scmi/base.c    |  116 +++
>  drivers/firmware/arm_scmi/common.h  |   12 +
>  drivers/firmware/arm_scmi/driver.c  |  118 ++-
>  drivers/firmware/arm_scmi/mailbox.c |   17 +
>  drivers/firmware/arm_scmi/notify.c  | 1471 +++++++++++++++++++++++++++
>  drivers/firmware/arm_scmi/notify.h  |   78 ++
>  drivers/firmware/arm_scmi/perf.c    |  135 +++
>  drivers/firmware/arm_scmi/power.c   |  129 +++
>  drivers/firmware/arm_scmi/reset.c   |   96 ++
>  drivers/firmware/arm_scmi/sensors.c |   73 ++
>  drivers/firmware/arm_scmi/shmem.c   |   15 +
>  include/linux/scmi_protocol.h       |  110 ++
>  13 files changed, 2345 insertions(+), 27 deletions(-)
>  create mode 100644 drivers/firmware/arm_scmi/notify.c
>  create mode 100644 drivers/firmware/arm_scmi/notify.h
> 



_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
  2020-03-09 12:26     ` Jonathan Cameron
@ 2020-03-09 16:37       ` Cristian Marussi
  -1 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-09 16:37 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-kernel, linux-arm-kernel, sudeep.holla, lukasz.luba, james.quinlan

Hi

On 09/03/2020 12:26, Jonathan Cameron wrote:
> On Wed, 4 Mar 2020 16:25:52 +0000
> Cristian Marussi <cristian.marussi@arm.com> wrote:
> 
>> Add core SCMI Notifications dispatch and delivery support logic which is
>> able, at first, to dispatch well-known received events from the RX ISR to
>> the dedicated deferred worker, and then, from there, to final deliver the
>> events to the registered users' callbacks.
>>
>> Dispatch and delivery is just added here, still not enabled.
>>
>> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
> 
> Hmm.  Doing that magic in_flight stuff looks fine, but it feels like
> the wrong way to approach a problem which is down to the lack of
> atomicity of the kfifo_in pair.   Could we just make that atomic via
> a bit of custom manipulation of the kfifo?
> 
> The snag is that stuff isn't exported from the innards of kfifo...

My initial approach till v3 was to collate header and payload in a pre-allocated
scratch buffer and then doing a single kfifo_in so as to avoid to worry about workqueue
empting the kfifo and going to sleep right after a header has been read and a payload is
in flight, but that, as pointed out indirectly by Jim Quinlan led to an unneeded memcpy...
in fact I was copying in/out the fifo a total of 2*h + 3*p bytes, instead with this handling
I can avoid such intermediate collation step and stick to the bare minimum needed 2*h + 2*p
bytes memcopies.

On one side I was worried to make the code complex to avoid just a few bytes of memcpy, on the other
side the redundant memcpy is on the ISR side and also I cannot assume that the unneded p bytes
copied there are necessarily small ... being SMCI extensible you could possibly add a proprietary
(or not) protocol with jumbo payloads of KBs so that the p-bytes redundant copy is no more so
negligible.

At the end I did not find so horrible and complex the new in flight handling (tested introducing
horrible mdelays in between the kfifo_inS inside the ISR...), so I went for that.

> 
> Maybe what you have here is the best option.
> 

I like the solution you propose down below, but the fact that it relies on the inner kfifo function
is in fact a show stopper being based on the inernal api (and I have not found other viable ways to
abuse the kfifo API :D ... as of now)...I wonder if it is not worth propose upstream (not in this series)
a generic kfifo "light" scatter/gather in/out interface for this particular usecase; _kfifo_dma* seem to use
the full fledged scatter/gather kernel structs, but that's certainly overkill for this scenario.

Thanks

Cristian

> Jonathan
> 
>> ---
>> V3 --> V4
>> - dispatcher now handles dequeuing of events in chunks (header+payload):
>>   handling of these in_flight events let us remove one unneeded memcpy
>>   on RX interrupt path (scmi_notify)
>> - deferred dispatcher now access their own per-protocol handlers' table
>>   reducing locking contention on the RX path
>> V2 --> V3
>> - exposing wq in sysfs via WQ_SYSFS
>> V1 --> V2
>> - splitted out of V1 patch 04
>> - moved from IDR maps to real HashTables to store event_handlers
>> - simplified delivery logic
>> ---
>>  drivers/firmware/arm_scmi/notify.c | 334 ++++++++++++++++++++++++++++-
>>  drivers/firmware/arm_scmi/notify.h |   9 +
>>  2 files changed, 342 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
>> index d6c08cce3c63..0854d48d5886 100644
>> --- a/drivers/firmware/arm_scmi/notify.c
>> +++ b/drivers/firmware/arm_scmi/notify.c
>> @@ -44,6 +44,27 @@
>>   * as described in the SCMI Protocol specification, while src_id represents an
>>   * optional, protocol dependent, source identifier (like domain_id, perf_id
>>   * or sensor_id and so forth).
>> + *
>> + * Upon reception of a notification message from the platform the SCMI RX ISR
[snip]

>> +	if (unlikely(len > r_evt->evt->max_payld_sz)) {
>> +		pr_err("SCMI Notifications: discard badly sized message\n");
>> +		return -EINVAL;
>> +	}
>> +	if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) <
>> +		     sizeof(eh) + len)) {
>> +		pr_warn("SCMI Notifications: queue full dropping proto_id:%d  evt_id:%d  ts:%lld\n",
>> +			proto_id, evt_id, ts);
>> +		return -ENOMEM;
>> +	}
>> +
>> +	eh.timestamp = ts;
>> +	eh.evt_id = evt_id;
>> +	eh.payld_sz = len;
>> +	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
> 
> I'd add a comment that this potential race here is the reason (I think) for all
> the inflight handling above.
> 
> Either that or create a kfifo_in_pair_unsafe that just makes these atomic by only
> updating the kfifo->in point after adding both parts.
> 
> It will be as simple as (I think, kfifo magic always give me a headache).
> {
> 	struct __kfifo *__kfifo = &kfifo->kfifo;
> 	kfifo_copy_in(fifo, &eh, sizeof(eh), fifo->in);
> 	kfifo_copy_in(fifo, &buf, len, fifo->in + sizeof(eh));
> 	fifo->in += len + sizeof(eh);
> }
> 
> It's unsafe because crazy things will happen if there isn't enough room, but you
> can't get there in this code because of the check above and we are making
> horrendous assumptions about the kfifo type.
> 

As said above.
>> +	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
>> +	queue_work(r_evt->proto->equeue.wq,
>> +		   &r_evt->proto->equeue.notify_work);
>> +
>> +	return 0;
>> +}
>> +
>>  /**
>>   * scmi_initialize_events_queue  - Allocate/Initialize a kfifo buffer
>>   *
>> @@ -332,12 +604,21 @@ static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
>>  static int scmi_initialize_events_queue(struct scmi_notify_instance *ni,
>>  					struct events_queue *equeue, size_t sz)
>>  {
>> +	int ret = 0;
> 
> ret looks to be always initialized below.
> 

Right.
>> +
>>  	equeue->qbuf = devm_kzalloc(ni->handle->dev, sz, GFP_KERNEL);
>>  	if (!equeue->qbuf)
>>  		return -ENOMEM;
>>  	equeue->sz = sz;
>>  
>> -	return kfifo_init(&equeue->kfifo, equeue->qbuf, equeue->sz);
>> +	ret = kfifo_init(&equeue->kfifo, equeue->qbuf, equeue->sz);
>> +	if (ret)
>> +		return ret;
>> +
>> +	INIT_WORK(&equeue->notify_work, scmi_events_dispatcher);
>> +	equeue->wq = ni->notify_wq;
>> +
>> +	return ret;
>>  }
>>  
>>  /**
>> @@ -740,6 +1021,38 @@ scmi_get_or_create_handler(struct scmi_notify_instance *ni, u32 evt_key)
>>  	return __scmi_event_handler_get_ops(ni, evt_key, true);
>>  }
>>  
>> +/**
>> + * scmi_get_active_handler  - Helper to get active handlers only
>> + *
>> + * Search for the desired handler matching the key only in the per-protocol
>> + * table of registered handlers: this is called only from the dispatching path
>> + * so want to be as quick as possible and do not care about pending.
>> + *
>> + * @ni: A reference to the notification instance to use
>> + * @evt_key: The event key to use
>> + *
>> + * Return: A properly refcounted active handler
>> + */
>> +static struct scmi_event_handler *
>> +scmi_get_active_handler(struct scmi_notify_instance *ni, u32 evt_key)
>> +{
>> +	struct scmi_registered_event *r_evt;
>> +	struct scmi_event_handler *hndl = NULL;
>> +
>> +	r_evt = SCMI_GET_REVT(ni, KEY_XTRACT_PROTO_ID(evt_key),
>> +			      KEY_XTRACT_EVT_ID(evt_key));
>> +	if (likely(r_evt)) {
>> +		mutex_lock(&r_evt->proto->registered_mtx);
>> +		hndl = KEY_FIND(r_evt->proto->registered_events_handlers,
>> +				hndl, evt_key);
>> +		if (likely(hndl))
>> +			refcount_inc(&hndl->users);
>> +		mutex_unlock(&r_evt->proto->registered_mtx);
>> +	}
>> +
>> +	return hndl;
>> +}
>> +
>>  /**
>>   * __scmi_enable_evt  - Enable/disable events generation
>>   *
>> @@ -861,6 +1174,16 @@ static void scmi_put_handler(struct scmi_notify_instance *ni,
>>  	mutex_unlock(&ni->pending_mtx);
>>  }
>>  
>> +static void scmi_put_active_handler(struct scmi_notify_instance *ni,
>> +					  struct scmi_event_handler *hndl)
>> +{
>> +	struct scmi_registered_event *r_evt = hndl->r_evt;
>> +
>> +	mutex_lock(&r_evt->proto->registered_mtx);
>> +	scmi_put_handler_unlocked(ni, hndl);
>> +	mutex_unlock(&r_evt->proto->registered_mtx);
>> +}
>> +
>>  /**
>>   * scmi_event_handler_enable_events  - Enable events associated to an handler
>>   *
>> @@ -1087,6 +1410,12 @@ int scmi_notification_init(struct scmi_handle *handle)
>>  	ni->gid = gid;
>>  	ni->handle = handle;
>>  
>> +	ni->notify_wq = alloc_workqueue("scmi_notify",
>> +					WQ_UNBOUND | WQ_FREEZABLE | WQ_SYSFS,
>> +					0);
>> +	if (!ni->notify_wq)
>> +		goto err;
>> +
>>  	ni->registered_protocols = devm_kcalloc(handle->dev, SCMI_MAX_PROTO,
>>  						sizeof(char *), GFP_KERNEL);
>>  	if (!ni->registered_protocols)
>> @@ -1133,6 +1462,9 @@ void scmi_notification_exit(struct scmi_handle *handle)
>>  	/* Ensure atomic values are updated */
>>  	smp_mb__after_atomic();
>>  
>> +	/* Destroy while letting pending work complete */
>> +	destroy_workqueue(ni->notify_wq);
>> +
>>  	devres_release_group(ni->handle->dev, ni->gid);
>>  
>>  	pr_info("SCMI Notifications Core Shutdown.\n");
>> diff --git a/drivers/firmware/arm_scmi/notify.h b/drivers/firmware/arm_scmi/notify.h
>> index f765acda2311..6cd386649d5a 100644
>> --- a/drivers/firmware/arm_scmi/notify.h
>> +++ b/drivers/firmware/arm_scmi/notify.h
>> @@ -51,10 +51,17 @@ struct scmi_event {
>>   *			using the proper custom protocol commands.
>>   *			Return true if at least one the required src_id
>>   *			has been successfully enabled/disabled
>> + * @fill_custom_report: fills a custom event report from the provided
>> + *			event message payld identifying the event
>> + *			specific src_id.
>> + *			Return NULL on failure otherwise @report now fully
>> + *			populated
>>   */
>>  struct scmi_protocol_event_ops {
>>  	bool (*set_notify_enabled)(const struct scmi_handle *handle,
>>  				   u8 evt_id, u32 src_id, bool enabled);
>> +	void *(*fill_custom_report)(u8 evt_id, u64 timestamp, const void *payld,
>> +				    size_t payld_sz, void *report, u32 *src_id);
>>  };
>>  
>>  int scmi_notification_init(struct scmi_handle *handle);
>> @@ -65,5 +72,7 @@ int scmi_register_protocol_events(const struct scmi_handle *handle,
>>  				  const struct scmi_protocol_event_ops *ops,
>>  				  const struct scmi_event *evt, int num_events,
>>  				  int num_sources);
>> +int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
>> +		const void *buf, size_t len, u64 ts);
>>  
>>  #endif /* _SCMI_NOTIFY_H */
> 
> 


^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
@ 2020-03-09 16:37       ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-09 16:37 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: james.quinlan, lukasz.luba, linux-kernel, linux-arm-kernel, sudeep.holla

Hi

On 09/03/2020 12:26, Jonathan Cameron wrote:
> On Wed, 4 Mar 2020 16:25:52 +0000
> Cristian Marussi <cristian.marussi@arm.com> wrote:
> 
>> Add core SCMI Notifications dispatch and delivery support logic which is
>> able, at first, to dispatch well-known received events from the RX ISR to
>> the dedicated deferred worker, and then, from there, to final deliver the
>> events to the registered users' callbacks.
>>
>> Dispatch and delivery is just added here, still not enabled.
>>
>> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
> 
> Hmm.  Doing that magic in_flight stuff looks fine, but it feels like
> the wrong way to approach a problem which is down to the lack of
> atomicity of the kfifo_in pair.   Could we just make that atomic via
> a bit of custom manipulation of the kfifo?
> 
> The snag is that stuff isn't exported from the innards of kfifo...

My initial approach till v3 was to collate header and payload in a pre-allocated
scratch buffer and then doing a single kfifo_in so as to avoid to worry about workqueue
empting the kfifo and going to sleep right after a header has been read and a payload is
in flight, but that, as pointed out indirectly by Jim Quinlan led to an unneeded memcpy...
in fact I was copying in/out the fifo a total of 2*h + 3*p bytes, instead with this handling
I can avoid such intermediate collation step and stick to the bare minimum needed 2*h + 2*p
bytes memcopies.

On one side I was worried to make the code complex to avoid just a few bytes of memcpy, on the other
side the redundant memcpy is on the ISR side and also I cannot assume that the unneded p bytes
copied there are necessarily small ... being SMCI extensible you could possibly add a proprietary
(or not) protocol with jumbo payloads of KBs so that the p-bytes redundant copy is no more so
negligible.

At the end I did not find so horrible and complex the new in flight handling (tested introducing
horrible mdelays in between the kfifo_inS inside the ISR...), so I went for that.

> 
> Maybe what you have here is the best option.
> 

I like the solution you propose down below, but the fact that it relies on the inner kfifo function
is in fact a show stopper being based on the inernal api (and I have not found other viable ways to
abuse the kfifo API :D ... as of now)...I wonder if it is not worth propose upstream (not in this series)
a generic kfifo "light" scatter/gather in/out interface for this particular usecase; _kfifo_dma* seem to use
the full fledged scatter/gather kernel structs, but that's certainly overkill for this scenario.

Thanks

Cristian

> Jonathan
> 
>> ---
>> V3 --> V4
>> - dispatcher now handles dequeuing of events in chunks (header+payload):
>>   handling of these in_flight events let us remove one unneeded memcpy
>>   on RX interrupt path (scmi_notify)
>> - deferred dispatcher now access their own per-protocol handlers' table
>>   reducing locking contention on the RX path
>> V2 --> V3
>> - exposing wq in sysfs via WQ_SYSFS
>> V1 --> V2
>> - splitted out of V1 patch 04
>> - moved from IDR maps to real HashTables to store event_handlers
>> - simplified delivery logic
>> ---
>>  drivers/firmware/arm_scmi/notify.c | 334 ++++++++++++++++++++++++++++-
>>  drivers/firmware/arm_scmi/notify.h |   9 +
>>  2 files changed, 342 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
>> index d6c08cce3c63..0854d48d5886 100644
>> --- a/drivers/firmware/arm_scmi/notify.c
>> +++ b/drivers/firmware/arm_scmi/notify.c
>> @@ -44,6 +44,27 @@
>>   * as described in the SCMI Protocol specification, while src_id represents an
>>   * optional, protocol dependent, source identifier (like domain_id, perf_id
>>   * or sensor_id and so forth).
>> + *
>> + * Upon reception of a notification message from the platform the SCMI RX ISR
[snip]

>> +	if (unlikely(len > r_evt->evt->max_payld_sz)) {
>> +		pr_err("SCMI Notifications: discard badly sized message\n");
>> +		return -EINVAL;
>> +	}
>> +	if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) <
>> +		     sizeof(eh) + len)) {
>> +		pr_warn("SCMI Notifications: queue full dropping proto_id:%d  evt_id:%d  ts:%lld\n",
>> +			proto_id, evt_id, ts);
>> +		return -ENOMEM;
>> +	}
>> +
>> +	eh.timestamp = ts;
>> +	eh.evt_id = evt_id;
>> +	eh.payld_sz = len;
>> +	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
> 
> I'd add a comment that this potential race here is the reason (I think) for all
> the inflight handling above.
> 
> Either that or create a kfifo_in_pair_unsafe that just makes these atomic by only
> updating the kfifo->in point after adding both parts.
> 
> It will be as simple as (I think, kfifo magic always give me a headache).
> {
> 	struct __kfifo *__kfifo = &kfifo->kfifo;
> 	kfifo_copy_in(fifo, &eh, sizeof(eh), fifo->in);
> 	kfifo_copy_in(fifo, &buf, len, fifo->in + sizeof(eh));
> 	fifo->in += len + sizeof(eh);
> }
> 
> It's unsafe because crazy things will happen if there isn't enough room, but you
> can't get there in this code because of the check above and we are making
> horrendous assumptions about the kfifo type.
> 

As said above.
>> +	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
>> +	queue_work(r_evt->proto->equeue.wq,
>> +		   &r_evt->proto->equeue.notify_work);
>> +
>> +	return 0;
>> +}
>> +
>>  /**
>>   * scmi_initialize_events_queue  - Allocate/Initialize a kfifo buffer
>>   *
>> @@ -332,12 +604,21 @@ static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
>>  static int scmi_initialize_events_queue(struct scmi_notify_instance *ni,
>>  					struct events_queue *equeue, size_t sz)
>>  {
>> +	int ret = 0;
> 
> ret looks to be always initialized below.
> 

Right.
>> +
>>  	equeue->qbuf = devm_kzalloc(ni->handle->dev, sz, GFP_KERNEL);
>>  	if (!equeue->qbuf)
>>  		return -ENOMEM;
>>  	equeue->sz = sz;
>>  
>> -	return kfifo_init(&equeue->kfifo, equeue->qbuf, equeue->sz);
>> +	ret = kfifo_init(&equeue->kfifo, equeue->qbuf, equeue->sz);
>> +	if (ret)
>> +		return ret;
>> +
>> +	INIT_WORK(&equeue->notify_work, scmi_events_dispatcher);
>> +	equeue->wq = ni->notify_wq;
>> +
>> +	return ret;
>>  }
>>  
>>  /**
>> @@ -740,6 +1021,38 @@ scmi_get_or_create_handler(struct scmi_notify_instance *ni, u32 evt_key)
>>  	return __scmi_event_handler_get_ops(ni, evt_key, true);
>>  }
>>  
>> +/**
>> + * scmi_get_active_handler  - Helper to get active handlers only
>> + *
>> + * Search for the desired handler matching the key only in the per-protocol
>> + * table of registered handlers: this is called only from the dispatching path
>> + * so want to be as quick as possible and do not care about pending.
>> + *
>> + * @ni: A reference to the notification instance to use
>> + * @evt_key: The event key to use
>> + *
>> + * Return: A properly refcounted active handler
>> + */
>> +static struct scmi_event_handler *
>> +scmi_get_active_handler(struct scmi_notify_instance *ni, u32 evt_key)
>> +{
>> +	struct scmi_registered_event *r_evt;
>> +	struct scmi_event_handler *hndl = NULL;
>> +
>> +	r_evt = SCMI_GET_REVT(ni, KEY_XTRACT_PROTO_ID(evt_key),
>> +			      KEY_XTRACT_EVT_ID(evt_key));
>> +	if (likely(r_evt)) {
>> +		mutex_lock(&r_evt->proto->registered_mtx);
>> +		hndl = KEY_FIND(r_evt->proto->registered_events_handlers,
>> +				hndl, evt_key);
>> +		if (likely(hndl))
>> +			refcount_inc(&hndl->users);
>> +		mutex_unlock(&r_evt->proto->registered_mtx);
>> +	}
>> +
>> +	return hndl;
>> +}
>> +
>>  /**
>>   * __scmi_enable_evt  - Enable/disable events generation
>>   *
>> @@ -861,6 +1174,16 @@ static void scmi_put_handler(struct scmi_notify_instance *ni,
>>  	mutex_unlock(&ni->pending_mtx);
>>  }
>>  
>> +static void scmi_put_active_handler(struct scmi_notify_instance *ni,
>> +					  struct scmi_event_handler *hndl)
>> +{
>> +	struct scmi_registered_event *r_evt = hndl->r_evt;
>> +
>> +	mutex_lock(&r_evt->proto->registered_mtx);
>> +	scmi_put_handler_unlocked(ni, hndl);
>> +	mutex_unlock(&r_evt->proto->registered_mtx);
>> +}
>> +
>>  /**
>>   * scmi_event_handler_enable_events  - Enable events associated to an handler
>>   *
>> @@ -1087,6 +1410,12 @@ int scmi_notification_init(struct scmi_handle *handle)
>>  	ni->gid = gid;
>>  	ni->handle = handle;
>>  
>> +	ni->notify_wq = alloc_workqueue("scmi_notify",
>> +					WQ_UNBOUND | WQ_FREEZABLE | WQ_SYSFS,
>> +					0);
>> +	if (!ni->notify_wq)
>> +		goto err;
>> +
>>  	ni->registered_protocols = devm_kcalloc(handle->dev, SCMI_MAX_PROTO,
>>  						sizeof(char *), GFP_KERNEL);
>>  	if (!ni->registered_protocols)
>> @@ -1133,6 +1462,9 @@ void scmi_notification_exit(struct scmi_handle *handle)
>>  	/* Ensure atomic values are updated */
>>  	smp_mb__after_atomic();
>>  
>> +	/* Destroy while letting pending work complete */
>> +	destroy_workqueue(ni->notify_wq);
>> +
>>  	devres_release_group(ni->handle->dev, ni->gid);
>>  
>>  	pr_info("SCMI Notifications Core Shutdown.\n");
>> diff --git a/drivers/firmware/arm_scmi/notify.h b/drivers/firmware/arm_scmi/notify.h
>> index f765acda2311..6cd386649d5a 100644
>> --- a/drivers/firmware/arm_scmi/notify.h
>> +++ b/drivers/firmware/arm_scmi/notify.h
>> @@ -51,10 +51,17 @@ struct scmi_event {
>>   *			using the proper custom protocol commands.
>>   *			Return true if at least one the required src_id
>>   *			has been successfully enabled/disabled
>> + * @fill_custom_report: fills a custom event report from the provided
>> + *			event message payld identifying the event
>> + *			specific src_id.
>> + *			Return NULL on failure otherwise @report now fully
>> + *			populated
>>   */
>>  struct scmi_protocol_event_ops {
>>  	bool (*set_notify_enabled)(const struct scmi_handle *handle,
>>  				   u8 evt_id, u32 src_id, bool enabled);
>> +	void *(*fill_custom_report)(u8 evt_id, u64 timestamp, const void *payld,
>> +				    size_t payld_sz, void *report, u32 *src_id);
>>  };
>>  
>>  int scmi_notification_init(struct scmi_handle *handle);
>> @@ -65,5 +72,7 @@ int scmi_register_protocol_events(const struct scmi_handle *handle,
>>  				  const struct scmi_protocol_event_ops *ops,
>>  				  const struct scmi_event *evt, int num_events,
>>  				  int num_sources);
>> +int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
>> +		const void *buf, size_t len, u64 ts);
>>  
>>  #endif /* _SCMI_NOTIFY_H */
> 
> 


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 09/13] firmware: arm_scmi: Add Power notifications support
  2020-03-09 12:28     ` Jonathan Cameron
@ 2020-03-09 16:39       ` Cristian Marussi
  -1 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-09 16:39 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-kernel, linux-arm-kernel, sudeep.holla, lukasz.luba, james.quinlan

On 09/03/2020 12:28, Jonathan Cameron wrote:
> On Wed, 4 Mar 2020 16:25:54 +0000
> Cristian Marussi <cristian.marussi@arm.com> wrote:
> 
>> Make SCMI Power protocol register with the notification core.
>>
>> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
> 
> One comment inline on an unusual code construct, otherwise fine.
> 
> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> 

Thanks

Cristian
>> ---
>> V3 --> V4
>> - scmi_event field renamed
>> V2 --> V3
>> - added handle awareness
>> V1 --> V2
>> - simplified .set_notify_enabled() implementation moving the ALL_SRCIDs
>>   logic out of protocol. ALL_SRCIDs logic is now in charge of the
>>   notification core, together with proper reference counting of enables
>> - switched to devres protocol-registration
>> ---
>>  drivers/firmware/arm_scmi/power.c | 123 ++++++++++++++++++++++++++++++
>>  include/linux/scmi_protocol.h     |  15 ++++
>>  2 files changed, 138 insertions(+)
>>
>> diff --git a/drivers/firmware/arm_scmi/power.c b/drivers/firmware/arm_scmi/power.c
>> index cf7f0312381b..281da7e7e33a 100644
>> --- a/drivers/firmware/arm_scmi/power.c
>> +++ b/drivers/firmware/arm_scmi/power.c
>> @@ -6,6 +6,7 @@
>>   */
>>  
>>  #include "common.h"
>> +#include "notify.h"
>>  
>>  enum scmi_power_protocol_cmd {
>>  	POWER_DOMAIN_ATTRIBUTES = 0x3,
>> @@ -48,6 +49,12 @@ struct scmi_power_state_notify {
>>  	__le32 notify_enable;
>>  };
>>  
>> +struct scmi_power_state_notify_payld {
>> +	__le32 agent_id;
>> +	__le32 domain_id;
>> +	__le32 power_state;
>> +};
>> +
>>  struct power_dom_info {
>>  	bool state_set_sync;
>>  	bool state_set_async;
>> @@ -63,6 +70,11 @@ struct scmi_power_info {
>>  	struct power_dom_info *dom_info;
>>  };
>>  
>> +static enum scmi_power_protocol_cmd evt_2_cmd[] = {
>> +	POWER_STATE_NOTIFY,
>> +	POWER_STATE_CHANGE_REQUESTED_NOTIFY,
>> +};
>> +
>>  static int scmi_power_attributes_get(const struct scmi_handle *handle,
>>  				     struct scmi_power_info *pi)
>>  {
>> @@ -186,6 +198,111 @@ static struct scmi_power_ops power_ops = {
>>  	.state_get = scmi_power_state_get,
>>  };
>>  
>> +static int scmi_power_request_notify(const struct scmi_handle *handle,
>> +				     u32 domain, int message_id, bool enable)
>> +{
>> +	int ret;
>> +	struct scmi_xfer *t;
>> +	struct scmi_power_state_notify *notify;
>> +
>> +	ret = scmi_xfer_get_init(handle, message_id, SCMI_PROTOCOL_POWER,
>> +				 sizeof(*notify), 0, &t);
>> +	if (ret)
>> +		return ret;
>> +
>> +	notify = t->tx.buf;
>> +	notify->domain = cpu_to_le32(domain);
>> +	notify->notify_enable = enable ? cpu_to_le32(BIT(0)) : 0;
>> +
>> +	ret = scmi_do_xfer(handle, t);
>> +
>> +	scmi_xfer_put(handle, t);
>> +	return ret;
>> +}
>> +
>> +static bool scmi_power_set_notify_enabled(const struct scmi_handle *handle,
>> +					  u8 evt_id, u32 src_id, bool enable)
>> +{
>> +	int ret, cmd_id;
>> +
>> +	cmd_id = MAP_EVT_TO_ENABLE_CMD(evt_id, evt_2_cmd);
>> +	if (cmd_id < 0)
>> +		return false;
>> +
>> +	ret = scmi_power_request_notify(handle, src_id, cmd_id, enable);
>> +	if (ret)
>> +		pr_warn("SCMI Notifications - Proto:%X - FAIL_ENABLE - evt[%X] dom[%d] - ret:%d\n",
>> +				SCMI_PROTOCOL_POWER, evt_id, src_id, ret);
>> +
>> +	return !ret ? true : false;
> 
> 	return !ret;
> 
> 	Is the same thing...
> 

ops...I'll fix

>> +}
>> +
>> +static void *scmi_power_fill_custom_report(u8 evt_id, u64 timestamp,
>> +					   const void *payld, size_t payld_sz,
>> +					   void *report, u32 *src_id)
>> +{
>> +	void *rep = NULL;
>> +
>> +	switch (evt_id) {
>> +	case POWER_STATE_CHANGED:
>> +	{
>> +		const struct scmi_power_state_notify_payld *p = payld;
>> +		struct scmi_power_state_changed_report *r = report;
>> +
>> +		if (sizeof(*p) != payld_sz)
>> +			break;
>> +
>> +		r->timestamp = timestamp;
>> +		r->agent_id = le32_to_cpu(p->agent_id);
>> +		r->domain_id = le32_to_cpu(p->domain_id);
>> +		r->power_state = le32_to_cpu(p->power_state);
>> +		*src_id = r->domain_id;
>> +		rep = r;
>> +		break;
>> +	}
>> +	case POWER_STATE_CHANGE_REQUESTED:
>> +	{
>> +		const struct scmi_power_state_notify_payld *p = payld;
>> +		struct scmi_power_state_change_requested_report *r = report;
>> +
>> +		if (sizeof(*p) != payld_sz)
>> +			break;
>> +
>> +		r->timestamp = timestamp;
>> +		r->agent_id = le32_to_cpu(p->agent_id);
>> +		r->domain_id = le32_to_cpu(p->domain_id);
>> +		r->power_state = le32_to_cpu(p->power_state);
>> +		*src_id = r->domain_id;
>> +		rep = r;
>> +		break;
>> +	}
>> +	default:
>> +		break;
>> +	}
>> +
>> +	return rep;
>> +}
>> +
>> +static const struct scmi_event power_events[] = {
>> +	{
>> +		.id = POWER_STATE_CHANGED,
>> +		.max_payld_sz = 12,
>> +		.max_report_sz =
>> +			sizeof(struct scmi_power_state_changed_report),
>> +	},
>> +	{
>> +		.id = POWER_STATE_CHANGE_REQUESTED,
>> +		.max_payld_sz = 12,
>> +		.max_report_sz =
>> +			sizeof(struct scmi_power_state_change_requested_report),
>> +	},
>> +};
>> +
>> +static const struct scmi_protocol_event_ops power_event_ops = {
>> +	.set_notify_enabled = scmi_power_set_notify_enabled,
>> +	.fill_custom_report = scmi_power_fill_custom_report,
>> +};
>> +
>>  static int scmi_power_protocol_init(struct scmi_handle *handle)
>>  {
>>  	int domain;
>> @@ -214,6 +331,12 @@ static int scmi_power_protocol_init(struct scmi_handle *handle)
>>  		scmi_power_domain_attributes_get(handle, domain, dom);
>>  	}
>>  
>> +	scmi_register_protocol_events(handle,
>> +				      SCMI_PROTOCOL_POWER, PAGE_SIZE,
>> +				      &power_event_ops, power_events,
>> +				      ARRAY_SIZE(power_events),
>> +				      pinfo->num_domains);
>> +
>>  	pinfo->version = version;
>>  	handle->power_ops = &power_ops;
>>  	handle->power_priv = pinfo;
>> diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
>> index 797e1e03ae52..baa117f9eda3 100644
>> --- a/include/linux/scmi_protocol.h
>> +++ b/include/linux/scmi_protocol.h
>> @@ -377,4 +377,19 @@ typedef int (*scmi_prot_init_fn_t)(struct scmi_handle *);
>>  int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn);
>>  void scmi_protocol_unregister(int protocol_id);
>>  
>> +/* SCMI Notification API - Custom Event Reports */
>> +struct scmi_power_state_changed_report {
>> +	ktime_t	timestamp;
>> +	u32	agent_id;
>> +	u32	domain_id;
>> +	u32	power_state;
>> +};
>> +
>> +struct scmi_power_state_change_requested_report {
>> +	ktime_t	timestamp;
>> +	u32	agent_id;
>> +	u32	domain_id;
>> +	u32	power_state;
>> +};
>> +
>>  #endif /* _LINUX_SCMI_PROTOCOL_H */
> 
> 


^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 09/13] firmware: arm_scmi: Add Power notifications support
@ 2020-03-09 16:39       ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-09 16:39 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: james.quinlan, lukasz.luba, linux-kernel, linux-arm-kernel, sudeep.holla

On 09/03/2020 12:28, Jonathan Cameron wrote:
> On Wed, 4 Mar 2020 16:25:54 +0000
> Cristian Marussi <cristian.marussi@arm.com> wrote:
> 
>> Make SCMI Power protocol register with the notification core.
>>
>> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
> 
> One comment inline on an unusual code construct, otherwise fine.
> 
> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> 

Thanks

Cristian
>> ---
>> V3 --> V4
>> - scmi_event field renamed
>> V2 --> V3
>> - added handle awareness
>> V1 --> V2
>> - simplified .set_notify_enabled() implementation moving the ALL_SRCIDs
>>   logic out of protocol. ALL_SRCIDs logic is now in charge of the
>>   notification core, together with proper reference counting of enables
>> - switched to devres protocol-registration
>> ---
>>  drivers/firmware/arm_scmi/power.c | 123 ++++++++++++++++++++++++++++++
>>  include/linux/scmi_protocol.h     |  15 ++++
>>  2 files changed, 138 insertions(+)
>>
>> diff --git a/drivers/firmware/arm_scmi/power.c b/drivers/firmware/arm_scmi/power.c
>> index cf7f0312381b..281da7e7e33a 100644
>> --- a/drivers/firmware/arm_scmi/power.c
>> +++ b/drivers/firmware/arm_scmi/power.c
>> @@ -6,6 +6,7 @@
>>   */
>>  
>>  #include "common.h"
>> +#include "notify.h"
>>  
>>  enum scmi_power_protocol_cmd {
>>  	POWER_DOMAIN_ATTRIBUTES = 0x3,
>> @@ -48,6 +49,12 @@ struct scmi_power_state_notify {
>>  	__le32 notify_enable;
>>  };
>>  
>> +struct scmi_power_state_notify_payld {
>> +	__le32 agent_id;
>> +	__le32 domain_id;
>> +	__le32 power_state;
>> +};
>> +
>>  struct power_dom_info {
>>  	bool state_set_sync;
>>  	bool state_set_async;
>> @@ -63,6 +70,11 @@ struct scmi_power_info {
>>  	struct power_dom_info *dom_info;
>>  };
>>  
>> +static enum scmi_power_protocol_cmd evt_2_cmd[] = {
>> +	POWER_STATE_NOTIFY,
>> +	POWER_STATE_CHANGE_REQUESTED_NOTIFY,
>> +};
>> +
>>  static int scmi_power_attributes_get(const struct scmi_handle *handle,
>>  				     struct scmi_power_info *pi)
>>  {
>> @@ -186,6 +198,111 @@ static struct scmi_power_ops power_ops = {
>>  	.state_get = scmi_power_state_get,
>>  };
>>  
>> +static int scmi_power_request_notify(const struct scmi_handle *handle,
>> +				     u32 domain, int message_id, bool enable)
>> +{
>> +	int ret;
>> +	struct scmi_xfer *t;
>> +	struct scmi_power_state_notify *notify;
>> +
>> +	ret = scmi_xfer_get_init(handle, message_id, SCMI_PROTOCOL_POWER,
>> +				 sizeof(*notify), 0, &t);
>> +	if (ret)
>> +		return ret;
>> +
>> +	notify = t->tx.buf;
>> +	notify->domain = cpu_to_le32(domain);
>> +	notify->notify_enable = enable ? cpu_to_le32(BIT(0)) : 0;
>> +
>> +	ret = scmi_do_xfer(handle, t);
>> +
>> +	scmi_xfer_put(handle, t);
>> +	return ret;
>> +}
>> +
>> +static bool scmi_power_set_notify_enabled(const struct scmi_handle *handle,
>> +					  u8 evt_id, u32 src_id, bool enable)
>> +{
>> +	int ret, cmd_id;
>> +
>> +	cmd_id = MAP_EVT_TO_ENABLE_CMD(evt_id, evt_2_cmd);
>> +	if (cmd_id < 0)
>> +		return false;
>> +
>> +	ret = scmi_power_request_notify(handle, src_id, cmd_id, enable);
>> +	if (ret)
>> +		pr_warn("SCMI Notifications - Proto:%X - FAIL_ENABLE - evt[%X] dom[%d] - ret:%d\n",
>> +				SCMI_PROTOCOL_POWER, evt_id, src_id, ret);
>> +
>> +	return !ret ? true : false;
> 
> 	return !ret;
> 
> 	Is the same thing...
> 

ops...I'll fix

>> +}
>> +
>> +static void *scmi_power_fill_custom_report(u8 evt_id, u64 timestamp,
>> +					   const void *payld, size_t payld_sz,
>> +					   void *report, u32 *src_id)
>> +{
>> +	void *rep = NULL;
>> +
>> +	switch (evt_id) {
>> +	case POWER_STATE_CHANGED:
>> +	{
>> +		const struct scmi_power_state_notify_payld *p = payld;
>> +		struct scmi_power_state_changed_report *r = report;
>> +
>> +		if (sizeof(*p) != payld_sz)
>> +			break;
>> +
>> +		r->timestamp = timestamp;
>> +		r->agent_id = le32_to_cpu(p->agent_id);
>> +		r->domain_id = le32_to_cpu(p->domain_id);
>> +		r->power_state = le32_to_cpu(p->power_state);
>> +		*src_id = r->domain_id;
>> +		rep = r;
>> +		break;
>> +	}
>> +	case POWER_STATE_CHANGE_REQUESTED:
>> +	{
>> +		const struct scmi_power_state_notify_payld *p = payld;
>> +		struct scmi_power_state_change_requested_report *r = report;
>> +
>> +		if (sizeof(*p) != payld_sz)
>> +			break;
>> +
>> +		r->timestamp = timestamp;
>> +		r->agent_id = le32_to_cpu(p->agent_id);
>> +		r->domain_id = le32_to_cpu(p->domain_id);
>> +		r->power_state = le32_to_cpu(p->power_state);
>> +		*src_id = r->domain_id;
>> +		rep = r;
>> +		break;
>> +	}
>> +	default:
>> +		break;
>> +	}
>> +
>> +	return rep;
>> +}
>> +
>> +static const struct scmi_event power_events[] = {
>> +	{
>> +		.id = POWER_STATE_CHANGED,
>> +		.max_payld_sz = 12,
>> +		.max_report_sz =
>> +			sizeof(struct scmi_power_state_changed_report),
>> +	},
>> +	{
>> +		.id = POWER_STATE_CHANGE_REQUESTED,
>> +		.max_payld_sz = 12,
>> +		.max_report_sz =
>> +			sizeof(struct scmi_power_state_change_requested_report),
>> +	},
>> +};
>> +
>> +static const struct scmi_protocol_event_ops power_event_ops = {
>> +	.set_notify_enabled = scmi_power_set_notify_enabled,
>> +	.fill_custom_report = scmi_power_fill_custom_report,
>> +};
>> +
>>  static int scmi_power_protocol_init(struct scmi_handle *handle)
>>  {
>>  	int domain;
>> @@ -214,6 +331,12 @@ static int scmi_power_protocol_init(struct scmi_handle *handle)
>>  		scmi_power_domain_attributes_get(handle, domain, dom);
>>  	}
>>  
>> +	scmi_register_protocol_events(handle,
>> +				      SCMI_PROTOCOL_POWER, PAGE_SIZE,
>> +				      &power_event_ops, power_events,
>> +				      ARRAY_SIZE(power_events),
>> +				      pinfo->num_domains);
>> +
>>  	pinfo->version = version;
>>  	handle->power_ops = &power_ops;
>>  	handle->power_priv = pinfo;
>> diff --git a/include/linux/scmi_protocol.h b/include/linux/scmi_protocol.h
>> index 797e1e03ae52..baa117f9eda3 100644
>> --- a/include/linux/scmi_protocol.h
>> +++ b/include/linux/scmi_protocol.h
>> @@ -377,4 +377,19 @@ typedef int (*scmi_prot_init_fn_t)(struct scmi_handle *);
>>  int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn);
>>  void scmi_protocol_unregister(int protocol_id);
>>  
>> +/* SCMI Notification API - Custom Event Reports */
>> +struct scmi_power_state_changed_report {
>> +	ktime_t	timestamp;
>> +	u32	agent_id;
>> +	u32	domain_id;
>> +	u32	power_state;
>> +};
>> +
>> +struct scmi_power_state_change_requested_report {
>> +	ktime_t	timestamp;
>> +	u32	agent_id;
>> +	u32	domain_id;
>> +	u32	power_state;
>> +};
>> +
>>  #endif /* _LINUX_SCMI_PROTOCOL_H */
> 
> 


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
  2020-03-09 16:37       ` Cristian Marussi
@ 2020-03-10 10:01         ` Jonathan Cameron
  -1 siblings, 0 replies; 70+ messages in thread
From: Jonathan Cameron @ 2020-03-10 10:01 UTC (permalink / raw)
  To: Cristian Marussi
  Cc: linux-kernel, linux-arm-kernel, sudeep.holla, lukasz.luba, james.quinlan

On Mon, 9 Mar 2020 16:37:53 +0000
Cristian Marussi <cristian.marussi@arm.com> wrote:

> Hi
> 
> On 09/03/2020 12:26, Jonathan Cameron wrote:
> > On Wed, 4 Mar 2020 16:25:52 +0000
> > Cristian Marussi <cristian.marussi@arm.com> wrote:
> >   
> >> Add core SCMI Notifications dispatch and delivery support logic which is
> >> able, at first, to dispatch well-known received events from the RX ISR to
> >> the dedicated deferred worker, and then, from there, to final deliver the
> >> events to the registered users' callbacks.
> >>
> >> Dispatch and delivery is just added here, still not enabled.
> >>
> >> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>  
> > 
> > Hmm.  Doing that magic in_flight stuff looks fine, but it feels like
> > the wrong way to approach a problem which is down to the lack of
> > atomicity of the kfifo_in pair.   Could we just make that atomic via
> > a bit of custom manipulation of the kfifo?
> > 
> > The snag is that stuff isn't exported from the innards of kfifo...  
> 
> My initial approach till v3 was to collate header and payload in a pre-allocated
> scratch buffer and then doing a single kfifo_in so as to avoid to worry about workqueue
> empting the kfifo and going to sleep right after a header has been read and a payload is
> in flight, but that, as pointed out indirectly by Jim Quinlan led to an unneeded memcpy...
> in fact I was copying in/out the fifo a total of 2*h + 3*p bytes, instead with this handling
> I can avoid such intermediate collation step and stick to the bare minimum needed 2*h + 2*p
> bytes memcopies.
> 
> On one side I was worried to make the code complex to avoid just a few bytes of memcpy, on the other
> side the redundant memcpy is on the ISR side and also I cannot assume that the unneded p bytes
> copied there are necessarily small ... being SMCI extensible you could possibly add a proprietary
> (or not) protocol with jumbo payloads of KBs so that the p-bytes redundant copy is no more so
> negligible.
> 
> At the end I did not find so horrible and complex the new in flight handling (tested introducing
> horrible mdelays in between the kfifo_inS inside the ISR...), so I went for that.
> 
> > 
> > Maybe what you have here is the best option.
> >   
> 
> I like the solution you propose down below, but the fact that it relies on the inner kfifo function
> is in fact a show stopper being based on the inernal api (and I have not found other viable ways to
> abuse the kfifo API :D ... as of now)...I wonder if it is not worth propose upstream (not in this series)
> a generic kfifo "light" scatter/gather in/out interface for this particular usecase; _kfifo_dma* seem to use
> the full fledged scatter/gather kernel structs, but that's certainly overkill for this scenario.

Seems sensible to me.  Whether via scatterlists is needed, I'm not sure.
Seems to me that most usecases will be header / payload like you have here.

The fiddly stuff in kfifo is always dealing with handling both variable and fixed
size versions.  Here we probably just want to reject the fixed size option at
compile time as it doesn't make sense.

Certainly worth exploring if the kfifo maintainers will allow this sort of interface.

Jonathan

> 
> Thanks
> 
> Cristian
> 
> > Jonathan
> >   
> >> ---
> >> V3 --> V4
> >> - dispatcher now handles dequeuing of events in chunks (header+payload):
> >>   handling of these in_flight events let us remove one unneeded memcpy
> >>   on RX interrupt path (scmi_notify)
> >> - deferred dispatcher now access their own per-protocol handlers' table
> >>   reducing locking contention on the RX path
> >> V2 --> V3
> >> - exposing wq in sysfs via WQ_SYSFS
> >> V1 --> V2
> >> - splitted out of V1 patch 04
> >> - moved from IDR maps to real HashTables to store event_handlers
> >> - simplified delivery logic
> >> ---
> >>  drivers/firmware/arm_scmi/notify.c | 334 ++++++++++++++++++++++++++++-
> >>  drivers/firmware/arm_scmi/notify.h |   9 +
> >>  2 files changed, 342 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
> >> index d6c08cce3c63..0854d48d5886 100644
> >> --- a/drivers/firmware/arm_scmi/notify.c
> >> +++ b/drivers/firmware/arm_scmi/notify.c
> >> @@ -44,6 +44,27 @@
> >>   * as described in the SCMI Protocol specification, while src_id represents an
> >>   * optional, protocol dependent, source identifier (like domain_id, perf_id
> >>   * or sensor_id and so forth).
> >> + *
> >> + * Upon reception of a notification message from the platform the SCMI RX ISR  
> [snip]
> 
> >> +	if (unlikely(len > r_evt->evt->max_payld_sz)) {
> >> +		pr_err("SCMI Notifications: discard badly sized message\n");
> >> +		return -EINVAL;
> >> +	}
> >> +	if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) <
> >> +		     sizeof(eh) + len)) {
> >> +		pr_warn("SCMI Notifications: queue full dropping proto_id:%d  evt_id:%d  ts:%lld\n",
> >> +			proto_id, evt_id, ts);
> >> +		return -ENOMEM;
> >> +	}
> >> +
> >> +	eh.timestamp = ts;
> >> +	eh.evt_id = evt_id;
> >> +	eh.payld_sz = len;
> >> +	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));  
> > 
> > I'd add a comment that this potential race here is the reason (I think) for all
> > the inflight handling above.
> > 
> > Either that or create a kfifo_in_pair_unsafe that just makes these atomic by only
> > updating the kfifo->in point after adding both parts.
> > 
> > It will be as simple as (I think, kfifo magic always give me a headache).
> > {
> > 	struct __kfifo *__kfifo = &kfifo->kfifo;
> > 	kfifo_copy_in(fifo, &eh, sizeof(eh), fifo->in);
> > 	kfifo_copy_in(fifo, &buf, len, fifo->in + sizeof(eh));
> > 	fifo->in += len + sizeof(eh);
> > }
> > 
> > It's unsafe because crazy things will happen if there isn't enough room, but you
> > can't get there in this code because of the check above and we are making
> > horrendous assumptions about the kfifo type.
> >   
> 
> As said above.
> >> +	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
> >> +	queue_work(r_evt->proto->equeue.wq,
> >> +		   &r_evt->proto->equeue.notify_work);
> >> +
> >> +	return 0;
> >> +}
> >> +
> >>  /**
> >>   * scmi_initialize_events_queue  - Allocate/Initialize a kfifo buffer
> >>   *
> >> @@ -332,12 +604,21 @@ static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
> >>  static int scmi_initialize_events_queue(struct scmi_notify_instance *ni,
> >>  					struct events_queue *equeue, size_t sz)
> >>  {
> >> +	int ret = 0;  
> > 
> > ret looks to be always initialized below.
> >   
> 
> Right.
> >> +
> >>  	equeue->qbuf = devm_kzalloc(ni->handle->dev, sz, GFP_KERNEL);
> >>  	if (!equeue->qbuf)
> >>  		return -ENOMEM;
> >>  	equeue->sz = sz;
> >>  
> >> -	return kfifo_init(&equeue->kfifo, equeue->qbuf, equeue->sz);
> >> +	ret = kfifo_init(&equeue->kfifo, equeue->qbuf, equeue->sz);
> >> +	if (ret)
> >> +		return ret;
> >> +
> >> +	INIT_WORK(&equeue->notify_work, scmi_events_dispatcher);
> >> +	equeue->wq = ni->notify_wq;
> >> +
> >> +	return ret;
> >>  }
> >>  
> >>  /**
> >> @@ -740,6 +1021,38 @@ scmi_get_or_create_handler(struct scmi_notify_instance *ni, u32 evt_key)
> >>  	return __scmi_event_handler_get_ops(ni, evt_key, true);
> >>  }
> >>  
> >> +/**
> >> + * scmi_get_active_handler  - Helper to get active handlers only
> >> + *
> >> + * Search for the desired handler matching the key only in the per-protocol
> >> + * table of registered handlers: this is called only from the dispatching path
> >> + * so want to be as quick as possible and do not care about pending.
> >> + *
> >> + * @ni: A reference to the notification instance to use
> >> + * @evt_key: The event key to use
> >> + *
> >> + * Return: A properly refcounted active handler
> >> + */
> >> +static struct scmi_event_handler *
> >> +scmi_get_active_handler(struct scmi_notify_instance *ni, u32 evt_key)
> >> +{
> >> +	struct scmi_registered_event *r_evt;
> >> +	struct scmi_event_handler *hndl = NULL;
> >> +
> >> +	r_evt = SCMI_GET_REVT(ni, KEY_XTRACT_PROTO_ID(evt_key),
> >> +			      KEY_XTRACT_EVT_ID(evt_key));
> >> +	if (likely(r_evt)) {
> >> +		mutex_lock(&r_evt->proto->registered_mtx);
> >> +		hndl = KEY_FIND(r_evt->proto->registered_events_handlers,
> >> +				hndl, evt_key);
> >> +		if (likely(hndl))
> >> +			refcount_inc(&hndl->users);
> >> +		mutex_unlock(&r_evt->proto->registered_mtx);
> >> +	}
> >> +
> >> +	return hndl;
> >> +}
> >> +
> >>  /**
> >>   * __scmi_enable_evt  - Enable/disable events generation
> >>   *
> >> @@ -861,6 +1174,16 @@ static void scmi_put_handler(struct scmi_notify_instance *ni,
> >>  	mutex_unlock(&ni->pending_mtx);
> >>  }
> >>  
> >> +static void scmi_put_active_handler(struct scmi_notify_instance *ni,
> >> +					  struct scmi_event_handler *hndl)
> >> +{
> >> +	struct scmi_registered_event *r_evt = hndl->r_evt;
> >> +
> >> +	mutex_lock(&r_evt->proto->registered_mtx);
> >> +	scmi_put_handler_unlocked(ni, hndl);
> >> +	mutex_unlock(&r_evt->proto->registered_mtx);
> >> +}
> >> +
> >>  /**
> >>   * scmi_event_handler_enable_events  - Enable events associated to an handler
> >>   *
> >> @@ -1087,6 +1410,12 @@ int scmi_notification_init(struct scmi_handle *handle)
> >>  	ni->gid = gid;
> >>  	ni->handle = handle;
> >>  
> >> +	ni->notify_wq = alloc_workqueue("scmi_notify",
> >> +					WQ_UNBOUND | WQ_FREEZABLE | WQ_SYSFS,
> >> +					0);
> >> +	if (!ni->notify_wq)
> >> +		goto err;
> >> +
> >>  	ni->registered_protocols = devm_kcalloc(handle->dev, SCMI_MAX_PROTO,
> >>  						sizeof(char *), GFP_KERNEL);
> >>  	if (!ni->registered_protocols)
> >> @@ -1133,6 +1462,9 @@ void scmi_notification_exit(struct scmi_handle *handle)
> >>  	/* Ensure atomic values are updated */
> >>  	smp_mb__after_atomic();
> >>  
> >> +	/* Destroy while letting pending work complete */
> >> +	destroy_workqueue(ni->notify_wq);
> >> +
> >>  	devres_release_group(ni->handle->dev, ni->gid);
> >>  
> >>  	pr_info("SCMI Notifications Core Shutdown.\n");
> >> diff --git a/drivers/firmware/arm_scmi/notify.h b/drivers/firmware/arm_scmi/notify.h
> >> index f765acda2311..6cd386649d5a 100644
> >> --- a/drivers/firmware/arm_scmi/notify.h
> >> +++ b/drivers/firmware/arm_scmi/notify.h
> >> @@ -51,10 +51,17 @@ struct scmi_event {
> >>   *			using the proper custom protocol commands.
> >>   *			Return true if at least one the required src_id
> >>   *			has been successfully enabled/disabled
> >> + * @fill_custom_report: fills a custom event report from the provided
> >> + *			event message payld identifying the event
> >> + *			specific src_id.
> >> + *			Return NULL on failure otherwise @report now fully
> >> + *			populated
> >>   */
> >>  struct scmi_protocol_event_ops {
> >>  	bool (*set_notify_enabled)(const struct scmi_handle *handle,
> >>  				   u8 evt_id, u32 src_id, bool enabled);
> >> +	void *(*fill_custom_report)(u8 evt_id, u64 timestamp, const void *payld,
> >> +				    size_t payld_sz, void *report, u32 *src_id);
> >>  };
> >>  
> >>  int scmi_notification_init(struct scmi_handle *handle);
> >> @@ -65,5 +72,7 @@ int scmi_register_protocol_events(const struct scmi_handle *handle,
> >>  				  const struct scmi_protocol_event_ops *ops,
> >>  				  const struct scmi_event *evt, int num_events,
> >>  				  int num_sources);
> >> +int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
> >> +		const void *buf, size_t len, u64 ts);
> >>  
> >>  #endif /* _SCMI_NOTIFY_H */  
> > 
> >   
> 



^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
@ 2020-03-10 10:01         ` Jonathan Cameron
  0 siblings, 0 replies; 70+ messages in thread
From: Jonathan Cameron @ 2020-03-10 10:01 UTC (permalink / raw)
  To: Cristian Marussi
  Cc: james.quinlan, lukasz.luba, linux-kernel, linux-arm-kernel, sudeep.holla

On Mon, 9 Mar 2020 16:37:53 +0000
Cristian Marussi <cristian.marussi@arm.com> wrote:

> Hi
> 
> On 09/03/2020 12:26, Jonathan Cameron wrote:
> > On Wed, 4 Mar 2020 16:25:52 +0000
> > Cristian Marussi <cristian.marussi@arm.com> wrote:
> >   
> >> Add core SCMI Notifications dispatch and delivery support logic which is
> >> able, at first, to dispatch well-known received events from the RX ISR to
> >> the dedicated deferred worker, and then, from there, to final deliver the
> >> events to the registered users' callbacks.
> >>
> >> Dispatch and delivery is just added here, still not enabled.
> >>
> >> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>  
> > 
> > Hmm.  Doing that magic in_flight stuff looks fine, but it feels like
> > the wrong way to approach a problem which is down to the lack of
> > atomicity of the kfifo_in pair.   Could we just make that atomic via
> > a bit of custom manipulation of the kfifo?
> > 
> > The snag is that stuff isn't exported from the innards of kfifo...  
> 
> My initial approach till v3 was to collate header and payload in a pre-allocated
> scratch buffer and then doing a single kfifo_in so as to avoid to worry about workqueue
> empting the kfifo and going to sleep right after a header has been read and a payload is
> in flight, but that, as pointed out indirectly by Jim Quinlan led to an unneeded memcpy...
> in fact I was copying in/out the fifo a total of 2*h + 3*p bytes, instead with this handling
> I can avoid such intermediate collation step and stick to the bare minimum needed 2*h + 2*p
> bytes memcopies.
> 
> On one side I was worried to make the code complex to avoid just a few bytes of memcpy, on the other
> side the redundant memcpy is on the ISR side and also I cannot assume that the unneded p bytes
> copied there are necessarily small ... being SMCI extensible you could possibly add a proprietary
> (or not) protocol with jumbo payloads of KBs so that the p-bytes redundant copy is no more so
> negligible.
> 
> At the end I did not find so horrible and complex the new in flight handling (tested introducing
> horrible mdelays in between the kfifo_inS inside the ISR...), so I went for that.
> 
> > 
> > Maybe what you have here is the best option.
> >   
> 
> I like the solution you propose down below, but the fact that it relies on the inner kfifo function
> is in fact a show stopper being based on the inernal api (and I have not found other viable ways to
> abuse the kfifo API :D ... as of now)...I wonder if it is not worth propose upstream (not in this series)
> a generic kfifo "light" scatter/gather in/out interface for this particular usecase; _kfifo_dma* seem to use
> the full fledged scatter/gather kernel structs, but that's certainly overkill for this scenario.

Seems sensible to me.  Whether via scatterlists is needed, I'm not sure.
Seems to me that most usecases will be header / payload like you have here.

The fiddly stuff in kfifo is always dealing with handling both variable and fixed
size versions.  Here we probably just want to reject the fixed size option at
compile time as it doesn't make sense.

Certainly worth exploring if the kfifo maintainers will allow this sort of interface.

Jonathan

> 
> Thanks
> 
> Cristian
> 
> > Jonathan
> >   
> >> ---
> >> V3 --> V4
> >> - dispatcher now handles dequeuing of events in chunks (header+payload):
> >>   handling of these in_flight events let us remove one unneeded memcpy
> >>   on RX interrupt path (scmi_notify)
> >> - deferred dispatcher now access their own per-protocol handlers' table
> >>   reducing locking contention on the RX path
> >> V2 --> V3
> >> - exposing wq in sysfs via WQ_SYSFS
> >> V1 --> V2
> >> - splitted out of V1 patch 04
> >> - moved from IDR maps to real HashTables to store event_handlers
> >> - simplified delivery logic
> >> ---
> >>  drivers/firmware/arm_scmi/notify.c | 334 ++++++++++++++++++++++++++++-
> >>  drivers/firmware/arm_scmi/notify.h |   9 +
> >>  2 files changed, 342 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
> >> index d6c08cce3c63..0854d48d5886 100644
> >> --- a/drivers/firmware/arm_scmi/notify.c
> >> +++ b/drivers/firmware/arm_scmi/notify.c
> >> @@ -44,6 +44,27 @@
> >>   * as described in the SCMI Protocol specification, while src_id represents an
> >>   * optional, protocol dependent, source identifier (like domain_id, perf_id
> >>   * or sensor_id and so forth).
> >> + *
> >> + * Upon reception of a notification message from the platform the SCMI RX ISR  
> [snip]
> 
> >> +	if (unlikely(len > r_evt->evt->max_payld_sz)) {
> >> +		pr_err("SCMI Notifications: discard badly sized message\n");
> >> +		return -EINVAL;
> >> +	}
> >> +	if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) <
> >> +		     sizeof(eh) + len)) {
> >> +		pr_warn("SCMI Notifications: queue full dropping proto_id:%d  evt_id:%d  ts:%lld\n",
> >> +			proto_id, evt_id, ts);
> >> +		return -ENOMEM;
> >> +	}
> >> +
> >> +	eh.timestamp = ts;
> >> +	eh.evt_id = evt_id;
> >> +	eh.payld_sz = len;
> >> +	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));  
> > 
> > I'd add a comment that this potential race here is the reason (I think) for all
> > the inflight handling above.
> > 
> > Either that or create a kfifo_in_pair_unsafe that just makes these atomic by only
> > updating the kfifo->in point after adding both parts.
> > 
> > It will be as simple as (I think, kfifo magic always give me a headache).
> > {
> > 	struct __kfifo *__kfifo = &kfifo->kfifo;
> > 	kfifo_copy_in(fifo, &eh, sizeof(eh), fifo->in);
> > 	kfifo_copy_in(fifo, &buf, len, fifo->in + sizeof(eh));
> > 	fifo->in += len + sizeof(eh);
> > }
> > 
> > It's unsafe because crazy things will happen if there isn't enough room, but you
> > can't get there in this code because of the check above and we are making
> > horrendous assumptions about the kfifo type.
> >   
> 
> As said above.
> >> +	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
> >> +	queue_work(r_evt->proto->equeue.wq,
> >> +		   &r_evt->proto->equeue.notify_work);
> >> +
> >> +	return 0;
> >> +}
> >> +
> >>  /**
> >>   * scmi_initialize_events_queue  - Allocate/Initialize a kfifo buffer
> >>   *
> >> @@ -332,12 +604,21 @@ static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
> >>  static int scmi_initialize_events_queue(struct scmi_notify_instance *ni,
> >>  					struct events_queue *equeue, size_t sz)
> >>  {
> >> +	int ret = 0;  
> > 
> > ret looks to be always initialized below.
> >   
> 
> Right.
> >> +
> >>  	equeue->qbuf = devm_kzalloc(ni->handle->dev, sz, GFP_KERNEL);
> >>  	if (!equeue->qbuf)
> >>  		return -ENOMEM;
> >>  	equeue->sz = sz;
> >>  
> >> -	return kfifo_init(&equeue->kfifo, equeue->qbuf, equeue->sz);
> >> +	ret = kfifo_init(&equeue->kfifo, equeue->qbuf, equeue->sz);
> >> +	if (ret)
> >> +		return ret;
> >> +
> >> +	INIT_WORK(&equeue->notify_work, scmi_events_dispatcher);
> >> +	equeue->wq = ni->notify_wq;
> >> +
> >> +	return ret;
> >>  }
> >>  
> >>  /**
> >> @@ -740,6 +1021,38 @@ scmi_get_or_create_handler(struct scmi_notify_instance *ni, u32 evt_key)
> >>  	return __scmi_event_handler_get_ops(ni, evt_key, true);
> >>  }
> >>  
> >> +/**
> >> + * scmi_get_active_handler  - Helper to get active handlers only
> >> + *
> >> + * Search for the desired handler matching the key only in the per-protocol
> >> + * table of registered handlers: this is called only from the dispatching path
> >> + * so want to be as quick as possible and do not care about pending.
> >> + *
> >> + * @ni: A reference to the notification instance to use
> >> + * @evt_key: The event key to use
> >> + *
> >> + * Return: A properly refcounted active handler
> >> + */
> >> +static struct scmi_event_handler *
> >> +scmi_get_active_handler(struct scmi_notify_instance *ni, u32 evt_key)
> >> +{
> >> +	struct scmi_registered_event *r_evt;
> >> +	struct scmi_event_handler *hndl = NULL;
> >> +
> >> +	r_evt = SCMI_GET_REVT(ni, KEY_XTRACT_PROTO_ID(evt_key),
> >> +			      KEY_XTRACT_EVT_ID(evt_key));
> >> +	if (likely(r_evt)) {
> >> +		mutex_lock(&r_evt->proto->registered_mtx);
> >> +		hndl = KEY_FIND(r_evt->proto->registered_events_handlers,
> >> +				hndl, evt_key);
> >> +		if (likely(hndl))
> >> +			refcount_inc(&hndl->users);
> >> +		mutex_unlock(&r_evt->proto->registered_mtx);
> >> +	}
> >> +
> >> +	return hndl;
> >> +}
> >> +
> >>  /**
> >>   * __scmi_enable_evt  - Enable/disable events generation
> >>   *
> >> @@ -861,6 +1174,16 @@ static void scmi_put_handler(struct scmi_notify_instance *ni,
> >>  	mutex_unlock(&ni->pending_mtx);
> >>  }
> >>  
> >> +static void scmi_put_active_handler(struct scmi_notify_instance *ni,
> >> +					  struct scmi_event_handler *hndl)
> >> +{
> >> +	struct scmi_registered_event *r_evt = hndl->r_evt;
> >> +
> >> +	mutex_lock(&r_evt->proto->registered_mtx);
> >> +	scmi_put_handler_unlocked(ni, hndl);
> >> +	mutex_unlock(&r_evt->proto->registered_mtx);
> >> +}
> >> +
> >>  /**
> >>   * scmi_event_handler_enable_events  - Enable events associated to an handler
> >>   *
> >> @@ -1087,6 +1410,12 @@ int scmi_notification_init(struct scmi_handle *handle)
> >>  	ni->gid = gid;
> >>  	ni->handle = handle;
> >>  
> >> +	ni->notify_wq = alloc_workqueue("scmi_notify",
> >> +					WQ_UNBOUND | WQ_FREEZABLE | WQ_SYSFS,
> >> +					0);
> >> +	if (!ni->notify_wq)
> >> +		goto err;
> >> +
> >>  	ni->registered_protocols = devm_kcalloc(handle->dev, SCMI_MAX_PROTO,
> >>  						sizeof(char *), GFP_KERNEL);
> >>  	if (!ni->registered_protocols)
> >> @@ -1133,6 +1462,9 @@ void scmi_notification_exit(struct scmi_handle *handle)
> >>  	/* Ensure atomic values are updated */
> >>  	smp_mb__after_atomic();
> >>  
> >> +	/* Destroy while letting pending work complete */
> >> +	destroy_workqueue(ni->notify_wq);
> >> +
> >>  	devres_release_group(ni->handle->dev, ni->gid);
> >>  
> >>  	pr_info("SCMI Notifications Core Shutdown.\n");
> >> diff --git a/drivers/firmware/arm_scmi/notify.h b/drivers/firmware/arm_scmi/notify.h
> >> index f765acda2311..6cd386649d5a 100644
> >> --- a/drivers/firmware/arm_scmi/notify.h
> >> +++ b/drivers/firmware/arm_scmi/notify.h
> >> @@ -51,10 +51,17 @@ struct scmi_event {
> >>   *			using the proper custom protocol commands.
> >>   *			Return true if at least one the required src_id
> >>   *			has been successfully enabled/disabled
> >> + * @fill_custom_report: fills a custom event report from the provided
> >> + *			event message payld identifying the event
> >> + *			specific src_id.
> >> + *			Return NULL on failure otherwise @report now fully
> >> + *			populated
> >>   */
> >>  struct scmi_protocol_event_ops {
> >>  	bool (*set_notify_enabled)(const struct scmi_handle *handle,
> >>  				   u8 evt_id, u32 src_id, bool enabled);
> >> +	void *(*fill_custom_report)(u8 evt_id, u64 timestamp, const void *payld,
> >> +				    size_t payld_sz, void *report, u32 *src_id);
> >>  };
> >>  
> >>  int scmi_notification_init(struct scmi_handle *handle);
> >> @@ -65,5 +72,7 @@ int scmi_register_protocol_events(const struct scmi_handle *handle,
> >>  				  const struct scmi_protocol_event_ops *ops,
> >>  				  const struct scmi_event *evt, int num_events,
> >>  				  int num_sources);
> >> +int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
> >> +		const void *buf, size_t len, u64 ts);
> >>  
> >>  #endif /* _SCMI_NOTIFY_H */  
> > 
> >   
> 



_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
  2020-03-04 16:25   ` Cristian Marussi
@ 2020-03-12 13:51     ` Lukasz Luba
  -1 siblings, 0 replies; 70+ messages in thread
From: Lukasz Luba @ 2020-03-12 13:51 UTC (permalink / raw)
  To: Cristian Marussi, linux-kernel, linux-arm-kernel
  Cc: sudeep.holla, james.quinlan, Jonathan.Cameron

Hi Cristian,

just one comment below...

On 3/4/20 4:25 PM, Cristian Marussi wrote:
> Add core SCMI Notifications dispatch and delivery support logic which is
> able, at first, to dispatch well-known received events from the RX ISR to
> the dedicated deferred worker, and then, from there, to final deliver the
> events to the registered users' callbacks.
> 
> Dispatch and delivery is just added here, still not enabled.
> 
> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
> ---
> V3 --> V4
> - dispatcher now handles dequeuing of events in chunks (header+payload):
>    handling of these in_flight events let us remove one unneeded memcpy
>    on RX interrupt path (scmi_notify)
> - deferred dispatcher now access their own per-protocol handlers' table
>    reducing locking contention on the RX path
> V2 --> V3
> - exposing wq in sysfs via WQ_SYSFS
> V1 --> V2
> - splitted out of V1 patch 04
> - moved from IDR maps to real HashTables to store event_handlers
> - simplified delivery logic
> ---
>   drivers/firmware/arm_scmi/notify.c | 334 ++++++++++++++++++++++++++++-
>   drivers/firmware/arm_scmi/notify.h |   9 +
>   2 files changed, 342 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c

[snip]

> +
> +/**
> + * scmi_notify  - Queues a notification for further deferred processing
> + *
> + * This is called in interrupt context to queue a received event for
> + * deferred processing.
> + *
> + * @handle: The handle identifying the platform instance from which the
> + *	    dispatched event is generated
> + * @proto_id: Protocol ID
> + * @evt_id: Event ID (msgID)
> + * @buf: Event Message Payload (without the header)
> + * @len: Event Message Payload size
> + * @ts: RX Timestamp in nanoseconds (boottime)
> + *
> + * Return: 0 on Success
> + */
> +int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
> +		const void *buf, size_t len, u64 ts)
> +{
> +	struct scmi_registered_event *r_evt;
> +	struct scmi_event_header eh;
> +	struct scmi_notify_instance *ni = handle->notify_priv;
> +
> +	/* Ensure atomic value is updated */
> +	smp_mb__before_atomic();
> +	if (unlikely(!atomic_read(&ni->enabled)))
> +		return 0;
> +
> +	r_evt = SCMI_GET_REVT(ni, proto_id, evt_id);
> +	if (unlikely(!r_evt))
> +		return -EINVAL;
> +
> +	if (unlikely(len > r_evt->evt->max_payld_sz)) {
> +		pr_err("SCMI Notifications: discard badly sized message\n");
> +		return -EINVAL;
> +	}
> +	if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) <
> +		     sizeof(eh) + len)) {
> +		pr_warn("SCMI Notifications: queue full dropping proto_id:%d  evt_id:%d  ts:%lld\n",
> +			proto_id, evt_id, ts);
> +		return -ENOMEM;
> +	}
> +
> +	eh.timestamp = ts;
> +	eh.evt_id = evt_id;
> +	eh.payld_sz = len;
> +	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
> +	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
> +	queue_work(r_evt->proto->equeue.wq,
> +		   &r_evt->proto->equeue.notify_work);

Is it safe to ignore the return value from the queue_work here?

Regards,
Lukasz



^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
@ 2020-03-12 13:51     ` Lukasz Luba
  0 siblings, 0 replies; 70+ messages in thread
From: Lukasz Luba @ 2020-03-12 13:51 UTC (permalink / raw)
  To: Cristian Marussi, linux-kernel, linux-arm-kernel
  Cc: Jonathan.Cameron, james.quinlan, sudeep.holla

Hi Cristian,

just one comment below...

On 3/4/20 4:25 PM, Cristian Marussi wrote:
> Add core SCMI Notifications dispatch and delivery support logic which is
> able, at first, to dispatch well-known received events from the RX ISR to
> the dedicated deferred worker, and then, from there, to final deliver the
> events to the registered users' callbacks.
> 
> Dispatch and delivery is just added here, still not enabled.
> 
> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
> ---
> V3 --> V4
> - dispatcher now handles dequeuing of events in chunks (header+payload):
>    handling of these in_flight events let us remove one unneeded memcpy
>    on RX interrupt path (scmi_notify)
> - deferred dispatcher now access their own per-protocol handlers' table
>    reducing locking contention on the RX path
> V2 --> V3
> - exposing wq in sysfs via WQ_SYSFS
> V1 --> V2
> - splitted out of V1 patch 04
> - moved from IDR maps to real HashTables to store event_handlers
> - simplified delivery logic
> ---
>   drivers/firmware/arm_scmi/notify.c | 334 ++++++++++++++++++++++++++++-
>   drivers/firmware/arm_scmi/notify.h |   9 +
>   2 files changed, 342 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c

[snip]

> +
> +/**
> + * scmi_notify  - Queues a notification for further deferred processing
> + *
> + * This is called in interrupt context to queue a received event for
> + * deferred processing.
> + *
> + * @handle: The handle identifying the platform instance from which the
> + *	    dispatched event is generated
> + * @proto_id: Protocol ID
> + * @evt_id: Event ID (msgID)
> + * @buf: Event Message Payload (without the header)
> + * @len: Event Message Payload size
> + * @ts: RX Timestamp in nanoseconds (boottime)
> + *
> + * Return: 0 on Success
> + */
> +int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
> +		const void *buf, size_t len, u64 ts)
> +{
> +	struct scmi_registered_event *r_evt;
> +	struct scmi_event_header eh;
> +	struct scmi_notify_instance *ni = handle->notify_priv;
> +
> +	/* Ensure atomic value is updated */
> +	smp_mb__before_atomic();
> +	if (unlikely(!atomic_read(&ni->enabled)))
> +		return 0;
> +
> +	r_evt = SCMI_GET_REVT(ni, proto_id, evt_id);
> +	if (unlikely(!r_evt))
> +		return -EINVAL;
> +
> +	if (unlikely(len > r_evt->evt->max_payld_sz)) {
> +		pr_err("SCMI Notifications: discard badly sized message\n");
> +		return -EINVAL;
> +	}
> +	if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) <
> +		     sizeof(eh) + len)) {
> +		pr_warn("SCMI Notifications: queue full dropping proto_id:%d  evt_id:%d  ts:%lld\n",
> +			proto_id, evt_id, ts);
> +		return -ENOMEM;
> +	}
> +
> +	eh.timestamp = ts;
> +	eh.evt_id = evt_id;
> +	eh.payld_sz = len;
> +	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
> +	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
> +	queue_work(r_evt->proto->equeue.wq,
> +		   &r_evt->proto->equeue.notify_work);

Is it safe to ignore the return value from the queue_work here?

Regards,
Lukasz



_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
  2020-03-12 13:51     ` Lukasz Luba
@ 2020-03-12 14:06       ` Lukasz Luba
  -1 siblings, 0 replies; 70+ messages in thread
From: Lukasz Luba @ 2020-03-12 14:06 UTC (permalink / raw)
  To: Cristian Marussi, linux-kernel, linux-arm-kernel
  Cc: sudeep.holla, james.quinlan, Jonathan.Cameron



On 3/12/20 1:51 PM, Lukasz Luba wrote:
> Hi Cristian,
> 
> just one comment below...
> 
> On 3/4/20 4:25 PM, Cristian Marussi wrote:
>> Add core SCMI Notifications dispatch and delivery support logic which is
>> able, at first, to dispatch well-known received events from the RX ISR to
>> the dedicated deferred worker, and then, from there, to final deliver the
>> events to the registered users' callbacks.
>>
>> Dispatch and delivery is just added here, still not enabled.
>>
>> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
>> ---
>> V3 --> V4
>> - dispatcher now handles dequeuing of events in chunks (header+payload):
>>    handling of these in_flight events let us remove one unneeded memcpy
>>    on RX interrupt path (scmi_notify)
>> - deferred dispatcher now access their own per-protocol handlers' table
>>    reducing locking contention on the RX path
>> V2 --> V3
>> - exposing wq in sysfs via WQ_SYSFS
>> V1 --> V2
>> - splitted out of V1 patch 04
>> - moved from IDR maps to real HashTables to store event_handlers
>> - simplified delivery logic
>> ---
>>   drivers/firmware/arm_scmi/notify.c | 334 ++++++++++++++++++++++++++++-
>>   drivers/firmware/arm_scmi/notify.h |   9 +
>>   2 files changed, 342 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/firmware/arm_scmi/notify.c 
>> b/drivers/firmware/arm_scmi/notify.c
> 
> [snip]
> 
>> +
>> +/**
>> + * scmi_notify  - Queues a notification for further deferred processing
>> + *
>> + * This is called in interrupt context to queue a received event for
>> + * deferred processing.
>> + *
>> + * @handle: The handle identifying the platform instance from which the
>> + *        dispatched event is generated
>> + * @proto_id: Protocol ID
>> + * @evt_id: Event ID (msgID)
>> + * @buf: Event Message Payload (without the header)
>> + * @len: Event Message Payload size
>> + * @ts: RX Timestamp in nanoseconds (boottime)
>> + *
>> + * Return: 0 on Success
>> + */
>> +int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 
>> evt_id,
>> +        const void *buf, size_t len, u64 ts)
>> +{
>> +    struct scmi_registered_event *r_evt;
>> +    struct scmi_event_header eh;
>> +    struct scmi_notify_instance *ni = handle->notify_priv;
>> +
>> +    /* Ensure atomic value is updated */
>> +    smp_mb__before_atomic();
>> +    if (unlikely(!atomic_read(&ni->enabled)))
>> +        return 0;
>> +
>> +    r_evt = SCMI_GET_REVT(ni, proto_id, evt_id);
>> +    if (unlikely(!r_evt))
>> +        return -EINVAL;
>> +
>> +    if (unlikely(len > r_evt->evt->max_payld_sz)) {
>> +        pr_err("SCMI Notifications: discard badly sized message\n");
>> +        return -EINVAL;
>> +    }
>> +    if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) <
>> +             sizeof(eh) + len)) {
>> +        pr_warn("SCMI Notifications: queue full dropping proto_id:%d  
>> evt_id:%d  ts:%lld\n",
>> +            proto_id, evt_id, ts);
>> +        return -ENOMEM;
>> +    }
>> +
>> +    eh.timestamp = ts;
>> +    eh.evt_id = evt_id;
>> +    eh.payld_sz = len;
>> +    kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
>> +    kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
>> +    queue_work(r_evt->proto->equeue.wq,
>> +           &r_evt->proto->equeue.notify_work);
> 
> Is it safe to ignore the return value from the queue_work here?

and also from the kfifo_in

> 
> Regards,
> Lukasz
> 
> 

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
@ 2020-03-12 14:06       ` Lukasz Luba
  0 siblings, 0 replies; 70+ messages in thread
From: Lukasz Luba @ 2020-03-12 14:06 UTC (permalink / raw)
  To: Cristian Marussi, linux-kernel, linux-arm-kernel
  Cc: Jonathan.Cameron, james.quinlan, sudeep.holla



On 3/12/20 1:51 PM, Lukasz Luba wrote:
> Hi Cristian,
> 
> just one comment below...
> 
> On 3/4/20 4:25 PM, Cristian Marussi wrote:
>> Add core SCMI Notifications dispatch and delivery support logic which is
>> able, at first, to dispatch well-known received events from the RX ISR to
>> the dedicated deferred worker, and then, from there, to final deliver the
>> events to the registered users' callbacks.
>>
>> Dispatch and delivery is just added here, still not enabled.
>>
>> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
>> ---
>> V3 --> V4
>> - dispatcher now handles dequeuing of events in chunks (header+payload):
>>    handling of these in_flight events let us remove one unneeded memcpy
>>    on RX interrupt path (scmi_notify)
>> - deferred dispatcher now access their own per-protocol handlers' table
>>    reducing locking contention on the RX path
>> V2 --> V3
>> - exposing wq in sysfs via WQ_SYSFS
>> V1 --> V2
>> - splitted out of V1 patch 04
>> - moved from IDR maps to real HashTables to store event_handlers
>> - simplified delivery logic
>> ---
>>   drivers/firmware/arm_scmi/notify.c | 334 ++++++++++++++++++++++++++++-
>>   drivers/firmware/arm_scmi/notify.h |   9 +
>>   2 files changed, 342 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/firmware/arm_scmi/notify.c 
>> b/drivers/firmware/arm_scmi/notify.c
> 
> [snip]
> 
>> +
>> +/**
>> + * scmi_notify  - Queues a notification for further deferred processing
>> + *
>> + * This is called in interrupt context to queue a received event for
>> + * deferred processing.
>> + *
>> + * @handle: The handle identifying the platform instance from which the
>> + *        dispatched event is generated
>> + * @proto_id: Protocol ID
>> + * @evt_id: Event ID (msgID)
>> + * @buf: Event Message Payload (without the header)
>> + * @len: Event Message Payload size
>> + * @ts: RX Timestamp in nanoseconds (boottime)
>> + *
>> + * Return: 0 on Success
>> + */
>> +int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 
>> evt_id,
>> +        const void *buf, size_t len, u64 ts)
>> +{
>> +    struct scmi_registered_event *r_evt;
>> +    struct scmi_event_header eh;
>> +    struct scmi_notify_instance *ni = handle->notify_priv;
>> +
>> +    /* Ensure atomic value is updated */
>> +    smp_mb__before_atomic();
>> +    if (unlikely(!atomic_read(&ni->enabled)))
>> +        return 0;
>> +
>> +    r_evt = SCMI_GET_REVT(ni, proto_id, evt_id);
>> +    if (unlikely(!r_evt))
>> +        return -EINVAL;
>> +
>> +    if (unlikely(len > r_evt->evt->max_payld_sz)) {
>> +        pr_err("SCMI Notifications: discard badly sized message\n");
>> +        return -EINVAL;
>> +    }
>> +    if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) <
>> +             sizeof(eh) + len)) {
>> +        pr_warn("SCMI Notifications: queue full dropping proto_id:%d  
>> evt_id:%d  ts:%lld\n",
>> +            proto_id, evt_id, ts);
>> +        return -ENOMEM;
>> +    }
>> +
>> +    eh.timestamp = ts;
>> +    eh.evt_id = evt_id;
>> +    eh.payld_sz = len;
>> +    kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
>> +    kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
>> +    queue_work(r_evt->proto->equeue.wq,
>> +           &r_evt->proto->equeue.notify_work);
> 
> Is it safe to ignore the return value from the queue_work here?

and also from the kfifo_in

> 
> Regards,
> Lukasz
> 
> 

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
  2020-03-12 13:51     ` Lukasz Luba
@ 2020-03-12 18:34       ` Cristian Marussi
  -1 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-12 18:34 UTC (permalink / raw)
  To: Lukasz Luba, linux-kernel, linux-arm-kernel
  Cc: sudeep.holla, james.quinlan, Jonathan.Cameron

On 12/03/2020 13:51, Lukasz Luba wrote:
> Hi Cristian,
> 
> just one comment below...

Hi Lukasz

Thanks for the review

> 
> On 3/4/20 4:25 PM, Cristian Marussi wrote:
>> Add core SCMI Notifications dispatch and delivery support logic which is
>> able, at first, to dispatch well-known received events from the RX ISR to
>> the dedicated deferred worker, and then, from there, to final deliver the
>> events to the registered users' callbacks.
>>
>> Dispatch and delivery is just added here, still not enabled.
>>
>> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
>> ---
>> V3 --> V4
>> - dispatcher now handles dequeuing of events in chunks (header+payload):
>>    handling of these in_flight events let us remove one unneeded memcpy
>>    on RX interrupt path (scmi_notify)
>> - deferred dispatcher now access their own per-protocol handlers' table
>>    reducing locking contention on the RX path
>> V2 --> V3
>> - exposing wq in sysfs via WQ_SYSFS
>> V1 --> V2
>> - splitted out of V1 patch 04
>> - moved from IDR maps to real HashTables to store event_handlers
>> - simplified delivery logic
>> ---
>>   drivers/firmware/arm_scmi/notify.c | 334 ++++++++++++++++++++++++++++-
>>   drivers/firmware/arm_scmi/notify.h |   9 +
>>   2 files changed, 342 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
> 
> [snip]
> 
>> +
>> +/**
>> + * scmi_notify  - Queues a notification for further deferred processing
>> + *
>> + * This is called in interrupt context to queue a received event for
>> + * deferred processing.
>> + *
>> + * @handle: The handle identifying the platform instance from which the
>> + *	    dispatched event is generated
>> + * @proto_id: Protocol ID
>> + * @evt_id: Event ID (msgID)
>> + * @buf: Event Message Payload (without the header)
>> + * @len: Event Message Payload size
>> + * @ts: RX Timestamp in nanoseconds (boottime)
>> + *
>> + * Return: 0 on Success
>> + */
>> +int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
>> +		const void *buf, size_t len, u64 ts)
>> +{
>> +	struct scmi_registered_event *r_evt;
>> +	struct scmi_event_header eh;
>> +	struct scmi_notify_instance *ni = handle->notify_priv;
>> +
>> +	/* Ensure atomic value is updated */
>> +	smp_mb__before_atomic();
>> +	if (unlikely(!atomic_read(&ni->enabled)))
>> +		return 0;
>> +
>> +	r_evt = SCMI_GET_REVT(ni, proto_id, evt_id);
>> +	if (unlikely(!r_evt))
>> +		return -EINVAL;
>> +
>> +	if (unlikely(len > r_evt->evt->max_payld_sz)) {
>> +		pr_err("SCMI Notifications: discard badly sized message\n");
>> +		return -EINVAL;
>> +	}
>> +	if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) <
>> +		     sizeof(eh) + len)) {
>> +		pr_warn("SCMI Notifications: queue full dropping proto_id:%d  evt_id:%d  ts:%lld\n",
>> +			proto_id, evt_id, ts);
>> +		return -ENOMEM;
>> +	}
>> +
>> +	eh.timestamp = ts;
>> +	eh.evt_id = evt_id;
>> +	eh.payld_sz = len;
>> +	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
>> +	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
>> +	queue_work(r_evt->proto->equeue.wq,
>> +		   &r_evt->proto->equeue.notify_work);
> 
> Is it safe to ignore the return value from the queue_work here?
> 

In fact yes, we do not want to care: it returns true or false depending on the
fact that the specific work was or not already queued, and we just rely on
this behavior to keep kicking the worker only when needed but never kick
more than one instance of it per-queue (so that there's only one reader
wq and one writer here in the scmi_notify)...explaining better:

1. we push an event (hdr+payld) to the protocol queue if we found that there was
enough space on the queue

2a. if at the time of the kfifo_in( ) the worker was already running
(queue not empty) it will process our new event sooner or later and here
the queue_work will return false, but we do not care in fact ... we
tried to kick it just in case

2b. if instead at the time of the kfifo_in() the queue was empty the worker would
have probably already gone to the sleep and this queue_work() will return true and
so this time it will effectively wake up the worker to process our items

The important thing here is that we are sure to wakeup the worker when needed
but we are equally sure we are never causing the scheduling of more than one worker
thread consuming from the same queue (because that would break the one reader/one writer
assumption which let us use the fifo in a lockless manner): this is possible because
queue_work checks if the required work item is already pending and in such a case backs
out returning false and we have one work_item (notify_work) defined per-protocol and
so per-queue.

Now probably I wrote too much of an explanation and confuse stuff more ... :D

Regards

Cristian

> Regards,
> Lukasz
> 
> 


^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
@ 2020-03-12 18:34       ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-12 18:34 UTC (permalink / raw)
  To: Lukasz Luba, linux-kernel, linux-arm-kernel
  Cc: Jonathan.Cameron, james.quinlan, sudeep.holla

On 12/03/2020 13:51, Lukasz Luba wrote:
> Hi Cristian,
> 
> just one comment below...

Hi Lukasz

Thanks for the review

> 
> On 3/4/20 4:25 PM, Cristian Marussi wrote:
>> Add core SCMI Notifications dispatch and delivery support logic which is
>> able, at first, to dispatch well-known received events from the RX ISR to
>> the dedicated deferred worker, and then, from there, to final deliver the
>> events to the registered users' callbacks.
>>
>> Dispatch and delivery is just added here, still not enabled.
>>
>> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
>> ---
>> V3 --> V4
>> - dispatcher now handles dequeuing of events in chunks (header+payload):
>>    handling of these in_flight events let us remove one unneeded memcpy
>>    on RX interrupt path (scmi_notify)
>> - deferred dispatcher now access their own per-protocol handlers' table
>>    reducing locking contention on the RX path
>> V2 --> V3
>> - exposing wq in sysfs via WQ_SYSFS
>> V1 --> V2
>> - splitted out of V1 patch 04
>> - moved from IDR maps to real HashTables to store event_handlers
>> - simplified delivery logic
>> ---
>>   drivers/firmware/arm_scmi/notify.c | 334 ++++++++++++++++++++++++++++-
>>   drivers/firmware/arm_scmi/notify.h |   9 +
>>   2 files changed, 342 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
> 
> [snip]
> 
>> +
>> +/**
>> + * scmi_notify  - Queues a notification for further deferred processing
>> + *
>> + * This is called in interrupt context to queue a received event for
>> + * deferred processing.
>> + *
>> + * @handle: The handle identifying the platform instance from which the
>> + *	    dispatched event is generated
>> + * @proto_id: Protocol ID
>> + * @evt_id: Event ID (msgID)
>> + * @buf: Event Message Payload (without the header)
>> + * @len: Event Message Payload size
>> + * @ts: RX Timestamp in nanoseconds (boottime)
>> + *
>> + * Return: 0 on Success
>> + */
>> +int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
>> +		const void *buf, size_t len, u64 ts)
>> +{
>> +	struct scmi_registered_event *r_evt;
>> +	struct scmi_event_header eh;
>> +	struct scmi_notify_instance *ni = handle->notify_priv;
>> +
>> +	/* Ensure atomic value is updated */
>> +	smp_mb__before_atomic();
>> +	if (unlikely(!atomic_read(&ni->enabled)))
>> +		return 0;
>> +
>> +	r_evt = SCMI_GET_REVT(ni, proto_id, evt_id);
>> +	if (unlikely(!r_evt))
>> +		return -EINVAL;
>> +
>> +	if (unlikely(len > r_evt->evt->max_payld_sz)) {
>> +		pr_err("SCMI Notifications: discard badly sized message\n");
>> +		return -EINVAL;
>> +	}
>> +	if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) <
>> +		     sizeof(eh) + len)) {
>> +		pr_warn("SCMI Notifications: queue full dropping proto_id:%d  evt_id:%d  ts:%lld\n",
>> +			proto_id, evt_id, ts);
>> +		return -ENOMEM;
>> +	}
>> +
>> +	eh.timestamp = ts;
>> +	eh.evt_id = evt_id;
>> +	eh.payld_sz = len;
>> +	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
>> +	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
>> +	queue_work(r_evt->proto->equeue.wq,
>> +		   &r_evt->proto->equeue.notify_work);
> 
> Is it safe to ignore the return value from the queue_work here?
> 

In fact yes, we do not want to care: it returns true or false depending on the
fact that the specific work was or not already queued, and we just rely on
this behavior to keep kicking the worker only when needed but never kick
more than one instance of it per-queue (so that there's only one reader
wq and one writer here in the scmi_notify)...explaining better:

1. we push an event (hdr+payld) to the protocol queue if we found that there was
enough space on the queue

2a. if at the time of the kfifo_in( ) the worker was already running
(queue not empty) it will process our new event sooner or later and here
the queue_work will return false, but we do not care in fact ... we
tried to kick it just in case

2b. if instead at the time of the kfifo_in() the queue was empty the worker would
have probably already gone to the sleep and this queue_work() will return true and
so this time it will effectively wake up the worker to process our items

The important thing here is that we are sure to wakeup the worker when needed
but we are equally sure we are never causing the scheduling of more than one worker
thread consuming from the same queue (because that would break the one reader/one writer
assumption which let us use the fifo in a lockless manner): this is possible because
queue_work checks if the required work item is already pending and in such a case backs
out returning false and we have one work_item (notify_work) defined per-protocol and
so per-queue.

Now probably I wrote too much of an explanation and confuse stuff more ... :D

Regards

Cristian

> Regards,
> Lukasz
> 
> 


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
  2020-03-12 14:06       ` Lukasz Luba
@ 2020-03-12 19:24         ` Cristian Marussi
  -1 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-12 19:24 UTC (permalink / raw)
  To: Lukasz Luba, linux-kernel, linux-arm-kernel
  Cc: sudeep.holla, james.quinlan, Jonathan.Cameron

On 12/03/2020 14:06, Lukasz Luba wrote:
> 
> 
> On 3/12/20 1:51 PM, Lukasz Luba wrote:
>> Hi Cristian,
>>

Hi Lukasz

>> just one comment below...
>>
>> On 3/4/20 4:25 PM, Cristian Marussi wrote:
>>> Add core SCMI Notifications dispatch and delivery support logic which is
>>> able, at first, to dispatch well-known received events from the RX ISR to
>>> the dedicated deferred worker, and then, from there, to final deliver the
>>> events to the registered users' callbacks.
>>>
>>> Dispatch and delivery is just added here, still not enabled.
>>>
>>> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
>>> ---
>>> V3 --> V4
>>> - dispatcher now handles dequeuing of events in chunks (header+payload):
>>>    handling of these in_flight events let us remove one unneeded memcpy
>>>    on RX interrupt path (scmi_notify)
>>> - deferred dispatcher now access their own per-protocol handlers' table
>>>    reducing locking contention on the RX path
>>> V2 --> V3
>>> - exposing wq in sysfs via WQ_SYSFS
>>> V1 --> V2
>>> - splitted out of V1 patch 04
>>> - moved from IDR maps to real HashTables to store event_handlers
>>> - simplified delivery logic
>>> ---
>>>   drivers/firmware/arm_scmi/notify.c | 334 ++++++++++++++++++++++++++++-
>>>   drivers/firmware/arm_scmi/notify.h |   9 +
>>>   2 files changed, 342 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/firmware/arm_scmi/notify.c 
>>> b/drivers/firmware/arm_scmi/notify.c
>>
>> [snip]
>>
>>> +
>>> +/**
>>> + * scmi_notify  - Queues a notification for further deferred processing
>>> + *
>>> + * This is called in interrupt context to queue a received event for
>>> + * deferred processing.
>>> + *
>>> + * @handle: The handle identifying the platform instance from which the
>>> + *        dispatched event is generated
>>> + * @proto_id: Protocol ID
>>> + * @evt_id: Event ID (msgID)
>>> + * @buf: Event Message Payload (without the header)
>>> + * @len: Event Message Payload size
>>> + * @ts: RX Timestamp in nanoseconds (boottime)
>>> + *
>>> + * Return: 0 on Success
>>> + */
>>> +int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 
>>> evt_id,
>>> +        const void *buf, size_t len, u64 ts)
>>> +{
>>> +    struct scmi_registered_event *r_evt;
>>> +    struct scmi_event_header eh;
>>> +    struct scmi_notify_instance *ni = handle->notify_priv;
>>> +
>>> +    /* Ensure atomic value is updated */
>>> +    smp_mb__before_atomic();
>>> +    if (unlikely(!atomic_read(&ni->enabled)))
>>> +        return 0;
>>> +
>>> +    r_evt = SCMI_GET_REVT(ni, proto_id, evt_id);
>>> +    if (unlikely(!r_evt))
>>> +        return -EINVAL;
>>> +
>>> +    if (unlikely(len > r_evt->evt->max_payld_sz)) {
>>> +        pr_err("SCMI Notifications: discard badly sized message\n");
>>> +        return -EINVAL;
>>> +    }
>>> +    if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) <
>>> +             sizeof(eh) + len)) {
>>> +        pr_warn("SCMI Notifications: queue full dropping proto_id:%d  
>>> evt_id:%d  ts:%lld\n",
>>> +            proto_id, evt_id, ts);
>>> +        return -ENOMEM;
>>> +    }
>>> +
>>> +    eh.timestamp = ts;
>>> +    eh.evt_id = evt_id;
>>> +    eh.payld_sz = len;
>>> +    kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
>>> +    kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
>>> +    queue_work(r_evt->proto->equeue.wq,
>>> +           &r_evt->proto->equeue.notify_work);
>>
>> Is it safe to ignore the return value from the queue_work here?
> 
> and also from the kfifo_in
> 

kfifo_in returns the number of effectively written bytes (using __kfifo_in),
possibly capped to the effectively maximum available space in the fifo, BUT since I
absolutely cannot afford to write an incomplete/truncated event into the queue, I check
that in advance and backout on queue full:

if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) < sizeof(eh) + len)) {
	return -ENOMEM;

and given that the ISR scmi_notify() is the only possible writer on this queue
I can be sure that the kfifo_in() will succeed in writing the required number of
bytes after the above check...so I don't need to check the return value.

Regards

Cristian

>>
>> Regards,
>> Lukasz
>>
>>


^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
@ 2020-03-12 19:24         ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-12 19:24 UTC (permalink / raw)
  To: Lukasz Luba, linux-kernel, linux-arm-kernel
  Cc: Jonathan.Cameron, james.quinlan, sudeep.holla

On 12/03/2020 14:06, Lukasz Luba wrote:
> 
> 
> On 3/12/20 1:51 PM, Lukasz Luba wrote:
>> Hi Cristian,
>>

Hi Lukasz

>> just one comment below...
>>
>> On 3/4/20 4:25 PM, Cristian Marussi wrote:
>>> Add core SCMI Notifications dispatch and delivery support logic which is
>>> able, at first, to dispatch well-known received events from the RX ISR to
>>> the dedicated deferred worker, and then, from there, to final deliver the
>>> events to the registered users' callbacks.
>>>
>>> Dispatch and delivery is just added here, still not enabled.
>>>
>>> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
>>> ---
>>> V3 --> V4
>>> - dispatcher now handles dequeuing of events in chunks (header+payload):
>>>    handling of these in_flight events let us remove one unneeded memcpy
>>>    on RX interrupt path (scmi_notify)
>>> - deferred dispatcher now access their own per-protocol handlers' table
>>>    reducing locking contention on the RX path
>>> V2 --> V3
>>> - exposing wq in sysfs via WQ_SYSFS
>>> V1 --> V2
>>> - splitted out of V1 patch 04
>>> - moved from IDR maps to real HashTables to store event_handlers
>>> - simplified delivery logic
>>> ---
>>>   drivers/firmware/arm_scmi/notify.c | 334 ++++++++++++++++++++++++++++-
>>>   drivers/firmware/arm_scmi/notify.h |   9 +
>>>   2 files changed, 342 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/firmware/arm_scmi/notify.c 
>>> b/drivers/firmware/arm_scmi/notify.c
>>
>> [snip]
>>
>>> +
>>> +/**
>>> + * scmi_notify  - Queues a notification for further deferred processing
>>> + *
>>> + * This is called in interrupt context to queue a received event for
>>> + * deferred processing.
>>> + *
>>> + * @handle: The handle identifying the platform instance from which the
>>> + *        dispatched event is generated
>>> + * @proto_id: Protocol ID
>>> + * @evt_id: Event ID (msgID)
>>> + * @buf: Event Message Payload (without the header)
>>> + * @len: Event Message Payload size
>>> + * @ts: RX Timestamp in nanoseconds (boottime)
>>> + *
>>> + * Return: 0 on Success
>>> + */
>>> +int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 
>>> evt_id,
>>> +        const void *buf, size_t len, u64 ts)
>>> +{
>>> +    struct scmi_registered_event *r_evt;
>>> +    struct scmi_event_header eh;
>>> +    struct scmi_notify_instance *ni = handle->notify_priv;
>>> +
>>> +    /* Ensure atomic value is updated */
>>> +    smp_mb__before_atomic();
>>> +    if (unlikely(!atomic_read(&ni->enabled)))
>>> +        return 0;
>>> +
>>> +    r_evt = SCMI_GET_REVT(ni, proto_id, evt_id);
>>> +    if (unlikely(!r_evt))
>>> +        return -EINVAL;
>>> +
>>> +    if (unlikely(len > r_evt->evt->max_payld_sz)) {
>>> +        pr_err("SCMI Notifications: discard badly sized message\n");
>>> +        return -EINVAL;
>>> +    }
>>> +    if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) <
>>> +             sizeof(eh) + len)) {
>>> +        pr_warn("SCMI Notifications: queue full dropping proto_id:%d  
>>> evt_id:%d  ts:%lld\n",
>>> +            proto_id, evt_id, ts);
>>> +        return -ENOMEM;
>>> +    }
>>> +
>>> +    eh.timestamp = ts;
>>> +    eh.evt_id = evt_id;
>>> +    eh.payld_sz = len;
>>> +    kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
>>> +    kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
>>> +    queue_work(r_evt->proto->equeue.wq,
>>> +           &r_evt->proto->equeue.notify_work);
>>
>> Is it safe to ignore the return value from the queue_work here?
> 
> and also from the kfifo_in
> 

kfifo_in returns the number of effectively written bytes (using __kfifo_in),
possibly capped to the effectively maximum available space in the fifo, BUT since I
absolutely cannot afford to write an incomplete/truncated event into the queue, I check
that in advance and backout on queue full:

if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) < sizeof(eh) + len)) {
	return -ENOMEM;

and given that the ISR scmi_notify() is the only possible writer on this queue
I can be sure that the kfifo_in() will succeed in writing the required number of
bytes after the above check...so I don't need to check the return value.

Regards

Cristian

>>
>> Regards,
>> Lukasz
>>
>>


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
  2020-03-12 19:24         ` Cristian Marussi
@ 2020-03-12 20:57           ` Lukasz Luba
  -1 siblings, 0 replies; 70+ messages in thread
From: Lukasz Luba @ 2020-03-12 20:57 UTC (permalink / raw)
  To: Cristian Marussi, linux-kernel, linux-arm-kernel
  Cc: sudeep.holla, james.quinlan, Jonathan.Cameron



On 3/12/20 7:24 PM, Cristian Marussi wrote:
> On 12/03/2020 14:06, Lukasz Luba wrote:
>>
>>
>> On 3/12/20 1:51 PM, Lukasz Luba wrote:
>>> Hi Cristian,
>>>
> 
> Hi Lukasz
> 
>>> just one comment below...
>>>
>>> On 3/4/20 4:25 PM, Cristian Marussi wrote:
>>>> Add core SCMI Notifications dispatch and delivery support logic which is
>>>> able, at first, to dispatch well-known received events from the RX ISR to
>>>> the dedicated deferred worker, and then, from there, to final deliver the
>>>> events to the registered users' callbacks.
>>>>
>>>> Dispatch and delivery is just added here, still not enabled.
>>>>
>>>> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
>>>> ---
>>>> V3 --> V4
>>>> - dispatcher now handles dequeuing of events in chunks (header+payload):
>>>>     handling of these in_flight events let us remove one unneeded memcpy
>>>>     on RX interrupt path (scmi_notify)
>>>> - deferred dispatcher now access their own per-protocol handlers' table
>>>>     reducing locking contention on the RX path
>>>> V2 --> V3
>>>> - exposing wq in sysfs via WQ_SYSFS
>>>> V1 --> V2
>>>> - splitted out of V1 patch 04
>>>> - moved from IDR maps to real HashTables to store event_handlers
>>>> - simplified delivery logic
>>>> ---
>>>>    drivers/firmware/arm_scmi/notify.c | 334 ++++++++++++++++++++++++++++-
>>>>    drivers/firmware/arm_scmi/notify.h |   9 +
>>>>    2 files changed, 342 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/firmware/arm_scmi/notify.c
>>>> b/drivers/firmware/arm_scmi/notify.c
>>>
>>> [snip]
>>>
>>>> +
>>>> +/**
>>>> + * scmi_notify  - Queues a notification for further deferred processing
>>>> + *
>>>> + * This is called in interrupt context to queue a received event for
>>>> + * deferred processing.
>>>> + *
>>>> + * @handle: The handle identifying the platform instance from which the
>>>> + *        dispatched event is generated
>>>> + * @proto_id: Protocol ID
>>>> + * @evt_id: Event ID (msgID)
>>>> + * @buf: Event Message Payload (without the header)
>>>> + * @len: Event Message Payload size
>>>> + * @ts: RX Timestamp in nanoseconds (boottime)
>>>> + *
>>>> + * Return: 0 on Success
>>>> + */
>>>> +int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8
>>>> evt_id,
>>>> +        const void *buf, size_t len, u64 ts)
>>>> +{
>>>> +    struct scmi_registered_event *r_evt;
>>>> +    struct scmi_event_header eh;
>>>> +    struct scmi_notify_instance *ni = handle->notify_priv;
>>>> +
>>>> +    /* Ensure atomic value is updated */
>>>> +    smp_mb__before_atomic();
>>>> +    if (unlikely(!atomic_read(&ni->enabled)))
>>>> +        return 0;
>>>> +
>>>> +    r_evt = SCMI_GET_REVT(ni, proto_id, evt_id);
>>>> +    if (unlikely(!r_evt))
>>>> +        return -EINVAL;
>>>> +
>>>> +    if (unlikely(len > r_evt->evt->max_payld_sz)) {
>>>> +        pr_err("SCMI Notifications: discard badly sized message\n");
>>>> +        return -EINVAL;
>>>> +    }
>>>> +    if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) <
>>>> +             sizeof(eh) + len)) {
>>>> +        pr_warn("SCMI Notifications: queue full dropping proto_id:%d
>>>> evt_id:%d  ts:%lld\n",
>>>> +            proto_id, evt_id, ts);
>>>> +        return -ENOMEM;
>>>> +    }
>>>> +
>>>> +    eh.timestamp = ts;
>>>> +    eh.evt_id = evt_id;
>>>> +    eh.payld_sz = len;
>>>> +    kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
>>>> +    kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
>>>> +    queue_work(r_evt->proto->equeue.wq,
>>>> +           &r_evt->proto->equeue.notify_work);
>>>
>>> Is it safe to ignore the return value from the queue_work here?
>>
>> and also from the kfifo_in
>>
> 
> kfifo_in returns the number of effectively written bytes (using __kfifo_in),
> possibly capped to the effectively maximum available space in the fifo, BUT since I
> absolutely cannot afford to write an incomplete/truncated event into the queue, I check
> that in advance and backout on queue full:
> 
> if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) < sizeof(eh) + len)) {
> 	return -ENOMEM;
> 
> and given that the ISR scmi_notify() is the only possible writer on this queue

Yes, your are right, no other IRQ will show up for this channel till
we exit mailbox rx callback and clean the bits.

> I can be sure that the kfifo_in() will succeed in writing the required number of
> bytes after the above check...so I don't need to check the return value.
> 
> Regards
> 
> Cristian
> 
>>>
>>> Regards,
>>> Lukasz
>>>
>>>
> 

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
@ 2020-03-12 20:57           ` Lukasz Luba
  0 siblings, 0 replies; 70+ messages in thread
From: Lukasz Luba @ 2020-03-12 20:57 UTC (permalink / raw)
  To: Cristian Marussi, linux-kernel, linux-arm-kernel
  Cc: Jonathan.Cameron, james.quinlan, sudeep.holla



On 3/12/20 7:24 PM, Cristian Marussi wrote:
> On 12/03/2020 14:06, Lukasz Luba wrote:
>>
>>
>> On 3/12/20 1:51 PM, Lukasz Luba wrote:
>>> Hi Cristian,
>>>
> 
> Hi Lukasz
> 
>>> just one comment below...
>>>
>>> On 3/4/20 4:25 PM, Cristian Marussi wrote:
>>>> Add core SCMI Notifications dispatch and delivery support logic which is
>>>> able, at first, to dispatch well-known received events from the RX ISR to
>>>> the dedicated deferred worker, and then, from there, to final deliver the
>>>> events to the registered users' callbacks.
>>>>
>>>> Dispatch and delivery is just added here, still not enabled.
>>>>
>>>> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
>>>> ---
>>>> V3 --> V4
>>>> - dispatcher now handles dequeuing of events in chunks (header+payload):
>>>>     handling of these in_flight events let us remove one unneeded memcpy
>>>>     on RX interrupt path (scmi_notify)
>>>> - deferred dispatcher now access their own per-protocol handlers' table
>>>>     reducing locking contention on the RX path
>>>> V2 --> V3
>>>> - exposing wq in sysfs via WQ_SYSFS
>>>> V1 --> V2
>>>> - splitted out of V1 patch 04
>>>> - moved from IDR maps to real HashTables to store event_handlers
>>>> - simplified delivery logic
>>>> ---
>>>>    drivers/firmware/arm_scmi/notify.c | 334 ++++++++++++++++++++++++++++-
>>>>    drivers/firmware/arm_scmi/notify.h |   9 +
>>>>    2 files changed, 342 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/firmware/arm_scmi/notify.c
>>>> b/drivers/firmware/arm_scmi/notify.c
>>>
>>> [snip]
>>>
>>>> +
>>>> +/**
>>>> + * scmi_notify  - Queues a notification for further deferred processing
>>>> + *
>>>> + * This is called in interrupt context to queue a received event for
>>>> + * deferred processing.
>>>> + *
>>>> + * @handle: The handle identifying the platform instance from which the
>>>> + *        dispatched event is generated
>>>> + * @proto_id: Protocol ID
>>>> + * @evt_id: Event ID (msgID)
>>>> + * @buf: Event Message Payload (without the header)
>>>> + * @len: Event Message Payload size
>>>> + * @ts: RX Timestamp in nanoseconds (boottime)
>>>> + *
>>>> + * Return: 0 on Success
>>>> + */
>>>> +int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8
>>>> evt_id,
>>>> +        const void *buf, size_t len, u64 ts)
>>>> +{
>>>> +    struct scmi_registered_event *r_evt;
>>>> +    struct scmi_event_header eh;
>>>> +    struct scmi_notify_instance *ni = handle->notify_priv;
>>>> +
>>>> +    /* Ensure atomic value is updated */
>>>> +    smp_mb__before_atomic();
>>>> +    if (unlikely(!atomic_read(&ni->enabled)))
>>>> +        return 0;
>>>> +
>>>> +    r_evt = SCMI_GET_REVT(ni, proto_id, evt_id);
>>>> +    if (unlikely(!r_evt))
>>>> +        return -EINVAL;
>>>> +
>>>> +    if (unlikely(len > r_evt->evt->max_payld_sz)) {
>>>> +        pr_err("SCMI Notifications: discard badly sized message\n");
>>>> +        return -EINVAL;
>>>> +    }
>>>> +    if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) <
>>>> +             sizeof(eh) + len)) {
>>>> +        pr_warn("SCMI Notifications: queue full dropping proto_id:%d
>>>> evt_id:%d  ts:%lld\n",
>>>> +            proto_id, evt_id, ts);
>>>> +        return -ENOMEM;
>>>> +    }
>>>> +
>>>> +    eh.timestamp = ts;
>>>> +    eh.evt_id = evt_id;
>>>> +    eh.payld_sz = len;
>>>> +    kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
>>>> +    kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
>>>> +    queue_work(r_evt->proto->equeue.wq,
>>>> +           &r_evt->proto->equeue.notify_work);
>>>
>>> Is it safe to ignore the return value from the queue_work here?
>>
>> and also from the kfifo_in
>>
> 
> kfifo_in returns the number of effectively written bytes (using __kfifo_in),
> possibly capped to the effectively maximum available space in the fifo, BUT since I
> absolutely cannot afford to write an incomplete/truncated event into the queue, I check
> that in advance and backout on queue full:
> 
> if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) < sizeof(eh) + len)) {
> 	return -ENOMEM;
> 
> and given that the ISR scmi_notify() is the only possible writer on this queue

Yes, your are right, no other IRQ will show up for this channel till
we exit mailbox rx callback and clean the bits.

> I can be sure that the kfifo_in() will succeed in writing the required number of
> bytes after the above check...so I don't need to check the return value.
> 
> Regards
> 
> Cristian
> 
>>>
>>> Regards,
>>> Lukasz
>>>
>>>
> 

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
  2020-03-12 18:34       ` Cristian Marussi
@ 2020-03-12 21:43         ` Lukasz Luba
  -1 siblings, 0 replies; 70+ messages in thread
From: Lukasz Luba @ 2020-03-12 21:43 UTC (permalink / raw)
  To: Cristian Marussi, linux-kernel, linux-arm-kernel
  Cc: sudeep.holla, james.quinlan, Jonathan.Cameron



On 3/12/20 6:34 PM, Cristian Marussi wrote:
> On 12/03/2020 13:51, Lukasz Luba wrote:
>> Hi Cristian,
>>
>> just one comment below...
> 
> Hi Lukasz
> 
> Thanks for the review
> 
>>
>> On 3/4/20 4:25 PM, Cristian Marussi wrote:
>>> Add core SCMI Notifications dispatch and delivery support logic which is
>>> able, at first, to dispatch well-known received events from the RX ISR to
>>> the dedicated deferred worker, and then, from there, to final deliver the
>>> events to the registered users' callbacks.
>>>
>>> Dispatch and delivery is just added here, still not enabled.
>>>
>>> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
>>> ---
>>> V3 --> V4
>>> - dispatcher now handles dequeuing of events in chunks (header+payload):
>>>     handling of these in_flight events let us remove one unneeded memcpy
>>>     on RX interrupt path (scmi_notify)
>>> - deferred dispatcher now access their own per-protocol handlers' table
>>>     reducing locking contention on the RX path
>>> V2 --> V3
>>> - exposing wq in sysfs via WQ_SYSFS
>>> V1 --> V2
>>> - splitted out of V1 patch 04
>>> - moved from IDR maps to real HashTables to store event_handlers
>>> - simplified delivery logic
>>> ---
>>>    drivers/firmware/arm_scmi/notify.c | 334 ++++++++++++++++++++++++++++-
>>>    drivers/firmware/arm_scmi/notify.h |   9 +
>>>    2 files changed, 342 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
>>
>> [snip]
>>
>>> +
>>> +/**
>>> + * scmi_notify  - Queues a notification for further deferred processing
>>> + *
>>> + * This is called in interrupt context to queue a received event for
>>> + * deferred processing.
>>> + *
>>> + * @handle: The handle identifying the platform instance from which the
>>> + *	    dispatched event is generated
>>> + * @proto_id: Protocol ID
>>> + * @evt_id: Event ID (msgID)
>>> + * @buf: Event Message Payload (without the header)
>>> + * @len: Event Message Payload size
>>> + * @ts: RX Timestamp in nanoseconds (boottime)
>>> + *
>>> + * Return: 0 on Success
>>> + */
>>> +int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
>>> +		const void *buf, size_t len, u64 ts)
>>> +{
>>> +	struct scmi_registered_event *r_evt;
>>> +	struct scmi_event_header eh;
>>> +	struct scmi_notify_instance *ni = handle->notify_priv;
>>> +
>>> +	/* Ensure atomic value is updated */
>>> +	smp_mb__before_atomic();
>>> +	if (unlikely(!atomic_read(&ni->enabled)))
>>> +		return 0;
>>> +
>>> +	r_evt = SCMI_GET_REVT(ni, proto_id, evt_id);
>>> +	if (unlikely(!r_evt))
>>> +		return -EINVAL;
>>> +
>>> +	if (unlikely(len > r_evt->evt->max_payld_sz)) {
>>> +		pr_err("SCMI Notifications: discard badly sized message\n");
>>> +		return -EINVAL;
>>> +	}
>>> +	if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) <
>>> +		     sizeof(eh) + len)) {
>>> +		pr_warn("SCMI Notifications: queue full dropping proto_id:%d  evt_id:%d  ts:%lld\n",
>>> +			proto_id, evt_id, ts);
>>> +		return -ENOMEM;
>>> +	}
>>> +
>>> +	eh.timestamp = ts;
>>> +	eh.evt_id = evt_id;
>>> +	eh.payld_sz = len;
>>> +	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
>>> +	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
>>> +	queue_work(r_evt->proto->equeue.wq,
>>> +		   &r_evt->proto->equeue.notify_work);
>>
>> Is it safe to ignore the return value from the queue_work here?
>>
> 
> In fact yes, we do not want to care: it returns true or false depending on the
> fact that the specific work was or not already queued, and we just rely on
> this behavior to keep kicking the worker only when needed but never kick
> more than one instance of it per-queue (so that there's only one reader
> wq and one writer here in the scmi_notify)...explaining better:
> 
> 1. we push an event (hdr+payld) to the protocol queue if we found that there was
> enough space on the queue
> 
> 2a. if at the time of the kfifo_in( ) the worker was already running
> (queue not empty) it will process our new event sooner or later and here
> the queue_work will return false, but we do not care in fact ... we
> tried to kick it just in case
> 
> 2b. if instead at the time of the kfifo_in() the queue was empty the worker would
> have probably already gone to the sleep and this queue_work() will return true and
> so this time it will effectively wake up the worker to process our items
> 
> The important thing here is that we are sure to wakeup the worker when needed
> but we are equally sure we are never causing the scheduling of more than one worker
> thread consuming from the same queue (because that would break the one reader/one writer
> assumption which let us use the fifo in a lockless manner): this is possible because
> queue_work checks if the required work item is already pending and in such a case backs
> out returning false and we have one work_item (notify_work) defined per-protocol and
> so per-queue.

I see. That's a good assumption: one work_item per protocol and simplify
the locking. What if there would be an edge case scenario when the
consumer (work_item) has handled the last item (there was NULL from 
scmi_process_event_header()), while in meantime scmi_notify put into
the fifo new event but couldn't kick the queue_work. Would it stay there
till the next IRQ which triggers queue_work to consume two events (one
potentially a bit old)? Or we can ignore such race situation assuming
that cleaning of work item is instant and kfifo_in is slow?

> 
> Now probably I wrote too much of an explanation and confuse stuff more ... :D

No, thank you for the detailed explanation. I will continue my review.

Regards,
Lukasz

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
@ 2020-03-12 21:43         ` Lukasz Luba
  0 siblings, 0 replies; 70+ messages in thread
From: Lukasz Luba @ 2020-03-12 21:43 UTC (permalink / raw)
  To: Cristian Marussi, linux-kernel, linux-arm-kernel
  Cc: Jonathan.Cameron, james.quinlan, sudeep.holla



On 3/12/20 6:34 PM, Cristian Marussi wrote:
> On 12/03/2020 13:51, Lukasz Luba wrote:
>> Hi Cristian,
>>
>> just one comment below...
> 
> Hi Lukasz
> 
> Thanks for the review
> 
>>
>> On 3/4/20 4:25 PM, Cristian Marussi wrote:
>>> Add core SCMI Notifications dispatch and delivery support logic which is
>>> able, at first, to dispatch well-known received events from the RX ISR to
>>> the dedicated deferred worker, and then, from there, to final deliver the
>>> events to the registered users' callbacks.
>>>
>>> Dispatch and delivery is just added here, still not enabled.
>>>
>>> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
>>> ---
>>> V3 --> V4
>>> - dispatcher now handles dequeuing of events in chunks (header+payload):
>>>     handling of these in_flight events let us remove one unneeded memcpy
>>>     on RX interrupt path (scmi_notify)
>>> - deferred dispatcher now access their own per-protocol handlers' table
>>>     reducing locking contention on the RX path
>>> V2 --> V3
>>> - exposing wq in sysfs via WQ_SYSFS
>>> V1 --> V2
>>> - splitted out of V1 patch 04
>>> - moved from IDR maps to real HashTables to store event_handlers
>>> - simplified delivery logic
>>> ---
>>>    drivers/firmware/arm_scmi/notify.c | 334 ++++++++++++++++++++++++++++-
>>>    drivers/firmware/arm_scmi/notify.h |   9 +
>>>    2 files changed, 342 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
>>
>> [snip]
>>
>>> +
>>> +/**
>>> + * scmi_notify  - Queues a notification for further deferred processing
>>> + *
>>> + * This is called in interrupt context to queue a received event for
>>> + * deferred processing.
>>> + *
>>> + * @handle: The handle identifying the platform instance from which the
>>> + *	    dispatched event is generated
>>> + * @proto_id: Protocol ID
>>> + * @evt_id: Event ID (msgID)
>>> + * @buf: Event Message Payload (without the header)
>>> + * @len: Event Message Payload size
>>> + * @ts: RX Timestamp in nanoseconds (boottime)
>>> + *
>>> + * Return: 0 on Success
>>> + */
>>> +int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
>>> +		const void *buf, size_t len, u64 ts)
>>> +{
>>> +	struct scmi_registered_event *r_evt;
>>> +	struct scmi_event_header eh;
>>> +	struct scmi_notify_instance *ni = handle->notify_priv;
>>> +
>>> +	/* Ensure atomic value is updated */
>>> +	smp_mb__before_atomic();
>>> +	if (unlikely(!atomic_read(&ni->enabled)))
>>> +		return 0;
>>> +
>>> +	r_evt = SCMI_GET_REVT(ni, proto_id, evt_id);
>>> +	if (unlikely(!r_evt))
>>> +		return -EINVAL;
>>> +
>>> +	if (unlikely(len > r_evt->evt->max_payld_sz)) {
>>> +		pr_err("SCMI Notifications: discard badly sized message\n");
>>> +		return -EINVAL;
>>> +	}
>>> +	if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) <
>>> +		     sizeof(eh) + len)) {
>>> +		pr_warn("SCMI Notifications: queue full dropping proto_id:%d  evt_id:%d  ts:%lld\n",
>>> +			proto_id, evt_id, ts);
>>> +		return -ENOMEM;
>>> +	}
>>> +
>>> +	eh.timestamp = ts;
>>> +	eh.evt_id = evt_id;
>>> +	eh.payld_sz = len;
>>> +	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
>>> +	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
>>> +	queue_work(r_evt->proto->equeue.wq,
>>> +		   &r_evt->proto->equeue.notify_work);
>>
>> Is it safe to ignore the return value from the queue_work here?
>>
> 
> In fact yes, we do not want to care: it returns true or false depending on the
> fact that the specific work was or not already queued, and we just rely on
> this behavior to keep kicking the worker only when needed but never kick
> more than one instance of it per-queue (so that there's only one reader
> wq and one writer here in the scmi_notify)...explaining better:
> 
> 1. we push an event (hdr+payld) to the protocol queue if we found that there was
> enough space on the queue
> 
> 2a. if at the time of the kfifo_in( ) the worker was already running
> (queue not empty) it will process our new event sooner or later and here
> the queue_work will return false, but we do not care in fact ... we
> tried to kick it just in case
> 
> 2b. if instead at the time of the kfifo_in() the queue was empty the worker would
> have probably already gone to the sleep and this queue_work() will return true and
> so this time it will effectively wake up the worker to process our items
> 
> The important thing here is that we are sure to wakeup the worker when needed
> but we are equally sure we are never causing the scheduling of more than one worker
> thread consuming from the same queue (because that would break the one reader/one writer
> assumption which let us use the fifo in a lockless manner): this is possible because
> queue_work checks if the required work item is already pending and in such a case backs
> out returning false and we have one work_item (notify_work) defined per-protocol and
> so per-queue.

I see. That's a good assumption: one work_item per protocol and simplify
the locking. What if there would be an edge case scenario when the
consumer (work_item) has handled the last item (there was NULL from 
scmi_process_event_header()), while in meantime scmi_notify put into
the fifo new event but couldn't kick the queue_work. Would it stay there
till the next IRQ which triggers queue_work to consume two events (one
potentially a bit old)? Or we can ignore such race situation assuming
that cleaning of work item is instant and kfifo_in is slow?

> 
> Now probably I wrote too much of an explanation and confuse stuff more ... :D

No, thank you for the detailed explanation. I will continue my review.

Regards,
Lukasz

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
  2020-03-12 21:43         ` Lukasz Luba
@ 2020-03-16 14:46           ` Cristian Marussi
  -1 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-16 14:46 UTC (permalink / raw)
  To: Lukasz Luba
  Cc: linux-kernel, linux-arm-kernel, sudeep.holla, james.quinlan,
	Jonathan.Cameron

On Thu, Mar 12, 2020 at 09:43:31PM +0000, Lukasz Luba wrote:
> 
> 
> On 3/12/20 6:34 PM, Cristian Marussi wrote:
> > On 12/03/2020 13:51, Lukasz Luba wrote:
> > > Hi Cristian,
> > > 
Hi Lukasz

> > > just one comment below...
[snip]
> > > > +	eh.timestamp = ts;
> > > > +	eh.evt_id = evt_id;
> > > > +	eh.payld_sz = len;
> > > > +	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
> > > > +	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
> > > > +	queue_work(r_evt->proto->equeue.wq,
> > > > +		   &r_evt->proto->equeue.notify_work);
> > > 
> > > Is it safe to ignore the return value from the queue_work here?
> > > 
> > 
> > In fact yes, we do not want to care: it returns true or false depending on the
> > fact that the specific work was or not already queued, and we just rely on
> > this behavior to keep kicking the worker only when needed but never kick
> > more than one instance of it per-queue (so that there's only one reader
> > wq and one writer here in the scmi_notify)...explaining better:
> > 
> > 1. we push an event (hdr+payld) to the protocol queue if we found that there was
> > enough space on the queue
> > 
> > 2a. if at the time of the kfifo_in( ) the worker was already running
> > (queue not empty) it will process our new event sooner or later and here
> > the queue_work will return false, but we do not care in fact ... we
> > tried to kick it just in case
> > 
> > 2b. if instead at the time of the kfifo_in() the queue was empty the worker would
> > have probably already gone to the sleep and this queue_work() will return true and
> > so this time it will effectively wake up the worker to process our items
> > 
> > The important thing here is that we are sure to wakeup the worker when needed
> > but we are equally sure we are never causing the scheduling of more than one worker
> > thread consuming from the same queue (because that would break the one reader/one writer
> > assumption which let us use the fifo in a lockless manner): this is possible because
> > queue_work checks if the required work item is already pending and in such a case backs
> > out returning false and we have one work_item (notify_work) defined per-protocol and
> > so per-queue.
> 
> I see. That's a good assumption: one work_item per protocol and simplify
> the locking. What if there would be an edge case scenario when the
> consumer (work_item) has handled the last item (there was NULL from
> scmi_process_event_header()), while in meantime scmi_notify put into
> the fifo new event but couldn't kick the queue_work. Would it stay there
> till the next IRQ which triggers queue_work to consume two events (one
> potentially a bit old)? Or we can ignore such race situation assuming
> that cleaning of work item is instant and kfifo_in is slow?
> 

In fact, this is a very good point, since between the moment the worker
determines that the queue is empty and the moment in which the worker
effectively exits (and it's marked as no more pending by the Kernel cmwq)
there is a window of opportunity for a race in which the ISR could fill
the queue with one more event and then fail to kick with queue_work() since
the work is in fact still nominally marked as pending from the point of view
of Kernel cmwq, as below:

ISR (core N)		|	WQ (core N+1)		cmwq flags	       	queued events
------------------------------------------------------------------------------------------------
			| if (queue_is_empty)		- WORK_PENDING		0 events queued
			+     ...			- WORK_PENDING		0 events queued
			+ } while (scmi_process_event_payload);
			+}// worker function exit 
kfifo_in()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
kfifo_in()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
queue_work()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
  -> FALSE (pending)	+     ...cmwq backing out	- WORK_PENDING		1 events queued
			+     ...cmwq backing out	- WORK_PENDING		1 events queued
			+     ...cmwq backing out	- WORK_PENDING		1 events queued
			| ---- WORKER THREAD EXIT	- !WORK_PENDING		1 events queued
			| 		 		- !WORK_PENDING		1 events queued
kfifo_in()		|     				- !WORK_PENDING		2 events queued
kfifo_in()		|  				- !WORK_PENDING		2 events queued
queue_work()		|     				- !WORK_PENDING		2 events queued
   -> TRUE		| --- WORKER ENTER		- WORK_PENDING		2 events queued
			|  				- WORK_PENDING		2 events consumed
		
where effectively the last event queued won't be consumed till the next
iteration once another event is queued.

Given the fact that the ISR and the dedicated WQ on an SMP run effectively
in parallel I do not think unfortunately that we can simply count on the fact
the worker exit is faster than the kifos_in, enough to close the race window
opportunity. (even if rare)

On the other side considering the impact of such scenario, I can imagine that
it's not simply that we could only have a delayed delivery, but we must consider
that if the delayed event is effectively the last one ever it would remain
undelivered forever; this is particularly worrying in a scenario in which such
last event is particularly important: imagine a system shutdown where a last
system-power-off remains undelivered.

As a consequence I think this rare racy condition should be addressed somehow.

Looking at this scenario, it seems the classic situation in which you want to
use some sort of completion to avoid missing out on events delivery, BUT in our
usecase:

- placing the workers loaned from cmwq into an unbounded wait_for_completion()
  once the queue is empty seems not the best to use resources (and probably
  frowned upon)....using a few dedicated kernel threads to simply let them idle
  waiting most of the time seems equally frowned upon (I could be wrong...))
- the needed complete() in the ISR would introduce a spinlock_irqsave into the
  interrupt path (there's already one inside queue_work in fact) so it is not
  desirable, at least not if used on a regular base (for each event notified)

So I was thinking to try to reduce sensibly the above race window, more
than eliminate it completely, by adding an early flag to be checked under
specific conditions in order to retry the queue_work a few times when the race
is hit, something like:

ISR (core N)		|	WQ (core N+1)
-------------------------------------------------------------------------------
			| atomic_set(&exiting, 0);
			|
			| do {
			|	...
			| 	if (queue_is_empty)		- WORK_PENDING		0 events queued
			+          atomic_set(&exiting, 1)	- WORK_PENDING		0 events queued
static int cnt=3	|          --> breakout of while	- WORK_PENDING		0 events queued
kfifo_in()		|	....
			| } while (scmi_process_event_payload);
kfifo_in()		|
exiting = atomic_read()	|     ...cmwq backing out		- WORK_PENDING		1 events queued
do {			|     ...cmwq backing out		- WORK_PENDING		1 events queued
    ret = queue_work() 	|     ...cmwq backing out		- WORK_PENDING		1 events queued
    if (ret || !exiting)|     ...cmwq backing out		- WORK_PENDING		1 events queued
	break;		|     ...cmwq backing out		- WORK_PENDING		1 events queued
    mdelay(5);		|     ...cmwq backing out		- WORK_PENDING		1 events queued
    exiting =		|     ...cmwq backing out		- WORK_PENDING		1 events queued
      atomic_read;	|     ...cmwq backing out		- WORK_PENDING		1 events queued
} while (--cnt);	|     ...cmwq backing out		- WORK_PENDING		1 events queued
			| ---- WORKER EXIT 			- !WORK_PENDING		0 events queued

like down below between the scissors.

Not tested or tried....I could be missing something...and the mdelay is horrible (and not
the cleanest thing you've ever seen probably :D)...I'll have a chat with Sudeep too.

> > 
> > Now probably I wrote too much of an explanation and confuse stuff more ... :D
> 
> No, thank you for the detailed explanation. I will continue my review.
> 

Thanks

Regards

Cristian



-->8-----------------
diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
index 9eb6b8b71bac..8719e077358c 100644
--- a/drivers/firmware/arm_scmi/notify.c
+++ b/drivers/firmware/arm_scmi/notify.c
@@ -223,6 +223,7 @@ struct scmi_notify_instance {
  */
 struct events_queue {
 	size_t				sz;
+	atomic_t			exiting;
 	struct kfifo			kfifo;
 	struct work_struct		notify_work;
 	struct workqueue_struct		*wq;
@@ -406,11 +407,16 @@ scmi_process_event_header(struct events_queue *eq,
 
 	outs = kfifo_out(&eq->kfifo, pd->eh,
 			 sizeof(struct scmi_event_header));
-	if (!outs)
+	if (!outs) {
+		atomic_set(&eq->exiting, 1);
+		smp_mb__after_atomic();
 		return NULL;
+	}
 	if (outs != sizeof(struct scmi_event_header)) {
 		pr_err("SCMI Notifications: corrupted EVT header. Flush.\n");
 		kfifo_reset_out(&eq->kfifo);
+		atomic_set(&eq->exiting, 1);
+		smp_mb__after_atomic();
 		return NULL;
 	}
 
@@ -446,6 +452,8 @@ scmi_process_event_payload(struct events_queue *eq,
 	outs = kfifo_out(&eq->kfifo, pd->eh->payld, pd->eh->payld_sz);
 	if (unlikely(!outs)) {
 		pr_warn("--- EMPTY !!!!\n");
+		atomic_set(&eq->exiting, 1);
+		smp_mb__after_atomic();
 		return false;
 	}
 
@@ -455,6 +463,8 @@ scmi_process_event_payload(struct events_queue *eq,
 	if (unlikely(outs != pd->eh->payld_sz)) {
 		pr_err("SCMI Notifications: corrupted EVT Payload. Flush.\n");
 		kfifo_reset_out(&eq->kfifo);
+		atomic_set(&eq->exiting, 1);
+		smp_mb__after_atomic();
 		return false;
 	}
 
@@ -526,6 +536,8 @@ static void scmi_events_dispatcher(struct work_struct *work)
 	mdelay(200);
 
 	eq = container_of(work, struct events_queue, notify_work);
+	atomic_set(&eq->exiting, 0);
+	smp_mb__after_atomic();
 	pd = container_of(eq, struct scmi_registered_protocol_events_desc,
 			  equeue);
 	/*
@@ -579,6 +591,8 @@ static void scmi_events_dispatcher(struct work_struct *work)
 int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
 		const void *buf, size_t len, u64 ts)
 {
+	bool exiting;
+	static u8 cnt = 3;
 	struct scmi_registered_event *r_evt;
 	struct scmi_event_header eh;
 	struct scmi_notify_instance *ni = handle->notify_priv;
@@ -616,8 +630,20 @@ int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
 	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
 	mdelay(30);
 	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
-	queue_work(r_evt->proto->equeue.wq,
-		   &r_evt->proto->equeue.notify_work);
+
+	smp_mb__before_atomic();
+	exiting = atomic_read(&r_evt->proto->equeue.exiting);
+	do {
+		bool ret;
+
+		ret = queue_work(r_evt->proto->equeue.wq,
+				 &r_evt->proto->equeue.notify_work);
+		if (likely(ret || !exiting))
+			break;
+		mdelay(5);
+		smp_mb__before_atomic();
+		exiting = atomic_read(&r_evt->proto->equeue.exiting);
+	} while (--cnt);
 
 	return 0;
 }
@@ -655,6 +681,7 @@ static int scmi_initialize_events_queue(struct scmi_notify_instance *ni,
 				       &equeue->kfifo);
 	if (ret)
 		return ret;
+	atomic_set(&equeue->exiting, 0);
 
 	INIT_WORK(&equeue->notify_work, scmi_events_dispatcher);
 	equeue->wq = ni->notify_wq;
--<8-----------------------

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
@ 2020-03-16 14:46           ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-16 14:46 UTC (permalink / raw)
  To: Lukasz Luba
  Cc: james.quinlan, Jonathan.Cameron, linux-kernel, linux-arm-kernel,
	sudeep.holla

On Thu, Mar 12, 2020 at 09:43:31PM +0000, Lukasz Luba wrote:
> 
> 
> On 3/12/20 6:34 PM, Cristian Marussi wrote:
> > On 12/03/2020 13:51, Lukasz Luba wrote:
> > > Hi Cristian,
> > > 
Hi Lukasz

> > > just one comment below...
[snip]
> > > > +	eh.timestamp = ts;
> > > > +	eh.evt_id = evt_id;
> > > > +	eh.payld_sz = len;
> > > > +	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
> > > > +	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
> > > > +	queue_work(r_evt->proto->equeue.wq,
> > > > +		   &r_evt->proto->equeue.notify_work);
> > > 
> > > Is it safe to ignore the return value from the queue_work here?
> > > 
> > 
> > In fact yes, we do not want to care: it returns true or false depending on the
> > fact that the specific work was or not already queued, and we just rely on
> > this behavior to keep kicking the worker only when needed but never kick
> > more than one instance of it per-queue (so that there's only one reader
> > wq and one writer here in the scmi_notify)...explaining better:
> > 
> > 1. we push an event (hdr+payld) to the protocol queue if we found that there was
> > enough space on the queue
> > 
> > 2a. if at the time of the kfifo_in( ) the worker was already running
> > (queue not empty) it will process our new event sooner or later and here
> > the queue_work will return false, but we do not care in fact ... we
> > tried to kick it just in case
> > 
> > 2b. if instead at the time of the kfifo_in() the queue was empty the worker would
> > have probably already gone to the sleep and this queue_work() will return true and
> > so this time it will effectively wake up the worker to process our items
> > 
> > The important thing here is that we are sure to wakeup the worker when needed
> > but we are equally sure we are never causing the scheduling of more than one worker
> > thread consuming from the same queue (because that would break the one reader/one writer
> > assumption which let us use the fifo in a lockless manner): this is possible because
> > queue_work checks if the required work item is already pending and in such a case backs
> > out returning false and we have one work_item (notify_work) defined per-protocol and
> > so per-queue.
> 
> I see. That's a good assumption: one work_item per protocol and simplify
> the locking. What if there would be an edge case scenario when the
> consumer (work_item) has handled the last item (there was NULL from
> scmi_process_event_header()), while in meantime scmi_notify put into
> the fifo new event but couldn't kick the queue_work. Would it stay there
> till the next IRQ which triggers queue_work to consume two events (one
> potentially a bit old)? Or we can ignore such race situation assuming
> that cleaning of work item is instant and kfifo_in is slow?
> 

In fact, this is a very good point, since between the moment the worker
determines that the queue is empty and the moment in which the worker
effectively exits (and it's marked as no more pending by the Kernel cmwq)
there is a window of opportunity for a race in which the ISR could fill
the queue with one more event and then fail to kick with queue_work() since
the work is in fact still nominally marked as pending from the point of view
of Kernel cmwq, as below:

ISR (core N)		|	WQ (core N+1)		cmwq flags	       	queued events
------------------------------------------------------------------------------------------------
			| if (queue_is_empty)		- WORK_PENDING		0 events queued
			+     ...			- WORK_PENDING		0 events queued
			+ } while (scmi_process_event_payload);
			+}// worker function exit 
kfifo_in()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
kfifo_in()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
queue_work()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
  -> FALSE (pending)	+     ...cmwq backing out	- WORK_PENDING		1 events queued
			+     ...cmwq backing out	- WORK_PENDING		1 events queued
			+     ...cmwq backing out	- WORK_PENDING		1 events queued
			| ---- WORKER THREAD EXIT	- !WORK_PENDING		1 events queued
			| 		 		- !WORK_PENDING		1 events queued
kfifo_in()		|     				- !WORK_PENDING		2 events queued
kfifo_in()		|  				- !WORK_PENDING		2 events queued
queue_work()		|     				- !WORK_PENDING		2 events queued
   -> TRUE		| --- WORKER ENTER		- WORK_PENDING		2 events queued
			|  				- WORK_PENDING		2 events consumed
		
where effectively the last event queued won't be consumed till the next
iteration once another event is queued.

Given the fact that the ISR and the dedicated WQ on an SMP run effectively
in parallel I do not think unfortunately that we can simply count on the fact
the worker exit is faster than the kifos_in, enough to close the race window
opportunity. (even if rare)

On the other side considering the impact of such scenario, I can imagine that
it's not simply that we could only have a delayed delivery, but we must consider
that if the delayed event is effectively the last one ever it would remain
undelivered forever; this is particularly worrying in a scenario in which such
last event is particularly important: imagine a system shutdown where a last
system-power-off remains undelivered.

As a consequence I think this rare racy condition should be addressed somehow.

Looking at this scenario, it seems the classic situation in which you want to
use some sort of completion to avoid missing out on events delivery, BUT in our
usecase:

- placing the workers loaned from cmwq into an unbounded wait_for_completion()
  once the queue is empty seems not the best to use resources (and probably
  frowned upon)....using a few dedicated kernel threads to simply let them idle
  waiting most of the time seems equally frowned upon (I could be wrong...))
- the needed complete() in the ISR would introduce a spinlock_irqsave into the
  interrupt path (there's already one inside queue_work in fact) so it is not
  desirable, at least not if used on a regular base (for each event notified)

So I was thinking to try to reduce sensibly the above race window, more
than eliminate it completely, by adding an early flag to be checked under
specific conditions in order to retry the queue_work a few times when the race
is hit, something like:

ISR (core N)		|	WQ (core N+1)
-------------------------------------------------------------------------------
			| atomic_set(&exiting, 0);
			|
			| do {
			|	...
			| 	if (queue_is_empty)		- WORK_PENDING		0 events queued
			+          atomic_set(&exiting, 1)	- WORK_PENDING		0 events queued
static int cnt=3	|          --> breakout of while	- WORK_PENDING		0 events queued
kfifo_in()		|	....
			| } while (scmi_process_event_payload);
kfifo_in()		|
exiting = atomic_read()	|     ...cmwq backing out		- WORK_PENDING		1 events queued
do {			|     ...cmwq backing out		- WORK_PENDING		1 events queued
    ret = queue_work() 	|     ...cmwq backing out		- WORK_PENDING		1 events queued
    if (ret || !exiting)|     ...cmwq backing out		- WORK_PENDING		1 events queued
	break;		|     ...cmwq backing out		- WORK_PENDING		1 events queued
    mdelay(5);		|     ...cmwq backing out		- WORK_PENDING		1 events queued
    exiting =		|     ...cmwq backing out		- WORK_PENDING		1 events queued
      atomic_read;	|     ...cmwq backing out		- WORK_PENDING		1 events queued
} while (--cnt);	|     ...cmwq backing out		- WORK_PENDING		1 events queued
			| ---- WORKER EXIT 			- !WORK_PENDING		0 events queued

like down below between the scissors.

Not tested or tried....I could be missing something...and the mdelay is horrible (and not
the cleanest thing you've ever seen probably :D)...I'll have a chat with Sudeep too.

> > 
> > Now probably I wrote too much of an explanation and confuse stuff more ... :D
> 
> No, thank you for the detailed explanation. I will continue my review.
> 

Thanks

Regards

Cristian



-->8-----------------
diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
index 9eb6b8b71bac..8719e077358c 100644
--- a/drivers/firmware/arm_scmi/notify.c
+++ b/drivers/firmware/arm_scmi/notify.c
@@ -223,6 +223,7 @@ struct scmi_notify_instance {
  */
 struct events_queue {
 	size_t				sz;
+	atomic_t			exiting;
 	struct kfifo			kfifo;
 	struct work_struct		notify_work;
 	struct workqueue_struct		*wq;
@@ -406,11 +407,16 @@ scmi_process_event_header(struct events_queue *eq,
 
 	outs = kfifo_out(&eq->kfifo, pd->eh,
 			 sizeof(struct scmi_event_header));
-	if (!outs)
+	if (!outs) {
+		atomic_set(&eq->exiting, 1);
+		smp_mb__after_atomic();
 		return NULL;
+	}
 	if (outs != sizeof(struct scmi_event_header)) {
 		pr_err("SCMI Notifications: corrupted EVT header. Flush.\n");
 		kfifo_reset_out(&eq->kfifo);
+		atomic_set(&eq->exiting, 1);
+		smp_mb__after_atomic();
 		return NULL;
 	}
 
@@ -446,6 +452,8 @@ scmi_process_event_payload(struct events_queue *eq,
 	outs = kfifo_out(&eq->kfifo, pd->eh->payld, pd->eh->payld_sz);
 	if (unlikely(!outs)) {
 		pr_warn("--- EMPTY !!!!\n");
+		atomic_set(&eq->exiting, 1);
+		smp_mb__after_atomic();
 		return false;
 	}
 
@@ -455,6 +463,8 @@ scmi_process_event_payload(struct events_queue *eq,
 	if (unlikely(outs != pd->eh->payld_sz)) {
 		pr_err("SCMI Notifications: corrupted EVT Payload. Flush.\n");
 		kfifo_reset_out(&eq->kfifo);
+		atomic_set(&eq->exiting, 1);
+		smp_mb__after_atomic();
 		return false;
 	}
 
@@ -526,6 +536,8 @@ static void scmi_events_dispatcher(struct work_struct *work)
 	mdelay(200);
 
 	eq = container_of(work, struct events_queue, notify_work);
+	atomic_set(&eq->exiting, 0);
+	smp_mb__after_atomic();
 	pd = container_of(eq, struct scmi_registered_protocol_events_desc,
 			  equeue);
 	/*
@@ -579,6 +591,8 @@ static void scmi_events_dispatcher(struct work_struct *work)
 int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
 		const void *buf, size_t len, u64 ts)
 {
+	bool exiting;
+	static u8 cnt = 3;
 	struct scmi_registered_event *r_evt;
 	struct scmi_event_header eh;
 	struct scmi_notify_instance *ni = handle->notify_priv;
@@ -616,8 +630,20 @@ int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
 	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
 	mdelay(30);
 	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
-	queue_work(r_evt->proto->equeue.wq,
-		   &r_evt->proto->equeue.notify_work);
+
+	smp_mb__before_atomic();
+	exiting = atomic_read(&r_evt->proto->equeue.exiting);
+	do {
+		bool ret;
+
+		ret = queue_work(r_evt->proto->equeue.wq,
+				 &r_evt->proto->equeue.notify_work);
+		if (likely(ret || !exiting))
+			break;
+		mdelay(5);
+		smp_mb__before_atomic();
+		exiting = atomic_read(&r_evt->proto->equeue.exiting);
+	} while (--cnt);
 
 	return 0;
 }
@@ -655,6 +681,7 @@ static int scmi_initialize_events_queue(struct scmi_notify_instance *ni,
 				       &equeue->kfifo);
 	if (ret)
 		return ret;
+	atomic_set(&equeue->exiting, 0);
 
 	INIT_WORK(&equeue->notify_work, scmi_events_dispatcher);
 	equeue->wq = ni->notify_wq;
--<8-----------------------

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
  2020-03-16 14:46           ` Cristian Marussi
@ 2020-03-18  8:26             ` Lukasz Luba
  -1 siblings, 0 replies; 70+ messages in thread
From: Lukasz Luba @ 2020-03-18  8:26 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel, sudeep.holla, james.quinlan,
	Jonathan.Cameron

Hi Cristian,

On 3/16/20 2:46 PM, Cristian Marussi wrote:
> On Thu, Mar 12, 2020 at 09:43:31PM +0000, Lukasz Luba wrote:
>>
>>
>> On 3/12/20 6:34 PM, Cristian Marussi wrote:
>>> On 12/03/2020 13:51, Lukasz Luba wrote:
>>>> Hi Cristian,
>>>>
> Hi Lukasz
> 
>>>> just one comment below...
> [snip]
>>>>> +	eh.timestamp = ts;
>>>>> +	eh.evt_id = evt_id;
>>>>> +	eh.payld_sz = len;
>>>>> +	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
>>>>> +	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
>>>>> +	queue_work(r_evt->proto->equeue.wq,
>>>>> +		   &r_evt->proto->equeue.notify_work);
>>>>
>>>> Is it safe to ignore the return value from the queue_work here?
>>>>
>>>
>>> In fact yes, we do not want to care: it returns true or false depending on the
>>> fact that the specific work was or not already queued, and we just rely on
>>> this behavior to keep kicking the worker only when needed but never kick
>>> more than one instance of it per-queue (so that there's only one reader
>>> wq and one writer here in the scmi_notify)...explaining better:
>>>
>>> 1. we push an event (hdr+payld) to the protocol queue if we found that there was
>>> enough space on the queue
>>>
>>> 2a. if at the time of the kfifo_in( ) the worker was already running
>>> (queue not empty) it will process our new event sooner or later and here
>>> the queue_work will return false, but we do not care in fact ... we
>>> tried to kick it just in case
>>>
>>> 2b. if instead at the time of the kfifo_in() the queue was empty the worker would
>>> have probably already gone to the sleep and this queue_work() will return true and
>>> so this time it will effectively wake up the worker to process our items
>>>
>>> The important thing here is that we are sure to wakeup the worker when needed
>>> but we are equally sure we are never causing the scheduling of more than one worker
>>> thread consuming from the same queue (because that would break the one reader/one writer
>>> assumption which let us use the fifo in a lockless manner): this is possible because
>>> queue_work checks if the required work item is already pending and in such a case backs
>>> out returning false and we have one work_item (notify_work) defined per-protocol and
>>> so per-queue.
>>
>> I see. That's a good assumption: one work_item per protocol and simplify
>> the locking. What if there would be an edge case scenario when the
>> consumer (work_item) has handled the last item (there was NULL from
>> scmi_process_event_header()), while in meantime scmi_notify put into
>> the fifo new event but couldn't kick the queue_work. Would it stay there
>> till the next IRQ which triggers queue_work to consume two events (one
>> potentially a bit old)? Or we can ignore such race situation assuming
>> that cleaning of work item is instant and kfifo_in is slow?
>>
> 
> In fact, this is a very good point, since between the moment the worker
> determines that the queue is empty and the moment in which the worker
> effectively exits (and it's marked as no more pending by the Kernel cmwq)
> there is a window of opportunity for a race in which the ISR could fill
> the queue with one more event and then fail to kick with queue_work() since
> the work is in fact still nominally marked as pending from the point of view
> of Kernel cmwq, as below:
> 
> ISR (core N)		|	WQ (core N+1)		cmwq flags	       	queued events
> ------------------------------------------------------------------------------------------------
> 			| if (queue_is_empty)		- WORK_PENDING		0 events queued
> 			+     ...			- WORK_PENDING		0 events queued
> 			+ } while (scmi_process_event_payload);
> 			+}// worker function exit
> kfifo_in()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
> kfifo_in()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
> queue_work()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
>    -> FALSE (pending)	+     ...cmwq backing out	- WORK_PENDING		1 events queued
> 			+     ...cmwq backing out	- WORK_PENDING		1 events queued
> 			+     ...cmwq backing out	- WORK_PENDING		1 events queued
> 			| ---- WORKER THREAD EXIT	- !WORK_PENDING		1 events queued
> 			| 		 		- !WORK_PENDING		1 events queued
> kfifo_in()		|     				- !WORK_PENDING		2 events queued
> kfifo_in()		|  				- !WORK_PENDING		2 events queued
> queue_work()		|     				- !WORK_PENDING		2 events queued
>     -> TRUE		| --- WORKER ENTER		- WORK_PENDING		2 events queued
> 			|  				- WORK_PENDING		2 events consumed
> 		
> where effectively the last event queued won't be consumed till the next
> iteration once another event is queued.
> 
> Given the fact that the ISR and the dedicated WQ on an SMP run effectively
> in parallel I do not think unfortunately that we can simply count on the fact
> the worker exit is faster than the kifos_in, enough to close the race window
> opportunity. (even if rare)
> 
> On the other side considering the impact of such scenario, I can imagine that
> it's not simply that we could only have a delayed delivery, but we must consider
> that if the delayed event is effectively the last one ever it would remain
> undelivered forever; this is particularly worrying in a scenario in which such
> last event is particularly important: imagine a system shutdown where a last
> system-power-off remains undelivered.

Agree, another example could be a thermal notification for some critical
trip point.

> 
> As a consequence I think this rare racy condition should be addressed somehow.
> 
> Looking at this scenario, it seems the classic situation in which you want to
> use some sort of completion to avoid missing out on events delivery, BUT in our
> usecase:
> 
> - placing the workers loaned from cmwq into an unbounded wait_for_completion()
>    once the queue is empty seems not the best to use resources (and probably
>    frowned upon)....using a few dedicated kernel threads to simply let them idle
>    waiting most of the time seems equally frowned upon (I could be wrong...))
> - the needed complete() in the ISR would introduce a spinlock_irqsave into the
>    interrupt path (there's already one inside queue_work in fact) so it is not
>    desirable, at least not if used on a regular base (for each event notified)
> 
> So I was thinking to try to reduce sensibly the above race window, more
> than eliminate it completely, by adding an early flag to be checked under
> specific conditions in order to retry the queue_work a few times when the race
> is hit, something like:
> 
> ISR (core N)		|	WQ (core N+1)
> -------------------------------------------------------------------------------
> 			| atomic_set(&exiting, 0);
> 			|
> 			| do {
> 			|	...
> 			| 	if (queue_is_empty)		- WORK_PENDING		0 events queued
> 			+          atomic_set(&exiting, 1)	- WORK_PENDING		0 events queued
> static int cnt=3	|          --> breakout of while	- WORK_PENDING		0 events queued
> kfifo_in()		|	....
> 			| } while (scmi_process_event_payload);
> kfifo_in()		|
> exiting = atomic_read()	|     ...cmwq backing out		- WORK_PENDING		1 events queued
> do {			|     ...cmwq backing out		- WORK_PENDING		1 events queued
>      ret = queue_work() 	|     ...cmwq backing out		- WORK_PENDING		1 events queued
>      if (ret || !exiting)|     ...cmwq backing out		- WORK_PENDING		1 events queued
> 	break;		|     ...cmwq backing out		- WORK_PENDING		1 events queued
>      mdelay(5);		|     ...cmwq backing out		- WORK_PENDING		1 events queued
>      exiting =		|     ...cmwq backing out		- WORK_PENDING		1 events queued
>        atomic_read;	|     ...cmwq backing out		- WORK_PENDING		1 events queued
> } while (--cnt);	|     ...cmwq backing out		- WORK_PENDING		1 events queued
> 			| ---- WORKER EXIT 			- !WORK_PENDING		0 events queued
> 
> like down below between the scissors.
> 
> Not tested or tried....I could be missing something...and the mdelay is horrible (and not
> the cleanest thing you've ever seen probably :D)...I'll have a chat with Sudeep too.

Indeed it looks more complicated. If you like I can join your offline
discuss when Sudeep is back.

Regards,
Lukasz

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
@ 2020-03-18  8:26             ` Lukasz Luba
  0 siblings, 0 replies; 70+ messages in thread
From: Lukasz Luba @ 2020-03-18  8:26 UTC (permalink / raw)
  To: linux-kernel, linux-arm-kernel, sudeep.holla, james.quinlan,
	Jonathan.Cameron

Hi Cristian,

On 3/16/20 2:46 PM, Cristian Marussi wrote:
> On Thu, Mar 12, 2020 at 09:43:31PM +0000, Lukasz Luba wrote:
>>
>>
>> On 3/12/20 6:34 PM, Cristian Marussi wrote:
>>> On 12/03/2020 13:51, Lukasz Luba wrote:
>>>> Hi Cristian,
>>>>
> Hi Lukasz
> 
>>>> just one comment below...
> [snip]
>>>>> +	eh.timestamp = ts;
>>>>> +	eh.evt_id = evt_id;
>>>>> +	eh.payld_sz = len;
>>>>> +	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
>>>>> +	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
>>>>> +	queue_work(r_evt->proto->equeue.wq,
>>>>> +		   &r_evt->proto->equeue.notify_work);
>>>>
>>>> Is it safe to ignore the return value from the queue_work here?
>>>>
>>>
>>> In fact yes, we do not want to care: it returns true or false depending on the
>>> fact that the specific work was or not already queued, and we just rely on
>>> this behavior to keep kicking the worker only when needed but never kick
>>> more than one instance of it per-queue (so that there's only one reader
>>> wq and one writer here in the scmi_notify)...explaining better:
>>>
>>> 1. we push an event (hdr+payld) to the protocol queue if we found that there was
>>> enough space on the queue
>>>
>>> 2a. if at the time of the kfifo_in( ) the worker was already running
>>> (queue not empty) it will process our new event sooner or later and here
>>> the queue_work will return false, but we do not care in fact ... we
>>> tried to kick it just in case
>>>
>>> 2b. if instead at the time of the kfifo_in() the queue was empty the worker would
>>> have probably already gone to the sleep and this queue_work() will return true and
>>> so this time it will effectively wake up the worker to process our items
>>>
>>> The important thing here is that we are sure to wakeup the worker when needed
>>> but we are equally sure we are never causing the scheduling of more than one worker
>>> thread consuming from the same queue (because that would break the one reader/one writer
>>> assumption which let us use the fifo in a lockless manner): this is possible because
>>> queue_work checks if the required work item is already pending and in such a case backs
>>> out returning false and we have one work_item (notify_work) defined per-protocol and
>>> so per-queue.
>>
>> I see. That's a good assumption: one work_item per protocol and simplify
>> the locking. What if there would be an edge case scenario when the
>> consumer (work_item) has handled the last item (there was NULL from
>> scmi_process_event_header()), while in meantime scmi_notify put into
>> the fifo new event but couldn't kick the queue_work. Would it stay there
>> till the next IRQ which triggers queue_work to consume two events (one
>> potentially a bit old)? Or we can ignore such race situation assuming
>> that cleaning of work item is instant and kfifo_in is slow?
>>
> 
> In fact, this is a very good point, since between the moment the worker
> determines that the queue is empty and the moment in which the worker
> effectively exits (and it's marked as no more pending by the Kernel cmwq)
> there is a window of opportunity for a race in which the ISR could fill
> the queue with one more event and then fail to kick with queue_work() since
> the work is in fact still nominally marked as pending from the point of view
> of Kernel cmwq, as below:
> 
> ISR (core N)		|	WQ (core N+1)		cmwq flags	       	queued events
> ------------------------------------------------------------------------------------------------
> 			| if (queue_is_empty)		- WORK_PENDING		0 events queued
> 			+     ...			- WORK_PENDING		0 events queued
> 			+ } while (scmi_process_event_payload);
> 			+}// worker function exit
> kfifo_in()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
> kfifo_in()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
> queue_work()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
>    -> FALSE (pending)	+     ...cmwq backing out	- WORK_PENDING		1 events queued
> 			+     ...cmwq backing out	- WORK_PENDING		1 events queued
> 			+     ...cmwq backing out	- WORK_PENDING		1 events queued
> 			| ---- WORKER THREAD EXIT	- !WORK_PENDING		1 events queued
> 			| 		 		- !WORK_PENDING		1 events queued
> kfifo_in()		|     				- !WORK_PENDING		2 events queued
> kfifo_in()		|  				- !WORK_PENDING		2 events queued
> queue_work()		|     				- !WORK_PENDING		2 events queued
>     -> TRUE		| --- WORKER ENTER		- WORK_PENDING		2 events queued
> 			|  				- WORK_PENDING		2 events consumed
> 		
> where effectively the last event queued won't be consumed till the next
> iteration once another event is queued.
> 
> Given the fact that the ISR and the dedicated WQ on an SMP run effectively
> in parallel I do not think unfortunately that we can simply count on the fact
> the worker exit is faster than the kifos_in, enough to close the race window
> opportunity. (even if rare)
> 
> On the other side considering the impact of such scenario, I can imagine that
> it's not simply that we could only have a delayed delivery, but we must consider
> that if the delayed event is effectively the last one ever it would remain
> undelivered forever; this is particularly worrying in a scenario in which such
> last event is particularly important: imagine a system shutdown where a last
> system-power-off remains undelivered.

Agree, another example could be a thermal notification for some critical
trip point.

> 
> As a consequence I think this rare racy condition should be addressed somehow.
> 
> Looking at this scenario, it seems the classic situation in which you want to
> use some sort of completion to avoid missing out on events delivery, BUT in our
> usecase:
> 
> - placing the workers loaned from cmwq into an unbounded wait_for_completion()
>    once the queue is empty seems not the best to use resources (and probably
>    frowned upon)....using a few dedicated kernel threads to simply let them idle
>    waiting most of the time seems equally frowned upon (I could be wrong...))
> - the needed complete() in the ISR would introduce a spinlock_irqsave into the
>    interrupt path (there's already one inside queue_work in fact) so it is not
>    desirable, at least not if used on a regular base (for each event notified)
> 
> So I was thinking to try to reduce sensibly the above race window, more
> than eliminate it completely, by adding an early flag to be checked under
> specific conditions in order to retry the queue_work a few times when the race
> is hit, something like:
> 
> ISR (core N)		|	WQ (core N+1)
> -------------------------------------------------------------------------------
> 			| atomic_set(&exiting, 0);
> 			|
> 			| do {
> 			|	...
> 			| 	if (queue_is_empty)		- WORK_PENDING		0 events queued
> 			+          atomic_set(&exiting, 1)	- WORK_PENDING		0 events queued
> static int cnt=3	|          --> breakout of while	- WORK_PENDING		0 events queued
> kfifo_in()		|	....
> 			| } while (scmi_process_event_payload);
> kfifo_in()		|
> exiting = atomic_read()	|     ...cmwq backing out		- WORK_PENDING		1 events queued
> do {			|     ...cmwq backing out		- WORK_PENDING		1 events queued
>      ret = queue_work() 	|     ...cmwq backing out		- WORK_PENDING		1 events queued
>      if (ret || !exiting)|     ...cmwq backing out		- WORK_PENDING		1 events queued
> 	break;		|     ...cmwq backing out		- WORK_PENDING		1 events queued
>      mdelay(5);		|     ...cmwq backing out		- WORK_PENDING		1 events queued
>      exiting =		|     ...cmwq backing out		- WORK_PENDING		1 events queued
>        atomic_read;	|     ...cmwq backing out		- WORK_PENDING		1 events queued
> } while (--cnt);	|     ...cmwq backing out		- WORK_PENDING		1 events queued
> 			| ---- WORKER EXIT 			- !WORK_PENDING		0 events queued
> 
> like down below between the scissors.
> 
> Not tested or tried....I could be missing something...and the mdelay is horrible (and not
> the cleanest thing you've ever seen probably :D)...I'll have a chat with Sudeep too.

Indeed it looks more complicated. If you like I can join your offline
discuss when Sudeep is back.

Regards,
Lukasz

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
  2020-03-18  8:26             ` Lukasz Luba
@ 2020-03-23  8:28               ` Cristian Marussi
  -1 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-23  8:28 UTC (permalink / raw)
  To: Lukasz Luba, linux-kernel, linux-arm-kernel, sudeep.holla,
	james.quinlan, Jonathan.Cameron

Hi

On 3/18/20 8:26 AM, Lukasz Luba wrote:
> Hi Cristian,
> 
> On 3/16/20 2:46 PM, Cristian Marussi wrote:
>> On Thu, Mar 12, 2020 at 09:43:31PM +0000, Lukasz Luba wrote:
>>>
>>>
>>> On 3/12/20 6:34 PM, Cristian Marussi wrote:
>>>> On 12/03/2020 13:51, Lukasz Luba wrote:
>>>>> Hi Cristian,
>>>>>
>> Hi Lukasz
>>
>>>>> just one comment below...
>> [snip]
>>>>>> +    eh.timestamp = ts;
>>>>>> +    eh.evt_id = evt_id;
>>>>>> +    eh.payld_sz = len;
>>>>>> +    kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
>>>>>> +    kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
>>>>>> +    queue_work(r_evt->proto->equeue.wq,
>>>>>> +           &r_evt->proto->equeue.notify_work);
>>>>>
>>>>> Is it safe to ignore the return value from the queue_work here?
>>>>>
>>>>
[snip]

>> On the other side considering the impact of such scenario, I can imagine that
>> it's not simply that we could only have a delayed delivery, but we must consider
>> that if the delayed event is effectively the last one ever it would remain
>> undelivered forever; this is particularly worrying in a scenario in which such
>> last event is particularly important: imagine a system shutdown where a last
>> system-power-off remains undelivered.
> 
> Agree, another example could be a thermal notification for some critical
> trip point.
> 
>>
>> As a consequence I think this rare racy condition should be addressed somehow.
>>
>> Looking at this scenario, it seems the classic situation in which you want to
>> use some sort of completion to avoid missing out on events delivery, BUT in our
>> usecase:
>>
>> - placing the workers loaned from cmwq into an unbounded wait_for_completion()
>>    once the queue is empty seems not the best to use resources (and probably
>>    frowned upon)....using a few dedicated kernel threads to simply let them idle
>>    waiting most of the time seems equally frowned upon (I could be wrong...))
>> - the needed complete() in the ISR would introduce a spinlock_irqsave into the
>>    interrupt path (there's already one inside queue_work in fact) so it is not
>>    desirable, at least not if used on a regular base (for each event notified)
>>
>> So I was thinking to try to reduce sensibly the above race window, more
>> than eliminate it completely, by adding an early flag to be checked under
>> specific conditions in order to retry the queue_work a few times when the race
>> is hit, something like:
>>
>> ISR (core N)        |    WQ (core N+1)
>> -------------------------------------------------------------------------------
>>             | atomic_set(&exiting, 0);
>>             |
>>             | do {
>>             |    ...
>>             |     if (queue_is_empty)        - WORK_PENDING        0 events queued
>>             +          atomic_set(&exiting, 1)    - WORK_PENDING        0 events queued
>> static int cnt=3    |          --> breakout of while    - WORK_PENDING        0 events queued
>> kfifo_in()        |    ....
>>             | } while (scmi_process_event_payload);
>> kfifo_in()        |
>> exiting = atomic_read()    |     ...cmwq backing out        - WORK_PENDING        1 events queued
>> do {            |     ...cmwq backing out        - WORK_PENDING        1 events queued
>>      ret = queue_work()     |     ...cmwq backing out        - WORK_PENDING        1 events queued
>>      if (ret || !exiting)|     ...cmwq backing out        - WORK_PENDING        1 events queued
>>     break;        |     ...cmwq backing out        - WORK_PENDING        1 events queued
>>      mdelay(5);        |     ...cmwq backing out        - WORK_PENDING        1 events queued
>>      exiting =        |     ...cmwq backing out        - WORK_PENDING        1 events queued
>>        atomic_read;    |     ...cmwq backing out        - WORK_PENDING        1 events queued
>> } while (--cnt);    |     ...cmwq backing out        - WORK_PENDING        1 events queued
>>             | ---- WORKER EXIT             - !WORK_PENDING        0 events queued
>>
>> like down below between the scissors.
>>
>> Not tested or tried....I could be missing something...and the mdelay is horrible (and not
>> the cleanest thing you've ever seen probably :D)...I'll have a chat with Sudeep too.
> 
> Indeed it looks more complicated. If you like I can join your offline
> discuss when Sudeep is back.
> 
Yes this is as of now my main remaining issue to address for v6.
I'll wait for Sudeep general review/feedback and raise this point.

Regards

Cristian

> Regards,
> Lukasz

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
@ 2020-03-23  8:28               ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-03-23  8:28 UTC (permalink / raw)
  To: Lukasz Luba, linux-kernel, linux-arm-kernel, sudeep.holla,
	james.quinlan, Jonathan.Cameron

Hi

On 3/18/20 8:26 AM, Lukasz Luba wrote:
> Hi Cristian,
> 
> On 3/16/20 2:46 PM, Cristian Marussi wrote:
>> On Thu, Mar 12, 2020 at 09:43:31PM +0000, Lukasz Luba wrote:
>>>
>>>
>>> On 3/12/20 6:34 PM, Cristian Marussi wrote:
>>>> On 12/03/2020 13:51, Lukasz Luba wrote:
>>>>> Hi Cristian,
>>>>>
>> Hi Lukasz
>>
>>>>> just one comment below...
>> [snip]
>>>>>> +    eh.timestamp = ts;
>>>>>> +    eh.evt_id = evt_id;
>>>>>> +    eh.payld_sz = len;
>>>>>> +    kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
>>>>>> +    kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
>>>>>> +    queue_work(r_evt->proto->equeue.wq,
>>>>>> +           &r_evt->proto->equeue.notify_work);
>>>>>
>>>>> Is it safe to ignore the return value from the queue_work here?
>>>>>
>>>>
[snip]

>> On the other side considering the impact of such scenario, I can imagine that
>> it's not simply that we could only have a delayed delivery, but we must consider
>> that if the delayed event is effectively the last one ever it would remain
>> undelivered forever; this is particularly worrying in a scenario in which such
>> last event is particularly important: imagine a system shutdown where a last
>> system-power-off remains undelivered.
> 
> Agree, another example could be a thermal notification for some critical
> trip point.
> 
>>
>> As a consequence I think this rare racy condition should be addressed somehow.
>>
>> Looking at this scenario, it seems the classic situation in which you want to
>> use some sort of completion to avoid missing out on events delivery, BUT in our
>> usecase:
>>
>> - placing the workers loaned from cmwq into an unbounded wait_for_completion()
>>    once the queue is empty seems not the best to use resources (and probably
>>    frowned upon)....using a few dedicated kernel threads to simply let them idle
>>    waiting most of the time seems equally frowned upon (I could be wrong...))
>> - the needed complete() in the ISR would introduce a spinlock_irqsave into the
>>    interrupt path (there's already one inside queue_work in fact) so it is not
>>    desirable, at least not if used on a regular base (for each event notified)
>>
>> So I was thinking to try to reduce sensibly the above race window, more
>> than eliminate it completely, by adding an early flag to be checked under
>> specific conditions in order to retry the queue_work a few times when the race
>> is hit, something like:
>>
>> ISR (core N)        |    WQ (core N+1)
>> -------------------------------------------------------------------------------
>>             | atomic_set(&exiting, 0);
>>             |
>>             | do {
>>             |    ...
>>             |     if (queue_is_empty)        - WORK_PENDING        0 events queued
>>             +          atomic_set(&exiting, 1)    - WORK_PENDING        0 events queued
>> static int cnt=3    |          --> breakout of while    - WORK_PENDING        0 events queued
>> kfifo_in()        |    ....
>>             | } while (scmi_process_event_payload);
>> kfifo_in()        |
>> exiting = atomic_read()    |     ...cmwq backing out        - WORK_PENDING        1 events queued
>> do {            |     ...cmwq backing out        - WORK_PENDING        1 events queued
>>      ret = queue_work()     |     ...cmwq backing out        - WORK_PENDING        1 events queued
>>      if (ret || !exiting)|     ...cmwq backing out        - WORK_PENDING        1 events queued
>>     break;        |     ...cmwq backing out        - WORK_PENDING        1 events queued
>>      mdelay(5);        |     ...cmwq backing out        - WORK_PENDING        1 events queued
>>      exiting =        |     ...cmwq backing out        - WORK_PENDING        1 events queued
>>        atomic_read;    |     ...cmwq backing out        - WORK_PENDING        1 events queued
>> } while (--cnt);    |     ...cmwq backing out        - WORK_PENDING        1 events queued
>>             | ---- WORKER EXIT             - !WORK_PENDING        0 events queued
>>
>> like down below between the scissors.
>>
>> Not tested or tried....I could be missing something...and the mdelay is horrible (and not
>> the cleanest thing you've ever seen probably :D)...I'll have a chat with Sudeep too.
> 
> Indeed it looks more complicated. If you like I can join your offline
> discuss when Sudeep is back.
> 
Yes this is as of now my main remaining issue to address for v6.
I'll wait for Sudeep general review/feedback and raise this point.

Regards

Cristian

> Regards,
> Lukasz

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
  2020-03-16 14:46           ` Cristian Marussi
@ 2020-05-20  7:09             ` Cristian Marussi
  -1 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-05-20  7:09 UTC (permalink / raw)
  To: Lukasz Luba, linux-kernel, linux-arm-kernel
  Cc: cristian.marussi, sudeep.holla

On Mon, Mar 16, 2020 at 02:46:05PM +0000, Cristian Marussi wrote:
> On Thu, Mar 12, 2020 at 09:43:31PM +0000, Lukasz Luba wrote:
> > 
> > 

Hi Lukasz,

I went back looking deeper into the possible race issue you pointed out a
while ago understanding it a bit better down below.

> > On 3/12/20 6:34 PM, Cristian Marussi wrote:
> > > On 12/03/2020 13:51, Lukasz Luba wrote:
> > > > Hi Cristian,
> > > > 
> Hi Lukasz
> 
> > > > just one comment below...
> [snip]
> > > > > +	eh.timestamp = ts;
> > > > > +	eh.evt_id = evt_id;
> > > > > +	eh.payld_sz = len;
> > > > > +	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
> > > > > +	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
> > > > > +	queue_work(r_evt->proto->equeue.wq,
> > > > > +		   &r_evt->proto->equeue.notify_work);
> > > > 
> > > > Is it safe to ignore the return value from the queue_work here?
> > > > 
> > > 
> > > In fact yes, we do not want to care: it returns true or false depending on the
> > > fact that the specific work was or not already queued, and we just rely on
> > > this behavior to keep kicking the worker only when needed but never kick
> > > more than one instance of it per-queue (so that there's only one reader
> > > wq and one writer here in the scmi_notify)...explaining better:
> > > 
> > > 1. we push an event (hdr+payld) to the protocol queue if we found that there was
> > > enough space on the queue
> > > 
> > > 2a. if at the time of the kfifo_in( ) the worker was already running
> > > (queue not empty) it will process our new event sooner or later and here
> > > the queue_work will return false, but we do not care in fact ... we
> > > tried to kick it just in case
> > > 
> > > 2b. if instead at the time of the kfifo_in() the queue was empty the worker would
> > > have probably already gone to the sleep and this queue_work() will return true and
> > > so this time it will effectively wake up the worker to process our items
> > > 
> > > The important thing here is that we are sure to wakeup the worker when needed
> > > but we are equally sure we are never causing the scheduling of more than one worker
> > > thread consuming from the same queue (because that would break the one reader/one writer
> > > assumption which let us use the fifo in a lockless manner): this is possible because
> > > queue_work checks if the required work item is already pending and in such a case backs
> > > out returning false and we have one work_item (notify_work) defined per-protocol and
> > > so per-queue.
> > 
> > I see. That's a good assumption: one work_item per protocol and simplify
> > the locking. What if there would be an edge case scenario when the
> > consumer (work_item) has handled the last item (there was NULL from
> > scmi_process_event_header()), while in meantime scmi_notify put into
> > the fifo new event but couldn't kick the queue_work. Would it stay there
> > till the next IRQ which triggers queue_work to consume two events (one
> > potentially a bit old)? Or we can ignore such race situation assuming
> > that cleaning of work item is instant and kfifo_in is slow?
> > 
> 
> In fact, this is a very good point, since between the moment the worker
> determines that the queue is empty and the moment in which the worker
> effectively exits (and it's marked as no more pending by the Kernel cmwq)
> there is a window of opportunity for a race in which the ISR could fill
> the queue with one more event and then fail to kick with queue_work() since
> the work is in fact still nominally marked as pending from the point of view
> of Kernel cmwq, as below:
> 
> ISR (core N)		|	WQ (core N+1)		cmwq flags	       	queued events
> ------------------------------------------------------------------------------------------------
> 			| if (queue_is_empty)		- WORK_PENDING		0 events queued
> 			+     ...			- WORK_PENDING		0 events queued
> 			+ } while (scmi_process_event_payload);
> 			+}// worker function exit 
> kfifo_in()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
> kfifo_in()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
> queue_work()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
>   -> FALSE (pending)	+     ...cmwq backing out	- WORK_PENDING		1 events queued
> 			+     ...cmwq backing out	- WORK_PENDING		1 events queued
> 			+     ...cmwq backing out	- WORK_PENDING		1 events queued
> 			| ---- WORKER THREAD EXIT	- !WORK_PENDING		1 events queued
> 			| 		 		- !WORK_PENDING		1 events queued
> kfifo_in()		|     				- !WORK_PENDING		2 events queued
> kfifo_in()		|  				- !WORK_PENDING		2 events queued
> queue_work()		|     				- !WORK_PENDING		2 events queued
>    -> TRUE		| --- WORKER ENTER		- WORK_PENDING		2 events queued
> 			|  				- WORK_PENDING		2 events consumed
> 		
> where effectively the last event queued won't be consumed till the next
> iteration once another event is queued.
> 

In summary, looking better at Kernel cmwq code, my explanation above about
how the possible race could be exposed by a particular tricky limit condition
and the values assumed by the WORK_STRUCT_PENDING_BIT was ... bullshit :D

In fact there's no race at all because Kernel cmwq takes care to clear the above
PENDING flag BEFORE the user-provided worker-function starts to finally run:
such flag is active only when a work instance is queued pending for execution
but it is cleared just before execution effctively starts.

kernel/workqueue.c:process_one_work()

	set_work_pool_and_clear_pending(work, pool->id);
	....
	worker->current_func(work);

As a consequence in the racy scenario above where the ISR pushes events on the
queues after the worker has already determined the queue to be empty but while
the worker func is still being deactivated in terms of Kernel cmwq internal
handling, it is not a problem since the worker while running is already NO more
marked pending so the queue_work succeeds and a new work will simply be queued
and run once the current instance terminates fully and it is removed from pool.

On the other side in the normal non racy scenario, when the worker is processing
normally a non-empty queue, we'll end-up anyway queueing new items and a new work
from the ISR even if the currently executing one will in fact consume already
naturally the queued items: this will result (it's what I observe in fact) in a
final un-needed quick worker activation/deactivation processing zero items (empty
queue) which is in fact harmless.

Basically the racy condition is taken care by the Kernel cmwq itself, and in fact
there is an extensive explanation also of the barriers employed to properly
realize this in the comments around set_work_pool_and_clear_pending()

I'll add a comment in v8 just to note this behaviour.

Thanks

Cristian

> Given the fact that the ISR and the dedicated WQ on an SMP run effectively
> in parallel I do not think unfortunately that we can simply count on the fact
> the worker exit is faster than the kifos_in, enough to close the race window
> opportunity. (even if rare)
> 
> On the other side considering the impact of such scenario, I can imagine that
> it's not simply that we could only have a delayed delivery, but we must consider
> that if the delayed event is effectively the last one ever it would remain
> undelivered forever; this is particularly worrying in a scenario in which such
> last event is particularly important: imagine a system shutdown where a last
> system-power-off remains undelivered.
> 
> As a consequence I think this rare racy condition should be addressed somehow.
> 
> Looking at this scenario, it seems the classic situation in which you want to
> use some sort of completion to avoid missing out on events delivery, BUT in our
> usecase:
> 
> - placing the workers loaned from cmwq into an unbounded wait_for_completion()
>   once the queue is empty seems not the best to use resources (and probably
>   frowned upon)....using a few dedicated kernel threads to simply let them idle
>   waiting most of the time seems equally frowned upon (I could be wrong...))
> - the needed complete() in the ISR would introduce a spinlock_irqsave into the
>   interrupt path (there's already one inside queue_work in fact) so it is not
>   desirable, at least not if used on a regular base (for each event notified)
> 
> So I was thinking to try to reduce sensibly the above race window, more
> than eliminate it completely, by adding an early flag to be checked under
> specific conditions in order to retry the queue_work a few times when the race
> is hit, something like:
> 
> ISR (core N)		|	WQ (core N+1)
> -------------------------------------------------------------------------------
> 			| atomic_set(&exiting, 0);
> 			|
> 			| do {
> 			|	...
> 			| 	if (queue_is_empty)		- WORK_PENDING		0 events queued
> 			+          atomic_set(&exiting, 1)	- WORK_PENDING		0 events queued
> static int cnt=3	|          --> breakout of while	- WORK_PENDING		0 events queued
> kfifo_in()		|	....
> 			| } while (scmi_process_event_payload);
> kfifo_in()		|
> exiting = atomic_read()	|     ...cmwq backing out		- WORK_PENDING		1 events queued
> do {			|     ...cmwq backing out		- WORK_PENDING		1 events queued
>     ret = queue_work() 	|     ...cmwq backing out		- WORK_PENDING		1 events queued
>     if (ret || !exiting)|     ...cmwq backing out		- WORK_PENDING		1 events queued
> 	break;		|     ...cmwq backing out		- WORK_PENDING		1 events queued
>     mdelay(5);		|     ...cmwq backing out		- WORK_PENDING		1 events queued
>     exiting =		|     ...cmwq backing out		- WORK_PENDING		1 events queued
>       atomic_read;	|     ...cmwq backing out		- WORK_PENDING		1 events queued
> } while (--cnt);	|     ...cmwq backing out		- WORK_PENDING		1 events queued
> 			| ---- WORKER EXIT 			- !WORK_PENDING		0 events queued
> 
> like down below between the scissors.
> 
> Not tested or tried....I could be missing something...and the mdelay is horrible (and not
> the cleanest thing you've ever seen probably :D)...I'll have a chat with Sudeep too.
> 
> > > 
> > > Now probably I wrote too much of an explanation and confuse stuff more ... :D
> > 
> > No, thank you for the detailed explanation. I will continue my review.
> > 
> 
> Thanks
> 
> Regards
> 
> Cristian
> 
> 
> 
> -->8-----------------
> diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
> index 9eb6b8b71bac..8719e077358c 100644
> --- a/drivers/firmware/arm_scmi/notify.c
> +++ b/drivers/firmware/arm_scmi/notify.c
> @@ -223,6 +223,7 @@ struct scmi_notify_instance {
>   */
>  struct events_queue {
>  	size_t				sz;
> +	atomic_t			exiting;
>  	struct kfifo			kfifo;
>  	struct work_struct		notify_work;
>  	struct workqueue_struct		*wq;
> @@ -406,11 +407,16 @@ scmi_process_event_header(struct events_queue *eq,
>  
>  	outs = kfifo_out(&eq->kfifo, pd->eh,
>  			 sizeof(struct scmi_event_header));
> -	if (!outs)
> +	if (!outs) {
> +		atomic_set(&eq->exiting, 1);
> +		smp_mb__after_atomic();
>  		return NULL;
> +	}
>  	if (outs != sizeof(struct scmi_event_header)) {
>  		pr_err("SCMI Notifications: corrupted EVT header. Flush.\n");
>  		kfifo_reset_out(&eq->kfifo);
> +		atomic_set(&eq->exiting, 1);
> +		smp_mb__after_atomic();
>  		return NULL;
>  	}
>  
> @@ -446,6 +452,8 @@ scmi_process_event_payload(struct events_queue *eq,
>  	outs = kfifo_out(&eq->kfifo, pd->eh->payld, pd->eh->payld_sz);
>  	if (unlikely(!outs)) {
>  		pr_warn("--- EMPTY !!!!\n");
> +		atomic_set(&eq->exiting, 1);
> +		smp_mb__after_atomic();
>  		return false;
>  	}
>  
> @@ -455,6 +463,8 @@ scmi_process_event_payload(struct events_queue *eq,
>  	if (unlikely(outs != pd->eh->payld_sz)) {
>  		pr_err("SCMI Notifications: corrupted EVT Payload. Flush.\n");
>  		kfifo_reset_out(&eq->kfifo);
> +		atomic_set(&eq->exiting, 1);
> +		smp_mb__after_atomic();
>  		return false;
>  	}
>  
> @@ -526,6 +536,8 @@ static void scmi_events_dispatcher(struct work_struct *work)
>  	mdelay(200);
>  
>  	eq = container_of(work, struct events_queue, notify_work);
> +	atomic_set(&eq->exiting, 0);
> +	smp_mb__after_atomic();
>  	pd = container_of(eq, struct scmi_registered_protocol_events_desc,
>  			  equeue);
>  	/*
> @@ -579,6 +591,8 @@ static void scmi_events_dispatcher(struct work_struct *work)
>  int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
>  		const void *buf, size_t len, u64 ts)
>  {
> +	bool exiting;
> +	static u8 cnt = 3;
>  	struct scmi_registered_event *r_evt;
>  	struct scmi_event_header eh;
>  	struct scmi_notify_instance *ni = handle->notify_priv;
> @@ -616,8 +630,20 @@ int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
>  	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
>  	mdelay(30);
>  	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
> -	queue_work(r_evt->proto->equeue.wq,
> -		   &r_evt->proto->equeue.notify_work);
> +
> +	smp_mb__before_atomic();
> +	exiting = atomic_read(&r_evt->proto->equeue.exiting);
> +	do {
> +		bool ret;
> +
> +		ret = queue_work(r_evt->proto->equeue.wq,
> +				 &r_evt->proto->equeue.notify_work);
> +		if (likely(ret || !exiting))
> +			break;
> +		mdelay(5);
> +		smp_mb__before_atomic();
> +		exiting = atomic_read(&r_evt->proto->equeue.exiting);
> +	} while (--cnt);
>  
>  	return 0;
>  }
> @@ -655,6 +681,7 @@ static int scmi_initialize_events_queue(struct scmi_notify_instance *ni,
>  				       &equeue->kfifo);
>  	if (ret)
>  		return ret;
> +	atomic_set(&equeue->exiting, 0);
>  
>  	INIT_WORK(&equeue->notify_work, scmi_events_dispatcher);
>  	equeue->wq = ni->notify_wq;
> --<8-----------------------

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
@ 2020-05-20  7:09             ` Cristian Marussi
  0 siblings, 0 replies; 70+ messages in thread
From: Cristian Marussi @ 2020-05-20  7:09 UTC (permalink / raw)
  To: Lukasz Luba, linux-kernel, linux-arm-kernel
  Cc: cristian.marussi, sudeep.holla

On Mon, Mar 16, 2020 at 02:46:05PM +0000, Cristian Marussi wrote:
> On Thu, Mar 12, 2020 at 09:43:31PM +0000, Lukasz Luba wrote:
> > 
> > 

Hi Lukasz,

I went back looking deeper into the possible race issue you pointed out a
while ago understanding it a bit better down below.

> > On 3/12/20 6:34 PM, Cristian Marussi wrote:
> > > On 12/03/2020 13:51, Lukasz Luba wrote:
> > > > Hi Cristian,
> > > > 
> Hi Lukasz
> 
> > > > just one comment below...
> [snip]
> > > > > +	eh.timestamp = ts;
> > > > > +	eh.evt_id = evt_id;
> > > > > +	eh.payld_sz = len;
> > > > > +	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
> > > > > +	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
> > > > > +	queue_work(r_evt->proto->equeue.wq,
> > > > > +		   &r_evt->proto->equeue.notify_work);
> > > > 
> > > > Is it safe to ignore the return value from the queue_work here?
> > > > 
> > > 
> > > In fact yes, we do not want to care: it returns true or false depending on the
> > > fact that the specific work was or not already queued, and we just rely on
> > > this behavior to keep kicking the worker only when needed but never kick
> > > more than one instance of it per-queue (so that there's only one reader
> > > wq and one writer here in the scmi_notify)...explaining better:
> > > 
> > > 1. we push an event (hdr+payld) to the protocol queue if we found that there was
> > > enough space on the queue
> > > 
> > > 2a. if at the time of the kfifo_in( ) the worker was already running
> > > (queue not empty) it will process our new event sooner or later and here
> > > the queue_work will return false, but we do not care in fact ... we
> > > tried to kick it just in case
> > > 
> > > 2b. if instead at the time of the kfifo_in() the queue was empty the worker would
> > > have probably already gone to the sleep and this queue_work() will return true and
> > > so this time it will effectively wake up the worker to process our items
> > > 
> > > The important thing here is that we are sure to wakeup the worker when needed
> > > but we are equally sure we are never causing the scheduling of more than one worker
> > > thread consuming from the same queue (because that would break the one reader/one writer
> > > assumption which let us use the fifo in a lockless manner): this is possible because
> > > queue_work checks if the required work item is already pending and in such a case backs
> > > out returning false and we have one work_item (notify_work) defined per-protocol and
> > > so per-queue.
> > 
> > I see. That's a good assumption: one work_item per protocol and simplify
> > the locking. What if there would be an edge case scenario when the
> > consumer (work_item) has handled the last item (there was NULL from
> > scmi_process_event_header()), while in meantime scmi_notify put into
> > the fifo new event but couldn't kick the queue_work. Would it stay there
> > till the next IRQ which triggers queue_work to consume two events (one
> > potentially a bit old)? Or we can ignore such race situation assuming
> > that cleaning of work item is instant and kfifo_in is slow?
> > 
> 
> In fact, this is a very good point, since between the moment the worker
> determines that the queue is empty and the moment in which the worker
> effectively exits (and it's marked as no more pending by the Kernel cmwq)
> there is a window of opportunity for a race in which the ISR could fill
> the queue with one more event and then fail to kick with queue_work() since
> the work is in fact still nominally marked as pending from the point of view
> of Kernel cmwq, as below:
> 
> ISR (core N)		|	WQ (core N+1)		cmwq flags	       	queued events
> ------------------------------------------------------------------------------------------------
> 			| if (queue_is_empty)		- WORK_PENDING		0 events queued
> 			+     ...			- WORK_PENDING		0 events queued
> 			+ } while (scmi_process_event_payload);
> 			+}// worker function exit 
> kfifo_in()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
> kfifo_in()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
> queue_work()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
>   -> FALSE (pending)	+     ...cmwq backing out	- WORK_PENDING		1 events queued
> 			+     ...cmwq backing out	- WORK_PENDING		1 events queued
> 			+     ...cmwq backing out	- WORK_PENDING		1 events queued
> 			| ---- WORKER THREAD EXIT	- !WORK_PENDING		1 events queued
> 			| 		 		- !WORK_PENDING		1 events queued
> kfifo_in()		|     				- !WORK_PENDING		2 events queued
> kfifo_in()		|  				- !WORK_PENDING		2 events queued
> queue_work()		|     				- !WORK_PENDING		2 events queued
>    -> TRUE		| --- WORKER ENTER		- WORK_PENDING		2 events queued
> 			|  				- WORK_PENDING		2 events consumed
> 		
> where effectively the last event queued won't be consumed till the next
> iteration once another event is queued.
> 

In summary, looking better at Kernel cmwq code, my explanation above about
how the possible race could be exposed by a particular tricky limit condition
and the values assumed by the WORK_STRUCT_PENDING_BIT was ... bullshit :D

In fact there's no race at all because Kernel cmwq takes care to clear the above
PENDING flag BEFORE the user-provided worker-function starts to finally run:
such flag is active only when a work instance is queued pending for execution
but it is cleared just before execution effctively starts.

kernel/workqueue.c:process_one_work()

	set_work_pool_and_clear_pending(work, pool->id);
	....
	worker->current_func(work);

As a consequence in the racy scenario above where the ISR pushes events on the
queues after the worker has already determined the queue to be empty but while
the worker func is still being deactivated in terms of Kernel cmwq internal
handling, it is not a problem since the worker while running is already NO more
marked pending so the queue_work succeeds and a new work will simply be queued
and run once the current instance terminates fully and it is removed from pool.

On the other side in the normal non racy scenario, when the worker is processing
normally a non-empty queue, we'll end-up anyway queueing new items and a new work
from the ISR even if the currently executing one will in fact consume already
naturally the queued items: this will result (it's what I observe in fact) in a
final un-needed quick worker activation/deactivation processing zero items (empty
queue) which is in fact harmless.

Basically the racy condition is taken care by the Kernel cmwq itself, and in fact
there is an extensive explanation also of the barriers employed to properly
realize this in the comments around set_work_pool_and_clear_pending()

I'll add a comment in v8 just to note this behaviour.

Thanks

Cristian

> Given the fact that the ISR and the dedicated WQ on an SMP run effectively
> in parallel I do not think unfortunately that we can simply count on the fact
> the worker exit is faster than the kifos_in, enough to close the race window
> opportunity. (even if rare)
> 
> On the other side considering the impact of such scenario, I can imagine that
> it's not simply that we could only have a delayed delivery, but we must consider
> that if the delayed event is effectively the last one ever it would remain
> undelivered forever; this is particularly worrying in a scenario in which such
> last event is particularly important: imagine a system shutdown where a last
> system-power-off remains undelivered.
> 
> As a consequence I think this rare racy condition should be addressed somehow.
> 
> Looking at this scenario, it seems the classic situation in which you want to
> use some sort of completion to avoid missing out on events delivery, BUT in our
> usecase:
> 
> - placing the workers loaned from cmwq into an unbounded wait_for_completion()
>   once the queue is empty seems not the best to use resources (and probably
>   frowned upon)....using a few dedicated kernel threads to simply let them idle
>   waiting most of the time seems equally frowned upon (I could be wrong...))
> - the needed complete() in the ISR would introduce a spinlock_irqsave into the
>   interrupt path (there's already one inside queue_work in fact) so it is not
>   desirable, at least not if used on a regular base (for each event notified)
> 
> So I was thinking to try to reduce sensibly the above race window, more
> than eliminate it completely, by adding an early flag to be checked under
> specific conditions in order to retry the queue_work a few times when the race
> is hit, something like:
> 
> ISR (core N)		|	WQ (core N+1)
> -------------------------------------------------------------------------------
> 			| atomic_set(&exiting, 0);
> 			|
> 			| do {
> 			|	...
> 			| 	if (queue_is_empty)		- WORK_PENDING		0 events queued
> 			+          atomic_set(&exiting, 1)	- WORK_PENDING		0 events queued
> static int cnt=3	|          --> breakout of while	- WORK_PENDING		0 events queued
> kfifo_in()		|	....
> 			| } while (scmi_process_event_payload);
> kfifo_in()		|
> exiting = atomic_read()	|     ...cmwq backing out		- WORK_PENDING		1 events queued
> do {			|     ...cmwq backing out		- WORK_PENDING		1 events queued
>     ret = queue_work() 	|     ...cmwq backing out		- WORK_PENDING		1 events queued
>     if (ret || !exiting)|     ...cmwq backing out		- WORK_PENDING		1 events queued
> 	break;		|     ...cmwq backing out		- WORK_PENDING		1 events queued
>     mdelay(5);		|     ...cmwq backing out		- WORK_PENDING		1 events queued
>     exiting =		|     ...cmwq backing out		- WORK_PENDING		1 events queued
>       atomic_read;	|     ...cmwq backing out		- WORK_PENDING		1 events queued
> } while (--cnt);	|     ...cmwq backing out		- WORK_PENDING		1 events queued
> 			| ---- WORKER EXIT 			- !WORK_PENDING		0 events queued
> 
> like down below between the scissors.
> 
> Not tested or tried....I could be missing something...and the mdelay is horrible (and not
> the cleanest thing you've ever seen probably :D)...I'll have a chat with Sudeep too.
> 
> > > 
> > > Now probably I wrote too much of an explanation and confuse stuff more ... :D
> > 
> > No, thank you for the detailed explanation. I will continue my review.
> > 
> 
> Thanks
> 
> Regards
> 
> Cristian
> 
> 
> 
> -->8-----------------
> diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
> index 9eb6b8b71bac..8719e077358c 100644
> --- a/drivers/firmware/arm_scmi/notify.c
> +++ b/drivers/firmware/arm_scmi/notify.c
> @@ -223,6 +223,7 @@ struct scmi_notify_instance {
>   */
>  struct events_queue {
>  	size_t				sz;
> +	atomic_t			exiting;
>  	struct kfifo			kfifo;
>  	struct work_struct		notify_work;
>  	struct workqueue_struct		*wq;
> @@ -406,11 +407,16 @@ scmi_process_event_header(struct events_queue *eq,
>  
>  	outs = kfifo_out(&eq->kfifo, pd->eh,
>  			 sizeof(struct scmi_event_header));
> -	if (!outs)
> +	if (!outs) {
> +		atomic_set(&eq->exiting, 1);
> +		smp_mb__after_atomic();
>  		return NULL;
> +	}
>  	if (outs != sizeof(struct scmi_event_header)) {
>  		pr_err("SCMI Notifications: corrupted EVT header. Flush.\n");
>  		kfifo_reset_out(&eq->kfifo);
> +		atomic_set(&eq->exiting, 1);
> +		smp_mb__after_atomic();
>  		return NULL;
>  	}
>  
> @@ -446,6 +452,8 @@ scmi_process_event_payload(struct events_queue *eq,
>  	outs = kfifo_out(&eq->kfifo, pd->eh->payld, pd->eh->payld_sz);
>  	if (unlikely(!outs)) {
>  		pr_warn("--- EMPTY !!!!\n");
> +		atomic_set(&eq->exiting, 1);
> +		smp_mb__after_atomic();
>  		return false;
>  	}
>  
> @@ -455,6 +463,8 @@ scmi_process_event_payload(struct events_queue *eq,
>  	if (unlikely(outs != pd->eh->payld_sz)) {
>  		pr_err("SCMI Notifications: corrupted EVT Payload. Flush.\n");
>  		kfifo_reset_out(&eq->kfifo);
> +		atomic_set(&eq->exiting, 1);
> +		smp_mb__after_atomic();
>  		return false;
>  	}
>  
> @@ -526,6 +536,8 @@ static void scmi_events_dispatcher(struct work_struct *work)
>  	mdelay(200);
>  
>  	eq = container_of(work, struct events_queue, notify_work);
> +	atomic_set(&eq->exiting, 0);
> +	smp_mb__after_atomic();
>  	pd = container_of(eq, struct scmi_registered_protocol_events_desc,
>  			  equeue);
>  	/*
> @@ -579,6 +591,8 @@ static void scmi_events_dispatcher(struct work_struct *work)
>  int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
>  		const void *buf, size_t len, u64 ts)
>  {
> +	bool exiting;
> +	static u8 cnt = 3;
>  	struct scmi_registered_event *r_evt;
>  	struct scmi_event_header eh;
>  	struct scmi_notify_instance *ni = handle->notify_priv;
> @@ -616,8 +630,20 @@ int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
>  	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
>  	mdelay(30);
>  	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
> -	queue_work(r_evt->proto->equeue.wq,
> -		   &r_evt->proto->equeue.notify_work);
> +
> +	smp_mb__before_atomic();
> +	exiting = atomic_read(&r_evt->proto->equeue.exiting);
> +	do {
> +		bool ret;
> +
> +		ret = queue_work(r_evt->proto->equeue.wq,
> +				 &r_evt->proto->equeue.notify_work);
> +		if (likely(ret || !exiting))
> +			break;
> +		mdelay(5);
> +		smp_mb__before_atomic();
> +		exiting = atomic_read(&r_evt->proto->equeue.exiting);
> +	} while (--cnt);
>  
>  	return 0;
>  }
> @@ -655,6 +681,7 @@ static int scmi_initialize_events_queue(struct scmi_notify_instance *ni,
>  				       &equeue->kfifo);
>  	if (ret)
>  		return ret;
> +	atomic_set(&equeue->exiting, 0);
>  
>  	INIT_WORK(&equeue->notify_work, scmi_events_dispatcher);
>  	equeue->wq = ni->notify_wq;
> --<8-----------------------

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
  2020-05-20  7:09             ` Cristian Marussi
@ 2020-05-20 10:23               ` Lukasz Luba
  -1 siblings, 0 replies; 70+ messages in thread
From: Lukasz Luba @ 2020-05-20 10:23 UTC (permalink / raw)
  To: Cristian Marussi, linux-kernel, linux-arm-kernel; +Cc: sudeep.holla

Hi Cristian,

On 5/20/20 8:09 AM, Cristian Marussi wrote:
> On Mon, Mar 16, 2020 at 02:46:05PM +0000, Cristian Marussi wrote:
>> On Thu, Mar 12, 2020 at 09:43:31PM +0000, Lukasz Luba wrote:
>>>
>>>
> 
> Hi Lukasz,
> 
> I went back looking deeper into the possible race issue you pointed out a
> while ago understanding it a bit better down below.
> 
>>> On 3/12/20 6:34 PM, Cristian Marussi wrote:
>>>> On 12/03/2020 13:51, Lukasz Luba wrote:
>>>>> Hi Cristian,
>>>>>
>> Hi Lukasz
>>
>>>>> just one comment below...
>> [snip]
>>>>>> +	eh.timestamp = ts;
>>>>>> +	eh.evt_id = evt_id;
>>>>>> +	eh.payld_sz = len;
>>>>>> +	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
>>>>>> +	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
>>>>>> +	queue_work(r_evt->proto->equeue.wq,
>>>>>> +		   &r_evt->proto->equeue.notify_work);
>>>>>
>>>>> Is it safe to ignore the return value from the queue_work here?
>>>>>
>>>>
>>>> In fact yes, we do not want to care: it returns true or false depending on the
>>>> fact that the specific work was or not already queued, and we just rely on
>>>> this behavior to keep kicking the worker only when needed but never kick
>>>> more than one instance of it per-queue (so that there's only one reader
>>>> wq and one writer here in the scmi_notify)...explaining better:
>>>>
>>>> 1. we push an event (hdr+payld) to the protocol queue if we found that there was
>>>> enough space on the queue
>>>>
>>>> 2a. if at the time of the kfifo_in( ) the worker was already running
>>>> (queue not empty) it will process our new event sooner or later and here
>>>> the queue_work will return false, but we do not care in fact ... we
>>>> tried to kick it just in case
>>>>
>>>> 2b. if instead at the time of the kfifo_in() the queue was empty the worker would
>>>> have probably already gone to the sleep and this queue_work() will return true and
>>>> so this time it will effectively wake up the worker to process our items
>>>>
>>>> The important thing here is that we are sure to wakeup the worker when needed
>>>> but we are equally sure we are never causing the scheduling of more than one worker
>>>> thread consuming from the same queue (because that would break the one reader/one writer
>>>> assumption which let us use the fifo in a lockless manner): this is possible because
>>>> queue_work checks if the required work item is already pending and in such a case backs
>>>> out returning false and we have one work_item (notify_work) defined per-protocol and
>>>> so per-queue.
>>>
>>> I see. That's a good assumption: one work_item per protocol and simplify
>>> the locking. What if there would be an edge case scenario when the
>>> consumer (work_item) has handled the last item (there was NULL from
>>> scmi_process_event_header()), while in meantime scmi_notify put into
>>> the fifo new event but couldn't kick the queue_work. Would it stay there
>>> till the next IRQ which triggers queue_work to consume two events (one
>>> potentially a bit old)? Or we can ignore such race situation assuming
>>> that cleaning of work item is instant and kfifo_in is slow?
>>>
>>
>> In fact, this is a very good point, since between the moment the worker
>> determines that the queue is empty and the moment in which the worker
>> effectively exits (and it's marked as no more pending by the Kernel cmwq)
>> there is a window of opportunity for a race in which the ISR could fill
>> the queue with one more event and then fail to kick with queue_work() since
>> the work is in fact still nominally marked as pending from the point of view
>> of Kernel cmwq, as below:
>>
>> ISR (core N)		|	WQ (core N+1)		cmwq flags	       	queued events
>> ------------------------------------------------------------------------------------------------
>> 			| if (queue_is_empty)		- WORK_PENDING		0 events queued
>> 			+     ...			- WORK_PENDING		0 events queued
>> 			+ } while (scmi_process_event_payload);
>> 			+}// worker function exit
>> kfifo_in()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
>> kfifo_in()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
>> queue_work()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
>>    -> FALSE (pending)	+     ...cmwq backing out	- WORK_PENDING		1 events queued
>> 			+     ...cmwq backing out	- WORK_PENDING		1 events queued
>> 			+     ...cmwq backing out	- WORK_PENDING		1 events queued
>> 			| ---- WORKER THREAD EXIT	- !WORK_PENDING		1 events queued
>> 			| 		 		- !WORK_PENDING		1 events queued
>> kfifo_in()		|     				- !WORK_PENDING		2 events queued
>> kfifo_in()		|  				- !WORK_PENDING		2 events queued
>> queue_work()		|     				- !WORK_PENDING		2 events queued
>>     -> TRUE		| --- WORKER ENTER		- WORK_PENDING		2 events queued
>> 			|  				- WORK_PENDING		2 events consumed
>> 		
>> where effectively the last event queued won't be consumed till the next
>> iteration once another event is queued.
>>
> 
> In summary, looking better at Kernel cmwq code, my explanation above about
> how the possible race could be exposed by a particular tricky limit condition
> and the values assumed by the WORK_STRUCT_PENDING_BIT was ... bullshit :D
> 
> In fact there's no race at all because Kernel cmwq takes care to clear the above
> PENDING flag BEFORE the user-provided worker-function starts to finally run:
> such flag is active only when a work instance is queued pending for execution
> but it is cleared just before execution effctively starts.
> 
> kernel/workqueue.c:process_one_work()
> 
> 	set_work_pool_and_clear_pending(work, pool->id);
> 	....
> 	worker->current_func(work);
> 
> As a consequence in the racy scenario above where the ISR pushes events on the
> queues after the worker has already determined the queue to be empty but while
> the worker func is still being deactivated in terms of Kernel cmwq internal
> handling, it is not a problem since the worker while running is already NO more
> marked pending so the queue_work succeeds and a new work will simply be queued
> and run once the current instance terminates fully and it is removed from pool.

Sounds good, thanks to for digging into this workqueue code and figuring 
it out.

> 
> On the other side in the normal non racy scenario, when the worker is processing
> normally a non-empty queue, we'll end-up anyway queueing new items and a new work
> from the ISR even if the currently executing one will in fact consume already
> naturally the queued items: this will result (it's what I observe in fact) in a
> final un-needed quick worker activation/deactivation processing zero items (empty
> queue) which is in fact harmless.
> 
> Basically the racy condition is taken care by the Kernel cmwq itself, and in fact
> there is an extensive explanation also of the barriers employed to properly
> realize this in the comments around set_work_pool_and_clear_pending()
> 
> I'll add a comment in v8 just to note this behaviour.

Great research.

Regards,
Lukasz

> 
> Thanks
> 
> Cristian
> 

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery
@ 2020-05-20 10:23               ` Lukasz Luba
  0 siblings, 0 replies; 70+ messages in thread
From: Lukasz Luba @ 2020-05-20 10:23 UTC (permalink / raw)
  To: Cristian Marussi, linux-kernel, linux-arm-kernel; +Cc: sudeep.holla

Hi Cristian,

On 5/20/20 8:09 AM, Cristian Marussi wrote:
> On Mon, Mar 16, 2020 at 02:46:05PM +0000, Cristian Marussi wrote:
>> On Thu, Mar 12, 2020 at 09:43:31PM +0000, Lukasz Luba wrote:
>>>
>>>
> 
> Hi Lukasz,
> 
> I went back looking deeper into the possible race issue you pointed out a
> while ago understanding it a bit better down below.
> 
>>> On 3/12/20 6:34 PM, Cristian Marussi wrote:
>>>> On 12/03/2020 13:51, Lukasz Luba wrote:
>>>>> Hi Cristian,
>>>>>
>> Hi Lukasz
>>
>>>>> just one comment below...
>> [snip]
>>>>>> +	eh.timestamp = ts;
>>>>>> +	eh.evt_id = evt_id;
>>>>>> +	eh.payld_sz = len;
>>>>>> +	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
>>>>>> +	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
>>>>>> +	queue_work(r_evt->proto->equeue.wq,
>>>>>> +		   &r_evt->proto->equeue.notify_work);
>>>>>
>>>>> Is it safe to ignore the return value from the queue_work here?
>>>>>
>>>>
>>>> In fact yes, we do not want to care: it returns true or false depending on the
>>>> fact that the specific work was or not already queued, and we just rely on
>>>> this behavior to keep kicking the worker only when needed but never kick
>>>> more than one instance of it per-queue (so that there's only one reader
>>>> wq and one writer here in the scmi_notify)...explaining better:
>>>>
>>>> 1. we push an event (hdr+payld) to the protocol queue if we found that there was
>>>> enough space on the queue
>>>>
>>>> 2a. if at the time of the kfifo_in( ) the worker was already running
>>>> (queue not empty) it will process our new event sooner or later and here
>>>> the queue_work will return false, but we do not care in fact ... we
>>>> tried to kick it just in case
>>>>
>>>> 2b. if instead at the time of the kfifo_in() the queue was empty the worker would
>>>> have probably already gone to the sleep and this queue_work() will return true and
>>>> so this time it will effectively wake up the worker to process our items
>>>>
>>>> The important thing here is that we are sure to wakeup the worker when needed
>>>> but we are equally sure we are never causing the scheduling of more than one worker
>>>> thread consuming from the same queue (because that would break the one reader/one writer
>>>> assumption which let us use the fifo in a lockless manner): this is possible because
>>>> queue_work checks if the required work item is already pending and in such a case backs
>>>> out returning false and we have one work_item (notify_work) defined per-protocol and
>>>> so per-queue.
>>>
>>> I see. That's a good assumption: one work_item per protocol and simplify
>>> the locking. What if there would be an edge case scenario when the
>>> consumer (work_item) has handled the last item (there was NULL from
>>> scmi_process_event_header()), while in meantime scmi_notify put into
>>> the fifo new event but couldn't kick the queue_work. Would it stay there
>>> till the next IRQ which triggers queue_work to consume two events (one
>>> potentially a bit old)? Or we can ignore such race situation assuming
>>> that cleaning of work item is instant and kfifo_in is slow?
>>>
>>
>> In fact, this is a very good point, since between the moment the worker
>> determines that the queue is empty and the moment in which the worker
>> effectively exits (and it's marked as no more pending by the Kernel cmwq)
>> there is a window of opportunity for a race in which the ISR could fill
>> the queue with one more event and then fail to kick with queue_work() since
>> the work is in fact still nominally marked as pending from the point of view
>> of Kernel cmwq, as below:
>>
>> ISR (core N)		|	WQ (core N+1)		cmwq flags	       	queued events
>> ------------------------------------------------------------------------------------------------
>> 			| if (queue_is_empty)		- WORK_PENDING		0 events queued
>> 			+     ...			- WORK_PENDING		0 events queued
>> 			+ } while (scmi_process_event_payload);
>> 			+}// worker function exit
>> kfifo_in()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
>> kfifo_in()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
>> queue_work()		+     ...cmwq backing out	- WORK_PENDING		1 events queued
>>    -> FALSE (pending)	+     ...cmwq backing out	- WORK_PENDING		1 events queued
>> 			+     ...cmwq backing out	- WORK_PENDING		1 events queued
>> 			+     ...cmwq backing out	- WORK_PENDING		1 events queued
>> 			| ---- WORKER THREAD EXIT	- !WORK_PENDING		1 events queued
>> 			| 		 		- !WORK_PENDING		1 events queued
>> kfifo_in()		|     				- !WORK_PENDING		2 events queued
>> kfifo_in()		|  				- !WORK_PENDING		2 events queued
>> queue_work()		|     				- !WORK_PENDING		2 events queued
>>     -> TRUE		| --- WORKER ENTER		- WORK_PENDING		2 events queued
>> 			|  				- WORK_PENDING		2 events consumed
>> 		
>> where effectively the last event queued won't be consumed till the next
>> iteration once another event is queued.
>>
> 
> In summary, looking better at Kernel cmwq code, my explanation above about
> how the possible race could be exposed by a particular tricky limit condition
> and the values assumed by the WORK_STRUCT_PENDING_BIT was ... bullshit :D
> 
> In fact there's no race at all because Kernel cmwq takes care to clear the above
> PENDING flag BEFORE the user-provided worker-function starts to finally run:
> such flag is active only when a work instance is queued pending for execution
> but it is cleared just before execution effctively starts.
> 
> kernel/workqueue.c:process_one_work()
> 
> 	set_work_pool_and_clear_pending(work, pool->id);
> 	....
> 	worker->current_func(work);
> 
> As a consequence in the racy scenario above where the ISR pushes events on the
> queues after the worker has already determined the queue to be empty but while
> the worker func is still being deactivated in terms of Kernel cmwq internal
> handling, it is not a problem since the worker while running is already NO more
> marked pending so the queue_work succeeds and a new work will simply be queued
> and run once the current instance terminates fully and it is removed from pool.

Sounds good, thanks to for digging into this workqueue code and figuring 
it out.

> 
> On the other side in the normal non racy scenario, when the worker is processing
> normally a non-empty queue, we'll end-up anyway queueing new items and a new work
> from the ISR even if the currently executing one will in fact consume already
> naturally the queued items: this will result (it's what I observe in fact) in a
> final un-needed quick worker activation/deactivation processing zero items (empty
> queue) which is in fact harmless.
> 
> Basically the racy condition is taken care by the Kernel cmwq itself, and in fact
> there is an extensive explanation also of the barriers employed to properly
> realize this in the comments around set_work_pool_and_clear_pending()
> 
> I'll add a comment in v8 just to note this behaviour.

Great research.

Regards,
Lukasz

> 
> Thanks
> 
> Cristian
> 

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 70+ messages in thread

end of thread, other threads:[~2020-05-20 10:24 UTC | newest]

Thread overview: 70+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-03-04 16:25 [PATCH v4 00/13] SCMI Notifications Core Support Cristian Marussi
2020-03-04 16:25 ` Cristian Marussi
2020-03-04 16:25 ` [PATCH v4 01/13] firmware: arm_scmi: Add receive buffer support for notifications Cristian Marussi
2020-03-04 16:25   ` Cristian Marussi
2020-03-04 16:25 ` [PATCH v4 02/13] firmware: arm_scmi: Update protocol commands and notification list Cristian Marussi
2020-03-04 16:25   ` Cristian Marussi
2020-03-04 16:25 ` [PATCH v4 03/13] firmware: arm_scmi: Add notifications support in transport layer Cristian Marussi
2020-03-04 16:25   ` Cristian Marussi
2020-03-04 16:25 ` [PATCH v4 04/13] firmware: arm_scmi: Add support for notifications message processing Cristian Marussi
2020-03-04 16:25   ` Cristian Marussi
2020-03-04 16:25 ` [PATCH v4 05/13] firmware: arm_scmi: Add notification protocol-registration Cristian Marussi
2020-03-04 16:25   ` Cristian Marussi
2020-03-09 11:33   ` Jonathan Cameron
2020-03-09 11:33     ` Jonathan Cameron
2020-03-09 12:04     ` Cristian Marussi
2020-03-09 12:04       ` Cristian Marussi
2020-03-04 16:25 ` [PATCH v4 06/13] firmware: arm_scmi: Add notification callbacks-registration Cristian Marussi
2020-03-04 16:25   ` Cristian Marussi
2020-03-09 11:50   ` Jonathan Cameron
2020-03-09 11:50     ` Jonathan Cameron
2020-03-09 12:25     ` Cristian Marussi
2020-03-09 12:25       ` Cristian Marussi
2020-03-04 16:25 ` [PATCH v4 07/13] firmware: arm_scmi: Add notification dispatch and delivery Cristian Marussi
2020-03-04 16:25   ` Cristian Marussi
2020-03-09 12:26   ` Jonathan Cameron
2020-03-09 12:26     ` Jonathan Cameron
2020-03-09 16:37     ` Cristian Marussi
2020-03-09 16:37       ` Cristian Marussi
2020-03-10 10:01       ` Jonathan Cameron
2020-03-10 10:01         ` Jonathan Cameron
2020-03-12 13:51   ` Lukasz Luba
2020-03-12 13:51     ` Lukasz Luba
2020-03-12 14:06     ` Lukasz Luba
2020-03-12 14:06       ` Lukasz Luba
2020-03-12 19:24       ` Cristian Marussi
2020-03-12 19:24         ` Cristian Marussi
2020-03-12 20:57         ` Lukasz Luba
2020-03-12 20:57           ` Lukasz Luba
2020-03-12 18:34     ` Cristian Marussi
2020-03-12 18:34       ` Cristian Marussi
2020-03-12 21:43       ` Lukasz Luba
2020-03-12 21:43         ` Lukasz Luba
2020-03-16 14:46         ` Cristian Marussi
2020-03-16 14:46           ` Cristian Marussi
2020-03-18  8:26           ` Lukasz Luba
2020-03-18  8:26             ` Lukasz Luba
2020-03-23  8:28             ` Cristian Marussi
2020-03-23  8:28               ` Cristian Marussi
2020-05-20  7:09           ` Cristian Marussi
2020-05-20  7:09             ` Cristian Marussi
2020-05-20 10:23             ` Lukasz Luba
2020-05-20 10:23               ` Lukasz Luba
2020-03-04 16:25 ` [PATCH v4 08/13] firmware: arm_scmi: Enable notification core Cristian Marussi
2020-03-04 16:25   ` Cristian Marussi
2020-03-04 16:25 ` [PATCH v4 09/13] firmware: arm_scmi: Add Power notifications support Cristian Marussi
2020-03-04 16:25   ` Cristian Marussi
2020-03-09 12:28   ` Jonathan Cameron
2020-03-09 12:28     ` Jonathan Cameron
2020-03-09 16:39     ` Cristian Marussi
2020-03-09 16:39       ` Cristian Marussi
2020-03-04 16:25 ` [PATCH v4 10/13] firmware: arm_scmi: Add Perf " Cristian Marussi
2020-03-04 16:25   ` Cristian Marussi
2020-03-04 16:25 ` [PATCH v4 11/13] firmware: arm_scmi: Add Sensor " Cristian Marussi
2020-03-04 16:25   ` Cristian Marussi
2020-03-04 16:25 ` [PATCH v4 12/13] firmware: arm_scmi: Add Reset " Cristian Marussi
2020-03-04 16:25   ` Cristian Marussi
2020-03-04 16:25 ` [PATCH v4 13/13] firmware: arm_scmi: Add Base " Cristian Marussi
2020-03-04 16:25   ` Cristian Marussi
2020-03-09 12:33 ` [PATCH v4 00/13] SCMI Notifications Core Support Jonathan Cameron
2020-03-09 12:33   ` Jonathan Cameron

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.