* [PATCH v3 0/3] Introduce Protection Domain Restart (PDR) Helpers
@ 2019-12-30 5:00 Sibi Sankar
2019-12-30 5:00 ` [PATCH v3 1/3] soc: qcom: Introduce Protection Domain Restart helpers Sibi Sankar
0 siblings, 3 replies; 10+ messages in thread
From: Sibi Sankar @ 2019-12-30 5:00 UTC (permalink / raw)
To: bjorn.andersson, srinivas.kandagatla, robh+dt, tsoni
Cc: agross, mark.rutland, linux-arm-msm, linux-remoteproc,
linux-kernel, devicetree, Sibi Sankar
Qualcomm SoCs (starting with MSM8998) allow for multiple protection
domains (PDs) to run on the same Q6 sub-system. This allows for
services like AVS AUDIO to have their own separate address space and
crash/recover without disrupting the other PDs running on the same Q6
ADSP. This patch series introduces the PDR helper library and adds PD
tracking functionality for the "avs/audio" service, allowing APR services
to register themselves asynchronously once the dependent PDs are up.
V3:
* patches 2 and 3 remain unchanged.
* reset servloc_addr/servreg_addr.
* fixup the helpers to handle servloc_work/servreg_work asynchronously.
* fixup usage of list_lock across traversals, insertions and deletions.
* fixup the helpers to use a single lookup list.
* skip waiting for response on ind_ack send.
* introduce pdr_servreg_link_create to re-use an existing qmi connection to
servreg instances. This helps track PDs running on the same remote
processor.
* have a per node curr_state in pdr_list_node to preserve all state
updates during indack_cb.
* introduce additional servreg_service_state values to help the client
distinguish between fatal and non-fatal pdr_lookup errors.
* re-order pdr_handle_release sequence.
* fixup "!ind_msg->service_path returns true always" warning.
* fixup comments.
V2:
* fixup pd_status callback to return void.
* return 0 from pdr_get_domain_list on success.
* introduce status_lock to linearize the pd_status reported back
to the clients.
* use the correct service name length across various string operations
performed.
* service locator will now schedule the pending lookups registered
when it comes up.
* other minor cleanups that Bjorn suggested.
* use pr_warn to indicate that the wait for service locator timed
out.
* add Bjorn/Srini's "Reviewed-by" for the dt-bindings.
To Do:
* Add support for pd tracking in fastrpc driver.
Sibi Sankar (3):
soc: qcom: Introduce Protection Domain Restart helpers
dt-bindings: soc: qcom: apr: Add protection domain bindings
soc: qcom: apr: Add avs/audio tracking functionality
.../devicetree/bindings/soc/qcom/qcom,apr.txt | 59 ++
drivers/soc/qcom/Kconfig | 6 +
drivers/soc/qcom/Makefile | 1 +
drivers/soc/qcom/apr.c | 100 ++-
drivers/soc/qcom/pdr_interface.c | 709 ++++++++++++++++++
drivers/soc/qcom/pdr_internal.h | 375 +++++++++
include/linux/soc/qcom/apr.h | 1 +
include/linux/soc/qcom/pdr.h | 109 +++
include/linux/soc/qcom/qmi.h | 1 +
9 files changed, 1350 insertions(+), 11 deletions(-)
create mode 100644 drivers/soc/qcom/pdr_interface.c
create mode 100644 drivers/soc/qcom/pdr_internal.h
create mode 100644 include/linux/soc/qcom/pdr.h
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project
* [PATCH v3 1/3] soc: qcom: Introduce Protection Domain Restart helpers
2019-12-30 5:00 [PATCH v3 0/3] Introduce Protection Domain Restart (PDR) Helpers Sibi Sankar
@ 2019-12-30 5:00 ` Sibi Sankar
2020-01-02 20:45 ` Bjorn Andersson
2019-12-30 5:00 ` [PATCH v3 2/3] dt-bindings: soc: qcom: apr: Add protection domain bindings Sibi Sankar
2019-12-30 5:00 ` [PATCH v3 3/3] soc: qcom: apr: Add avs/audio tracking functionality Sibi Sankar
From: Sibi Sankar @ 2019-12-30 5:00 UTC (permalink / raw)
To: bjorn.andersson, srinivas.kandagatla, robh+dt, tsoni
Cc: agross, mark.rutland, linux-arm-msm, linux-remoteproc,
linux-kernel, devicetree, Sibi Sankar
Qualcomm SoCs (starting with MSM8998) allow for multiple protection domains
to run on the same Q6 sub-system. This allows for services like ATH10K WLAN
FW to have their own separate address space and crash/recover without
disrupting the modem and other PDs running on the same sub-system. The PDR
helpers introduce an abstraction that allows for tracking/controlling the
life cycle of protection domains running on various Q6 sub-systems.
Signed-off-by: Sibi Sankar <sibis@codeaurora.org>
---
drivers/soc/qcom/Kconfig | 5 +
drivers/soc/qcom/Makefile | 1 +
drivers/soc/qcom/pdr_interface.c | 709 +++++++++++++++++++++++++++++++
drivers/soc/qcom/pdr_internal.h | 375 ++++++++++++++++
include/linux/soc/qcom/pdr.h | 109 +++++
include/linux/soc/qcom/qmi.h | 1 +
6 files changed, 1200 insertions(+)
create mode 100644 drivers/soc/qcom/pdr_interface.c
create mode 100644 drivers/soc/qcom/pdr_internal.h
create mode 100644 include/linux/soc/qcom/pdr.h
diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
index 79d826553ac82..5c4e76837f59b 100644
--- a/drivers/soc/qcom/Kconfig
+++ b/drivers/soc/qcom/Kconfig
@@ -76,6 +76,11 @@ config QCOM_OCMEM
requirements. This is typically used by the GPU, camera/video, and
audio components on some Snapdragon SoCs.
+config QCOM_PDR_HELPERS
+ tristate
+ depends on ARCH_QCOM || COMPILE_TEST
+ select QCOM_QMI_HELPERS
+
config QCOM_PM
bool "Qualcomm Power Management"
depends on ARCH_QCOM && !ARM64
diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile
index 9fb35c8a495e1..5d6b83dc58e82 100644
--- a/drivers/soc/qcom/Makefile
+++ b/drivers/soc/qcom/Makefile
@@ -7,6 +7,7 @@ obj-$(CONFIG_QCOM_GLINK_SSR) += glink_ssr.o
obj-$(CONFIG_QCOM_GSBI) += qcom_gsbi.o
obj-$(CONFIG_QCOM_MDT_LOADER) += mdt_loader.o
obj-$(CONFIG_QCOM_OCMEM) += ocmem.o
+obj-$(CONFIG_QCOM_PDR_HELPERS) += pdr_interface.o
obj-$(CONFIG_QCOM_PM) += spm.o
obj-$(CONFIG_QCOM_QMI_HELPERS) += qmi_helpers.o
qmi_helpers-y += qmi_encdec.o qmi_interface.o
diff --git a/drivers/soc/qcom/pdr_interface.c b/drivers/soc/qcom/pdr_interface.c
new file mode 100644
index 0000000000000..27b74763add05
--- /dev/null
+++ b/drivers/soc/qcom/pdr_interface.c
@@ -0,0 +1,709 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 The Linux Foundation. All rights reserved.
+ */
+
+#include <linux/completion.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/qrtr.h>
+#include <linux/string.h>
+#include <linux/workqueue.h>
+
+#include "pdr_internal.h"
+
+struct pdr_list_node {
+ enum servreg_service_state curr_state;
+ u16 transaction_id;
+ struct pdr_service *pds;
+ struct list_head node;
+};
+
+static int servreg_locator_new_server(struct qmi_handle *qmi,
+ struct qmi_service *svc)
+{
+ struct pdr_handle *pdr = container_of(qmi, struct pdr_handle,
+ servloc_client);
+ struct pdr_service *pds, *tmp;
+
+ /* Create a Local client port for QMI communication */
+ pdr->servloc_addr.sq_family = AF_QIPCRTR;
+ pdr->servloc_addr.sq_node = svc->node;
+ pdr->servloc_addr.sq_port = svc->port;
+
+ mutex_lock(&pdr->locator_lock);
+ pdr->locator_available = true;
+ mutex_unlock(&pdr->locator_lock);
+
+ /* Service pending lookup requests */
+ mutex_lock(&pdr->list_lock);
+ list_for_each_entry_safe(pds, tmp, &pdr->lookups, node) {
+ if (pds->need_servreg_lookup)
+ schedule_work(&pdr->servloc_work);
+ }
+ mutex_unlock(&pdr->list_lock);
+
+ return 0;
+}
+
+static void servreg_locator_del_server(struct qmi_handle *qmi,
+ struct qmi_service *svc)
+{
+ struct pdr_handle *pdr = container_of(qmi, struct pdr_handle,
+ servloc_client);
+
+ mutex_lock(&pdr->locator_lock);
+ pdr->locator_available = false;
+ mutex_unlock(&pdr->locator_lock);
+
+ pdr->servloc_addr.sq_node = 0;
+ pdr->servloc_addr.sq_port = 0;
+}
+
+static struct qmi_ops service_locator_ops = {
+ .new_server = servreg_locator_new_server,
+ .del_server = servreg_locator_del_server,
+};
+
+static int pdr_register_listener(struct pdr_handle *pdr,
+ struct pdr_service *pds,
+ bool enable)
+{
+ struct servreg_register_listener_resp resp;
+ struct servreg_register_listener_req req;
+ struct qmi_txn txn;
+ int ret;
+
+ ret = qmi_txn_init(&pdr->servreg_client, &txn,
+ servreg_register_listener_resp_ei,
+ &resp);
+ if (ret < 0)
+ return ret;
+
+ req.enable = enable;
+ strcpy(req.service_path, pds->service_path);
+
+ ret = qmi_send_request(&pdr->servreg_client, &pdr->servreg_addr,
+ &txn, SERVREG_REGISTER_LISTENER_REQ,
+ SERVREG_REGISTER_LISTENER_REQ_LEN,
+ servreg_register_listener_req_ei,
+ &req);
+ if (ret < 0) {
+ qmi_txn_cancel(&txn);
+ return ret;
+ }
+
+ ret = qmi_txn_wait(&txn, 5 * HZ);
+ if (ret < 0) {
+ pr_err("PDR: %s register listener txn wait failed: %d\n",
+ pds->service_path, ret);
+ return ret;
+ }
+
+ /* Check the response */
+ if (resp.resp.result != QMI_RESULT_SUCCESS_V01) {
+ pr_err("PDR: %s register listener failed: 0x%x\n",
+ pds->service_path, resp.resp.error);
+ return -EREMOTEIO;
+ }
+
+ if ((int)resp.curr_state < INT_MIN || (int)resp.curr_state > INT_MAX)
+ pr_err("PDR: %s notification state invalid: 0x%x\n",
+ pds->service_path, resp.curr_state);
+
+ pds->state = resp.curr_state;
+
+ return 0;
+}
+
+static void pdr_servreg_work(struct work_struct *work)
+{
+ struct pdr_handle *pdr = container_of(work, struct pdr_handle,
+ servreg_work);
+ struct pdr_service *pds, *tmp;
+
+ mutex_lock(&pdr->list_lock);
+ list_for_each_entry_safe(pds, tmp, &pdr->lookups, node) {
+ if (pds->service_connected) {
+ if (!pds->need_servreg_register)
+ continue;
+
+ pds->need_servreg_register = false;
+ mutex_unlock(&pdr->list_lock);
+ pdr_register_listener(pdr, pds, true);
+ } else {
+ if (!pds->need_servreg_remove)
+ continue;
+
+ pds->need_servreg_remove = false;
+ mutex_unlock(&pdr->list_lock);
+ pds->state = SERVREG_SERVICE_STATE_DOWN;
+ }
+
+ mutex_lock(&pdr->status_lock);
+ pdr->status(pdr, pds);
+ mutex_unlock(&pdr->status_lock);
+
+ return;
+ }
+ mutex_unlock(&pdr->list_lock);
+}
+
+static int servreg_notifier_new_server(struct qmi_handle *qmi,
+ struct qmi_service *svc)
+{
+ struct pdr_handle *pdr = container_of(qmi, struct pdr_handle,
+ servreg_client);
+ struct pdr_service *pds, *tmp;
+
+ /* Create a Local client port for QMI communication */
+ pdr->servreg_addr.sq_family = AF_QIPCRTR;
+ pdr->servreg_addr.sq_node = svc->node;
+ pdr->servreg_addr.sq_port = svc->port;
+
+ mutex_lock(&pdr->list_lock);
+ list_for_each_entry_safe(pds, tmp, &pdr->lookups, node) {
+ if (pds->service == svc->service &&
+ pds->instance == svc->instance) {
+ pds->service_connected = true;
+ pds->need_servreg_register = true;
+ queue_work(pdr->servreg_wq, &pdr->servreg_work);
+ }
+ }
+ mutex_unlock(&pdr->list_lock);
+
+ return 0;
+}
+
+static void servreg_notifier_del_server(struct qmi_handle *qmi,
+ struct qmi_service *svc)
+{
+ struct pdr_handle *pdr = container_of(qmi, struct pdr_handle,
+ servreg_client);
+ struct pdr_service *pds, *tmp;
+
+ pdr->servreg_addr.sq_node = 0;
+ pdr->servreg_addr.sq_port = 0;
+
+ mutex_lock(&pdr->list_lock);
+ list_for_each_entry_safe(pds, tmp, &pdr->lookups, node) {
+ if (pds->service == svc->service &&
+ pds->instance == svc->instance) {
+ pds->service_connected = false;
+ pds->need_servreg_remove = true;
+ queue_work(pdr->servreg_wq, &pdr->servreg_work);
+ }
+ }
+ mutex_unlock(&pdr->list_lock);
+}
+
+static struct qmi_ops service_notifier_ops = {
+ .new_server = servreg_notifier_new_server,
+ .del_server = servreg_notifier_del_server,
+};
+
+static int pdr_send_indack_msg(struct pdr_handle *pdr, struct pdr_service *pds,
+ u16 tid)
+{
+ struct servreg_set_ack_resp resp;
+ struct servreg_set_ack_req req;
+ struct qmi_txn txn;
+ int ret;
+
+ ret = qmi_txn_init(&pdr->servreg_client, &txn, servreg_set_ack_resp_ei,
+ &resp);
+ if (ret < 0)
+ return ret;
+
+ req.transaction_id = tid;
+ strcpy(req.service_path, pds->service_path);
+
+ ret = qmi_send_request(&pdr->servreg_client, &pdr->servreg_addr,
+ &txn, SERVREG_SET_ACK_REQ,
+ SERVREG_SET_ACK_REQ_LEN,
+ servreg_set_ack_req_ei,
+ &req);
+
+ /* Skip waiting for response */
+ qmi_txn_cancel(&txn);
+ return ret;
+}
+
+static void pdr_indack_work(struct work_struct *work)
+{
+ struct pdr_handle *pdr = container_of(work, struct pdr_handle,
+ indack_work);
+ struct pdr_list_node *ind, *tmp;
+ struct pdr_service *pds;
+
+ list_for_each_entry_safe(ind, tmp, &pdr->indack_list, node) {
+ pds = ind->pds;
+ pdr_send_indack_msg(pdr, pds, ind->transaction_id);
+ mutex_lock(&pdr->status_lock);
+ pds->state = ind->curr_state;
+ pdr->status(pdr, pds);
+ mutex_unlock(&pdr->status_lock);
+ mutex_lock(&pdr->list_lock);
+ list_del(&ind->node);
+ mutex_unlock(&pdr->list_lock);
+ kfree(ind);
+ }
+}
+
+static void pdr_servreg_ind_cb(struct qmi_handle *qmi,
+ struct sockaddr_qrtr *sq,
+ struct qmi_txn *txn, const void *data)
+{
+ struct pdr_handle *pdr = container_of(qmi, struct pdr_handle,
+ servreg_client);
+ const struct servreg_state_updated_ind *ind_msg = data;
+ struct pdr_list_node *ind;
+ struct pdr_service *pds;
+
+ if (!ind_msg || !ind_msg->service_path[0] ||
+ strlen(ind_msg->service_path) > SERVREG_NAME_LENGTH)
+ return;
+
+ mutex_lock(&pdr->list_lock);
+ list_for_each_entry(pds, &pdr->lookups, node) {
+ if (!strcmp(pds->service_path, ind_msg->service_path))
+ goto found;
+ }
+ mutex_unlock(&pdr->list_lock);
+ return;
+
+found:
+ mutex_unlock(&pdr->list_lock);
+ ind = kzalloc(sizeof(*ind), GFP_KERNEL);
+ if (!ind)
+ return;
+
+ pr_info("PDR: Indication received from %s, state: 0x%x, trans-id: %d\n",
+ ind_msg->service_path, ind_msg->curr_state,
+ ind_msg->transaction_id);
+
+ ind->transaction_id = ind_msg->transaction_id;
+ ind->curr_state = ind_msg->curr_state;
+ ind->pds = pds;
+
+ mutex_lock(&pdr->list_lock);
+ list_add_tail(&ind->node, &pdr->indack_list);
+ mutex_unlock(&pdr->list_lock);
+
+ queue_work(pdr->indack_wq, &pdr->indack_work);
+}
+
+static struct qmi_msg_handler qmi_indication_handler[] = {
+ {
+ .type = QMI_INDICATION,
+ .msg_id = SERVREG_STATE_UPDATED_IND_ID,
+ .ei = servreg_state_updated_ind_ei,
+ .decoded_size = sizeof(struct servreg_state_updated_ind),
+ .fn = pdr_servreg_ind_cb,
+ },
+ {}
+};
+
+static int pdr_get_domain_list(struct servreg_get_domain_list_req *req,
+ struct servreg_get_domain_list_resp *resp,
+ struct pdr_handle *pdr)
+{
+ struct qmi_txn txn;
+ int ret;
+
+ ret = qmi_txn_init(&pdr->servloc_client, &txn,
+ servreg_get_domain_list_resp_ei, resp);
+ if (ret < 0)
+ return ret;
+
+ ret = qmi_send_request(&pdr->servloc_client,
+ &pdr->servloc_addr,
+ &txn, SERVREG_GET_DOMAIN_LIST_REQ,
+ SERVREG_GET_DOMAIN_LIST_REQ_MAX_LEN,
+ servreg_get_domain_list_req_ei,
+ req);
+ if (ret < 0) {
+ qmi_txn_cancel(&txn);
+ return ret;
+ }
+
+ ret = qmi_txn_wait(&txn, 5 * HZ);
+ if (ret < 0) {
+ pr_err("PDR: %s get domain list txn wait failed: %d\n",
+ req->service_name, ret);
+ return ret;
+ }
+
+ /* Check the response */
+ if (resp->resp.result != QMI_RESULT_SUCCESS_V01) {
+ pr_err("PDR: %s get domain list failed: 0x%x\n",
+ req->service_name, resp->resp.error);
+ return -EREMOTEIO;
+ }
+
+ return 0;
+}
+
+static void pdr_servreg_link_create(struct pdr_handle *pdr,
+ struct pdr_service *pds)
+{
+ struct pdr_service *pds_iter, *tmp;
+ bool link_exists = false;
+
+ /* Check if a QMI link to SERVREG instance already exists */
+ mutex_lock(&pdr->list_lock);
+ list_for_each_entry_safe(pds_iter, tmp, &pdr->lookups, node) {
+ if (pds_iter->instance == pds->instance &&
+ strcmp(pds_iter->service_path, pds->service_path)) {
+ link_exists = true;
+ pds->service_connected = pds_iter->service_connected;
+ if (pds_iter->service_connected)
+ pds->need_servreg_register = true;
+ else
+ pds->need_servreg_remove = true;
+ queue_work(pdr->servreg_wq, &pdr->servreg_work);
+ break;
+ }
+ }
+ mutex_unlock(&pdr->list_lock);
+
+ if (!link_exists)
+ qmi_add_lookup(&pdr->servreg_client, pds->service, 1,
+ pds->instance);
+}
+
+static int pdr_locate_service(struct pdr_handle *pdr, struct pdr_service *pds)
+{
+ struct servreg_get_domain_list_resp *resp = NULL;
+ struct servreg_get_domain_list_req req;
+ int db_rev_count = 0, domains_read = 0;
+ struct servreg_location_entry *entry;
+ int ret, i;
+
+ resp = kzalloc(sizeof(*resp), GFP_KERNEL);
+ if (!resp)
+ return -ENOMEM;
+
+ /* Prepare req message */
+ strcpy(req.service_name, pds->service_name);
+ req.domain_offset_valid = true;
+ req.domain_offset = 0;
+
+ do {
+ req.domain_offset = domains_read;
+ ret = pdr_get_domain_list(&req, resp, pdr);
+ if (ret < 0)
+ goto out;
+
+ if (!domains_read)
+ db_rev_count = resp->db_rev_count;
+
+ if (db_rev_count != resp->db_rev_count) {
+ ret = -EAGAIN;
+ goto out;
+ }
+
+ for (i = domains_read; i < resp->domain_list_len; i++) {
+ entry = &resp->domain_list[i];
+
+ if (strnlen(entry->name, sizeof(entry->name)) == sizeof(entry->name))
+ continue;
+
+ if (!strcmp(entry->name, pds->service_path)) {
+ pds->service_data_valid = entry->service_data_valid;
+ pds->service_data = entry->service_data;
+ pds->instance = entry->instance;
+ goto out;
+ }
+ }
+
+ /* Update ret to indicate that the service is not yet found */
+ ret = -ENXIO;
+
+ /* Always read total_domains from the response msg */
+ if (resp->domain_list_len > resp->total_domains)
+ resp->domain_list_len = resp->total_domains;
+
+ domains_read += resp->domain_list_len;
+ } while (domains_read < resp->total_domains);
+out:
+ kfree(resp);
+ return ret;
+}
+
+static void pdr_servloc_work(struct work_struct *work)
+{
+ struct pdr_handle *pdr = container_of(work, struct pdr_handle,
+ servloc_work);
+ struct pdr_service *pds, *tmp;
+ int ret;
+
+ /* Bail out early if PD Mapper is not up */
+ mutex_lock(&pdr->locator_lock);
+ if (!pdr->locator_available) {
+ mutex_unlock(&pdr->locator_lock);
+ pr_warn("PDR: SERVICE LOCATOR service not available\n");
+ return;
+ }
+ mutex_unlock(&pdr->locator_lock);
+
+ mutex_lock(&pdr->list_lock);
+ list_for_each_entry_safe(pds, tmp, &pdr->lookups, node) {
+ if (!pds->need_servreg_lookup)
+ continue;
+
+ pds->need_servreg_lookup = false;
+ mutex_unlock(&pdr->list_lock);
+
+ ret = pdr_locate_service(pdr, pds);
+ if (ret < 0) {
+ if (ret == -ENXIO)
+ pds->state = SERVREG_LOCATOR_UNKNOWN_SERVICE;
+ else if (ret == -EAGAIN)
+ pds->state = SERVREG_LOCATOR_DB_UPDATED;
+ else
+ pds->state = SERVREG_LOCATOR_ERR;
+
+ pr_err("PDR: service lookup for %s failed: %d\n",
+ pds->service_name, ret);
+
+ /* Remove from lookup list */
+ mutex_lock(&pdr->list_lock);
+ list_del(&pds->node);
+ mutex_unlock(&pdr->list_lock);
+
+ /* Notify Lookup failed */
+ mutex_lock(&pdr->status_lock);
+ pdr->status(pdr, pds);
+ mutex_unlock(&pdr->status_lock);
+ kfree(pds);
+ } else {
+ pdr_servreg_link_create(pdr, pds);
+ }
+
+ return;
+ }
+ mutex_unlock(&pdr->list_lock);
+}
+
+/**
+ * pdr_add_lookup() - register a tracking request for a PD
+ * @pdr: PDR client handle
+ * @service_name: service name of the tracking request
+ * @service_path: service path of the tracking request
+ *
+ * Registering a pdr lookup allows for tracking the life cycle of the PD.
+ *
+ * Return: 0 on success, negative errno on failure.
+ */
+int pdr_add_lookup(struct pdr_handle *pdr, const char *service_name,
+ const char *service_path)
+{
+ struct pdr_service *pds, *pds_iter, *tmp;
+ int ret;
+
+ if (!service_name || strlen(service_name) > SERVREG_NAME_LENGTH ||
+ !service_path || strlen(service_path) > SERVREG_NAME_LENGTH)
+ return -EINVAL;
+
+ pds = kzalloc(sizeof(*pds), GFP_KERNEL);
+ if (!pds)
+ return -ENOMEM;
+
+ pds->service = SERVREG_NOTIFIER_SERVICE;
+ strcpy(pds->service_name, service_name);
+ strcpy(pds->service_path, service_path);
+ pds->need_servreg_lookup = true;
+
+ mutex_lock(&pdr->list_lock);
+ list_for_each_entry_safe(pds_iter, tmp, &pdr->lookups, node) {
+ if (!strcmp(pds_iter->service_path, service_path)) {
+ mutex_unlock(&pdr->list_lock);
+ ret = -EALREADY;
+ goto err;
+ }
+ }
+
+ list_add(&pds->node, &pdr->lookups);
+ mutex_unlock(&pdr->list_lock);
+
+ schedule_work(&pdr->servloc_work);
+
+ return 0;
+err:
+ kfree(pds);
+
+ return ret;
+}
+EXPORT_SYMBOL(pdr_add_lookup);
+
+/**
+ * pdr_restart_pd() - restart PD
+ * @pdr: PDR client handle
+ * @service_path: service path of restart request
+ *
+ * Restarts the PD tracked by the PDR client handle for a given service path.
+ *
+ * Return: 0 on success, negative errno on failure.
+ */
+int pdr_restart_pd(struct pdr_handle *pdr, const char *service_path)
+{
+ struct servreg_restart_pd_req req;
+ struct servreg_restart_pd_resp resp;
+ struct pdr_service *pds = NULL, *pds_iter, *tmp;
+ struct qmi_txn txn;
+ int ret;
+
+ if (!service_path || strlen(service_path) > SERVREG_NAME_LENGTH)
+ return -EINVAL;
+
+ mutex_lock(&pdr->list_lock);
+ list_for_each_entry_safe(pds_iter, tmp, &pdr->lookups, node) {
+ if (!pds_iter->service_connected)
+ continue;
+
+ if (!strcmp(pds_iter->service_path, service_path)) {
+ pds = pds_iter;
+ break;
+ }
+ }
+ mutex_unlock(&pdr->list_lock);
+
+ if (!pds)
+ return -EINVAL;
+
+ /* Prepare req message */
+ strcpy(req.service_path, pds->service_path);
+
+ ret = qmi_txn_init(&pdr->servreg_client, &txn,
+ servreg_restart_pd_resp_ei,
+ &resp);
+ if (ret < 0)
+ return ret;
+
+ ret = qmi_send_request(&pdr->servreg_client, &pdr->servreg_addr,
+ &txn, SERVREG_RESTART_PD_REQ,
+ SERVREG_RESTART_PD_REQ_MAX_LEN,
+ servreg_restart_pd_req_ei, &req);
+ if (ret < 0) {
+ qmi_txn_cancel(&txn);
+ return ret;
+ }
+
+ ret = qmi_txn_wait(&txn, 5 * HZ);
+ if (ret < 0) {
+ pr_err("PDR: %s PD restart txn wait failed: %d\n",
+ pds->service_path, ret);
+ return ret;
+ }
+
+ /* Check response if PDR is disabled */
+ if (resp.resp.result == QMI_RESULT_FAILURE_V01 &&
+ resp.resp.error == QMI_ERR_DISABLED_V01) {
+ pr_err("PDR: %s PD restart is disabled: 0x%x\n",
+ pds->service_path, resp.resp.error);
+ return -EOPNOTSUPP;
+ }
+
+ /* Check the response for other error cases */
+ if (resp.resp.result != QMI_RESULT_SUCCESS_V01) {
+ pr_err("PDR: %s request for PD restart failed: 0x%x\n",
+ pds->service_path, resp.resp.error);
+ return -EREMOTEIO;
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL(pdr_restart_pd);
+
+/**
+ * pdr_handle_init() - initialize the PDR client handle
+ * @pdr: PDR client handle
+ * @status: function to be called on PD state change
+ *
+ * Initializes the PDR client handle to allow for tracking/restart of PDs.
+ *
+ * Return: 0 on success, negative errno on failure.
+ */
+int pdr_handle_init(struct pdr_handle *pdr,
+ void (*status)(struct pdr_handle *pdr,
+ struct pdr_service *pds))
+{
+ int ret;
+
+ if (!status)
+ return -EINVAL;
+
+ pdr->status = status;
+
+ mutex_init(&pdr->locator_lock);
+ mutex_init(&pdr->list_lock);
+ mutex_init(&pdr->status_lock);
+
+ INIT_LIST_HEAD(&pdr->lookups);
+ INIT_LIST_HEAD(&pdr->indack_list);
+
+ INIT_WORK(&pdr->servloc_work, pdr_servloc_work);
+ INIT_WORK(&pdr->servreg_work, pdr_servreg_work);
+ INIT_WORK(&pdr->indack_work, pdr_indack_work);
+
+ pdr->servreg_wq = create_singlethread_workqueue("pdr_servreg_wq");
+ if (!pdr->servreg_wq)
+ return -ENOMEM;
+
+ pdr->indack_wq = alloc_ordered_workqueue("pdr_indack_wq", WQ_HIGHPRI);
+ if (!pdr->indack_wq) {
+ ret = -ENOMEM;
+ goto destroy_servreg;
+ }
+
+ ret = qmi_handle_init(&pdr->servloc_client,
+ SERVREG_GET_DOMAIN_LIST_RESP_MAX_LEN,
+ &service_locator_ops, NULL);
+ if (ret < 0)
+ goto destroy_indack;
+
+ ret = qmi_handle_init(&pdr->servreg_client,
+ SERVREG_STATE_UPDATED_IND_MAX_LEN,
+ &service_notifier_ops, qmi_indication_handler);
+ if (ret < 0)
+ goto release_handle;
+
+ qmi_add_lookup(&pdr->servloc_client, SERVREG_LOCATOR_SERVICE, 1, 1);
+
+ return 0;
+
+release_handle:
+ qmi_handle_release(&pdr->servloc_client);
+destroy_indack:
+ destroy_workqueue(pdr->indack_wq);
+destroy_servreg:
+ destroy_workqueue(pdr->servreg_wq);
+
+ return ret;
+}
+EXPORT_SYMBOL(pdr_handle_init);
+
+/**
+ * pdr_handle_release() - release the PDR client handle
+ * @pdr: PDR client handle
+ *
+ * Cleans up pending tracking requests and releases the underlying qmi handles.
+ */
+void pdr_handle_release(struct pdr_handle *pdr)
+{
+ struct pdr_service *pds, *tmp;
+
+ mutex_lock(&pdr->list_lock);
+ list_for_each_entry_safe(pds, tmp, &pdr->lookups, node) {
+ list_del(&pds->node);
+ kfree(pds);
+ }
+ mutex_unlock(&pdr->list_lock);
+
+ cancel_work_sync(&pdr->servloc_work);
+ cancel_work_sync(&pdr->servreg_work);
+ cancel_work_sync(&pdr->indack_work);
+
+ destroy_workqueue(pdr->servreg_wq);
+ destroy_workqueue(pdr->indack_wq);
+
+ qmi_handle_release(&pdr->servloc_client);
+ qmi_handle_release(&pdr->servreg_client);
+}
+EXPORT_SYMBOL(pdr_handle_release);
diff --git a/drivers/soc/qcom/pdr_internal.h b/drivers/soc/qcom/pdr_internal.h
new file mode 100644
index 0000000000000..adaf85d509507
--- /dev/null
+++ b/drivers/soc/qcom/pdr_internal.h
@@ -0,0 +1,375 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <linux/soc/qcom/pdr.h>
+
+#define SERVREG_LOCATOR_SERVICE 0x40
+#define SERVREG_NOTIFIER_SERVICE 0x42
+
+#define SERVREG_REGISTER_LISTENER_REQ 0x20
+#define SERVREG_GET_DOMAIN_LIST_REQ 0x21
+#define SERVREG_STATE_UPDATED_IND_ID 0x22
+#define SERVREG_SET_ACK_REQ 0x23
+#define SERVREG_RESTART_PD_REQ 0x24
+
+#define SERVREG_DOMAIN_LIST_LENGTH 32
+#define SERVREG_RESTART_PD_REQ_MAX_LEN 67
+#define SERVREG_REGISTER_LISTENER_REQ_LEN 71
+#define SERVREG_SET_ACK_REQ_LEN 72
+#define SERVREG_GET_DOMAIN_LIST_REQ_MAX_LEN 74
+#define SERVREG_STATE_UPDATED_IND_MAX_LEN 79
+#define SERVREG_GET_DOMAIN_LIST_RESP_MAX_LEN 2389
+
+struct servreg_location_entry {
+ char name[SERVREG_NAME_LENGTH + 1];
+ u8 service_data_valid;
+ u32 service_data;
+ u32 instance;
+};
+
+struct qmi_elem_info servreg_location_entry_ei[] = {
+ {
+ .data_type = QMI_STRING,
+ .elem_len = SERVREG_NAME_LENGTH + 1,
+ .elem_size = sizeof(char),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct servreg_location_entry,
+ name),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct servreg_location_entry,
+ instance),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct servreg_location_entry,
+ service_data_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0,
+ .offset = offsetof(struct servreg_location_entry,
+ service_data),
+ },
+ {}
+};
+
+struct servreg_get_domain_list_req {
+ char service_name[SERVREG_NAME_LENGTH + 1];
+ u8 domain_offset_valid;
+ u32 domain_offset;
+};
+
+struct qmi_elem_info servreg_get_domain_list_req_ei[] = {
+ {
+ .data_type = QMI_STRING,
+ .elem_len = SERVREG_NAME_LENGTH + 1,
+ .elem_size = sizeof(char),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x01,
+ .offset = offsetof(struct servreg_get_domain_list_req,
+ service_name),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct servreg_get_domain_list_req,
+ domain_offset_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct servreg_get_domain_list_req,
+ domain_offset),
+ },
+ {}
+};
+
+struct servreg_get_domain_list_resp {
+ struct qmi_response_type_v01 resp;
+ u8 total_domains_valid;
+ u16 total_domains;
+ u8 db_rev_count_valid;
+ u16 db_rev_count;
+ u8 domain_list_valid;
+ u32 domain_list_len;
+ struct servreg_location_entry domain_list[SERVREG_DOMAIN_LIST_LENGTH];
+};
+
+struct qmi_elem_info servreg_get_domain_list_resp_ei[] = {
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = 1,
+ .elem_size = sizeof(struct qmi_response_type_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x02,
+ .offset = offsetof(struct servreg_get_domain_list_resp,
+ resp),
+ .ei_array = qmi_response_type_v01_ei,
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct servreg_get_domain_list_resp,
+ total_domains_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_2_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u16),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct servreg_get_domain_list_resp,
+ total_domains),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x11,
+ .offset = offsetof(struct servreg_get_domain_list_resp,
+ db_rev_count_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_2_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u16),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x11,
+ .offset = offsetof(struct servreg_get_domain_list_resp,
+ db_rev_count),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x12,
+ .offset = offsetof(struct servreg_get_domain_list_resp,
+ domain_list_valid),
+ },
+ {
+ .data_type = QMI_DATA_LEN,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x12,
+ .offset = offsetof(struct servreg_get_domain_list_resp,
+ domain_list_len),
+ },
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = SERVREG_DOMAIN_LIST_LENGTH,
+ .elem_size = sizeof(struct servreg_location_entry),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x12,
+ .offset = offsetof(struct servreg_get_domain_list_resp,
+ domain_list),
+ .ei_array = servreg_location_entry_ei,
+ },
+ {}
+};
+
+struct servreg_register_listener_req {
+ u8 enable;
+ char service_path[SERVREG_NAME_LENGTH + 1];
+};
+
+struct qmi_elem_info servreg_register_listener_req_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x01,
+ .offset = offsetof(struct servreg_register_listener_req,
+ enable),
+ },
+ {
+ .data_type = QMI_STRING,
+ .elem_len = SERVREG_NAME_LENGTH + 1,
+ .elem_size = sizeof(char),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x02,
+ .offset = offsetof(struct servreg_register_listener_req,
+ service_path),
+ },
+ {}
+};
+
+struct servreg_register_listener_resp {
+ struct qmi_response_type_v01 resp;
+ u8 curr_state_valid;
+ enum servreg_service_state curr_state;
+};
+
+struct qmi_elem_info servreg_register_listener_resp_ei[] = {
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = 1,
+ .elem_size = sizeof(struct qmi_response_type_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x02,
+ .offset = offsetof(struct servreg_register_listener_resp,
+ resp),
+ .ei_array = qmi_response_type_v01_ei,
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof(u8),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct servreg_register_listener_resp,
+ curr_state_valid),
+ },
+ {
+ .data_type = QMI_SIGNED_4_BYTE_ENUM,
+ .elem_len = 1,
+ .elem_size = sizeof(enum servreg_service_state),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x10,
+ .offset = offsetof(struct servreg_register_listener_resp,
+ curr_state),
+ },
+ {}
+};
+
+struct servreg_restart_pd_req {
+ char service_path[SERVREG_NAME_LENGTH + 1];
+};
+
+struct qmi_elem_info servreg_restart_pd_req_ei[] = {
+ {
+ .data_type = QMI_STRING,
+ .elem_len = SERVREG_NAME_LENGTH + 1,
+ .elem_size = sizeof(char),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x01,
+ .offset = offsetof(struct servreg_restart_pd_req,
+ service_path),
+ },
+ {}
+};
+
+struct servreg_restart_pd_resp {
+ struct qmi_response_type_v01 resp;
+};
+
+struct qmi_elem_info servreg_restart_pd_resp_ei[] = {
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = 1,
+ .elem_size = sizeof(struct qmi_response_type_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x02,
+ .offset = offsetof(struct servreg_restart_pd_resp,
+ resp),
+ .ei_array = qmi_response_type_v01_ei,
+ },
+ {}
+};
+
+struct servreg_state_updated_ind {
+ enum servreg_service_state curr_state;
+ char service_path[SERVREG_NAME_LENGTH + 1];
+ u16 transaction_id;
+};
+
+struct qmi_elem_info servreg_state_updated_ind_ei[] = {
+ {
+ .data_type = QMI_SIGNED_4_BYTE_ENUM,
+ .elem_len = 1,
+ .elem_size = sizeof(u32),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x01,
+ .offset = offsetof(struct servreg_state_updated_ind,
+ curr_state),
+ },
+ {
+ .data_type = QMI_STRING,
+ .elem_len = SERVREG_NAME_LENGTH + 1,
+ .elem_size = sizeof(char),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x02,
+ .offset = offsetof(struct servreg_state_updated_ind,
+ service_path),
+ },
+ {
+ .data_type = QMI_UNSIGNED_2_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u16),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x03,
+ .offset = offsetof(struct servreg_state_updated_ind,
+ transaction_id),
+ },
+ {}
+};
+
+struct servreg_set_ack_req {
+ char service_path[SERVREG_NAME_LENGTH + 1];
+ u16 transaction_id;
+};
+
+struct qmi_elem_info servreg_set_ack_req_ei[] = {
+ {
+ .data_type = QMI_STRING,
+ .elem_len = SERVREG_NAME_LENGTH + 1,
+ .elem_size = sizeof(char),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x01,
+ .offset = offsetof(struct servreg_set_ack_req,
+ service_path),
+ },
+ {
+ .data_type = QMI_UNSIGNED_2_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof(u16),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x02,
+ .offset = offsetof(struct servreg_set_ack_req,
+ transaction_id),
+ },
+ {}
+};
+
+struct servreg_set_ack_resp {
+ struct qmi_response_type_v01 resp;
+};
+
+struct qmi_elem_info servreg_set_ack_resp_ei[] = {
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = 1,
+ .elem_size = sizeof(struct qmi_response_type_v01),
+ .array_type = NO_ARRAY,
+ .tlv_type = 0x02,
+ .offset = offsetof(struct servreg_set_ack_resp,
+ resp),
+ .ei_array = qmi_response_type_v01_ei,
+ },
+ {}
+};
diff --git a/include/linux/soc/qcom/pdr.h b/include/linux/soc/qcom/pdr.h
new file mode 100644
index 0000000000000..7d48531705ac3
--- /dev/null
+++ b/include/linux/soc/qcom/pdr.h
@@ -0,0 +1,109 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __QCOM_PDR_HELPER__
+#define __QCOM_PDR_HELPER__
+
+#include <linux/soc/qcom/qmi.h>
+
+#define SERVREG_NAME_LENGTH 64
+
+enum servreg_service_state {
+ SERVREG_LOCATOR_ERR = 0x1,
+ SERVREG_LOCATOR_UNKNOWN_SERVICE = 0x2,
+ SERVREG_LOCATOR_DB_UPDATED = 0x3,
+ SERVREG_SERVICE_STATE_DOWN = 0x0FFFFFFF,
+ SERVREG_SERVICE_STATE_UP = 0x1FFFFFFF,
+ SERVREG_SERVICE_STATE_EARLY_DOWN = 0x2FFFFFFF,
+ SERVREG_SERVICE_STATE_UNINIT = 0x7FFFFFFF,
+};
+
+/**
+ * struct pdr_service - context to track lookups/restarts
+ * @service_name: name of the service running on the PD
+ * @service_path: service path of the PD
+ * @service_data_valid: indicates if service_data field has valid data
+ * @service_data: service data provided by servreg_locator service
+ * @need_servreg_lookup: state flag for tracking servreg lookup requests
+ * @need_servreg_register: state flag for tracking pending servreg register
+ * @need_servreg_remove: state flag for tracking pending servreg remove
+ * @service_connected: current state of servreg_notifier qmi service
+ * @state: current state of PD
+ * @service: servreg_notifier service type
+ * @instance: instance id of the @service
+ * @priv: handle for client's use
+ * @node: list_head for house keeping
+ */
+struct pdr_service {
+ char service_name[SERVREG_NAME_LENGTH + 1];
+ char service_path[SERVREG_NAME_LENGTH + 1];
+
+ u8 service_data_valid;
+ u32 service_data;
+
+ bool need_servreg_lookup;
+ bool need_servreg_register;
+ bool need_servreg_remove;
+ bool service_connected;
+ int state;
+
+ unsigned int instance;
+ unsigned int service;
+
+ void *priv;
+ struct list_head node;
+};
+
+/**
+ * struct pdr_handle - PDR context
+ * @servloc_client: servreg_locator qmi handle
+ * @servreg_client: servreg_notifier qmi handle
+ * @servloc_addr: socket addr of @servloc_client
+ * @servreg_addr: socket addr of @servreg_client
+ * @lookups: list of lookup requests
+ * @indack_list: list of qmi indications from servreg_notifier services
+ * @list_lock: lock for modifications of lists
+ * @status_lock: lock to serialize pd status callback
+ * @locator_lock: lock for the shared locator state flag
+ * @locator_available: state flag to track servreg_locator service
+ * @servloc_work: work for handling lookup requests
+ * @servreg_work: work for registering with servreg_notifier service
+ * @indack_work: work for acking the qmi indications
+ * @servreg_wq: work queue to post @servreg_work on
+ * @indack_wq: work queue to post @indack_work on
+ * @status: callback to inform the client on PD service state change
+ */
+struct pdr_handle {
+ struct qmi_handle servloc_client;
+ struct qmi_handle servreg_client;
+
+ struct sockaddr_qrtr servloc_addr;
+ struct sockaddr_qrtr servreg_addr;
+
+ struct list_head lookups;
+ struct list_head indack_list;
+
+ /* control access to pdr lookup list */
+ struct mutex list_lock;
+
+ /* serialize pd status invocation */
+ struct mutex status_lock;
+
+ /* control access to service locator state */
+ struct mutex locator_lock;
+ bool locator_available;
+
+ struct work_struct servloc_work;
+ struct work_struct servreg_work;
+ struct work_struct indack_work;
+
+ struct workqueue_struct *servreg_wq;
+ struct workqueue_struct *indack_wq;
+
+ void (*status)(struct pdr_handle *pdr, struct pdr_service *pds);
+};
+
+int pdr_handle_init(struct pdr_handle *pdr, void (*status)(struct pdr_handle *pdr, struct pdr_service *pds));
+int pdr_add_lookup(struct pdr_handle *pdr, const char *service_name, const char *service_path);
+int pdr_restart_pd(struct pdr_handle *pdr, const char *service_path);
+void pdr_handle_release(struct pdr_handle *pdr);
+
+#endif
diff --git a/include/linux/soc/qcom/qmi.h b/include/linux/soc/qcom/qmi.h
index 5efa2b67fa557..e712f94b89fcc 100644
--- a/include/linux/soc/qcom/qmi.h
+++ b/include/linux/soc/qcom/qmi.h
@@ -88,6 +88,7 @@ struct qmi_elem_info {
#define QMI_ERR_CLIENT_IDS_EXHAUSTED_V01 5
#define QMI_ERR_INVALID_ID_V01 41
#define QMI_ERR_ENCODING_V01 58
+#define QMI_ERR_DISABLED_V01 69
#define QMI_ERR_INCOMPATIBLE_STATE_V01 90
#define QMI_ERR_NOT_SUPPORTED_V01 94
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project
* [PATCH v3 2/3] dt-bindings: soc: qcom: apr: Add protection domain bindings
2019-12-30 5:00 [PATCH v3 0/3] Introduce Protection Domain Restart (PDR) Helpers Sibi Sankar
2019-12-30 5:00 ` [PATCH v3 1/3] soc: qcom: Introduce Protection Domain Restart helpers Sibi Sankar
@ 2019-12-30 5:00 ` Sibi Sankar
2020-01-31 14:34 ` Rob Herring
2019-12-30 5:00 ` [PATCH v3 3/3] soc: qcom: apr: Add avs/audio tracking functionality Sibi Sankar
2 siblings, 1 reply; 10+ messages in thread
From: Sibi Sankar @ 2019-12-30 5:00 UTC (permalink / raw)
To: bjorn.andersson, srinivas.kandagatla, robh+dt, tsoni
Cc: agross, mark.rutland, linux-arm-msm, linux-remoteproc,
linux-kernel, devicetree, Sibi Sankar
Add optional "qcom,protection-domain" bindings for APR services. This
helps capture the dependencies between APR services and the PD on
which each APR service runs.
Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Reviewed-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Signed-off-by: Sibi Sankar <sibis@codeaurora.org>
---
.../devicetree/bindings/soc/qcom/qcom,apr.txt | 59 +++++++++++++++++++
1 file changed, 59 insertions(+)
diff --git a/Documentation/devicetree/bindings/soc/qcom/qcom,apr.txt b/Documentation/devicetree/bindings/soc/qcom/qcom,apr.txt
index db501269f47b8..f87c0b2a48de4 100644
--- a/Documentation/devicetree/bindings/soc/qcom/qcom,apr.txt
+++ b/Documentation/devicetree/bindings/soc/qcom/qcom,apr.txt
@@ -45,6 +45,12 @@ by the individual bindings for the specific service
12 - Ultrasound stream manager.
13 - Listen stream manager.
+- qcom,protection-domain
+ Usage: optional
+ Value type: <stringlist>
+ Definition: Must list the protection domain service name and path
+ that the particular apr service has a dependency on.
+
= EXAMPLE
The following example represents a QDSP based sound card on a MSM8996 device
which uses apr as communication between Apps and QDSP.
@@ -82,3 +88,56 @@ which uses apr as communication between Apps and QDSP.
...
};
};
+
+= EXAMPLE 2
+The following example represents a QDSP based sound card on an SDM845 device.
+Here the apr services depend on the "avs/audio" service running on the AUDIO
+Protection Domain hosted on the ADSP remote processor.
+
+ apr {
+ compatible = "qcom,apr-v2";
+ qcom,glink-channels = "apr_audio_svc";
+ qcom,apr-domain = <APR_DOMAIN_ADSP>;
+
+ q6core {
+ compatible = "qcom,q6core";
+ reg = <APR_SVC_ADSP_CORE>;
+ qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
+ };
+
+ q6afe: q6afe {
+ compatible = "qcom,q6afe";
+ reg = <APR_SVC_AFE>;
+ qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
+ q6afedai: dais {
+ compatible = "qcom,q6afe-dais";
+ #sound-dai-cells = <1>;
+
+ qi2s@22 {
+ reg = <22>;
+ qcom,sd-lines = <3>;
+ };
+ };
+ };
+
+ q6asm: q6asm {
+ compatible = "qcom,q6asm";
+ reg = <APR_SVC_ASM>;
+ qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
+ q6asmdai: dais {
+ compatible = "qcom,q6asm-dais";
+ #sound-dai-cells = <1>;
+ iommus = <&apps_smmu 0x1821 0x0>;
+ };
+ };
+
+ q6adm: q6adm {
+ compatible = "qcom,q6adm";
+ reg = <APR_SVC_ADM>;
+ qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
+ q6routing: routing {
+ compatible = "qcom,q6adm-routing";
+ #sound-dai-cells = <0>;
+ };
+ };
+ };
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project
* [PATCH v3 3/3] soc: qcom: apr: Add avs/audio tracking functionality
2019-12-30 5:00 [PATCH v3 0/3] Introduce Protection Domain Restart (PDR) Helpers Sibi Sankar
2019-12-30 5:00 ` [PATCH v3 1/3] soc: qcom: Introduce Protection Domain Restart helpers Sibi Sankar
2019-12-30 5:00 ` [PATCH v3 2/3] dt-bindings: soc: qcom: apr: Add protection domain bindings Sibi Sankar
@ 2019-12-30 5:00 ` Sibi Sankar
2020-01-02 20:57 ` Bjorn Andersson
2 siblings, 1 reply; 10+ messages in thread
From: Sibi Sankar @ 2019-12-30 5:00 UTC (permalink / raw)
To: bjorn.andersson, srinivas.kandagatla, robh+dt, tsoni
Cc: agross, mark.rutland, linux-arm-msm, linux-remoteproc,
linux-kernel, devicetree, Sibi Sankar
Use the PDR helper functions to track the protection domains that the
APR services on the SDM845 SoC depend on, specifically the "avs/audio"
service running on the ADSP Q6.
Signed-off-by: Sibi Sankar <sibis@codeaurora.org>
---
drivers/soc/qcom/Kconfig | 1 +
drivers/soc/qcom/apr.c | 100 +++++++++++++++++++++++++++++++----
include/linux/soc/qcom/apr.h | 1 +
3 files changed, 91 insertions(+), 11 deletions(-)
diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
index 5c4e76837f59b..cacfed945b275 100644
--- a/drivers/soc/qcom/Kconfig
+++ b/drivers/soc/qcom/Kconfig
@@ -202,6 +202,7 @@ config QCOM_APR
tristate "Qualcomm APR Bus (Asynchronous Packet Router)"
depends on ARCH_QCOM || COMPILE_TEST
depends on RPMSG
+ select QCOM_PDR_HELPERS
help
Enable APR IPC protocol support between
application processor and QDSP6. APR is
diff --git a/drivers/soc/qcom/apr.c b/drivers/soc/qcom/apr.c
index 4fcc32420c474..5234426718e88 100644
--- a/drivers/soc/qcom/apr.c
+++ b/drivers/soc/qcom/apr.c
@@ -11,6 +11,7 @@
#include <linux/workqueue.h>
#include <linux/of_device.h>
#include <linux/soc/qcom/apr.h>
+#include <linux/soc/qcom/pdr.h>
#include <linux/rpmsg.h>
#include <linux/of.h>
@@ -21,6 +22,7 @@ struct apr {
spinlock_t rx_lock;
struct idr svcs_idr;
int dest_domain_id;
+ struct pdr_handle pdr;
struct workqueue_struct *rxwq;
struct work_struct rx_work;
struct list_head rx_list;
@@ -289,6 +291,9 @@ static int apr_add_device(struct device *dev, struct device_node *np,
id->svc_id + 1, GFP_ATOMIC);
spin_unlock(&apr->svcs_lock);
+ of_property_read_string_index(np, "qcom,protection-domain",
+ 1, &adev->service_path);
+
dev_info(dev, "Adding APR dev: %s\n", dev_name(&adev->dev));
ret = device_register(&adev->dev);
@@ -300,14 +305,56 @@ static int apr_add_device(struct device *dev, struct device_node *np,
return ret;
}
-static void of_register_apr_devices(struct device *dev)
+static void of_apr_add_pd_lookups(struct device *dev)
{
+ const char *service_name, *service_path;
struct apr *apr = dev_get_drvdata(dev);
struct device_node *node;
+ int ret;
+
+ for_each_child_of_node(dev->of_node, node) {
+ ret = of_property_read_string_index(node, "qcom,protection-domain",
+ 0, &service_name);
+ if (ret < 0)
+ continue;
+
+ ret = of_property_read_string_index(node, "qcom,protection-domain",
+ 1, &service_path);
+ if (ret < 0)
+ continue;
+
+ ret = pdr_add_lookup(&apr->pdr, service_name, service_path);
+ if (ret && ret != -EALREADY)
+ dev_err(dev, "pdr add lookup failed: %d\n", ret);
+ }
+}
+
+static void of_register_apr_devices(struct device *dev, const char *svc_path)
+{
+ struct apr *apr = dev_get_drvdata(dev);
+ struct device_node *node;
+ const char *service_path;
+ int ret;
for_each_child_of_node(dev->of_node, node) {
struct apr_device_id id = { {0} };
+ ret = of_property_read_string_index(node, "qcom,protection-domain",
+ 1, &service_path);
+ if (svc_path) {
+ /* skip APR services that are PD independent */
+ if (ret)
+ continue;
+
+ /* skip APR services whose PD paths don't match */
+ if (strcmp(service_path, svc_path))
+ continue;
+ } else {
+ /* skip APR services whose PD lookups are registered */
+ if (ret == 0)
+ continue;
+ }
+
if (of_property_read_u32(node, "reg", &id.svc_id))
continue;
@@ -318,6 +365,35 @@ static void of_register_apr_devices(struct device *dev)
}
}
+static int apr_remove_device(struct device *dev, void *svc_path)
+{
+ struct apr_device *adev = to_apr_device(dev);
+
+ if (svc_path) {
+ if (!strcmp(adev->service_path, (char *)svc_path))
+ device_unregister(&adev->dev);
+ } else {
+ device_unregister(&adev->dev);
+ }
+
+ return 0;
+}
+
+static void apr_pd_status(struct pdr_handle *pdr, struct pdr_service *pds)
+{
+ struct apr *apr = container_of(pdr, struct apr, pdr);
+
+ switch (pds->state) {
+ case SERVREG_SERVICE_STATE_UP:
+ of_register_apr_devices(apr->dev, pds->service_path);
+ break;
+ case SERVREG_SERVICE_STATE_DOWN:
+ device_for_each_child(apr->dev, pds->service_path,
+ apr_remove_device);
+ break;
+ }
+}
+
static int apr_probe(struct rpmsg_device *rpdev)
{
struct device *dev = &rpdev->dev;
@@ -337,26 +413,27 @@ static int apr_probe(struct rpmsg_device *rpdev)
dev_set_drvdata(dev, apr);
apr->ch = rpdev->ept;
apr->dev = dev;
+
apr->rxwq = create_singlethread_workqueue("qcom_apr_rx");
if (!apr->rxwq) {
dev_err(apr->dev, "Failed to start Rx WQ\n");
return -ENOMEM;
}
INIT_WORK(&apr->rx_work, apr_rxwq);
+
+ ret = pdr_handle_init(&apr->pdr, apr_pd_status);
+ if (ret) {
+ dev_err(dev, "Failed to init PDR handle\n");
+ destroy_workqueue(apr->rxwq);
+ return ret;
+ }
+
INIT_LIST_HEAD(&apr->rx_list);
spin_lock_init(&apr->rx_lock);
spin_lock_init(&apr->svcs_lock);
idr_init(&apr->svcs_idr);
- of_register_apr_devices(dev);
-
- return 0;
-}
-
-static int apr_remove_device(struct device *dev, void *null)
-{
- struct apr_device *adev = to_apr_device(dev);
-
- device_unregister(&adev->dev);
+ of_apr_add_pd_lookups(dev);
+ of_register_apr_devices(dev, NULL);
return 0;
}
@@ -365,6 +442,7 @@ static void apr_remove(struct rpmsg_device *rpdev)
{
struct apr *apr = dev_get_drvdata(&rpdev->dev);
+ pdr_handle_release(&apr->pdr);
device_for_each_child(&rpdev->dev, NULL, apr_remove_device);
flush_workqueue(apr->rxwq);
destroy_workqueue(apr->rxwq);
diff --git a/include/linux/soc/qcom/apr.h b/include/linux/soc/qcom/apr.h
index c5d52e2cb275f..7f0bc3cf4d610 100644
--- a/include/linux/soc/qcom/apr.h
+++ b/include/linux/soc/qcom/apr.h
@@ -85,6 +85,7 @@ struct apr_device {
uint16_t domain_id;
uint32_t version;
char name[APR_NAME_SIZE];
+ const char *service_path;
spinlock_t lock;
struct list_head node;
};
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project
* Re: [PATCH v3 1/3] soc: qcom: Introduce Protection Domain Restart helpers
2019-12-30 5:00 ` [PATCH v3 1/3] soc: qcom: Introduce Protection Domain Restart helpers Sibi Sankar
@ 2020-01-02 20:45 ` Bjorn Andersson
2020-01-03 12:36 ` Sibi Sankar
0 siblings, 1 reply; 10+ messages in thread
From: Bjorn Andersson @ 2020-01-02 20:45 UTC (permalink / raw)
To: Sibi Sankar
Cc: srinivas.kandagatla, robh+dt, tsoni, agross, mark.rutland,
linux-arm-msm, linux-remoteproc, linux-kernel, devicetree
On Sun 29 Dec 21:00 PST 2019, Sibi Sankar wrote:
[..]
> diff --git a/drivers/soc/qcom/pdr_interface.c b/drivers/soc/qcom/pdr_interface.c
[..]
> +static int servreg_locator_new_server(struct qmi_handle *qmi,
> + struct qmi_service *svc)
> +{
> + struct pdr_handle *pdr = container_of(qmi, struct pdr_handle,
> + servloc_client);
> + struct pdr_service *pds, *tmp;
> +
> + /* Create a Local client port for QMI communication */
> + pdr->servloc_addr.sq_family = AF_QIPCRTR;
> + pdr->servloc_addr.sq_node = svc->node;
> + pdr->servloc_addr.sq_port = svc->port;
> +
> + mutex_lock(&pdr->locator_lock);
> + pdr->locator_available = true;
> + mutex_unlock(&pdr->locator_lock);
> +
> + /* Service pending lookup requests */
> + mutex_lock(&pdr->list_lock);
> + list_for_each_entry_safe(pds, tmp, &pdr->lookups, node) {
No need to make this _safe, as you're not modifying the list in the
loop.
> + if (pds->need_servreg_lookup)
> + schedule_work(&pdr->servloc_work);
> + }
> + mutex_unlock(&pdr->list_lock);
> +
> + return 0;
> +}
[..]
> +static void pdr_servreg_link_create(struct pdr_handle *pdr,
> + struct pdr_service *pds)
> +{
> + struct pdr_service *pds_iter, *tmp;
> + bool link_exists = false;
> +
> + /* Check if a QMI link to SERVREG instance already exists */
> + mutex_lock(&pdr->list_lock);
> + list_for_each_entry_safe(pds_iter, tmp, &pdr->lookups, node) {
> + if (pds_iter->instance == pds->instance &&
Flip this condition around and continue if it's not a match, to save
indentation and to split the two expressions into two distinct checks.
> + strcmp(pds_iter->service_path, pds->service_path)) {
Isn't this just saying:
if (pds_iter == pds)
continue;
With the purpose of link_exists to be !empty(set(lookups) - pds) ?
But if I read pdr_add_lookup() correctly it's possible that a client
could call pdr_add_lookup() more than once before pdr_servloc_work() is
scheduled, in which case "set(lookup) - pds" isn't empty and as such you
won't add the lookup?
> + link_exists = true;
> + pds->service_connected = pds_iter->service_connected;
> + if (pds_iter->service_connected)
> + pds->need_servreg_register = true;
> + else
> + pds->need_servreg_remove = true;
> + queue_work(pdr->servreg_wq, &pdr->servreg_work);
> + break;
> + }
> + }
[..]
> +static void pdr_servloc_work(struct work_struct *work)
> +{
> + struct pdr_handle *pdr = container_of(work, struct pdr_handle,
> + servloc_work);
> + struct pdr_service *pds, *tmp;
> + int ret;
> +
> + /* Bail out early if PD Mapper is not up */
> + mutex_lock(&pdr->locator_lock);
> + if (!pdr->locator_available) {
> + mutex_unlock(&pdr->locator_lock);
> + pr_warn("PDR: SERVICE LOCATOR service not available\n");
> + return;
> + }
> + mutex_unlock(&pdr->locator_lock);
> +
> + mutex_lock(&pdr->list_lock);
> + list_for_each_entry_safe(pds, tmp, &pdr->lookups, node) {
As written right now you don't need _safe here, because in the only case
you're modifying the list you end up exiting the loop.
> + if (!pds->need_servreg_lookup)
> + continue;
> +
> + pds->need_servreg_lookup = false;
> + mutex_unlock(&pdr->list_lock);
You should probably just hold on to list_lock over this entire loop.
> +
> + ret = pdr_locate_service(pdr, pds);
> + if (ret < 0) {
> + if (ret == -ENXIO)
> + pds->state = SERVREG_LOCATOR_UNKNOWN_SERVICE;
> + else if (ret == -EAGAIN)
> + pds->state = SERVREG_LOCATOR_DB_UPDATED;
Isn't this something that we should recover from?
> + else
> + pds->state = SERVREG_LOCATOR_ERR;
> +
> + pr_err("PDR: service lookup for %s failed: %d\n",
> + pds->service_name, ret);
> +
> + /* Remove from lookup list */
> + mutex_lock(&pdr->list_lock);
> + list_del(&pds->node);
What should I do in my driver when this happens?
> + mutex_unlock(&pdr->list_lock);
> +
> + /* Notify Lookup failed */
> + mutex_lock(&pdr->status_lock);
> + pdr->status(pdr, pds);
> + mutex_unlock(&pdr->status_lock);
> + kfree(pds);
> + } else {
> + pdr_servreg_link_create(pdr, pds);
> + }
> +
> + return;
There might be more pds entries with need_servreg_lookup in the list;
shouldn't we allow this to continue?
This would, though, imply that you should hold onto the list_lock over
the entire loop, which I think looks fine.
> + }
> + mutex_unlock(&pdr->list_lock);
> +}
> +
> +/**
> + * pdr_add_lookup() - register a tracking request for a PD
> + * @pdr: PDR client handle
> + * @service_name: service name of the tracking request
> + * @service_path: service path of the tracking request
> + *
> + * Registering a pdr lookup allows for tracking the life cycle of the PD.
> + *
> + * Return: 0 on success, negative errno on failure.
> + */
> +int pdr_add_lookup(struct pdr_handle *pdr, const char *service_name,
> + const char *service_path)
> +{
> + struct pdr_service *pds, *pds_iter, *tmp;
> + int ret;
> +
> + if (!service_name || strlen(service_name) > SERVREG_NAME_LENGTH ||
> + !service_path || strlen(service_path) > SERVREG_NAME_LENGTH)
> + return -EINVAL;
> +
> + pds = kzalloc(sizeof(*pds), GFP_KERNEL);
> + if (!pds)
> + return -ENOMEM;
> +
> + pds->service = SERVREG_NOTIFIER_SERVICE;
> + strcpy(pds->service_name, service_name);
> + strcpy(pds->service_path, service_path);
> + pds->need_servreg_lookup = true;
> +
> + mutex_lock(&pdr->list_lock);
> + list_for_each_entry_safe(pds_iter, tmp, &pdr->lookups, node) {
No _safe
> + if (!strcmp(pds_iter->service_path, service_path)) {
> + mutex_unlock(&pdr->list_lock);
> + ret = -EALREADY;
> + goto err;
> + }
> + }
> +
> + list_add(&pds->node, &pdr->lookups);
> + mutex_unlock(&pdr->list_lock);
> +
> + schedule_work(&pdr->servloc_work);
> +
> + return 0;
> +err:
> + kfree(pds);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL(pdr_add_lookup);
> +
> +/**
> + * pdr_restart_pd() - restart PD
> + * @pdr: PDR client handle
> + * @service_path: service path of restart request
> + *
> + * Restarts the PD tracked by the PDR client handle for a given service path.
> + *
> + * Return: 0 on success, negative errno on failure.
> + */
> +int pdr_restart_pd(struct pdr_handle *pdr, const char *service_path)
> +{
> + struct servreg_restart_pd_req req;
> + struct servreg_restart_pd_resp resp;
> + struct pdr_service *pds = NULL, *pds_iter, *tmp;
> + struct qmi_txn txn;
> + int ret;
> +
> + if (!service_path || strlen(service_path) > SERVREG_NAME_LENGTH)
> + return -EINVAL;
> +
> + mutex_lock(&pdr->list_lock);
> + list_for_each_entry_safe(pds_iter, tmp, &pdr->lookups, node) {
> + if (!pds_iter->service_connected)
> + continue;
> +
> + if (!strcmp(pds_iter->service_path, service_path)) {
> + pds = pds_iter;
> + break;
> + }
> + }
> + mutex_unlock(&pdr->list_lock);
> +
> + if (!pds)
Given that you may only call pdr_restart_pd() on something created by
first calling pdr_add_lookup(), how about returning the struct
pdr_service from pdr_add_lookup() instead and then have the client pass
that as an argument to this function.
Most clients don't care about pdr_restart_pd(), so they would only have
to IS_ERR(pdr_add_lookup()) anyway, and the ones that do care can carry
the returned pointer.
Note that the struct pdr_service doesn't have to be defined in a way
that it's possible to dereference by clients.
> + return -EINVAL;
> +
> + /* Prepare req message */
> + strcpy(req.service_path, pds->service_path);
> +
> + ret = qmi_txn_init(&pdr->servreg_client, &txn,
> + servreg_restart_pd_resp_ei,
> + &resp);
> + if (ret < 0)
> + return ret;
> +
> + ret = qmi_send_request(&pdr->servreg_client, &pdr->servreg_addr,
> + &txn, SERVREG_RESTART_PD_REQ,
> + SERVREG_RESTART_PD_REQ_MAX_LEN,
> + servreg_restart_pd_req_ei, &req);
> + if (ret < 0) {
> + qmi_txn_cancel(&txn);
> + return ret;
> + }
> +
> + ret = qmi_txn_wait(&txn, 5 * HZ);
> + if (ret < 0) {
> + pr_err("PDR: %s PD restart txn wait failed: %d\n",
> + pds->service_path, ret);
> + return ret;
> + }
> +
> + /* Check response if PDR is disabled */
> + if (resp.resp.result == QMI_RESULT_FAILURE_V01 &&
> + resp.resp.error == QMI_ERR_DISABLED_V01) {
> + pr_err("PDR: %s PD restart is disabled: 0x%x\n",
> + pds->service_path, resp.resp.error);
> + return -EOPNOTSUPP;
> + }
> +
> + /* Check the response for other error cases */
> + if (resp.resp.result != QMI_RESULT_SUCCESS_V01) {
> + pr_err("PDR: %s request for PD restart failed: 0x%x\n",
> + pds->service_path, resp.resp.error);
> + return -EREMOTEIO;
> + }
> +
> + return ret;
> +}
> +EXPORT_SYMBOL(pdr_restart_pd);
[..]
> +/**
> + * struct pdr_service - context to track lookups/restarts
> + * @service_name: name of the service running on the PD
> + * @service_path: service path of the PD
> + * @service_data_valid: indicates if service_data field has valid data
> + * @service_data: service data provided by servreg_locator service
> + * @need_servreg_lookup: state flag for tracking servreg lookup requests
> + * @need_servreg_register: state flag for tracking pending servreg register
> + * @need_servreg_remove: state flag for tracking pending servreg remove
> + * @service_connected: current state of servreg_notifier qmi service
> + * @state: current state of PD
> + * @service: servreg_notifer service type
> + * @instance: instance id of the @service
> + * @priv: handle for client's use
> + * @node: list_head for house keeping
> + */
> +struct pdr_service {
This is primarily internal bookkeeping; how about not exposing it to the
clients? This would imply that status() would have to be called with
pdr_service->priv and pdr_service->state as arguments instead.
> + char service_name[SERVREG_NAME_LENGTH + 1];
> + char service_path[SERVREG_NAME_LENGTH + 1];
> +
> + u8 service_data_valid;
> + u32 service_data;
> +
> + bool need_servreg_lookup;
> + bool need_servreg_register;
> + bool need_servreg_remove;
> + bool service_connected;
> + int state;
> +
> + unsigned int instance;
> + unsigned int service;
> +
> + void *priv;
> + struct list_head node;
> +};
> +
[..]
> + void (*status)(struct pdr_handle *pdr, struct pdr_service *pds);
> +};
Regards,
Bjorn
* Re: [PATCH v3 3/3] soc: qcom: apr: Add avs/audio tracking functionality
2019-12-30 5:00 ` [PATCH v3 3/3] soc: qcom: apr: Add avs/audio tracking functionality Sibi Sankar
@ 2020-01-02 20:57 ` Bjorn Andersson
2020-01-03 12:47 ` Sibi Sankar
0 siblings, 1 reply; 10+ messages in thread
From: Bjorn Andersson @ 2020-01-02 20:57 UTC (permalink / raw)
To: Sibi Sankar
Cc: srinivas.kandagatla, robh+dt, tsoni, agross, mark.rutland,
linux-arm-msm, linux-remoteproc, linux-kernel, devicetree
On Sun 29 Dec 21:00 PST 2019, Sibi Sankar wrote:
> diff --git a/drivers/soc/qcom/apr.c b/drivers/soc/qcom/apr.c
[..]
> -static void of_register_apr_devices(struct device *dev)
> +static void of_apr_add_pd_lookups(struct device *dev)
> {
> + const char *service_name, *service_path;
> struct apr *apr = dev_get_drvdata(dev);
> struct device_node *node;
> + int ret;
> +
> + for_each_child_of_node(dev->of_node, node) {
> + ret = of_property_read_string_index(node, "qcom,protection-domain",
> + 0, &service_name);
> + if (ret < 0)
> + continue;
While this implies that the qcom,protection-domain property is
missing...
> +
> + ret = of_property_read_string_index(node, "qcom,protection-domain",
> + 1, &service_path);
> + if (ret < 0)
> + continue;
...this would imply that it's there but the format is wrong. I think you
should log this and propagate the error.
> +
> + ret = pdr_add_lookup(&apr->pdr, service_name, service_path);
> + if (ret && ret != -EALREADY)
> + dev_err(dev, "pdr add lookup failed: %d\n", ret);
So we have a DT that denotes that PDR is required, but we failed to
register a lookup (for some reason). That would imply that apr is not
going to work. I think you should propagate this and make apr_probe()
fail to make this obvious.
> + }
> +}
> +
> +static void of_register_apr_devices(struct device *dev, const char *svc_path)
> +{
> + struct apr *apr = dev_get_drvdata(dev);
> + struct device_node *node;
> + const char *service_path;
> + int ret;
>
> for_each_child_of_node(dev->of_node, node) {
> struct apr_device_id id = { {0} };
I think you should add a comment here describing what's actually going
on. Something along the lines of:
/*
* This function is called with svc_path NULL during apr_probe(), in
* which case we register any apr devices without a
* qcom,protection-domain specified.
*
* Then as the protection domains becomes available (if applicable) this
* function is again called, but with svc_path representing the service
* becoming available. In this case we register any apr devices with a
* matching qcom,protection-domain.
*/
>
> + ret = of_property_read_string_index(node, "qcom,protection-domain",
> + 1, &service_path);
> + if (svc_path) {
> + /* skip APR services that are PD independent */
> + if (ret)
> + continue;
> +
> + /* skip APR services whose PD paths don't match */
> + if (strcmp(service_path, svc_path))
> + continue;
> + } else {
> + /* skip APR services whose PD lookups are registered */
> + if (ret == 0)
> + continue;
> + }
> +
Regards,
Bjorn
* Re: [PATCH v3 1/3] soc: qcom: Introduce Protection Domain Restart helpers
2020-01-02 20:45 ` Bjorn Andersson
@ 2020-01-03 12:36 ` Sibi Sankar
0 siblings, 0 replies; 10+ messages in thread
From: Sibi Sankar @ 2020-01-03 12:36 UTC (permalink / raw)
To: Bjorn Andersson
Cc: srinivas.kandagatla, robh+dt, tsoni, agross, mark.rutland,
linux-arm-msm, linux-remoteproc, linux-kernel, devicetree,
linux-remoteproc-owner
Hey Bjorn,
Thanks for taking the time to review
the series.
On 2020-01-03 02:15, Bjorn Andersson wrote:
> On Sun 29 Dec 21:00 PST 2019, Sibi Sankar wrote:
> [..]
>> diff --git a/drivers/soc/qcom/pdr_interface.c
>> b/drivers/soc/qcom/pdr_interface.c
> [..]
>> +static int servreg_locator_new_server(struct qmi_handle *qmi,
>> + struct qmi_service *svc)
>> +{
>> + struct pdr_handle *pdr = container_of(qmi, struct pdr_handle,
>> + servloc_client);
>> + struct pdr_service *pds, *tmp;
>> +
>> + /* Create a Local client port for QMI communication */
>> + pdr->servloc_addr.sq_family = AF_QIPCRTR;
>> + pdr->servloc_addr.sq_node = svc->node;
>> + pdr->servloc_addr.sq_port = svc->port;
>> +
>> + mutex_lock(&pdr->locator_lock);
>> + pdr->locator_available = true;
>> + mutex_unlock(&pdr->locator_lock);
>> +
>> + /* Service pending lookup requests */
>> + mutex_lock(&pdr->list_lock);
>> + list_for_each_entry_safe(pds, tmp, &pdr->lookups, node) {
>
> No need to make this _safe, as you're not modifying the list in the
> loop.
sure I'll do that
>
>> + if (pds->need_servreg_lookup)
>> + schedule_work(&pdr->servloc_work);
>> + }
>> + mutex_unlock(&pdr->list_lock);
>> +
>> + return 0;
>> +}
> [..]
>> +static void pdr_servreg_link_create(struct pdr_handle *pdr,
>> + struct pdr_service *pds)
>> +{
>> + struct pdr_service *pds_iter, *tmp;
>> + bool link_exists = false;
>> +
>> + /* Check if a QMI link to SERVREG instance already exists */
>> + mutex_lock(&pdr->list_lock);
>> + list_for_each_entry_safe(pds_iter, tmp, &pdr->lookups, node) {
>> + if (pds_iter->instance == pds->instance &&
>
> Flip this condition around and continue if it's not a match, to save
> indentation and to split the two expressions into two distinct checks.
sure I'll do that
>
>> + strcmp(pds_iter->service_path, pds->service_path)) {
>
> Isn't this just saying:
> if (pds_iter == pds)
> continue;
>
> With the purpose of link_exists to be !empty(set(lookups) - pds) ?
More like:
!empty(set(lookups_with_same_instance) - pds)
servreg_link_create was added to re-use
an existing qmi_lookup, i.e. to deal with
PDs running on the same remote processor.
This can be identified by looking for
a lookup with the same instance value
but with a different service path. We
still need to register the service_path
with the servreg service once its up.
>
> But if I read pdr_add_lookup() correctly it's possible that a client
> could call pdr_add_lookup() more than once before pdr_servloc_work() is
> scheduled, in which case "set(lookup) - pds" isn't empty and as such you
> won't add the lookup?
Holding the lock over the entire
servloc_work should handle that scenario.
That way we can ensure qmi_lookup is
called at least once.
>
>> + link_exists = true;
>> + pds->service_connected = pds_iter->service_connected;
>> + if (pds_iter->service_connected)
>> + pds->need_servreg_register = true;
>> + else
>> + pds->need_servreg_remove = true;
>> + queue_work(pdr->servreg_wq, &pdr->servreg_work);
>> + break;
>> + }
>> + }
> [..]
>> +static void pdr_servloc_work(struct work_struct *work)
>> +{
>> + struct pdr_handle *pdr = container_of(work, struct pdr_handle,
>> + servloc_work);
>> + struct pdr_service *pds, *tmp;
>> + int ret;
>> +
>> + /* Bail out early if PD Mapper is not up */
>> + mutex_lock(&pdr->locator_lock);
>> + if (!pdr->locator_available) {
>> + mutex_unlock(&pdr->locator_lock);
>> + pr_warn("PDR: SERVICE LOCATOR service not available\n");
>> + return;
>> + }
>> + mutex_unlock(&pdr->locator_lock);
>> +
>> + mutex_lock(&pdr->list_lock);
>> + list_for_each_entry_safe(pds, tmp, &pdr->lookups, node) {
>
> As written right now you don't need _safe here, because in the only case
> you're modifying the list you end up exiting the loop.
sure
>
>> + if (!pds->need_servreg_lookup)
>> + continue;
>> +
>> + pds->need_servreg_lookup = false;
>> + mutex_unlock(&pdr->list_lock);
>
> You should probably just hold on to list_lock over this entire loop.
>
>> +
>> + ret = pdr_locate_service(pdr, pds);
>> + if (ret < 0) {
>> + if (ret == -ENXIO)
>> + pds->state = SERVREG_LOCATOR_UNKNOWN_SERVICE;
>> + else if (ret == -EAGAIN)
>> + pds->state = SERVREG_LOCATOR_DB_UPDATED;
>
> Isn't this something that we should recover from?
Yes, it's a case where the JSON
referenced by pd-mapper has been
updated mid-lookup. Calling lookup
again should ideally fix this, but
we'll have to decide on the max
number of retries. I guess I can
simulate such a scenario with
a custom JSON file and pd-mapper
changes.
>
>> + else
>> + pds->state = SERVREG_LOCATOR_ERR;
>> +
>> + pr_err("PDR: service lookup for %s failed: %d\n",
>> + pds->service_name, ret);
>> +
>> + /* Remove from lookup list */
>> + mutex_lock(&pdr->list_lock);
>> + list_del(&pds->node);
>
> What should I do in my driver when this happens?
db_updated -> a retry should fix
this error.
unknown_service -> lookup not found.
^^ With the way pd-mapper is implemented
it's not really recoverable until pd-mapper
is restarted with different args.
locator_err -> not really recoverable.
>
>> + mutex_unlock(&pdr->list_lock);
>> +
>> + /* Notify Lookup failed */
>> + mutex_lock(&pdr->status_lock);
>> + pdr->status(pdr, pds);
>> + mutex_unlock(&pdr->status_lock);
>> + kfree(pds);
>> + } else {
>> + pdr_servreg_link_create(pdr, pds);
>> + }
>> +
>> + return;
>
> There might be more pds entries with need_servreg_lookup in the list,
> shouldn't we allow this to continue?
but we've already scheduled a
number of workers to deal with
this.
>
> This would though imply that you should hold onto the list_lock over the
> entire loop, which I think looks fine.
sure
>
>> + }
>> + mutex_unlock(&pdr->list_lock);
>> +}
>> +
>> +/**
>> + * pdr_add_lookup() - register a tracking request for a PD
>> + * @pdr: PDR client handle
>> + * @service_name: service name of the tracking request
>> + * @service_path: service path of the tracking request
>> + *
>> + * Registering a pdr lookup allows for tracking the life cycle of the PD.
>> + *
>> + * Return: 0 on success, negative errno on failure.
>> + */
>> +int pdr_add_lookup(struct pdr_handle *pdr, const char *service_name,
>> + const char *service_path)
>> +{
>> + struct pdr_service *pds, *pds_iter, *tmp;
>> + int ret;
>> +
>> + if (!service_name || strlen(service_name) > SERVREG_NAME_LENGTH ||
>> + !service_path || strlen(service_path) > SERVREG_NAME_LENGTH)
>> + return -EINVAL;
>> +
>> + pds = kzalloc(sizeof(*pds), GFP_KERNEL);
>> + if (!pds)
>> + return -ENOMEM;
>> +
>> + pds->service = SERVREG_NOTIFIER_SERVICE;
>> + strcpy(pds->service_name, service_name);
>> + strcpy(pds->service_path, service_path);
>> + pds->need_servreg_lookup = true;
>> +
>> + mutex_lock(&pdr->list_lock);
>> + list_for_each_entry_safe(pds_iter, tmp, &pdr->lookups, node) {
>
> No _safe
Thanks, will update.
>
>> + if (!strcmp(pds_iter->service_path, service_path)) {
>> + mutex_unlock(&pdr->list_lock);
>> + ret = -EALREADY;
>> + goto err;
>> + }
>> + }
>> +
>> + list_add(&pds->node, &pdr->lookups);
>> + mutex_unlock(&pdr->list_lock);
>> +
>> + schedule_work(&pdr->servloc_work);
>> +
>> + return 0;
>> +err:
>> + kfree(pds);
>> +
>> + return ret;
>> +}
>> +EXPORT_SYMBOL(pdr_add_lookup);
>> +
>> +/**
>> + * pdr_restart_pd() - restart PD
>> + * @pdr: PDR client handle
>> + * @service_path: service path of restart request
>> + *
>> + * Restarts the PD tracked by the PDR client handle for a given service path.
>> + *
>> + * Return: 0 on success, negative errno on failure.
>> + */
>> +int pdr_restart_pd(struct pdr_handle *pdr, const char *service_path)
>> +{
>> + struct servreg_restart_pd_req req;
>> + struct servreg_restart_pd_resp resp;
>> + struct pdr_service *pds = NULL, *pds_iter, *tmp;
>> + struct qmi_txn txn;
>> + int ret;
>> +
>> + if (!service_path || strlen(service_path) > SERVREG_NAME_LENGTH)
>> + return -EINVAL;
>> +
>> + mutex_lock(&pdr->list_lock);
>> + list_for_each_entry_safe(pds_iter, tmp, &pdr->lookups, node) {
>> + if (!pds_iter->service_connected)
>> + continue;
>> +
>> + if (!strcmp(pds_iter->service_path, service_path)) {
>> + pds = pds_iter;
>> + break;
>> + }
>> + }
>> + mutex_unlock(&pdr->list_lock);
>> +
>> + if (!pds)
>
> Given that you may only call pdr_restart_pd() on something created by
> first calling pdr_add_lookup(), how about returning the struct
> pdr_service from pdr_add_lookup() instead and then have the client pass
> that as an argument to this function.
>
> Most clients don't care about pdr_restart_pd() so they would only have
> to IS_ERR(pdr_add_lookup()) anyways, and the ones that care can carry
> the returned pointer.
>
>
> Note that the struct pdr_service doesn't have to be defined in a way
> that it's possible to dereference by clients.
sure will update the design in the
next re-spin.
>
>> + return -EINVAL;
>> +
>> + /* Prepare req message */
>> + strcpy(req.service_path, pds->service_path);
>> +
>> + ret = qmi_txn_init(&pdr->servreg_client, &txn,
>> + servreg_restart_pd_resp_ei,
>> + &resp);
>> + if (ret < 0)
>> + return ret;
>> +
>> + ret = qmi_send_request(&pdr->servreg_client, &pdr->servreg_addr,
>> + &txn, SERVREG_RESTART_PD_REQ,
>> + SERVREG_RESTART_PD_REQ_MAX_LEN,
>> + servreg_restart_pd_req_ei, &req);
>> + if (ret < 0) {
>> + qmi_txn_cancel(&txn);
>> + return ret;
>> + }
>> +
>> + ret = qmi_txn_wait(&txn, 5 * HZ);
>> + if (ret < 0) {
>> + pr_err("PDR: %s PD restart txn wait failed: %d\n",
>> + pds->service_path, ret);
>> + return ret;
>> + }
>> +
>> + /* Check response if PDR is disabled */
>> + if (resp.resp.result == QMI_RESULT_FAILURE_V01 &&
>> + resp.resp.error == QMI_ERR_DISABLED_V01) {
>> + pr_err("PDR: %s PD restart is disabled: 0x%x\n",
>> + pds->service_path, resp.resp.error);
>> + return -EOPNOTSUPP;
>> + }
>> +
>> + /* Check the response for other error cases */
>> + if (resp.resp.result != QMI_RESULT_SUCCESS_V01) {
>> + pr_err("PDR: %s request for PD restart failed: 0x%x\n",
>> + pds->service_path, resp.resp.error);
>> + return -EREMOTEIO;
>> + }
>> +
>> + return ret;
>> +}
>> +EXPORT_SYMBOL(pdr_restart_pd);
> [..]
>> +/**
>> + * struct pdr_service - context to track lookups/restarts
>> + * @service_name: name of the service running on the PD
>> + * @service_path: service path of the PD
>> + * @service_data_valid: indicates if service_data field has valid data
>> + * @service_data: service data provided by servreg_locator service
>> + * @need_servreg_lookup: state flag for tracking servreg lookup requests
>> + * @need_servreg_register: state flag for tracking pending servreg register
>> + * @need_servreg_remove: state flag for tracking pending servreg remove
>> + * @service_connected: current state of servreg_notifier qmi service
>> + * @state: current state of PD
>> + * @service: servreg_notifer service type
>> + * @instance: instance id of the @service
>> + * @priv: handle for client's use
>> + * @node: list_head for house keeping
>> + */
>> +struct pdr_service {
>
> This is primarily internal bookkeeping, how about not exposing it to the
> clients? This would imply that status() would have to be called with
> pdr_service->priv and pdr_service->state as arguments instead.
sure will update the design in the
next re-spin.
>
>> + char service_name[SERVREG_NAME_LENGTH + 1];
>> + char service_path[SERVREG_NAME_LENGTH + 1];
>> +
>> + u8 service_data_valid;
>> + u32 service_data;
>> +
>> + bool need_servreg_lookup;
>> + bool need_servreg_register;
>> + bool need_servreg_remove;
>> + bool service_connected;
>> + int state;
>> +
>> + unsigned int instance;
>> + unsigned int service;
>> +
>> + void *priv;
>> + struct list_head node;
>> +};
>> +
> [..]
>> + void (*status)(struct pdr_handle *pdr, struct pdr_service *pds);
>> +};
>
> Regards,
> Bjorn
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project.
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH v3 3/3] soc: qcom: apr: Add avs/audio tracking functionality
2020-01-02 20:57 ` Bjorn Andersson
@ 2020-01-03 12:47 ` Sibi Sankar
0 siblings, 0 replies; 10+ messages in thread
From: Sibi Sankar @ 2020-01-03 12:47 UTC (permalink / raw)
To: Bjorn Andersson
Cc: srinivas.kandagatla, robh+dt, tsoni, agross, mark.rutland,
linux-arm-msm, linux-remoteproc, linux-kernel, devicetree,
linux-arm-msm-owner
On 2020-01-03 02:27, Bjorn Andersson wrote:
> On Sun 29 Dec 21:00 PST 2019, Sibi Sankar wrote:
>> diff --git a/drivers/soc/qcom/apr.c b/drivers/soc/qcom/apr.c
> [..]
>> -static void of_register_apr_devices(struct device *dev)
>> +static void of_apr_add_pd_lookups(struct device *dev)
>> {
>> + const char *service_name, *service_path;
>> struct apr *apr = dev_get_drvdata(dev);
>> struct device_node *node;
>> + int ret;
>> +
>> + for_each_child_of_node(dev->of_node, node) {
>> + ret = of_property_read_string_index(node, "qcom,protection-domain",
>> + 0, &service_name);
>> + if (ret < 0)
>> + continue;
>
> While this implies that the qcom,protection-domain property is
> missing...
>
>> +
>> + ret = of_property_read_string_index(node, "qcom,protection-domain",
>> + 1, &service_path);
>> + if (ret < 0)
>> + continue;
>
> ...this would imply that it's there but the format is wrong. I think you
> should log this and propagate the error.
>
>> +
>> + ret = pdr_add_lookup(&apr->pdr, service_name, service_path);
>> + if (ret && ret != -EALREADY)
>> + dev_err(dev, "pdr add lookup failed: %d\n", ret);
>
> So we have a DT that denotes that PDR is required, but we failed to
> register a lookup (for some reason). That would imply that apr is not
> going to work. I think you should propagate this and make apr_probe()
> fail to make this obvious.
This was predominantly done to deal
with a mix of apr devices, some of
which are independent of PDs, so
making apr_probe fail is detrimental
here. Also, apr devices with an
improper format will not be
registered or removed.
>
>> + }
>> +}
>> +
>> +static void of_register_apr_devices(struct device *dev, const char *svc_path)
>> +{
>> + struct apr *apr = dev_get_drvdata(dev);
>> + struct device_node *node;
>> + const char *service_path;
>> + int ret;
>>
>> for_each_child_of_node(dev->of_node, node) {
>> struct apr_device_id id = { {0} };
>
> I think you should add a comment here describing what's actually going
> on. Something along the lines of:
>
> /*
> * This function is called with svc_path NULL during apr_probe(), in
> * which case we register any apr devices without a
> * qcom,protection-domain specified.
> *
> * Then as the protection domains become available (if applicable) this
> * function is again called, but with svc_path representing the service
> * becoming available. In this case we register any apr devices with a
> * matching qcom,protection-domain.
> */
Thanks for writing ^^ up, will include
it in my next re-spin.
>
>>
>> + ret = of_property_read_string_index(node, "qcom,protection-domain",
>> + 1, &service_path);
>> + if (svc_path) {
>> + /* skip APR services that are PD independent */
>> + if (ret)
>> + continue;
>> +
>> + /* skip APR services whose PD paths don't match */
>> + if (strcmp(service_path, svc_path))
>> + continue;
>> + } else {
>> + /* skip APR services whose PD lookups are registered */
>> + if (ret == 0)
>> + continue;
>> + }
>> +
>
> Regards,
> Bjorn
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project.
* Re: [PATCH v3 2/3] dt-bindings: soc: qcom: apr: Add protection domain bindings
2019-12-30 5:00 ` [PATCH v3 2/3] dt-bindings: soc: qcom: apr: Add protection domain bindings Sibi Sankar
@ 2020-01-31 14:34 ` Rob Herring
2020-02-01 13:44 ` Sibi Sankar
0 siblings, 1 reply; 10+ messages in thread
From: Rob Herring @ 2020-01-31 14:34 UTC (permalink / raw)
To: Sibi Sankar
Cc: bjorn.andersson, srinivas.kandagatla, tsoni, agross,
mark.rutland, linux-arm-msm, linux-remoteproc, linux-kernel,
devicetree
On Mon, Dec 30, 2019 at 10:30:07AM +0530, Sibi Sankar wrote:
> Add optional "qcom,protection-domain" bindings for APR services. This
> helps to capture the dependencies between APR services and the PD on
> which each apr service runs.
This is meaningless to me...
> Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>
> Reviewed-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
> Signed-off-by: Sibi Sankar <sibis@codeaurora.org>
> ---
> .../devicetree/bindings/soc/qcom/qcom,apr.txt | 59 +++++++++++++++++++
> 1 file changed, 59 insertions(+)
>
> diff --git a/Documentation/devicetree/bindings/soc/qcom/qcom,apr.txt b/Documentation/devicetree/bindings/soc/qcom/qcom,apr.txt
> index db501269f47b8..f87c0b2a48de4 100644
> --- a/Documentation/devicetree/bindings/soc/qcom/qcom,apr.txt
> +++ b/Documentation/devicetree/bindings/soc/qcom/qcom,apr.txt
> @@ -45,6 +45,12 @@ by the individual bindings for the specific service
> 12 - Ultrasound stream manager.
> 13 - Listen stream manager.
>
> +- qcom,protection-domain
> + Usage: optional
> + Value type: <stringlist>
> + Definition: Must list the protection domain service name and path
> + that the particular apr service has a dependency on.
How many strings can there be and can you enumerate the possible
strings?
> +
> = EXAMPLE
> The following example represents a QDSP based sound card on a MSM8996 device
> which uses apr as communication between Apps and QDSP.
> @@ -82,3 +88,56 @@ which uses apr as communication between Apps and QDSP.
> ...
> };
> };
> +
> += EXAMPLE 2
> +The following example represents a QDSP based sound card on SDM845 device.
> +Here the apr services are dependent on "avs/audio" service running on AUDIO
> +Protection Domain hosted on ADSP remote processor.
> +
> + apr {
> + compatible = "qcom,apr-v2";
> + qcom,glink-channels = "apr_audio_svc";
> + qcom,apr-domain = <APR_DOMAIN_ADSP>;
> +
> + q6core {
> + compatible = "qcom,q6core";
> + reg = <APR_SVC_ADSP_CORE>;
> + qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
> + };
> +
> + q6afe: q6afe {
> + compatible = "qcom,q6afe";
> + reg = <APR_SVC_AFE>;
> + qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
> + q6afedai: dais {
> + compatible = "qcom,q6afe-dais";
> + #sound-dai-cells = <1>;
> +
> + qi2s@22 {
> + reg = <22>;
> + qcom,sd-lines = <3>;
> + };
> + };
> + };
> +
> + q6asm: q6asm {
> + compatible = "qcom,q6asm";
> + reg = <APR_SVC_ASM>;
> + qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
> + q6asmdai: dais {
> + compatible = "qcom,q6asm-dais";
> + #sound-dai-cells = <1>;
> + iommus = <&apps_smmu 0x1821 0x0>;
> + };
> + };
> +
> + q6adm: q6adm {
> + compatible = "qcom,q6adm";
> + reg = <APR_SVC_ADM>;
> + qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
Perhaps an example where not everything is the same. The example shows
me this isn't needed in DT.
> + q6routing: routing {
> + compatible = "qcom,q6adm-routing";
> + #sound-dai-cells = <0>;
> + };
> + };
> + };
> --
> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
> a Linux Foundation Collaborative Project
>
* Re: [PATCH v3 2/3] dt-bindings: soc: qcom: apr: Add protection domain bindings
2020-01-31 14:34 ` Rob Herring
@ 2020-02-01 13:44 ` Sibi Sankar
0 siblings, 0 replies; 10+ messages in thread
From: Sibi Sankar @ 2020-02-01 13:44 UTC (permalink / raw)
To: Rob Herring
Cc: bjorn.andersson, srinivas.kandagatla, tsoni, agross,
mark.rutland, linux-arm-msm, linux-remoteproc, linux-kernel,
devicetree
Hey Rob,
Thanks for the review!
On 1/31/20 8:04 PM, Rob Herring wrote:
> On Mon, Dec 30, 2019 at 10:30:07AM +0530, Sibi Sankar wrote:
>> Add optional "qcom,protection-domain" bindings for APR services. This
>> helps to capture the dependencies between APR services and the PD on
>> which each apr service runs.
>
> This is meaningless to me...
Qualcomm SoCs (starting with MSM8998) allow for multiple protection
domains (PDs) to run on the same Q6 sub-system. This allows for
services like AVS AUDIO to have their own separate address space and
crash/recover without disrupting the other PDs running on the same Q6
ADSP. Add "qcom,protection-domain" bindings to capture the dependencies
between the APR service and the PD on which the apr service runs.
Is ^^ better?
>
>> Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>
>> Reviewed-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
>> Signed-off-by: Sibi Sankar <sibis@codeaurora.org>
>> ---
>> .../devicetree/bindings/soc/qcom/qcom,apr.txt | 59 +++++++++++++++++++
>> 1 file changed, 59 insertions(+)
>>
>> diff --git a/Documentation/devicetree/bindings/soc/qcom/qcom,apr.txt b/Documentation/devicetree/bindings/soc/qcom/qcom,apr.txt
>> index db501269f47b8..f87c0b2a48de4 100644
>> --- a/Documentation/devicetree/bindings/soc/qcom/qcom,apr.txt
>> +++ b/Documentation/devicetree/bindings/soc/qcom/qcom,apr.txt
>> @@ -45,6 +45,12 @@ by the individual bindings for the specific service
>> 12 - Ultrasound stream manager.
>> 13 - Listen stream manager.
>>
>> +- qcom,protection-domain
>> + Usage: optional
>> + Value type: <stringlist>
>> + Definition: Must list the protection domain service name and path
>> + that the particular apr service has a dependency on.
>
> How many strings can there be and can you enumerate the possible
> strings?
https://lore.kernel.org/lkml/a19623d4-ab33-d87e-5925-d0411d7479dd@codeaurora.org/
Like I explained in ^^, avs/audio is the only service
that the apr device depends on, and it is known to run
only in the "msm/adsp/audio_pd" service path.
However, there are other service:service_path pairs
which will get documented when I add support for more
clients like fastrpc and ath10k.
>
>> +
>> = EXAMPLE
>> The following example represents a QDSP based sound card on a MSM8996 device
>> which uses apr as communication between Apps and QDSP.
>> @@ -82,3 +88,56 @@ which uses apr as communication between Apps and QDSP.
>> ...
>> };
>> };
>> +
>> += EXAMPLE 2
>> +The following example represents a QDSP based sound card on SDM845 device.
>> +Here the apr services are dependent on "avs/audio" service running on AUDIO
>> +Protection Domain hosted on ADSP remote processor.
>> +
>> + apr {
>> + compatible = "qcom,apr-v2";
>> + qcom,glink-channels = "apr_audio_svc";
>> + qcom,apr-domain = <APR_DOMAIN_ADSP>;
>> +
>> + q6core {
>> + compatible = "qcom,q6core";
>> + reg = <APR_SVC_ADSP_CORE>;
>> + qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
>> + };
>> +
>> + q6afe: q6afe {
>> + compatible = "qcom,q6afe";
>> + reg = <APR_SVC_AFE>;
>> + qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
>> + q6afedai: dais {
>> + compatible = "qcom,q6afe-dais";
>> + #sound-dai-cells = <1>;
>> +
>> + qi2s@22 {
>> + reg = <22>;
>> + qcom,sd-lines = <3>;
>> + };
>> + };
>> + };
>> +
>> + q6asm: q6asm {
>> + compatible = "qcom,q6asm";
>> + reg = <APR_SVC_ASM>;
>> + qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
>> + q6asmdai: dais {
>> + compatible = "qcom,q6asm-dais";
>> + #sound-dai-cells = <1>;
>> + iommus = <&apps_smmu 0x1821 0x0>;
>> + };
>> + };
>> +
>> + q6adm: q6adm {
>> + compatible = "qcom,q6adm";
>> + reg = <APR_SVC_ADM>;
>> + qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
>
> Perhaps an example where not everything is the same. The example shows
> me this isn't needed in DT.
yes will update the example.
>
>> + q6routing: routing {
>> + compatible = "qcom,q6adm-routing";
>> + #sound-dai-cells = <0>;
>> + };
>> + };
>> + };
>> --
>> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
>> a Linux Foundation Collaborative Project
>>
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
end of thread, other threads:[~2020-02-01 13:45 UTC | newest]
Thread overview: 10+ messages
2019-12-30 5:00 [PATCH v3 0/3] Introduce Protection Domain Restart (PDR) Helpers Sibi Sankar
2019-12-30 5:00 ` [PATCH v3 1/3] soc: qcom: Introduce Protection Domain Restart helpers Sibi Sankar
2020-01-02 20:45 ` Bjorn Andersson
2020-01-03 12:36 ` Sibi Sankar
2019-12-30 5:00 ` [PATCH v3 2/3] dt-bindings: soc: qcom: apr: Add protection domain bindings Sibi Sankar
2020-01-31 14:34 ` Rob Herring
2020-02-01 13:44 ` Sibi Sankar
2019-12-30 5:00 ` [PATCH v3 3/3] soc: qcom: apr: Add avs/audio tracking functionality Sibi Sankar
2020-01-02 20:57 ` Bjorn Andersson
2020-01-03 12:47 ` Sibi Sankar