* [PATCH v2] gpu/drm: fix potential memleak in error branch
@ 2022-01-05 12:13 Bernard Zhao
From: Bernard Zhao @ 2022-01-05 12:13 UTC
  To: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
	David Airlie, Daniel Vetter, dri-devel, linux-kernel
  Cc: Bernard Zhao

This patch tries to fix a potential memory leak in the error branches
of drm_dp_mst_topology_mgr_init().
For example, in the call chain
nv50_sor_create -> nv50_mstm_new -> drm_dp_mst_topology_mgr_init,
drm_dp_mst_topology_mgr_init() has five error branches that just return
an error code without freeing what was already allocated, and the caller
does not call drm_dp_mst_topology_mgr_destroy() on failure.
This can leak the allocations made before the failing step.
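
For reference, a minimal standalone sketch of goto-based error unwinding
(illustrative only: plain userspace C with made-up names such as
struct demo_mgr and demo_mgr_init, not the actual drm code; this sketch
also keeps the original error code in a ret variable):

	/* Illustrative sketch of goto-based error unwinding. */
	#include <stdlib.h>
	#include <errno.h>

	struct demo_mgr {
		void **payloads;
		void **proposed_vcpis;
	};

	static int demo_mgr_init(struct demo_mgr *mgr, int max_payloads)
	{
		int ret = -ENOMEM;

		if (max_payloads <= 0) {
			ret = -EINVAL;	/* preserve the original error code */
			goto out;
		}

		mgr->payloads = calloc(max_payloads, sizeof(void *));
		if (!mgr->payloads)
			goto out;

		mgr->proposed_vcpis = calloc(max_payloads, sizeof(void *));
		if (!mgr->proposed_vcpis)
			goto free_payloads;

		return 0;	/* success: caller owns the allocations */

	free_payloads:
		free(mgr->payloads);
		mgr->payloads = NULL;
	out:
		return ret;
	}

	int main(void)
	{
		struct demo_mgr mgr = { 0 };

		return demo_mgr_init(&mgr, 4) ? 1 : 0;
	}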

Signed-off-by: Bernard Zhao <bernard@vivo.com>
---
 drivers/gpu/drm/drm_dp_mst_topology.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
index f3d79eda94bb..f73b180dee73 100644
--- a/drivers/gpu/drm/drm_dp_mst_topology.c
+++ b/drivers/gpu/drm/drm_dp_mst_topology.c
@@ -5501,7 +5501,10 @@ int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,
 				 int max_lane_count, int max_link_rate,
 				 int conn_base_id)
 {
-	struct drm_dp_mst_topology_state *mst_state;
+	struct drm_dp_mst_topology_state *mst_state = NULL;
+
+	mgr->payloads = NULL;
+	mgr->proposed_vcpis = NULL;
 
 	mutex_init(&mgr->lock);
 	mutex_init(&mgr->qlock);
@@ -5523,7 +5526,7 @@ int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,
 	 */
 	mgr->delayed_destroy_wq = alloc_ordered_workqueue("drm_dp_mst_wq", 0);
 	if (mgr->delayed_destroy_wq == NULL)
-		return -ENOMEM;
+		goto out;
 
 	INIT_WORK(&mgr->work, drm_dp_mst_link_probe_work);
 	INIT_WORK(&mgr->tx_work, drm_dp_tx_work);
@@ -5539,18 +5542,18 @@ int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,
 	mgr->conn_base_id = conn_base_id;
 	if (max_payloads + 1 > sizeof(mgr->payload_mask) * 8 ||
 	    max_payloads + 1 > sizeof(mgr->vcpi_mask) * 8)
-		return -EINVAL;
+		goto failed;
 	mgr->payloads = kcalloc(max_payloads, sizeof(struct drm_dp_payload), GFP_KERNEL);
 	if (!mgr->payloads)
-		return -ENOMEM;
+		goto failed;
 	mgr->proposed_vcpis = kcalloc(max_payloads, sizeof(struct drm_dp_vcpi *), GFP_KERNEL);
 	if (!mgr->proposed_vcpis)
-		return -ENOMEM;
+		goto failed;
 	set_bit(0, &mgr->payload_mask);
 
 	mst_state = kzalloc(sizeof(*mst_state), GFP_KERNEL);
 	if (mst_state == NULL)
-		return -ENOMEM;
+		goto failed;
 
 	mst_state->total_avail_slots = 63;
 	mst_state->start_slot = 1;
@@ -5563,6 +5566,13 @@ int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,
 				    &drm_dp_mst_topology_state_funcs);
 
 	return 0;
+
+failed:
+	kfree(mgr->proposed_vcpis);
+	kfree(mgr->payloads);
+	destroy_workqueue(mgr->delayed_destroy_wq);
+out:
+	return -ENOMEM;
 }
 EXPORT_SYMBOL(drm_dp_mst_topology_mgr_init);
 
-- 
2.33.1

