From: Nicolai Stange <nstange@suse.de>
To: Steffen Klassert <steffen.klassert@secunet.com>,
	Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>,
	Martin Doucha <mdoucha@suse.cz>,
	linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org,
	Nicolai Stange <nstange@suse.de>
Subject: [PATCH 3/5] padata: grab parallel_data refcnt for reorder
Date: Wed, 19 Oct 2022 10:37:06 +0200
Message-ID: <20221019083708.27138-4-nstange@suse.de>
In-Reply-To: <20221019083708.27138-1-nstange@suse.de>

On entry to padata_do_serial(), the in-flight padata_priv owns a reference
to the associated parallel_data instance.

However, as soon as the padata_priv has been enqueued on the reorder list,
it can get completed from a different context, causing the reference to be
released in the course of that completion.

This can potentially result in UAFs from the subsequent padata_reorder()
operations invoked from the enqueueing padata_do_serial() itself or from
the reorder work.

Note that this is a purely theoretical concern; the problem has never
actually been observed -- it would require multiple pcrypt request
submissions racing against each other and, ultimately, a pcrypt instance
destruction (DELALG) shortly after the request completions, as well as
unfortunate timing.

However, for the sake of correctness, it is still worth fixing.
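
For illustration, here's a rough sketch of the problematic sequence
described above (hedged -- the exact call path leading to the final
reference drop depends on the crypto user and on the rest of this series):

  padata_do_serial() [task A]        completing context [task B]
  ---------------------------        ---------------------------
  enqueue padata on reorder list
                                     dequeue and complete the padata,
                                     dropping its reference on pd
                                     (last reference => pd gets freed)
  padata_reorder(pd)
    spin_trylock_bh(&pd->lock)       <- potential UAF on the freed pd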

Make padata_do_serial() grab a reference count on the parallel_data for
the subsequent reorder operation(s). As long as the padata_priv has not
been enqueued yet, this is safe because, as mentioned above, in-flight
padata_privs already own a reference.

Note that padata_reorder() might schedule another padata_reorder() work
and thus, care must be taken not to prematurely release that "reorder
refcount" from padata_do_serial() again in case that has happened.
Make padata_reorder() return a bool indicating whether or not a reorder
work has been scheduled. Let padata_do_serial() drop its refcount only if
this is not the case. Accordingly, make the reorder work handler,
invoke_padata_reorder(), drop it instead in that case.
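
For reference, a minimal sketch of what the padata_get_pd()/padata_put_pd()
helpers used below are assumed to look like -- they are introduced in patch
1/5 of this series, which is authoritative; the exact shape and the
refcount_t field name are assumptions here:

	static inline void padata_get_pd(struct parallel_data *pd)
	{
		refcount_inc(&pd->refcnt);
	}

	static inline void padata_put_pd(struct parallel_data *pd)
	{
		if (refcount_dec_and_test(&pd->refcnt))
			padata_free_pd(pd);
	}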

A remark on the commit chosen for the Fixes tag reference below: before
commit bbefa1dd6a6d ("crypto: pcrypt - Avoid deadlock by using per-instance
padata queues"), the parallel_data lifetime had been tied to the
padata_instance. padata_free() resp. padata_stop() issued a
synchronize_rcu() before padata_free_pd() on the instance destruction
path, rendering UAFs from the padata_do_serial()=>padata_reorder()
invocations with BHs disabled impossible AFAICS. That leaves the
padata_reorder() work to be considered. Before
commit b128a3040935 ("padata: allocate workqueue internally"), the
workqueue got destroyed (from pcrypt), and hence drained, before the padata
instance destruction, but that commit moved this to after the
padata_free_pd() invocation from __padata_free(). So, while the Fixes
reference below is most likely technically correct, I would still like to
reiterate that this problem is probably hard to trigger in practice, even
more so before commit bbefa1dd6a6d ("crypto: pcrypt - Avoid deadlock by
using per-instance padata queues").

Fixes: b128a3040935 ("padata: allocate workqueue internally")
Signed-off-by: Nicolai Stange <nstange@suse.de>
---
 kernel/padata.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/kernel/padata.c b/kernel/padata.c
index 0bf8c80dad5a..b79226727ef7 100644
--- a/kernel/padata.c
+++ b/kernel/padata.c
@@ -275,7 +275,7 @@ static struct padata_priv *padata_find_next(struct parallel_data *pd,
 	return padata;
 }
 
-static void padata_reorder(struct parallel_data *pd)
+static bool padata_reorder(struct parallel_data *pd)
 {
 	struct padata_instance *pinst = pd->ps->pinst;
 	int cb_cpu;
@@ -294,7 +294,7 @@ static void padata_reorder(struct parallel_data *pd)
 	 * care for all the objects enqueued during the holdtime of the lock.
 	 */
 	if (!spin_trylock_bh(&pd->lock))
-		return;
+		return false;
 
 	while (1) {
 		padata = padata_find_next(pd, true);
@@ -331,17 +331,23 @@ static void padata_reorder(struct parallel_data *pd)
 
 	reorder = per_cpu_ptr(pd->reorder_list, pd->cpu);
 	if (!list_empty(&reorder->list) && padata_find_next(pd, false))
-		queue_work(pinst->serial_wq, &pd->reorder_work);
+		return queue_work(pinst->serial_wq, &pd->reorder_work);
+
+	return false;
 }
 
 static void invoke_padata_reorder(struct work_struct *work)
 {
 	struct parallel_data *pd;
+	bool keep_refcnt;
 
 	local_bh_disable();
 	pd = container_of(work, struct parallel_data, reorder_work);
-	padata_reorder(pd);
+	keep_refcnt = padata_reorder(pd);
 	local_bh_enable();
+
+	if (!keep_refcnt)
+		padata_put_pd(pd);
 }
 
 static void padata_serial_worker(struct work_struct *serial_work)
@@ -392,6 +398,15 @@ void padata_do_serial(struct padata_priv *padata)
 	struct padata_list *reorder = per_cpu_ptr(pd->reorder_list, hashed_cpu);
 	struct padata_priv *cur;
 
+	/*
+	 * The in-flight padata owns a reference on pd. However, as
+	 * soon as it's been enqueued on the reorder list, another
+	 * task can dequeue and complete it, thereby dropping the
+	 * reference. Grab another reference here, it will eventually
+	 * be released from a reorder work, if any, or below.
+	 */
+	padata_get_pd(pd);
+
 	spin_lock(&reorder->lock);
 	/* Sort in ascending order of sequence number. */
 	list_for_each_entry_reverse(cur, &reorder->list, list)
@@ -407,7 +422,8 @@ void padata_do_serial(struct padata_priv *padata)
 	 */
 	smp_mb();
 
-	padata_reorder(pd);
+	if (!padata_reorder(pd))
+		padata_put_pd(pd);
 }
 EXPORT_SYMBOL(padata_do_serial);
 
-- 
2.37.3


