* [PATCH v2] iommu/arm-smmu-v3: poll cmdq until it has space
@ 2021-09-26 18:51 ` Fernand Sieber via iommu
  0 siblings, 0 replies; 6+ messages in thread
From: Fernand Sieber @ 2021-09-26 18:51 UTC (permalink / raw)
  To: will, robin.murphy, john.garry
  Cc: Fernand Sieber, linux-arm-kernel, iommu, linux-kernel

When a thread sends commands to the SMMU, it needs to allocate some
space to write its commands in a ring buffer.

The allocation algorithm works as follows: until enough free space is
available in the queue, repeat the following outer loop. First, try to
acquire an exclusive lock to read the consumer index from the SMMU
register over MMIO. If that fails, it means that another thread holds
the lock (or multiple threads, in shared mode, for sync commands). The
current code guarantees that when a thread is holding the lock, the
consumer index will be updated from the SMMU register. So when the
acquisition of the exclusive lock fails, we can safely assume that
another thread will eventually update the consumer index, and we enter
an inner waiting loop until that happens.
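
For illustration only, here is a rough userspace model of that flow, not
the driver code: the toy_q structure, the atomic flag standing in for the
exclusive cmdq lock, and the helper names are all made up, and wrap bits,
memory ordering and the shared-lock path are ignored.

#include <stdatomic.h>
#include <stdbool.h>

struct toy_q { unsigned int prod, cons, size; };	/* free-running indices */

static atomic_flag excl = ATOMIC_FLAG_INIT;	/* stands in for the exclusive cmdq lock */
static unsigned int hw_cons;			/* stands in for the SMMU CONS register */

static bool has_space(struct toy_q *q, unsigned int n)
{
	return q->size - (q->prod - q->cons) >= n;
}

static void alloc_space(struct toy_q *q, unsigned int n)
{
	while (!has_space(q, n)) {			/* outer loop */
		if (!atomic_flag_test_and_set(&excl)) {
			q->cons = hw_cons;		/* we refresh cons ("MMIO read") */
			atomic_flag_clear(&excl);
		} else {
			/*
			 * Inner waiting loop: the lock holder will refresh
			 * cons for us. Today it exits as soon as the queue
			 * is merely not full (the real code re-reads a
			 * prod/cons snapshot with READ_ONCE here).
			 */
			while (q->size - (q->prod - q->cons) == 0)
				;
		}
	}
	q->prod += n;					/* the slots are ours */
}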

The problem that this patch fixes is that the waiting loop exits as soon
as any space is available in the queue, so it is possible that it exits
immediately if there's some space available but not enough to write the
thread's commands. That means the cmdq space allocation (outer loop)
will fail and the thread will spin attempting to acquire the exclusive
lock to update the consumer index from the SMMU register.
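
To make the failure mode concrete, here is a tiny standalone example with
made-up numbers (a hypothetical 8-entry queue, wrap bit ignored); the
space computation mirrors queue_has_space() in the hunk below.

#include <stdio.h>

int main(void)
{
	unsigned int qsize = 8;
	unsigned int prod = 7, cons = 0;		/* 7 slots in use */
	unsigned int space = qsize - (prod - cons);	/* 1 slot free */

	/* Old exit condition: any free slot at all ends the waiting loop. */
	printf("not full   : %d\n", space != 0);	/* 1 */
	/* But a caller needing 2 commands + 1 sync still cannot allocate. */
	printf("space >= 3 : %d\n", space >= 3);	/* 0 -> back to spinning on the lock */
	return 0;
}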

If a lot of threads are failing to allocate commands, this can cause
heavy contention on the lock, to the point where the system slows down
or livelocks. The livelock is possible if some other threads are
attempting to execute synchronous commands. These threads need to
ensure that they control updates to the consumer index so that they can
correctly observe when their command is executed; they enforce that by
acquiring the lock in shared mode. If there's too much contention, they
never succeed in acquiring the lock via the read+cmpxchg mechanism, and
their progress stalls. But because they already hold allocated space in
the command queue, their stall prevents other threads from allocating
space in the cmdq. This causes a livelock.

This patch makes the waiting loop exit as soon as enough space is
available, rather than as soon as any space is available. This means
that when two threads are competing for the exclusive lock while
allocating space in the queue, one of them will fail to acquire the
lock in exclusive mode and fall back to the waiting loop, staying there
until there's enough free space rather than exiting immediately when
any space is available. Threads in the waiting loop do not compete for
the lock, reducing contention enough to enable synchronous threads to
make progress, when applicable.

Note that we cannot afford to have all threads parked in the waiting
loop unless there are synchronous threads executing concurrently,
otherwise no thread is observing the SMMU register and updating the
consumer index. Thus if we succeed in acquiring the lock in exclusive
mode, we cannot enter the waiting loop because we could be the last
thread observing the SMMU. Similarly, if the producer index is updated,
we need to exit the waiting loop because it could mean that the latest
thread to observe the SMMU has succeeded in allocating commands and
thus has moved on.

Signed-off-by: Fernand Sieber <sieberf@amazon.com>
---
Changes in v2
  - Fix inverted condition to break out of the loop when queue_has_space
  - Replace obsolete comment reference to llq->state->val with llq->val

 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 41 +++++++++++++++------
 1 file changed, 29 insertions(+), 12 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index a388e318f86e..a0a04cc9c57d 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -118,12 +118,6 @@ static bool queue_has_space(struct arm_smmu_ll_queue *q, u32 n)
 	return space >= n;
 }

-static bool queue_full(struct arm_smmu_ll_queue *q)
-{
-	return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
-	       Q_WRP(q, q->prod) != Q_WRP(q, q->cons);
-}
-
 static bool queue_empty(struct arm_smmu_ll_queue *q)
 {
 	return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
@@ -582,14 +576,16 @@ static void arm_smmu_cmdq_poll_valid_map(struct arm_smmu_cmdq *cmdq,
 	__arm_smmu_cmdq_poll_set_valid_map(cmdq, sprod, eprod, false);
 }

-/* Wait for the command queue to become non-full */
-static int arm_smmu_cmdq_poll_until_not_full(struct arm_smmu_device *smmu,
-					     struct arm_smmu_ll_queue *llq)
+/* Wait for the command queue to have enough space */
+static int arm_smmu_cmdq_poll_until_has_space(struct arm_smmu_device *smmu,
+					      struct arm_smmu_ll_queue *llq,
+					      u32 n)
 {
 	unsigned long flags;
 	struct arm_smmu_queue_poll qp;
 	struct arm_smmu_cmdq *cmdq = arm_smmu_get_cmdq(smmu);
 	int ret = 0;
+	int prod;

 	/*
 	 * Try to update our copy of cons by grabbing exclusive cmdq access. If
@@ -599,15 +595,36 @@ static int arm_smmu_cmdq_poll_until_not_full(struct arm_smmu_device *smmu,
 		WRITE_ONCE(cmdq->q.llq.cons, readl_relaxed(cmdq->q.cons_reg));
 		arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags);
 		llq->val = READ_ONCE(cmdq->q.llq.val);
-		return 0;
+
+		/*
+		 * We must return here even if there's no space, because we could be
+		 * the last thread observing the SMMU progress. Thus, we cannot enter
+		 * the waiting loop below as it relies on another thread updating
+		 * llq->val.
+		 */
+		if (queue_has_space(llq, n))
+			return 0;
+		else
+			return -EAGAIN;
 	}

 	queue_poll_init(smmu, &qp);
+	prod = llq->prod;
 	do {
 		llq->val = READ_ONCE(cmdq->q.llq.val);
-		if (!queue_full(llq))
+		if (queue_has_space(llq, n))
 			break;

+		/*
+		 * We must return here even if there's no space, because the producer
+		 * having moved forward could mean that the last thread observing the
+		 * SMMU progress has allocated space in the cmdq and moved on, leaving
+		 * us in this waiting loop with no other thread updating
+		 * llq->val.
+		 */
+		if (llq->prod != prod)
+			return -EAGAIN;
+
 		ret = queue_poll(&qp);
 	} while (!ret);

@@ -755,7 +772,7 @@ static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,

 		while (!queue_has_space(&llq, n + sync)) {
 			local_irq_restore(flags);
-			if (arm_smmu_cmdq_poll_until_not_full(smmu, &llq))
+			if (arm_smmu_cmdq_poll_until_has_space(smmu, &llq, n + sync) == -ETIMEDOUT)
 				dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
 			local_irq_save(flags);
 		}
--
2.25.1


* Re: [PATCH v2] iommu/arm-smmu-v3: poll cmdq until it has space
  2021-09-26 18:51 ` Fernand Sieber via iommu
@ 2021-10-01 15:22   ` John Garry
  -1 siblings, 0 replies; 6+ messages in thread
From: John Garry @ 2021-10-01 15:22 UTC (permalink / raw)
  To: Fernand Sieber, will, robin.murphy; +Cc: iommu, linux-kernel, linux-arm-kernel

On 26/09/2021 19:51, Fernand Sieber wrote:
> When a thread sends commands to the SMMU, it needs to allocate some
> space to write its commands in a ring buffer.
> 
> The allocation algorithm works as follows: until enough free space is
> available in the queue, repeat the following outer loop. First, try to
> acquire an exclusive lock to read the consumer index from the SMMU
> register over MMIO. If that fails, it means that another thread holds
> the lock (or multiple threads, in shared mode, for sync commands). The
> current code guarantees that when a thread is holding the lock, the
> consumer index will be updated from the SMMU register. So when the
> acquisition of the exclusive lock fails, we can safely assume that
> another thread will eventually update the consumer index, and we enter
> an inner waiting loop until that happens.
> 

You have written a lot in the commit message. I think that this 
paragraph can be dropped to try to make it more concise and readable.

> The problem that this patch fixes is that the waiting loop exits as soon
> as any space is available in the queue, so it is possible that it exits
> immediately if there's some space available but not enough to write the
> thread's commands.

To me, this seems strange. From my experience, I would expect that if
the queue is full, and then some space becomes available after updating
the cons pointer from HW, then the cons pointer would have advanced far
enough that we have more than enough space for the maximum number of
commands in a batch, 64.

> That means the cmdq space allocation (outer loop) will fail and the
> thread will spin attempting to acquire the exclusive lock to update
> the consumer index from the SMMU register.
> 
> If a lot of threads are failing to allocate commands,

As above, what is special about your HW such that when the queue becomes
not full there is still not enough space? The cons pointer is only
updated in two locations:
- in arm_smmu_cmdq_poll_until_not_full()
- the last CPU gathered unlocks the shared lock at the function exit

So it's not updated very regularly.

> this can cause
> heavy contention on the lock, to the point where the system slows down
> or livelocks. The livelock is possible if some other threads are
> attempting to execute synchronous commands.

What are synchronous commands in this context? Do you mean we're issuing 
a CMD_SYNC?

> These threads need to ensure that they
> control updates to the consumer index so that they can correctly observe
> when their command is executed; they enforce that by acquiring the lock
> in shared mode. If there's too much contention, they never succeed in
> acquiring the lock via the read+cmpxchg mechanism, and their progress
> stalls.

Why is this? Why can they not get the lock? They spin waiting for it.

> But because they already hold allocated space in the command
> queue, their stall prevents other threads from allocating space in the
> cmdq. This causes a livelock.
> 
> This patch makes the waiting loop exit as soon as enough space is
> available, rather than as soon as any space is available. This means
> that when two threads are competing for the exclusive lock while
> allocating space in the queue, one of them will fail to acquire the
> lock in exclusive mode and fall back to the waiting loop, staying there
> until there's enough free space rather than exiting immediately when
> any space is available. Threads in the waiting loop do not compete for
> the lock, reducing contention enough to enable synchronous threads to
> make progress, when applicable.
> 
> Note that we cannot afford to have all threads parked in the waiting
> loop unless there are synchronous threads executing concurrently,

nit: I'd say CPU when describing this problem, rather than thread

> otherwise no thread is observing the SMMU register and updating the
> consumer index. Thus if we succeed in acquiring the lock in exclusive
> mode, we cannot enter the waiting loop because we could be the last
> thread observing the SMMU. Similarly, if the producer index is updated,
> we need to exit the waiting loop because it could mean that the latest
> thread to observe the SMMU has succeeded in allocating commands and
> thus has moved on.
> 
> Signed-off-by: Fernand Sieber <sieberf@amazon.com>
> ---
> Changes in v2
>    - Fix inverted condition to break out of the loop when queue_has_space
>    - Replace obsolete comment reference to llq->state->val with llq->val
> 
>   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 41 +++++++++++++++------
>   1 file changed, 29 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> index a388e318f86e..a0a04cc9c57d 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> @@ -118,12 +118,6 @@ static bool queue_has_space(struct arm_smmu_ll_queue *q, u32 n)
>   	return space >= n;
>   }
> 
> -static bool queue_full(struct arm_smmu_ll_queue *q)
> -{
> -	return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
> -	       Q_WRP(q, q->prod) != Q_WRP(q, q->cons);
> -}
> -
>   static bool queue_empty(struct arm_smmu_ll_queue *q)
>   {
>   	return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
> @@ -582,14 +576,16 @@ static void arm_smmu_cmdq_poll_valid_map(struct arm_smmu_cmdq *cmdq,
>   	__arm_smmu_cmdq_poll_set_valid_map(cmdq, sprod, eprod, false);
>   }
> 
> -/* Wait for the command queue to become non-full */
> -static int arm_smmu_cmdq_poll_until_not_full(struct arm_smmu_device *smmu,
> -					     struct arm_smmu_ll_queue *llq)
> +/* Wait for the command queue to have enough space */
> +static int arm_smmu_cmdq_poll_until_has_space(struct arm_smmu_device *smmu,
> +					      struct arm_smmu_ll_queue *llq,
> +					      u32 n)
>   {
>   	unsigned long flags;
>   	struct arm_smmu_queue_poll qp;
>   	struct arm_smmu_cmdq *cmdq = arm_smmu_get_cmdq(smmu);
>   	int ret = 0;
> +	int prod;
> 
>   	/*
>   	 * Try to update our copy of cons by grabbing exclusive cmdq access. If
> @@ -599,15 +595,36 @@ static int arm_smmu_cmdq_poll_until_not_full(struct arm_smmu_device *smmu,
>   		WRITE_ONCE(cmdq->q.llq.cons, readl_relaxed(cmdq->q.cons_reg));
>   		arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags);
>   		llq->val = READ_ONCE(cmdq->q.llq.val);
> -		return 0;
> +
> +		/*
> +		 * We must return here even if there's no space, because we could be
> +		 * the last thread observing the SMMU progress. Thus, we cannot enter
> +		 * the waiting loop below as it relies on another thread updating
> +		 * llq->val.
> +		 */
> +		if (queue_has_space(llq, n))
> +			return 0;
> +		else
> +			return -EAGAIN;
>   	}
> 
>   	queue_poll_init(smmu, &qp);
> +	prod = llq->prod;
>   	do {
>   		llq->val = READ_ONCE(cmdq->q.llq.val);
> -		if (!queue_full(llq))
> +		if (queue_has_space(llq, n))
>   			break;
> 
> +		/*
> +		 * We must return here even if there's no space, because the producer
> +		 * having moved forward could mean that the last thread observing the
> +		 * SMMU progress has allocated space in the cmdq and moved on, leaving
> +		 * us in this waiting loop with no other thread updating
> +		 * llq->val.
> +		 */
> +		if (llq->prod != prod)
> +			return -EAGAIN;
> +
>   		ret = queue_poll(&qp);
>   	} while (!ret);
> 
> @@ -755,7 +772,7 @@ static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
> 
>   		while (!queue_has_space(&llq, n + sync)) {
>   			local_irq_restore(flags);
> -			if (arm_smmu_cmdq_poll_until_not_full(smmu, &llq))
> +			if (arm_smmu_cmdq_poll_until_has_space(smmu, &llq, n + sync) == -ETIMEDOUT)
>   				dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
>   			local_irq_save(flags);
>   		}
> --
> 2.25.1
> 

