linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 0/2] bugfix and optimization about CMD_SYNC
@ 2018-08-15 10:23 Zhen Lei
  2018-08-15 10:23 ` [PATCH v3 1/2] iommu/arm-smmu-v3: fix unexpected CMD_SYNC timeout Zhen Lei
  2018-08-15 10:23 ` [PATCH v3 2/2] iommu/arm-smmu-v3: avoid redundant CMD_SYNCs if possible Zhen Lei
  0 siblings, 2 replies; 12+ messages in thread
From: Zhen Lei @ 2018-08-15 10:23 UTC (permalink / raw)
  To: Robin Murphy, Will Deacon, Joerg Roedel, linux-arm-kernel, iommu,
	linux-kernel
  Cc: Zhen Lei, LinuxArm, Hanjun Guo, Libin, John Garry

v2 -> v3:
I have no data to show how much performance is lost when
arm_smmu_cmdq_build_cmd is protected by the spinlock, but it is clear
that performance is bound to drop: the function contains a memset and a
complicated switch..case that would then run under the lock.

v1 -> v2:
1. Move the call to arm_smmu_cmdq_build_cmd into the critical section,
   and keep the function itself unchanged.
2. Although patch 2 ensures that no two CMD_SYNCs will be adjacent,
   patch 1 is still needed, see below:

cpu0			cpu1			cpu2
msidata=0
			msidata=1
			insert cmd1
						insert a TLBI command
insert cmd0
			smmu execute cmd1
						smmu execute TLBI
smmu execute cmd0
			poll timeout, because msidata=1 is overridden by
			cmd0, that means VAL=0, sync_idx=1.
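
For reference, here is a minimal standalone sketch (not the driver code;
everything except the VAL/sync_idx comparison is made up for illustration)
of the wrap-safe check used by __arm_smmu_sync_poll_msi(), showing why the
reordered msidata stalls the poll:

#include <stdbool.h>
#include <stdint.h>

static bool sync_complete(uint32_t val, uint32_t sync_idx)
{
	/* Signed difference so the comparison survives 32-bit wrap-around. */
	return (int32_t)(val - sync_idx) >= 0;
}

/*
 * In the scenario above, cpu1 waits for sync_idx=1, but the last MSI write
 * from the SMMU leaves VAL=0 (written for cmd0), so sync_complete(0, 1)
 * stays false until some later CMD_SYNC overwrites VAL with a newer value.
 */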

Zhen Lei (2):
  iommu/arm-smmu-v3: fix unexpected CMD_SYNC timeout
  iommu/arm-smmu-v3: avoid redundant CMD_SYNCs if possible

 drivers/iommu/arm-smmu-v3.c | 30 +++++++++++++++++++++++++-----
 1 file changed, 25 insertions(+), 5 deletions(-)

-- 
1.8.3



^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v3 1/2] iommu/arm-smmu-v3: fix unexpected CMD_SYNC timeout
  2018-08-15 10:23 [PATCH v3 0/2] bugfix and optimization about CMD_SYNC Zhen Lei
@ 2018-08-15 10:23 ` Zhen Lei
  2018-08-15 12:26   ` Robin Murphy
  2018-08-15 10:23 ` [PATCH v3 2/2] iommu/arm-smmu-v3: avoid redundant CMD_SYNCs if possible Zhen Lei
  1 sibling, 1 reply; 12+ messages in thread
From: Zhen Lei @ 2018-08-15 10:23 UTC (permalink / raw)
  To: Robin Murphy, Will Deacon, Joerg Roedel, linux-arm-kernel, iommu,
	linux-kernel
  Cc: Zhen Lei, LinuxArm, Hanjun Guo, Libin, John Garry

The condition "(int)(VAL - sync_idx) >= 0" to break loop in function
__arm_smmu_sync_poll_msi requires that sync_idx must be increased
monotonously according to the sequence of the CMDs in the cmdq.

But ".msidata = atomic_inc_return_relaxed(&smmu->sync_nr)" is not protected
by spinlock, so the following scenarios may appear:
cpu0			cpu1
msidata=0
			msidata=1
			insert cmd1
insert cmd0
			smmu execute cmd1
smmu execute cmd0
			poll timeout, because msidata=1 is overridden by
			cmd0, that means VAL=0, sync_idx=1.

This is not a functional problem, it just makes the caller wait until the
TIMEOUT expires. It rarely happens, because any other CMD_SYNC issued
during the waiting period will break the loop.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
 drivers/iommu/arm-smmu-v3.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 1d64710..3f5c236 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -566,7 +566,7 @@ struct arm_smmu_device {

 	int				gerr_irq;
 	int				combined_irq;
-	atomic_t			sync_nr;
+	u32				sync_nr;

 	unsigned long			ias; /* IPA */
 	unsigned long			oas; /* PA */
@@ -775,6 +775,11 @@ static int queue_remove_raw(struct arm_smmu_queue *q, u64 *ent)
 	return 0;
 }

+static inline void arm_smmu_cmdq_sync_set_msidata(u64 *cmd, u32 msidata)
+{
+	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA, msidata);
+}
+
 /* High-level queue accessors */
 static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 {
@@ -836,7 +841,6 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
 		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
 		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
-		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA, ent->sync.msidata);
 		cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
 		break;
 	default:
@@ -947,7 +951,6 @@ static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
 	struct arm_smmu_cmdq_ent ent = {
 		.opcode = CMDQ_OP_CMD_SYNC,
 		.sync	= {
-			.msidata = atomic_inc_return_relaxed(&smmu->sync_nr),
 			.msiaddr = virt_to_phys(&smmu->sync_count),
 		},
 	};
@@ -955,6 +958,8 @@ static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
 	arm_smmu_cmdq_build_cmd(cmd, &ent);

 	spin_lock_irqsave(&smmu->cmdq.lock, flags);
+	ent.sync.msidata = ++smmu->sync_nr;
+	arm_smmu_cmdq_sync_set_msidata(cmd, ent.sync.msidata);
 	arm_smmu_cmdq_insert_cmd(smmu, cmd);
 	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);

@@ -2179,7 +2184,6 @@ static int arm_smmu_init_structures(struct arm_smmu_device *smmu)
 {
 	int ret;

-	atomic_set(&smmu->sync_nr, 0);
 	ret = arm_smmu_init_queues(smmu);
 	if (ret)
 		return ret;
--
1.8.3



^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH v3 2/2] iommu/arm-smmu-v3: avoid redundant CMD_SYNCs if possible
  2018-08-15 10:23 [PATCH v3 0/2] bugfix and optimization about CMD_SYNC Zhen Lei
  2018-08-15 10:23 ` [PATCH v3 1/2] iommu/arm-smmu-v3: fix unexpected CMD_SYNC timeout Zhen Lei
@ 2018-08-15 10:23 ` Zhen Lei
  1 sibling, 0 replies; 12+ messages in thread
From: Zhen Lei @ 2018-08-15 10:23 UTC (permalink / raw)
  To: Robin Murphy, Will Deacon, Joerg Roedel, linux-arm-kernel, iommu,
	linux-kernel
  Cc: Zhen Lei, LinuxArm, Hanjun Guo, Libin, John Garry

Two or more CMD_SYNCs may be adjacent in the command queue, and the first
one already does everything the others want to do. Dropping the redundant
CMD_SYNCs can improve IO performance, especially under heavy load.

Statistics from my test environment show that the number of CMD_SYNCs can
be reduced by about 1/3. See below:
CMD_SYNCs reduced:	19542181
CMD_SYNCs total:	58098548	(include reduced)
CMDs total:		116197099	(TLBI:SYNC about 1:1)
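
For illustration only, below is a toy model (not the driver code; the field
names merely mirror the patch that follows, and the opcode value is an
assumption for the model) of the elision: under the cmdq lock, if the last
inserted command was already a CMD_SYNC, its msidata can simply be reused
instead of queueing another sync.

#include <stdint.h>

#define CMDQ_OP_CMD_SYNC	0x46	/* assumed opcode value, model only */

struct cmdq_model {
	uint32_t sync_nr;		/* last msidata handed out */
	uint8_t  prev_cmd_opcode;	/* opcode of the most recently queued command */
};

/* Returns the msidata the caller should poll for. */
static uint32_t issue_sync(struct cmdq_model *q)
{
	if (q->prev_cmd_opcode == CMDQ_OP_CMD_SYNC)
		return q->sync_nr;	/* piggy-back on the CMD_SYNC already queued */

	q->prev_cmd_opcode = CMDQ_OP_CMD_SYNC;
	return ++q->sync_nr;		/* otherwise queue a new CMD_SYNC */
}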

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
 drivers/iommu/arm-smmu-v3.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 3f5c236..ee0219b 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -567,6 +567,7 @@ struct arm_smmu_device {
 	int				gerr_irq;
 	int				combined_irq;
 	u32				sync_nr;
+	u8				prev_cmd_opcode;

 	unsigned long			ias; /* IPA */
 	unsigned long			oas; /* PA */
@@ -780,6 +781,11 @@ static inline void arm_smmu_cmdq_sync_set_msidata(u64 *cmd, u32 msidata)
 	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA, msidata);
 }

+static inline u8 arm_smmu_cmd_opcode_get(u64 *cmd)
+{
+	return cmd[0] & CMDQ_0_OP;
+}
+
 /* High-level queue accessors */
 static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 {
@@ -904,6 +910,8 @@ static void arm_smmu_cmdq_insert_cmd(struct arm_smmu_device *smmu, u64 *cmd)
 	struct arm_smmu_queue *q = &smmu->cmdq.q;
 	bool wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);

+	smmu->prev_cmd_opcode = arm_smmu_cmd_opcode_get(cmd);
+
 	while (queue_insert_raw(q, cmd) == -ENOSPC) {
 		if (queue_poll_cons(q, false, wfe))
 			dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
@@ -958,9 +966,17 @@ static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
 	arm_smmu_cmdq_build_cmd(cmd, &ent);

 	spin_lock_irqsave(&smmu->cmdq.lock, flags);
-	ent.sync.msidata = ++smmu->sync_nr;
-	arm_smmu_cmdq_sync_set_msidata(cmd, ent.sync.msidata);
-	arm_smmu_cmdq_insert_cmd(smmu, cmd);
+	if (smmu->prev_cmd_opcode == CMDQ_OP_CMD_SYNC) {
+		/*
+		 * Previous command is CMD_SYNC also, there is no need to add
+		 * one more. Just poll it.
+		 */
+		ent.sync.msidata = smmu->sync_nr;
+	} else {
+		ent.sync.msidata = ++smmu->sync_nr;
+		arm_smmu_cmdq_sync_set_msidata(cmd, ent.sync.msidata);
+		arm_smmu_cmdq_insert_cmd(smmu, cmd);
+	}
 	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);

 	return __arm_smmu_sync_poll_msi(smmu, ent.sync.msidata);
--
1.8.3



^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 1/2] iommu/arm-smmu-v3: fix unexpected CMD_SYNC timeout
  2018-08-15 10:23 ` [PATCH v3 1/2] iommu/arm-smmu-v3: fix unexpected CMD_SYNC timeout Zhen Lei
@ 2018-08-15 12:26   ` Robin Murphy
  2018-08-15 13:00     ` Will Deacon
  2018-08-16  8:21     ` Leizhen (ThunderTown)
  0 siblings, 2 replies; 12+ messages in thread
From: Robin Murphy @ 2018-08-15 12:26 UTC (permalink / raw)
  To: Zhen Lei, Will Deacon, Joerg Roedel, linux-arm-kernel, iommu,
	linux-kernel
  Cc: LinuxArm, Hanjun Guo, Libin, John Garry

On 15/08/18 11:23, Zhen Lei wrote:
> The condition "(int)(VAL - sync_idx) >= 0" to break loop in function
> __arm_smmu_sync_poll_msi requires that sync_idx must be increased
> monotonously according to the sequence of the CMDs in the cmdq.
> 
> But ".msidata = atomic_inc_return_relaxed(&smmu->sync_nr)" is not protected
> by spinlock, so the following scenarios may appear:
> cpu0			cpu1
> msidata=0
> 			msidata=1
> 			insert cmd1
> insert cmd0
> 			smmu execute cmd1
> smmu execute cmd0
> 			poll timeout, because msidata=1 is overridden by
> 			cmd0, that means VAL=0, sync_idx=1.
> 
> This is not a functional problem, just make the caller wait for a long
> time until TIMEOUT. It's rare to happen, because any other CMD_SYNCs
> during the waiting period will break it.
> 
> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
> ---
>   drivers/iommu/arm-smmu-v3.c | 12 ++++++++----
>   1 file changed, 8 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
> index 1d64710..3f5c236 100644
> --- a/drivers/iommu/arm-smmu-v3.c
> +++ b/drivers/iommu/arm-smmu-v3.c
> @@ -566,7 +566,7 @@ struct arm_smmu_device {
> 
>   	int				gerr_irq;
>   	int				combined_irq;
> -	atomic_t			sync_nr;
> +	u32				sync_nr;
> 
>   	unsigned long			ias; /* IPA */
>   	unsigned long			oas; /* PA */
> @@ -775,6 +775,11 @@ static int queue_remove_raw(struct arm_smmu_queue *q, u64 *ent)
>   	return 0;
>   }
> 
> +static inline void arm_smmu_cmdq_sync_set_msidata(u64 *cmd, u32 msidata)

If we *are* going to go down this route then I think it would make sense 
to move the msiaddr and CMDQ_SYNC_0_CS_MSI logic here as well; i.e. 
arm_smmu_cmdq_build_cmd() always generates a "normal" SEV-based sync 
command, then calling this guy would convert it to an MSI-based one. 
As-is, having bits of mutually-dependent data handled across two 
separate places just seems too messy and error-prone.

That said, I still don't think that just building the whole command 
under the lock is really all that bad - even when it doesn't get 
optimised into one of the assignments that memset you call out is only a 
single "stp xzr, xzr, ...", and a couple of extra branches doesn't seem 
a huge deal compared to the DSB and MMIO accesses (and potentially 
polling) that we're about to do anyway. I've tried hacking things up 
enough to convince GCC to inline a specialisation of the relevant switch 
case when ent->opcode is known, and that reduces the "overhead" down to 
just a handful of ALU instructions. I still need to try cleaning said 
hack up and double-check that it doesn't have any adverse impact on all 
the other SMMUv3 stuff in development, but watch this space...

Robin.

> +{
> +	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA, msidata);
> +}
> +
>   /* High-level queue accessors */
>   static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>   {
> @@ -836,7 +841,6 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>   			cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
>   		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
>   		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
> -		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA, ent->sync.msidata);
>   		cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
>   		break;
>   	default:
> @@ -947,7 +951,6 @@ static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
>   	struct arm_smmu_cmdq_ent ent = {
>   		.opcode = CMDQ_OP_CMD_SYNC,
>   		.sync	= {
> -			.msidata = atomic_inc_return_relaxed(&smmu->sync_nr),
>   			.msiaddr = virt_to_phys(&smmu->sync_count),
>   		},
>   	};
> @@ -955,6 +958,8 @@ static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
>   	arm_smmu_cmdq_build_cmd(cmd, &ent);
> 
>   	spin_lock_irqsave(&smmu->cmdq.lock, flags);
> +	ent.sync.msidata = ++smmu->sync_nr;
> +	arm_smmu_cmdq_sync_set_msidata(cmd, ent.sync.msidata);
>   	arm_smmu_cmdq_insert_cmd(smmu, cmd);
>   	spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
> 
> @@ -2179,7 +2184,6 @@ static int arm_smmu_init_structures(struct arm_smmu_device *smmu)
>   {
>   	int ret;
> 
> -	atomic_set(&smmu->sync_nr, 0);
>   	ret = arm_smmu_init_queues(smmu);
>   	if (ret)
>   		return ret;
> --
> 1.8.3
> 
> 

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 1/2] iommu/arm-smmu-v3: fix unexpected CMD_SYNC timeout
  2018-08-15 12:26   ` Robin Murphy
@ 2018-08-15 13:00     ` Will Deacon
  2018-08-15 18:08       ` John Garry
  2018-08-16  8:21     ` Leizhen (ThunderTown)
  1 sibling, 1 reply; 12+ messages in thread
From: Will Deacon @ 2018-08-15 13:00 UTC (permalink / raw)
  To: Robin Murphy
  Cc: Zhen Lei, Joerg Roedel, linux-arm-kernel, iommu, linux-kernel,
	LinuxArm, Hanjun Guo, Libin, John Garry

On Wed, Aug 15, 2018 at 01:26:31PM +0100, Robin Murphy wrote:
> On 15/08/18 11:23, Zhen Lei wrote:
> >The condition "(int)(VAL - sync_idx) >= 0" to break loop in function
> >__arm_smmu_sync_poll_msi requires that sync_idx must be increased
> >monotonously according to the sequence of the CMDs in the cmdq.
> >
> >But ".msidata = atomic_inc_return_relaxed(&smmu->sync_nr)" is not protected
> >by spinlock, so the following scenarios may appear:
> >cpu0			cpu1
> >msidata=0
> >			msidata=1
> >			insert cmd1
> >insert cmd0
> >			smmu execute cmd1
> >smmu execute cmd0
> >			poll timeout, because msidata=1 is overridden by
> >			cmd0, that means VAL=0, sync_idx=1.
> >
> >This is not a functional problem, just make the caller wait for a long
> >time until TIMEOUT. It's rare to happen, because any other CMD_SYNCs
> >during the waiting period will break it.
> >
> >Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
> >---
> >  drivers/iommu/arm-smmu-v3.c | 12 ++++++++----
> >  1 file changed, 8 insertions(+), 4 deletions(-)
> >
> >diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
> >index 1d64710..3f5c236 100644
> >--- a/drivers/iommu/arm-smmu-v3.c
> >+++ b/drivers/iommu/arm-smmu-v3.c
> >@@ -566,7 +566,7 @@ struct arm_smmu_device {
> >
> >  	int				gerr_irq;
> >  	int				combined_irq;
> >-	atomic_t			sync_nr;
> >+	u32				sync_nr;
> >
> >  	unsigned long			ias; /* IPA */
> >  	unsigned long			oas; /* PA */
> >@@ -775,6 +775,11 @@ static int queue_remove_raw(struct arm_smmu_queue *q, u64 *ent)
> >  	return 0;
> >  }
> >
> >+static inline void arm_smmu_cmdq_sync_set_msidata(u64 *cmd, u32 msidata)
> 
> If we *are* going to go down this route then I think it would make sense to
> move the msiaddr and CMDQ_SYNC_0_CS_MSI logic here as well; i.e.
> arm_smmu_cmdq_build_cmd() always generates a "normal" SEV-based sync
> command, then calling this guy would convert it to an MSI-based one. As-is,
> having bits of mutually-dependent data handled across two separate places
> just seems too messy and error-prone.

Yeah, but I'd first like to see some number showing that doing all of this
under the lock actually has an impact.

Will

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 1/2] iommu/arm-smmu-v3: fix unexpected CMD_SYNC timeout
  2018-08-15 13:00     ` Will Deacon
@ 2018-08-15 18:08       ` John Garry
  2018-08-16  4:11         ` Leizhen (ThunderTown)
  0 siblings, 1 reply; 12+ messages in thread
From: John Garry @ 2018-08-15 18:08 UTC (permalink / raw)
  To: Will Deacon, Robin Murphy
  Cc: Zhen Lei, Joerg Roedel, linux-arm-kernel, iommu, linux-kernel,
	LinuxArm, Hanjun Guo, Libin

On 15/08/2018 14:00, Will Deacon wrote:
> On Wed, Aug 15, 2018 at 01:26:31PM +0100, Robin Murphy wrote:
>> On 15/08/18 11:23, Zhen Lei wrote:
>>> The condition "(int)(VAL - sync_idx) >= 0" to break loop in function
>>> __arm_smmu_sync_poll_msi requires that sync_idx must be increased
>>> monotonously according to the sequence of the CMDs in the cmdq.
>>>
>>> But ".msidata = atomic_inc_return_relaxed(&smmu->sync_nr)" is not protected
>>> by spinlock, so the following scenarios may appear:
>>> cpu0			cpu1
>>> msidata=0
>>> 			msidata=1
>>> 			insert cmd1
>>> insert cmd0
>>> 			smmu execute cmd1
>>> smmu execute cmd0
>>> 			poll timeout, because msidata=1 is overridden by
>>> 			cmd0, that means VAL=0, sync_idx=1.
>>>
>>> This is not a functional problem, just make the caller wait for a long
>>> time until TIMEOUT. It's rare to happen, because any other CMD_SYNCs
>>> during the waiting period will break it.
>>>
>>> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
>>> ---
>>>  drivers/iommu/arm-smmu-v3.c | 12 ++++++++----
>>>  1 file changed, 8 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
>>> index 1d64710..3f5c236 100644
>>> --- a/drivers/iommu/arm-smmu-v3.c
>>> +++ b/drivers/iommu/arm-smmu-v3.c
>>> @@ -566,7 +566,7 @@ struct arm_smmu_device {
>>>
>>>  	int				gerr_irq;
>>>  	int				combined_irq;
>>> -	atomic_t			sync_nr;
>>> +	u32				sync_nr;
>>>
>>>  	unsigned long			ias; /* IPA */
>>>  	unsigned long			oas; /* PA */
>>> @@ -775,6 +775,11 @@ static int queue_remove_raw(struct arm_smmu_queue *q, u64 *ent)
>>>  	return 0;
>>>  }
>>>
>>> +static inline void arm_smmu_cmdq_sync_set_msidata(u64 *cmd, u32 msidata)
>>
>> If we *are* going to go down this route then I think it would make sense to
>> move the msiaddr and CMDQ_SYNC_0_CS_MSI logic here as well; i.e.
>> arm_smmu_cmdq_build_cmd() always generates a "normal" SEV-based sync
>> command, then calling this guy would convert it to an MSI-based one. As-is,
>> having bits of mutually-dependent data handled across two separate places
>> just seems too messy and error-prone.
>
> Yeah, but I'd first like to see some number showing that doing all of this
> under the lock actually has an impact.

Update:

I tested this patch versus a modified version which builds the command 
under the queue spinlock (* below). From my testing there is a small 
difference:

Setup:
Testing Single NVME card
fio 15 processes
No process pinning

Average Results:
v3 patch read/r,w/write (IOPS): 301K/149K,149K/307K
Build under lock version read/r,w/write (IOPS): 304K/150K,150K/311K

I don't know why it's better to build under the lock. We can test more.

I suppose there is no justification to build the command outside the 
spinlock based on these results alone...

Cheers,
John

* Modified version:
static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
{
     u64 cmd[CMDQ_ENT_DWORDS];
     unsigned long flags;
     struct arm_smmu_cmdq_ent ent = {
         .opcode = CMDQ_OP_CMD_SYNC,
         .sync    = {
             .msiaddr = virt_to_phys(&smmu->sync_count),
         },
     };

     spin_lock_irqsave(&smmu->cmdq.lock, flags);
     ent.sync.msidata = ++smmu->sync_nr;
     arm_smmu_cmdq_build_cmd(cmd, &ent);
     arm_smmu_cmdq_insert_cmd(smmu, cmd);
     spin_unlock_irqrestore(&smmu->cmdq.lock, flags);

     return __arm_smmu_sync_poll_msi(smmu, ent.sync.msidata);
}


> Will
>
> .
>



^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 1/2] iommu/arm-smmu-v3: fix unexpected CMD_SYNC timeout
  2018-08-15 18:08       ` John Garry
@ 2018-08-16  4:11         ` Leizhen (ThunderTown)
  0 siblings, 0 replies; 12+ messages in thread
From: Leizhen (ThunderTown) @ 2018-08-16  4:11 UTC (permalink / raw)
  To: John Garry, Will Deacon, Robin Murphy
  Cc: Joerg Roedel, linux-arm-kernel, iommu, linux-kernel, LinuxArm,
	Hanjun Guo, Libin



On 2018/8/16 2:08, John Garry wrote:
> On 15/08/2018 14:00, Will Deacon wrote:
>> On Wed, Aug 15, 2018 at 01:26:31PM +0100, Robin Murphy wrote:
>>> On 15/08/18 11:23, Zhen Lei wrote:
>>>> The condition "(int)(VAL - sync_idx) >= 0" to break loop in function
>>>> __arm_smmu_sync_poll_msi requires that sync_idx must be increased
>>>> monotonously according to the sequence of the CMDs in the cmdq.
>>>>
>>>> But ".msidata = atomic_inc_return_relaxed(&smmu->sync_nr)" is not protected
>>>> by spinlock, so the following scenarios may appear:
>>>> cpu0            cpu1
>>>> msidata=0
>>>>             msidata=1
>>>>             insert cmd1
>>>> insert cmd0
>>>>             smmu execute cmd1
>>>> smmu execute cmd0
>>>>             poll timeout, because msidata=1 is overridden by
>>>>             cmd0, that means VAL=0, sync_idx=1.
>>>>
>>>> This is not a functional problem, just make the caller wait for a long
>>>> time until TIMEOUT. It's rare to happen, because any other CMD_SYNCs
>>>> during the waiting period will break it.
>>>>
>>>> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
>>>> ---
>>>>  drivers/iommu/arm-smmu-v3.c | 12 ++++++++----
>>>>  1 file changed, 8 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
>>>> index 1d64710..3f5c236 100644
>>>> --- a/drivers/iommu/arm-smmu-v3.c
>>>> +++ b/drivers/iommu/arm-smmu-v3.c
>>>> @@ -566,7 +566,7 @@ struct arm_smmu_device {
>>>>
>>>>      int                gerr_irq;
>>>>      int                combined_irq;
>>>> -    atomic_t            sync_nr;
>>>> +    u32                sync_nr;
>>>>
>>>>      unsigned long            ias; /* IPA */
>>>>      unsigned long            oas; /* PA */
>>>> @@ -775,6 +775,11 @@ static int queue_remove_raw(struct arm_smmu_queue *q, u64 *ent)
>>>>      return 0;
>>>>  }
>>>>
>>>> +static inline void arm_smmu_cmdq_sync_set_msidata(u64 *cmd, u32 msidata)
>>>
>>> If we *are* going to go down this route then I think it would make sense to
>>> move the msiaddr and CMDQ_SYNC_0_CS_MSI logic here as well; i.e.
>>> arm_smmu_cmdq_build_cmd() always generates a "normal" SEV-based sync
>>> command, then calling this guy would convert it to an MSI-based one. As-is,
>>> having bits of mutually-dependent data handled across two separate places
>>> just seems too messy and error-prone.
>>
>> Yeah, but I'd first like to see some number showing that doing all of this
>> under the lock actually has an impact.
> 
> Update:
> 
> I tested this patch versus a modified version which builds the command under the queue spinlock (* below). From my testing there is a small difference:
> 
> Setup:
> Testing Single NVME card
> fio 15 processes
> No process pinning
> 
> Average Results:
> v3 patch read/r,w/write (IOPS): 301K/149K,149K/307K
> Build under lock version read/r,w/write (IOPS): 304K/150K,150K/311K
> 
> I don't know why it's better to build under the lock. We can test more.

I have analysed the assembly code: the memset is optimized, as Robin said, to "stp xzr, xzr, [x0]",
and the switch..case looks like this:
ffff0000085e5744 <arm_smmu_cmdq_build_cmd>:
ffff0000085e5744:       a9007c1f        stp     xzr, xzr, [x0]			//memset
ffff0000085e5748:       39400023        ldrb    w3, [x1]
ffff0000085e574c:       f9400002        ldr     x2, [x0]
ffff0000085e5750:       aa020062        orr     x2, x3, x2
ffff0000085e5754:       f9000002        str     x2, [x0]
ffff0000085e5758:       39400023        ldrb    w3, [x1]			//ent->opcode
ffff0000085e575c:       51000463        sub     w3, w3, #0x1
ffff0000085e5760:       7101147f        cmp     w3, #0x45
ffff0000085e5764:       54000069        b.ls    ffff0000085e5770
ffff0000085e5768:       12800023        mov     w3, #0xfffffffe
ffff0000085e576c:       1400000e        b       ffff0000085e57a4
ffff0000085e5770:       b0003024        adrp    x4, ffff000008bea000
ffff0000085e5774:       91096084        add     x4, x4, #0x258			//static table in rodata
ffff0000085e5778:       38634883        ldrb    w3, [x4,w3,uxtw]		//use ent->opcode as index
ffff0000085e577c:       10000064        adr     x4, ffff0000085e5788
ffff0000085e5780:       8b238883        add     x3, x4, w3, sxtb #2
ffff0000085e5784:       d61f0060        br      x3				//jump to "case xxx:"

Actually, after applying the "inline arm_smmu_cmdq_build_cmd" patch sent by Robin, the memset and the static table are removed:
ffff0000085e68a8:       94123207        bl      ffff000008a730c4 <_raw_spin_lock_irqsave>
ffff0000085e68ac:       b9410ad5        ldr     w21, [x22,#264]
ffff0000085e68b0:       aa0003fa        mov     x26, x0
ffff0000085e68b4:       110006b5        add     w21, w21, #0x1			//++smmu->sync_nr
ffff0000085e68b8:       b9010ad5        str     w21, [x22,#264]
ffff0000085e68bc:       b50005f3        cbnz    x19, ffff0000085e6978		//if (ent->sync.msiaddr)
ffff0000085e68c0:       d28408c2        mov     x2, #0x2046
ffff0000085e68c4:       f2a1f802        movk    x2, #0xfc0, lsl #16		//the constant part of CMD_SYNC
ffff0000085e68c8:       aa158042        orr     x2, x2, x21, lsl #32		//or msidata
ffff0000085e68cc:       aa1603e0        mov     x0, x22				//x0 = x22 = smmu
ffff0000085e68d0:       910163a1        add     x1, x29, #0x58			//x1 = the address of local variable "cmd"
ffff0000085e68d4:       f9002fa2        str     x2, [x29,#88]			//save cmd[0]
ffff0000085e68d8:       927ec673        and     x19, x19, #0xffffffffffffc
ffff0000085e68dc:       f90033b3        str     x19, [x29,#96]			//save cmd[1]
ffff0000085e68e0:       97fffd0d        bl      ffff0000085e5d14 <arm_smmu_cmdq_insert_cmd>

So my patch v2 plus Robin's "inline arm_smmu_cmdq_build_cmd()" is a good choice.

But the assembly code of my patch v3 seems even shorter than the above:
ffff0000085e695c:       9412320a        bl      ffff000008a73184 <_raw_spin_lock_irqsave>
ffff0000085e6960:       aa0003f6        mov     x22, x0
ffff0000085e6964:       b9410a62        ldr     w2, [x19,#264]
ffff0000085e6968:       aa1303e0        mov     x0, x19
ffff0000085e696c:       f94023a3        ldr     x3, [x29,#64]
ffff0000085e6970:       910103a1        add     x1, x29, #0x40
ffff0000085e6974:       11000442        add     w2, w2, #0x1			//++smmu->sync_nr
ffff0000085e6978:       b9010a62        str     w2, [x19,#264]
ffff0000085e697c:       b9005ba2        str     w2, [x29,#88]
ffff0000085e6980:       aa028062        orr     x2, x3, x2, lsl #32
ffff0000085e6984:       f90023a2        str     x2, [x29,#64]
ffff0000085e6988:       97fffd58        bl      ffff0000085e5ee8 <arm_smmu_cmdq_insert_cmd>

> 
> I suppose there is no justification to build the command outside the spinlock based on these results alone...
> 
> Cheers,
> John
> 
> * Modified version:
> static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
> {
>     u64 cmd[CMDQ_ENT_DWORDS];
>     unsigned long flags;
>     struct arm_smmu_cmdq_ent ent = {
>         .opcode = CMDQ_OP_CMD_SYNC,
>         .sync    = {
>             .msiaddr = virt_to_phys(&smmu->sync_count),
>         },
>     };
> 
>     spin_lock_irqsave(&smmu->cmdq.lock, flags);
>     ent.sync.msidata = ++smmu->sync_nr;
>     arm_smmu_cmdq_build_cmd(cmd, &ent);
>     arm_smmu_cmdq_insert_cmd(smmu, cmd);
>     spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
> 
>     return __arm_smmu_sync_poll_msi(smmu, ent.sync.msidata);
> }
> 
> 
>> Will
>>
>> .
>>
> 
> 
> 
> .
> 

-- 
Thanks!
BestRegards


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 1/2] iommu/arm-smmu-v3: fix unexpected CMD_SYNC timeout
  2018-08-15 12:26   ` Robin Murphy
  2018-08-15 13:00     ` Will Deacon
@ 2018-08-16  8:21     ` Leizhen (ThunderTown)
  2018-08-16  9:18       ` Will Deacon
  1 sibling, 1 reply; 12+ messages in thread
From: Leizhen (ThunderTown) @ 2018-08-16  8:21 UTC (permalink / raw)
  To: Robin Murphy, Will Deacon, Joerg Roedel, linux-arm-kernel, iommu,
	linux-kernel
  Cc: LinuxArm, Hanjun Guo, Libin, John Garry



On 2018/8/15 20:26, Robin Murphy wrote:
> On 15/08/18 11:23, Zhen Lei wrote:
>> The condition "(int)(VAL - sync_idx) >= 0" to break loop in function
>> __arm_smmu_sync_poll_msi requires that sync_idx must be increased
>> monotonously according to the sequence of the CMDs in the cmdq.
>>
>> But ".msidata = atomic_inc_return_relaxed(&smmu->sync_nr)" is not protected
>> by spinlock, so the following scenarios may appear:
>> cpu0            cpu1
>> msidata=0
>>             msidata=1
>>             insert cmd1
>> insert cmd0
>>             smmu execute cmd1
>> smmu execute cmd0
>>             poll timeout, because msidata=1 is overridden by
>>             cmd0, that means VAL=0, sync_idx=1.
>>
>> This is not a functional problem, just make the caller wait for a long
>> time until TIMEOUT. It's rare to happen, because any other CMD_SYNCs
>> during the waiting period will break it.
>>
>> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
>> ---
>>   drivers/iommu/arm-smmu-v3.c | 12 ++++++++----
>>   1 file changed, 8 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
>> index 1d64710..3f5c236 100644
>> --- a/drivers/iommu/arm-smmu-v3.c
>> +++ b/drivers/iommu/arm-smmu-v3.c
>> @@ -566,7 +566,7 @@ struct arm_smmu_device {
>>
>>       int                gerr_irq;
>>       int                combined_irq;
>> -    atomic_t            sync_nr;
>> +    u32                sync_nr;
>>
>>       unsigned long            ias; /* IPA */
>>       unsigned long            oas; /* PA */
>> @@ -775,6 +775,11 @@ static int queue_remove_raw(struct arm_smmu_queue *q, u64 *ent)
>>       return 0;
>>   }
>>
>> +static inline void arm_smmu_cmdq_sync_set_msidata(u64 *cmd, u32 msidata)
> 
> If we *are* going to go down this route then I think it would make sense to move the msiaddr and CMDQ_SYNC_0_CS_MSI logic here as well; i.e. arm_smmu_cmdq_build_cmd() always generates a "normal" SEV-based sync command, then calling this guy would convert it to an MSI-based one. As-is, having bits of mutually-dependent data handled across two separate places just seems too messy and error-prone.

Yes. How about creating a new function "arm_smmu_cmdq_build_sync_msi_cmd"?

static inline
void arm_smmu_cmdq_build_sync_msi_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
{
	cmd[0]  = FIELD_PREP(CMDQ_0_OP, ent->opcode);
	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
	cmd[1]  = ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
}


> 
> That said, I still don't think that just building the whole command under the lock is really all that bad - even when it doesn't get optimised into one of the assignments that memset you call out is only a single "stp xzr, xzr, ...", and a couple of extra branches doesn't seem a huge deal compared to the DSB and MMIO accesses (and potentially polling) that we're about to do anyway. I've tried hacking things up enough to convince GCC to inline a specialisation of the relevant switch case when ent->opcode is known, and that reduces the "overhead" down to just a handful of ALU instructions. I still need to try cleaning said hack up and double-check that it doesn't have any adverse impact on all the other SMMUv3 stuff in development, but watch this space...
> 
> Robin.
> 
>> +{
>> +    cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA, msidata);
>> +}
>> +
>>   /* High-level queue accessors */
>>   static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>>   {
>> @@ -836,7 +841,6 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>>               cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
>>           cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
>>           cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
>> -        cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA, ent->sync.msidata);
>>           cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
>>           break;
>>       default:
>> @@ -947,7 +951,6 @@ static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
>>       struct arm_smmu_cmdq_ent ent = {
>>           .opcode = CMDQ_OP_CMD_SYNC,
>>           .sync    = {
>> -            .msidata = atomic_inc_return_relaxed(&smmu->sync_nr),
>>               .msiaddr = virt_to_phys(&smmu->sync_count),
>>           },
>>       };
>> @@ -955,6 +958,8 @@ static int __arm_smmu_cmdq_issue_sync_msi(struct arm_smmu_device *smmu)
>>       arm_smmu_cmdq_build_cmd(cmd, &ent);
>>
>>       spin_lock_irqsave(&smmu->cmdq.lock, flags);
>> +    ent.sync.msidata = ++smmu->sync_nr;
>> +    arm_smmu_cmdq_sync_set_msidata(cmd, ent.sync.msidata);
>>       arm_smmu_cmdq_insert_cmd(smmu, cmd);
>>       spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
>>
>> @@ -2179,7 +2184,6 @@ static int arm_smmu_init_structures(struct arm_smmu_device *smmu)
>>   {
>>       int ret;
>>
>> -    atomic_set(&smmu->sync_nr, 0);
>>       ret = arm_smmu_init_queues(smmu);
>>       if (ret)
>>           return ret;
>> -- 
>> 1.8.3
>>
>>
> 
> .
> 

-- 
Thanks!
BestRegards


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 1/2] iommu/arm-smmu-v3: fix unexpected CMD_SYNC timeout
  2018-08-16  8:21     ` Leizhen (ThunderTown)
@ 2018-08-16  9:18       ` Will Deacon
  2018-08-16  9:27         ` Robin Murphy
  0 siblings, 1 reply; 12+ messages in thread
From: Will Deacon @ 2018-08-16  9:18 UTC (permalink / raw)
  To: Leizhen (ThunderTown)
  Cc: Robin Murphy, Joerg Roedel, linux-arm-kernel, iommu,
	linux-kernel, LinuxArm, Hanjun Guo, Libin, John Garry

On Thu, Aug 16, 2018 at 04:21:17PM +0800, Leizhen (ThunderTown) wrote:
> On 2018/8/15 20:26, Robin Murphy wrote:
> > On 15/08/18 11:23, Zhen Lei wrote:
> >> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
> >> index 1d64710..3f5c236 100644
> >> --- a/drivers/iommu/arm-smmu-v3.c
> >> +++ b/drivers/iommu/arm-smmu-v3.c
> >> @@ -566,7 +566,7 @@ struct arm_smmu_device {
> >>
> >>       int                gerr_irq;
> >>       int                combined_irq;
> >> -    atomic_t            sync_nr;
> >> +    u32                sync_nr;
> >>
> >>       unsigned long            ias; /* IPA */
> >>       unsigned long            oas; /* PA */
> >> @@ -775,6 +775,11 @@ static int queue_remove_raw(struct arm_smmu_queue *q, u64 *ent)
> >>       return 0;
> >>   }
> >>
> >> +static inline void arm_smmu_cmdq_sync_set_msidata(u64 *cmd, u32 msidata)
> > 
> > If we *are* going to go down this route then I think it would make sense
> > to move the msiaddr and CMDQ_SYNC_0_CS_MSI logic here as well; i.e.
> > arm_smmu_cmdq_build_cmd() always generates a "normal" SEV-based sync
> > command, then calling this guy would convert it to an MSI-based one.
> > As-is, having bits of mutually-dependent data handled across two
> > separate places just seems too messy and error-prone.
> 
> Yes, How about create a new function "arm_smmu_cmdq_build_sync_msi_cmd"?
> 
> static inline
> void arm_smmu_cmdq_build_sync_msi_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
> {
> 	cmd[0]  = FIELD_PREP(CMDQ_0_OP, ent->opcode);
> 	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
> 	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
> 	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
> 	cmd[1]  = ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
> }

None of this seems justified given the numbers from John, so please just do
the simple thing and build the command with the lock held.

Will

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 1/2] iommu/arm-smmu-v3: fix unexpected CMD_SYNC timeout
  2018-08-16  9:18       ` Will Deacon
@ 2018-08-16  9:27         ` Robin Murphy
  2018-08-19  7:02           ` Leizhen (ThunderTown)
  0 siblings, 1 reply; 12+ messages in thread
From: Robin Murphy @ 2018-08-16  9:27 UTC (permalink / raw)
  To: Will Deacon, Leizhen (ThunderTown)
  Cc: Joerg Roedel, linux-arm-kernel, iommu, linux-kernel, LinuxArm,
	Hanjun Guo, Libin, John Garry

On 2018-08-16 10:18 AM, Will Deacon wrote:
> On Thu, Aug 16, 2018 at 04:21:17PM +0800, Leizhen (ThunderTown) wrote:
>> On 2018/8/15 20:26, Robin Murphy wrote:
>>> On 15/08/18 11:23, Zhen Lei wrote:
>>>> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
>>>> index 1d64710..3f5c236 100644
>>>> --- a/drivers/iommu/arm-smmu-v3.c
>>>> +++ b/drivers/iommu/arm-smmu-v3.c
>>>> @@ -566,7 +566,7 @@ struct arm_smmu_device {
>>>>
>>>>        int                gerr_irq;
>>>>        int                combined_irq;
>>>> -    atomic_t            sync_nr;
>>>> +    u32                sync_nr;
>>>>
>>>>        unsigned long            ias; /* IPA */
>>>>        unsigned long            oas; /* PA */
>>>> @@ -775,6 +775,11 @@ static int queue_remove_raw(struct arm_smmu_queue *q, u64 *ent)
>>>>        return 0;
>>>>    }
>>>>
>>>> +static inline void arm_smmu_cmdq_sync_set_msidata(u64 *cmd, u32 msidata)
>>>
>>> If we *are* going to go down this route then I think it would make sense
>>> to move the msiaddr and CMDQ_SYNC_0_CS_MSI logic here as well; i.e.
>>> arm_smmu_cmdq_build_cmd() always generates a "normal" SEV-based sync
>>> command, then calling this guy would convert it to an MSI-based one.
>>> As-is, having bits of mutually-dependent data handled across two
>>> separate places just seems too messy and error-prone.
>>
>> Yes, How about create a new function "arm_smmu_cmdq_build_sync_msi_cmd"?
>>
>> static inline
>> void arm_smmu_cmdq_build_sync_msi_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>> {
>> 	cmd[0]  = FIELD_PREP(CMDQ_0_OP, ent->opcode);
>> 	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
>> 	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
>> 	cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
>> 	cmd[1]  = ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
>> }
> 
> None of this seems justified given the numbers from John, so please just do
> the simple thing and build the command with the lock held.

Agreed - sorry if my wording was unclear, but that suggestion was only 
for the possibility of it proving genuinely worthwhile to build the 
command outside the lock. Since that isn't the case, I definitely prefer 
the simpler approach too.

Robin.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 1/2] iommu/arm-smmu-v3: fix unexpected CMD_SYNC timeout
  2018-08-16  9:27         ` Robin Murphy
@ 2018-08-19  7:02           ` Leizhen (ThunderTown)
  2018-09-05  1:46             ` Leizhen (ThunderTown)
  0 siblings, 1 reply; 12+ messages in thread
From: Leizhen (ThunderTown) @ 2018-08-19  7:02 UTC (permalink / raw)
  To: Robin Murphy, Will Deacon
  Cc: Joerg Roedel, linux-arm-kernel, iommu, linux-kernel, LinuxArm,
	Hanjun Guo, Libin, John Garry



On 2018/8/16 17:27, Robin Murphy wrote:
> On 2018-08-16 10:18 AM, Will Deacon wrote:
>> On Thu, Aug 16, 2018 at 04:21:17PM +0800, Leizhen (ThunderTown) wrote:
>>> On 2018/8/15 20:26, Robin Murphy wrote:
>>>> On 15/08/18 11:23, Zhen Lei wrote:
>>>>> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
>>>>> index 1d64710..3f5c236 100644
>>>>> --- a/drivers/iommu/arm-smmu-v3.c
>>>>> +++ b/drivers/iommu/arm-smmu-v3.c
>>>>> @@ -566,7 +566,7 @@ struct arm_smmu_device {
>>>>>
>>>>>        int                gerr_irq;
>>>>>        int                combined_irq;
>>>>> -    atomic_t            sync_nr;
>>>>> +    u32                sync_nr;
>>>>>
>>>>>        unsigned long            ias; /* IPA */
>>>>>        unsigned long            oas; /* PA */
>>>>> @@ -775,6 +775,11 @@ static int queue_remove_raw(struct arm_smmu_queue *q, u64 *ent)
>>>>>        return 0;
>>>>>    }
>>>>>
>>>>> +static inline void arm_smmu_cmdq_sync_set_msidata(u64 *cmd, u32 msidata)
>>>>
>>>> If we *are* going to go down this route then I think it would make sense
>>>> to move the msiaddr and CMDQ_SYNC_0_CS_MSI logic here as well; i.e.
>>>> arm_smmu_cmdq_build_cmd() always generates a "normal" SEV-based sync
>>>> command, then calling this guy would convert it to an MSI-based one.
>>>> As-is, having bits of mutually-dependent data handled across two
>>>> separate places just seems too messy and error-prone.
>>>
>>> Yes, How about create a new function "arm_smmu_cmdq_build_sync_msi_cmd"?
>>>
>>> static inline
>>> void arm_smmu_cmdq_build_sync_msi_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>>> {
>>>     cmd[0]  = FIELD_PREP(CMDQ_0_OP, ent->opcode);
>>>     cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
>>>     cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
>>>     cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);

miss:   cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA, ent->sync.msidata);

>>>     cmd[1]  = ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
>>> }
>>
>> None of this seems justified given the numbers from John, so please just do
>> the simple thing and build the command with the lock held.

In order to observe the effect of the optimization, I ran 5 tests for each
case. Although the results fluctuate, we can still tell which case is good
or bad, and this agrees with our theoretical analysis.

Test command: fio -numjobs=8 -rw=randread -runtime=30 ... -bs=4k
Test Result: IOPS, for example: read : io=86790MB, bw=2892.1MB/s, iops=740586, runt= 30001msec

Case 1: (without these patches)
675480
672055
665275
648610
661146

Case 2: (move arm_smmu_cmdq_build_cmd into lock)
688714
697355
632951
700540
678459

Case 3: (base on case 2, replace arm_smmu_cmdq_build_cmd with arm_smmu_cmdq_build_sync_msi_cmd)
721582
729226
689574
679710
727770

Case 4: (base on case 3, plus patch 2)
734077
742868
738194
682544
740586

Case 2 is better than case 1; I think the main reason is that the
atomic_inc_return_relaxed(&smmu->sync_nr) has been removed. Case 3 is better
than case 2 because the generated assembly code is shorter, see below.


> 
> Agreed - sorry if my wording was unclear, but that suggestion was only for the possibility of it proving genuinely worthwhile to build the command outside the lock. Since that isn't the case, I definitely prefer the simpler approach too.

Yes, I mean replacing arm_smmu_cmdq_build_cmd with arm_smmu_cmdq_build_sync_msi_cmd to build the command inside the lock.
         spin_lock_irqsave(&smmu->cmdq.lock, flags);
+        ent.sync.msidata = ++smmu->sync_nr;
+        arm_smmu_cmdq_build_sync_msi_cmd(cmd, &ent);
         arm_smmu_cmdq_insert_cmd(smmu, cmd);
         spin_unlock_irqrestore(&smmu->cmdq.lock, flags);

The generated assembly code looks very good:
ffff0000085e6928:       94123207        bl      ffff000008a73144 <_raw_spin_lock_irqsave>
ffff0000085e692c:       b9410ad5        ldr     w21, [x22,#264]
ffff0000085e6930:       d28208c2        mov     x2, #0x1046                     // #4166
ffff0000085e6934:       aa0003fa        mov     x26, x0
ffff0000085e6938:       110006b5        add     w21, w21, #0x1
ffff0000085e693c:       f2a1f802        movk    x2, #0xfc0, lsl #16
ffff0000085e6940:       aa1603e0        mov     x0, x22
ffff0000085e6944:       910163a1        add     x1, x29, #0x58
ffff0000085e6948:       aa158042        orr     x2, x2, x21, lsl #32
ffff0000085e694c:       b9010ad5        str     w21, [x22,#264]
ffff0000085e6950:       f9002fa2        str     x2, [x29,#88]
ffff0000085e6954:       d2994016        mov     x22, #0xca00                    // #51712
ffff0000085e6958:       f90033b3        str     x19, [x29,#96]
ffff0000085e695c:       97fffd5b        bl      ffff0000085e5ec8 <arm_smmu_cmdq_insert_cmd>
ffff0000085e6960:       aa1903e0        mov     x0, x25
ffff0000085e6964:       aa1a03e1        mov     x1, x26
ffff0000085e6968:       f2a77356        movk    x22, #0x3b9a, lsl #16
ffff0000085e696c:       94123145        bl      ffff000008a72e80 <_raw_spin_unlock_irqrestore>


> 
> Robin.
> 
> .
> 

-- 
Thanks!
BestRegards


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 1/2] iommu/arm-smmu-v3: fix unexpected CMD_SYNC timeout
  2018-08-19  7:02           ` Leizhen (ThunderTown)
@ 2018-09-05  1:46             ` Leizhen (ThunderTown)
  0 siblings, 0 replies; 12+ messages in thread
From: Leizhen (ThunderTown) @ 2018-09-05  1:46 UTC (permalink / raw)
  To: Robin Murphy, Will Deacon
  Cc: Joerg Roedel, linux-arm-kernel, iommu, linux-kernel, LinuxArm,
	Hanjun Guo, Libin, John Garry



On 2018/8/19 15:02, Leizhen (ThunderTown) wrote:
> 
> 
> On 2018/8/16 17:27, Robin Murphy wrote:
>> On 2018-08-16 10:18 AM, Will Deacon wrote:
>>> On Thu, Aug 16, 2018 at 04:21:17PM +0800, Leizhen (ThunderTown) wrote:
>>>> On 2018/8/15 20:26, Robin Murphy wrote:
>>>>> On 15/08/18 11:23, Zhen Lei wrote:
>>>>>> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
>>>>>> index 1d64710..3f5c236 100644
>>>>>> --- a/drivers/iommu/arm-smmu-v3.c
>>>>>> +++ b/drivers/iommu/arm-smmu-v3.c
>>>>>> @@ -566,7 +566,7 @@ struct arm_smmu_device {
>>>>>>
>>>>>>        int                gerr_irq;
>>>>>>        int                combined_irq;
>>>>>> -    atomic_t            sync_nr;
>>>>>> +    u32                sync_nr;
>>>>>>
>>>>>>        unsigned long            ias; /* IPA */
>>>>>>        unsigned long            oas; /* PA */
>>>>>> @@ -775,6 +775,11 @@ static int queue_remove_raw(struct arm_smmu_queue *q, u64 *ent)
>>>>>>        return 0;
>>>>>>    }
>>>>>>
>>>>>> +static inline void arm_smmu_cmdq_sync_set_msidata(u64 *cmd, u32 msidata)
>>>>>
>>>>> If we *are* going to go down this route then I think it would make sense
>>>>> to move the msiaddr and CMDQ_SYNC_0_CS_MSI logic here as well; i.e.
>>>>> arm_smmu_cmdq_build_cmd() always generates a "normal" SEV-based sync
>>>>> command, then calling this guy would convert it to an MSI-based one.
>>>>> As-is, having bits of mutually-dependent data handled across two
>>>>> separate places just seems too messy and error-prone.
>>>>
>>>> Yes, How about create a new function "arm_smmu_cmdq_build_sync_msi_cmd"?
>>>>
>>>> static inline
>>>> void arm_smmu_cmdq_build_sync_msi_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
>>>> {
>>>>     cmd[0]  = FIELD_PREP(CMDQ_0_OP, ent->opcode);
>>>>     cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
>>>>     cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
>>>>     cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
> 
> miss:   cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA, ent->sync.msidata);
> 
>>>>     cmd[1]  = ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
>>>> }
>>>
>>> None of this seems justified given the numbers from John, so please just do
>>> the simple thing and build the command with the lock held.
> 
> In order to observe the optimization effect, I conducted 5 tests for each
> case. Although the test result is volatility, but we can still get which case
> is good or bad. It accords with our theoretical analysis.
> 
> Test command: fio -numjobs=8 -rw=randread -runtime=30 ... -bs=4k
> Test Result: IOPS, for example: read : io=86790MB, bw=2892.1MB/s, iops=740586, runt= 30001msec
> 
> Case 1: (without these patches)
> 675480
> 672055
> 665275
> 648610
> 661146
> 
> Case 2: (move arm_smmu_cmdq_build_cmd into lock)

https://lore.kernel.org/patchwork/patch/973121/
[v2,1/2] iommu/arm-smmu-v3: fix unexpected CMD_SYNC timeout

> 688714
> 697355
> 632951
> 700540
> 678459
> 
> Case 3: (base on case 2, replace arm_smmu_cmdq_build_cmd with arm_smmu_cmdq_build_sync_msi_cmd)

https://patchwork.kernel.org/patch/10569675/
[v4,1/2] iommu/arm-smmu-v3: fix unexpected CMD_SYNC timeout

> 721582
> 729226
> 689574
> 679710
> 727770
> 
> Case 4: (base on case 3, plus patch 2)
> 734077
> 742868
> 738194
> 682544
> 740586
> 
> Case 2 is better than case 1, I think the main reason is the atomic_inc_return_relaxed(&smmu->sync_nr)
> has been removed. Case 3 is better than case 2, because the assembly code is reduced, see below.

Hi, Will
  Have you received this email? Which case do you prefer? If we leave patch 2 aside, then according
to the test results we should probably choose case 3.
  Because John Garry wants patch 2 to also cover the non-MSI branch, which may take some time, could
you decide on and apply patch 1 first?


> 
> 
>>
>> Agreed - sorry if my wording was unclear, but that suggestion was only for the possibility of it proving genuinely worthwhile to build the command outside the lock. Since that isn't the case, I definitely prefer the simpler approach too.
> 
> Yes, I mean replace arm_smmu_cmdq_build_cmd with arm_smmu_cmdq_build_sync_msi_cmd to build the command inside the lock.
>          spin_lock_irqsave(&smmu->cmdq.lock, flags);
> +        ent.sync.msidata = ++smmu->sync_nr;
> +        arm_smmu_cmdq_build_sync_msi_cmd(cmd, &ent);
>          arm_smmu_cmdq_insert_cmd(smmu, cmd);
>          spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
> 
> The assembly code showed me that it's very nice.
> ffff0000085e6928:       94123207        bl      ffff000008a73144 <_raw_spin_lock_irqsave>
> ffff0000085e692c:       b9410ad5        ldr     w21, [x22,#264]
> ffff0000085e6930:       d28208c2        mov     x2, #0x1046                     // #4166
> ffff0000085e6934:       aa0003fa        mov     x26, x0
> ffff0000085e6938:       110006b5        add     w21, w21, #0x1
> ffff0000085e693c:       f2a1f802        movk    x2, #0xfc0, lsl #16
> ffff0000085e6940:       aa1603e0        mov     x0, x22
> ffff0000085e6944:       910163a1        add     x1, x29, #0x58
> ffff0000085e6948:       aa158042        orr     x2, x2, x21, lsl #32
> ffff0000085e694c:       b9010ad5        str     w21, [x22,#264]
> ffff0000085e6950:       f9002fa2        str     x2, [x29,#88]
> ffff0000085e6954:       d2994016        mov     x22, #0xca00                    // #51712
> ffff0000085e6958:       f90033b3        str     x19, [x29,#96]
> ffff0000085e695c:       97fffd5b        bl      ffff0000085e5ec8 <arm_smmu_cmdq_insert_cmd>
> ffff0000085e6960:       aa1903e0        mov     x0, x25
> ffff0000085e6964:       aa1a03e1        mov     x1, x26
> ffff0000085e6968:       f2a77356        movk    x22, #0x3b9a, lsl #16
> ffff0000085e696c:       94123145        bl      ffff000008a72e80 <_raw_spin_unlock_irqrestore>
> 
> 
>>
>> Robin.
>>
>> .
>>
> 

-- 
Thanks!
BestRegards


^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2018-09-05  1:46 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-08-15 10:23 [PATCH v3 0/2] bugfix and optimization about CMD_SYNC Zhen Lei
2018-08-15 10:23 ` [PATCH v3 1/2] iommu/arm-smmu-v3: fix unexpected CMD_SYNC timeout Zhen Lei
2018-08-15 12:26   ` Robin Murphy
2018-08-15 13:00     ` Will Deacon
2018-08-15 18:08       ` John Garry
2018-08-16  4:11         ` Leizhen (ThunderTown)
2018-08-16  8:21     ` Leizhen (ThunderTown)
2018-08-16  9:18       ` Will Deacon
2018-08-16  9:27         ` Robin Murphy
2018-08-19  7:02           ` Leizhen (ThunderTown)
2018-09-05  1:46             ` Leizhen (ThunderTown)
2018-08-15 10:23 ` [PATCH v3 2/2] iommu/arm-smmu-v3: avoid redundant CMD_SYNCs if possible Zhen Lei
