From: "Yang, Shunyong" <shunyong.yang@hxt-semitech.com>
To: Robin Murphy <robin.murphy@arm.com>,
	"Leizhen (ThunderTown)" <thunder.leizhen@huawei.com>,
	Jean-Philippe Brucker <jean-philippe.brucker@arm.com>,
	Will Deacon <will.deacon@arm.com>,
	"Joerg Roedel" <joro@8bytes.org>,
	linux-arm-kernel <linux-arm-kernel@lists.infradead.org>,
	iommu <iommu@lists.linux-foundation.org>,
	linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v3 4/6] iommu/io-pgtable-arm: add support for non-strict mode
Date: Mon, 6 Aug 2018 01:32:44 +0000	[thread overview]
Message-ID: <1d24541340334954969c58980ef85444@HXTBJIDCEMVIW01.hxtcorp.net> (raw)
In-Reply-To: 04239cfa-bcf2-a33a-e662-ebc75e66782b@arm.com

Hi, Robin,

On 2018/7/26 22:37, Robin Murphy wrote:
> On 2018-07-26 8:20 AM, Leizhen (ThunderTown) wrote:
>> On 2018/7/25 6:25, Robin Murphy wrote:
>>> On 2018-07-12 7:18 AM, Zhen Lei wrote:
>>>> To support non-strict mode, we now only do the TLBI and sync when in
>>>> strict mode; the non-leaf case, however, always follows strict mode.
>>>>
>>>> Use the lowest bit of the iova parameter to pass the strict mode:
>>>> 0, IOMMU_STRICT;
>>>> 1, IOMMU_NON_STRICT;
>>>> Treat 0 as IOMMU_STRICT, so that the unmap operation stays compatible
>>>> with other IOMMUs which still use strict mode.
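A minimal caller-side sketch of this bit-0 encoding may help; it is not part of the patch, and IOMMU_STRICT (0), IOMMU_NON_STRICT (1) and IOMMU_STRICT_MODE_MASK (0x1) are assumed to come from earlier patches in this series. Since the IOVA passed to ->unmap() is page-aligned, bit 0 is otherwise always zero:

/*
 * Hypothetical caller, not part of this patch: page-aligned IOVAs leave
 * bit 0 free to carry the strictness flag, and arm_lpae_unmap() masks
 * it off again before walking the tables.
 */
static size_t unmap_with_mode(struct io_pgtable_ops *ops,
			      unsigned long iova, size_t size, bool strict)
{
	if (!strict)
		iova |= IOMMU_NON_STRICT;	/* assumed to be bit 0 */

	return ops->unmap(ops, iova, size);
}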
>>>>
>>>> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
>>>> ---
>>>>    drivers/iommu/io-pgtable-arm.c | 23 ++++++++++++++---------
>>>>    1 file changed, 14 insertions(+), 9 deletions(-)
>>>>
>>>> diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
>>>> index 010a254..9234db3 100644
>>>> --- a/drivers/iommu/io-pgtable-arm.c
>>>> +++ b/drivers/iommu/io-pgtable-arm.c
>>>> @@ -292,7 +292,7 @@ static void __arm_lpae_set_pte(arm_lpae_iopte *ptep, arm_lpae_iopte pte,
>>>>      static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
>>>>                       unsigned long iova, size_t size, int lvl,
>>>> -                   arm_lpae_iopte *ptep);
>>>> +                   arm_lpae_iopte *ptep, int strict);
>>>>      static void __arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
>>>>                    phys_addr_t paddr, arm_lpae_iopte prot,
>>>> @@ -334,7 +334,7 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
>>>>            size_t sz = ARM_LPAE_BLOCK_SIZE(lvl, data);
>>>>              tblp = ptep - ARM_LPAE_LVL_IDX(iova, lvl, data);
>>>> -        if (WARN_ON(__arm_lpae_unmap(data, iova, sz, lvl, tblp) != sz))
>>>> +        if (WARN_ON(__arm_lpae_unmap(data, iova, sz, lvl, tblp, IOMMU_STRICT) != sz))
>>>>                return -EINVAL;
>>>>        }
>>>>    @@ -531,7 +531,7 @@ static void arm_lpae_free_pgtable(struct io_pgtable *iop)
>>>>    static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
>>>>                           unsigned long iova, size_t size,
>>>>                           arm_lpae_iopte blk_pte, int lvl,
>>>> -                       arm_lpae_iopte *ptep)
>>>> +                       arm_lpae_iopte *ptep, int strict)
>>>
>>> DMA code should never ever be splitting blocks anyway, and frankly the TLB maintenance here is dodgy enough (since we can't reasonably do break-before-make as VMSA says we should) that I *really* don't want to introduce any possibility of making it more asynchronous. I'd much rather just hard-code the expectation of strict == true for this.
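(A sketch of that hard-coding, as a guess at the intent rather than the code that was eventually merged: the tail of arm_lpae_split_blk_unmap() would simply never propagate a non-strict request and would keep its unconditional flush.)

	/*
	 * Sketch only: with strict hard-coded, the split path recurses
	 * strictly and always queues the flush, whatever the caller
	 * originally asked for.
	 */
	if (unmap_idx < 0)
		return __arm_lpae_unmap(data, iova, size, lvl, tablep,
					IOMMU_STRICT);

	io_pgtable_tlb_add_flush(&data->iop, iova, size, size, true);
	return size;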
>>
>> OK, I will hard-code strict=true for it.
>>
>> But since it should never happen, why not give a warning at the beginning?
> 
> Because DMA code is not the only caller of iommu_map/unmap. It's 
> perfectly legal in the IOMMU API to partially unmap a previous mapping 
> such that a block entry needs to be split. The DMA API, however, is a 
> lot more constrained, and thus by construction the iommu-dma layer will 
> never generate a block-splitting iommu_unmap() except as a result of 
> illegal DMA API usage, and we obviously do not need to optimise for that 
> (you will get a warning about mismatched unmaps under dma-debug, but 
> it's a bit too expensive to police in the general case).
> 

When I was reading the code around arm_lpae_split_blk_unmap(), I was
curious about which scenario would cause a block to be split. From your
comment "Because DMA code is not the only caller of iommu_map/unmap",
it seems to depend on the user.

Would you please explain this further? I mean, besides DMA, which users
call iommu_map/unmap, and how does that split a block?

Thanks.
Shunyong.
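
(A hedged illustration of the scenario described above, for readers with the same question: iommu_map()/iommu_unmap() below are the real IOMMU API calls of this kernel era, but the sizes, flags and flow are made up for illustration. A direct IOMMU API user such as VFIO can map a region that io-pgtable-arm installs as a single block entry and later unmap only part of it; serving that partial unmap is what drives arm_lpae_split_blk_unmap().)

/*
 * Illustrative only: a non-DMA-API consumer of the IOMMU API (VFIO-style)
 * maps 2MB, which may be installed as one block entry if alignment and
 * the domain's supported page sizes allow, then unmaps a single 4KB page
 * inside it. The block cannot simply be removed, so the pagetable code
 * splits it into a next-level table of smaller entries.
 */
static int example_partial_unmap(struct iommu_domain *domain,
				 unsigned long iova, phys_addr_t paddr)
{
	int ret;

	ret = iommu_map(domain, iova, paddr, SZ_2M, IOMMU_READ | IOMMU_WRITE);
	if (ret)
		return ret;

	/* Unmapping one page in the middle of the block forces the split */
	if (iommu_unmap(domain, iova + SZ_4K, SZ_4K) != SZ_4K)
		return -EINVAL;

	return 0;
}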

> 
>>>>    {
>>>>        struct io_pgtable_cfg *cfg = &data->iop.cfg;
>>>>        arm_lpae_iopte pte, *tablep;
>>>> @@ -576,15 +576,18 @@ static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
>>>>        }
>>>>          if (unmap_idx < 0)
>>>> -        return __arm_lpae_unmap(data, iova, size, lvl, tablep);
>>>> +        return __arm_lpae_unmap(data, iova, size, lvl, tablep, strict);
>>>>          io_pgtable_tlb_add_flush(&data->iop, iova, size, size, true);
>>>> +    if (!strict)
>>>> +        io_pgtable_tlb_sync(&data->iop);
>>>> +
>>>>        return size;
>>>>    }
>>>>      static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
>>>>                       unsigned long iova, size_t size, int lvl,
>>>> -                   arm_lpae_iopte *ptep)
>>>> +                   arm_lpae_iopte *ptep, int strict)
>>>>    {
>>>>        arm_lpae_iopte pte;
>>>>        struct io_pgtable *iop = &data->iop;
>>>> @@ -609,7 +612,7 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
>>>>                io_pgtable_tlb_sync(iop);
>>>>                ptep = iopte_deref(pte, data);
>>>>                __arm_lpae_free_pgtable(data, lvl + 1, ptep);
>>>> -        } else {
>>>> +        } else if (strict) {
>>>>                io_pgtable_tlb_add_flush(iop, iova, size, size, true);
>>>>            }
>>>>    @@ -620,25 +623,27 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
>>>>             * minus the part we want to unmap
>>>>             */
>>>>            return arm_lpae_split_blk_unmap(data, iova, size, pte,
>>>> -                        lvl + 1, ptep);
>>>> +                        lvl + 1, ptep, strict);
>>>>        }
>>>>          /* Keep on walkin' */
>>>>        ptep = iopte_deref(pte, data);
>>>> -    return __arm_lpae_unmap(data, iova, size, lvl + 1, ptep);
>>>> +    return __arm_lpae_unmap(data, iova, size, lvl + 1, ptep, strict);
>>>>    }
>>>>      static size_t arm_lpae_unmap(struct io_pgtable_ops *ops, unsigned long iova,
>>>>                     size_t size)
>>>>    {
>>>> +    int strict = ((iova & IOMMU_STRICT_MODE_MASK) == IOMMU_STRICT);
>>>>        struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
>>>>        arm_lpae_iopte *ptep = data->pgd;
>>>>        int lvl = ARM_LPAE_START_LVL(data);
>>>>    +    iova &= ~IOMMU_STRICT_MODE_MASK;
>>>>        if (WARN_ON(iova >= (1ULL << data->iop.cfg.ias)))
>>>>            return 0;
>>>>    -    return __arm_lpae_unmap(data, iova, size, lvl, ptep);
>>>> +    return __arm_lpae_unmap(data, iova, size, lvl, ptep, strict);
>>>>    }
>>>>      static phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops,
>>>>
>>>
>>> .
>>>
>>

