From: Yong Wu <yong.wu@mediatek.com>
To: Matthias Brugger <matthias.bgg@gmail.com>,
Joerg Roedel <joro@8bytes.org>, Will Deacon <will.deacon@arm.com>
Cc: Evan Green <evgreen@chromium.org>,
Robin Murphy <robin.murphy@arm.com>,
Tomasz Figa <tfiga@google.com>,
<linux-mediatek@lists.infradead.org>,
<srv_heupstream@mediatek.com>, <linux-kernel@vger.kernel.org>,
<linux-arm-kernel@lists.infradead.org>,
<iommu@lists.linux-foundation.org>, <yong.wu@mediatek.com>,
<youlin.pei@mediatek.com>,
Nicolas Boichat <drinkcat@chromium.org>, <anan.sun@mediatek.com>,
<cui.zhang@mediatek.com>, <chao.hao@mediatek.com>,
<edison.hsieh@mediatek.com>
Subject: [PATCH v4 7/7] iommu/mediatek: Reduce the tlb flush timeout value
Date: Wed, 16 Oct 2019 11:33:12 +0800 [thread overview]
Message-ID: <1571196792-12382-8-git-send-email-yong.wu@mediatek.com> (raw)
In-Reply-To: <1571196792-12382-1-git-send-email-yong.wu@mediatek.com>
Reduce the tlb flush timeout value from 100000us to 1000us. The original
value is so long that it hurts multimedia performance. This is only a
minor improvement rather than a bug fix.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
---
drivers/iommu/mtk_iommu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
index c2b7ed5..8ca2e99 100644
--- a/drivers/iommu/mtk_iommu.c
+++ b/drivers/iommu/mtk_iommu.c
@@ -192,7 +192,7 @@ static void mtk_iommu_tlb_flush_range_sync(unsigned long iova, size_t size,
/* tlb sync */
ret = readl_poll_timeout_atomic(data->base + REG_MMU_CPE_DONE,
- tmp, tmp != 0, 10, 100000);
+ tmp, tmp != 0, 10, 1000);
if (ret) {
dev_warn(data->dev,
"Partial TLB flush timed out, falling back to full flush\n");
--
1.9.1
Thread overview: 12+ messages
2019-10-16 3:33 [PATCH v4 0/7] Improve tlb range flush Yong Wu
2019-10-16 3:33 ` [PATCH v4 1/7] iommu/mediatek: Correct the flush_iotlb_all callback Yong Wu
2019-10-16 3:33 ` [PATCH v4 2/7] iommu/mediatek: Add a new tlb_lock for tlb_flush Yong Wu
2019-10-23 16:52 ` Will Deacon
2019-10-24 7:22 ` Yong Wu
2019-10-16 3:33 ` [PATCH v4 3/7] iommu/mediatek: Use gather to achieve the tlb range flush Yong Wu
2019-10-23 16:55 ` Will Deacon
2019-10-24 7:22 ` Yong Wu
2019-10-16 3:33 ` [PATCH v4 4/7] iommu/mediatek: Delete the leaf in the tlb_flush Yong Wu
2019-10-16 3:33 ` [PATCH v4 5/7] iommu/mediatek: Move the tlb_sync into tlb_flush Yong Wu
2019-10-16 3:33 ` [PATCH v4 6/7] iommu/mediatek: Get rid of the pgtlock Yong Wu
2019-10-16 3:33 ` Yong Wu [this message]