From: Zhen Lei <thunder.leizhen@huawei.com>
To: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>,
"Jean-Philippe Brucker" <jean-philippe@linaro.org>,
John Garry <john.garry@huawei.com>,
"Robin Murphy" <robin.murphy@arm.com>,
Will Deacon <will@kernel.org>, Joerg Roedel <joro@8bytes.org>,
iommu <iommu@lists.linux-foundation.org>,
Omer Peleg <omer@cs.technion.ac.il>,
Adam Morrison <mad@cs.technion.ac.il>, Shaohua Li <shli@fb.com>,
Ben Serebrin <serebrin@google.com>,
David Woodhouse <David.Woodhouse@intel.com>,
linux-arm-kernel <linux-arm-kernel@lists.infradead.org>,
linux-kernel <linux-kernel@vger.kernel.org>
Subject: [PATCH v2 0/2] iommu/iova: enhance the rcache optimization
Date: Thu, 15 Aug 2019 20:11:02 +0800 [thread overview]
Message-ID: <20190815121104.29140-1-thunder.leizhen@huawei.com> (raw)
v1 --> v2
1. The patches are unchanged; I only added this cover letter.
2. Added a batch of reviewers based on
9257b4a206fc ("iommu/iova: introduce per-cpu caching to iova allocation")
3. The problem I encountered is described in patch 2, but I hope the brief
description below helps people understand it quickly.
Suppose there are six rcache sizes, each of which can hold at most 10000 IOVAs.
--------------------------------------------
| 4K | 8K | 16K | 32K | 64K | 128K |
--------------------------------------------
| 10000 | 9000 | 8500 | 8600 | 9200 | 7000 |
--------------------------------------------
As the table above shows, the rcaches as a whole buffer too many IOVAs. Now the
worst case can occur: suppose we need 20000 4K IOVAs at once. 10000 of them can
be allocated from the rcache, but the other 10000 must be allocated from the RB
tree via alloc_iova(). However, the RB tree currently holds at least
(9000 + 8500 + 8600 + 9200 + 7000) = 42300 nodes, so the average traversal is
very slow. In my test scenario, the 4K IOVAs are used frequently but the other
sizes are not. Similarly, when the 20000 4K IOVAs are freed in succession, the
first 10000 can be buffered quickly, but the other 10000 cannot.
Zhen Lei (2):
iommu/iova: introduce iova_magazine_compact_pfns()
iommu/iova: enhance the rcache optimization
drivers/iommu/iova.c | 100 +++++++++++++++++++++++++++++++++++++++++++++++----
include/linux/iova.h | 1 +
2 files changed, 95 insertions(+), 6 deletions(-)
--
1.8.3
Thread overview:
2019-08-15 12:11 Zhen Lei [this message]
2019-08-15 12:11 ` [PATCH v2 1/2] iommu/iova: introduce iova_magazine_compact_pfns() Zhen Lei
2019-08-15 12:11 ` [PATCH v2 2/2] iommu/iova: enhance the rcache optimization Zhen Lei
2019-08-23 8:15 ` [PATCH v2 0/2] " Leizhen (ThunderTown)