From: Auger Eric <eric.auger@redhat.com>
To: Kunkun Jiang <jiangkunkun@huawei.com>,
	Prem Mallappa <prem.mallappa@broadcom.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	linuc.decode@gmail.com
Cc: Zenghui Yu <yuzenghui@huawei.com>,
	"wanghaibin.wang@huawei.com" <wanghaibin.wang@huawei.com>,
	"open list:ARM SMMU" <qemu-arm@nongnu.org>,
	Keqian Zhu <zhukeqian1@huawei.com>,
	"open list:All patches CC here" <qemu-devel@nongnu.org>
Subject: Re: A question about the translation granule size supported by the vSMMU
Date: Tue, 6 Apr 2021 21:50:11 +0200
Message-ID: <4886d8d0-cca6-d4b2-4139-29ad52020f79@redhat.com>
In-Reply-To: <fa696532-5f04-aeeb-1ba3-6427675c6655@huawei.com>

Hi Kunkun,

On 3/27/21 3:24 AM, Kunkun Jiang wrote:
> Hi all,
> 
> Recently, I did some tests on SMMU nested mode. Here is
> a question about the translation granule size supported by
> vSMMU.
> 
> There is such a code in SMMUv3_init_regs():
> 
>>     /* 4K and 64K granule support */
>>     s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN4K, 1);
>>     s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN64K, 1);
>>     s->idr[5] = FIELD_DP32(s->idr[5], IDR5, OAS, SMMU_IDR5_OAS); /* 44 bits */
> Why is the 16K granule not supported? I modified the code
> to support it and did not encounter any problems in testing.
> Although only the 4K and 64K minimum granules are "strongly
> recommended", I think the vSMMU should still support 16K.😉
> Are there other reasons why 16K is not supported here?
No, there aren't any. The main reasons were that 16KB support is
optional and supporting it increases the test matrix. Also, while
quite a few of the machines I have access to do support the 16KB
granule, on the others I get

"EFI stub: ERROR: This 16 KB granular kernel is not supported by your CPU".

Nevertheless I am not opposed to supporting it, as it seems to work
without trouble. We just need to take an extra look at the implied
validity checks, but there shouldn't be much.

Thanks

Eric
> 
> In SMMU nested mode, errors may occur if the pSMMU does not
> support 16K but the vSMMU does. However, we could read some of
> the pSMMU's settings to avoid this situation. I found some
> discussion between Eric and Linu about this [1], but that idea
> does not seem to have been implemented.
> 
> [1] https://lists.gnu.org/archive/html/qemu-arm/2017-09/msg00149.html
> 
> Best regards,
> Kunkun Jiang
> 



Thread overview: 5+ messages
2021-03-27  2:24 A question about the translation granule size supported by the vSMMU Kunkun Jiang
2021-04-06 19:50 ` Auger Eric [this message]
2021-04-07  9:26   ` Kunkun Jiang
2021-04-08  7:27     ` Auger Eric
2021-04-09  8:10       ` Kunkun Jiang
