From: David Coe <david.coe@live.co.uk>
To: "Suthikulpanit, Suravee" <suravee.suthikulpanit@amd.com>,
linux-kernel@vger.kernel.org, iommu@lists.linux-foundation.org
Cc: joro@8bytes.org, will@kernel.org, jsnitsel@redhat.com,
pmenzel@molgen.mpg.de, Jon.Grimm@amd.com,
Tj <ml.linux@elloe.vision>,
Shuah Khan <skhan@linuxfoundation.org>,
Alexander Monakov <amonakov@ispras.ru>,
Alex Hung <1917203@bugs.launchpad.net>
Subject: Re: [PATCH 2/2] iommu/amd: Remove performance counter pre-initialization test
Date: Thu, 15 Apr 2021 15:39:54 +0100 [thread overview]
Message-ID: <VI1PR09MB2638BB4B04BA50D0C7E71935C74D9@VI1PR09MB2638.eurprd09.prod.outlook.com> (raw)
In-Reply-To: <df6c8363-baac-5d97-5b06-4bcd3163f83d@amd.com>
[-- Attachment #1: Type: text/plain, Size: 1721 bytes --]
I think you've put your finger on it, Suravee!
On 15/04/2021 10:28, Suthikulpanit, Suravee wrote:
> David,
>
> On 4/14/2021 10:33 PM, David Coe wrote:
>> Hi Suravee!
>>
>> I've re-run your revert+update patch on Ubuntu's latest kernel
>> 5.11.0-14 partly to check my mailer's 'mangling' hadn't also reached
>> the code!
>>
>> There are 3 sets of results in the attachment, all for the Ryzen
>> 2400G. The as-distributed kernel already incorporates your IOMMU RFCv3
>> patch.
>>
>> A. As-distributed kernel (cold boot)
>> >5 retries, so no IOMMU read/write capability, no amd_iommu events.
>>
>> B. As-distributed kernel (warm boot)
>> <5 retries, amd_iommu running stats show large numbers as before.
>>
>> C. Revert+Update kernel
>> amd_iommu events listed and also show large hit/miss numbers.
>>
>> In due course, I'll load the new (revert+update) kernel on the 4700G
>> but won't overload your mail-box unless something unusual turns up.
>>
>> Best regards,
>>
>
> For the Ryzen 2400G, could you please try with:
> - 1 event at a time
> - Not more than 8 events (On your system, it has 2 banks x 4 counters/bank.
> I am trying to see if this issue might be related to the counters
> multiplexing).
>
> Thanks,
Attached are the results you requested for the 2400G along with a tiny
shell-script.
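For anyone following along, a hypothetical sketch of a one-event-at-a-time runner in the spirit of the attached iommu_list.sh (the real attachment may differ, and the event list here is abridged). It prints the perf commands rather than running them, so it is safe anywhere; pipe it through sh, or drop the 'echo', to execute for real:

```shell
#!/bin/sh
# Sketch only: emit one 'perf stat' command per amd_iommu event, each a
# separate 10-second system-wide run, so the IOMMU's 8 hardware counters
# (2 banks x 4 counters) are never oversubscribed and perf never multiplexes.
perf_cmds() {
    # Abridged event list; a full run would cover every amd_iommu_0 event.
    for e in cmd_processed cmd_processed_inv int_dte_hit int_dte_mis \
             mem_dte_hit mem_dte_mis mem_trans_total; do
        echo perf stat -e "amd_iommu_0/$e/" -- sleep 10
    done
}
perf_cmds
```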
One event at a time, and in various batches of fewer than 8 events, the
counters produce unexceptionable data. One final batch of 10 events and
(hoopla) up go the counter stats.

Will you be doing something to mitigate this, or does it simply go with
the patch? Is there anything further you need from me? I'll run the
script on the 4700U, but I don't expect surprises :-).
All most appreciated,
--
David
[-- Attachment #2: iommu_list.sh --]
[-- Type: application/x-shellscript, Size: 849 bytes --]
[-- Attachment #3: EventList.txt --]
[-- Type: text/plain, Size: 8041 bytes --]
$ sudo ./iommu_list.sh

One 'perf stat -e <event> ... sleep 10' run per event, all 'system wide'
(every run elapsed between 10.0008 and 10.0015 seconds):

            12  amd_iommu_0/cmd_processed/
            11  amd_iommu_0/cmd_processed_inv/
             0  amd_iommu_0/ign_rd_wr_mmio_1ff8h/
           350  amd_iommu_0/int_dte_hit/
            16  amd_iommu_0/int_dte_mis/
           348  amd_iommu_0/mem_dte_hit/
       211,925  amd_iommu_0/mem_dte_mis/
            30  amd_iommu_0/mem_iommu_tlb_pde_hit/
           450  amd_iommu_0/mem_iommu_tlb_pde_mis/
        10,953  amd_iommu_0/mem_iommu_tlb_pte_hit/
        13,235  amd_iommu_0/mem_iommu_tlb_pte_mis/
             0  amd_iommu_0/mem_pass_excl/
             0  amd_iommu_0/mem_pass_pretrans/
        12,283  amd_iommu_0/mem_pass_untrans/
             0  amd_iommu_0/mem_target_abort/
         1,333  amd_iommu_0/mem_trans_total/
             0  amd_iommu_0/page_tbl_read_gst/
            65  amd_iommu_0/page_tbl_read_nst/
            78  amd_iommu_0/page_tbl_read_tot/
             0  amd_iommu_0/smi_blk/
             0  amd_iommu_0/smi_recv/
             0  amd_iommu_0/tlb_inv/
             0  amd_iommu_0/vapic_int_guest/
           428  amd_iommu_0/vapic_int_non_guest/
$ sudo perf stat -e 'amd_iommu_0/cmd_processed/, amd_iommu_0/cmd_processed_inv/, amd_iommu_0/ign_rd_wr_mmio_1ff8h/, amd_iommu_0/int_dte_hit/, amd_iommu_0/int_dte_mis/, amd_iommu_0/mem_dte_hit/, amd_iommu_0/mem_dte_mis/' sleep 10

 Performance counter stats for 'system wide':

                16      amd_iommu_0/cmd_processed/
                 8      amd_iommu_0/cmd_processed_inv/
                 0      amd_iommu_0/ign_rd_wr_mmio_1ff8h/
               358      amd_iommu_0/int_dte_hit/
                10      amd_iommu_0/int_dte_mis/
               465      amd_iommu_0/mem_dte_hit/
             4,296      amd_iommu_0/mem_dte_mis/

      10.001297570 seconds time elapsed

$ sudo perf stat -e 'amd_iommu_0/mem_iommu_tlb_pde_hit/, amd_iommu_0/mem_iommu_tlb_pde_mis/, amd_iommu_0/mem_iommu_tlb_pte_hit/, amd_iommu_0/mem_iommu_tlb_pte_mis/, amd_iommu_0/mem_pass_excl/, amd_iommu_0/mem_pass_pretrans/, amd_iommu_0/mem_pass_untrans/' sleep 10

 Performance counter stats for 'system wide':

                24      amd_iommu_0/mem_iommu_tlb_pde_hit/
               407      amd_iommu_0/mem_iommu_tlb_pde_mis/
               478      amd_iommu_0/mem_iommu_tlb_pte_hit/
             7,113      amd_iommu_0/mem_iommu_tlb_pte_mis/
                 0      amd_iommu_0/mem_pass_excl/
                 0      amd_iommu_0/mem_pass_pretrans/
             7,040      amd_iommu_0/mem_pass_untrans/

      10.001246489 seconds time elapsed

$ sudo perf stat -e 'amd_iommu_0/mem_target_abort/, amd_iommu_0/mem_trans_total/, amd_iommu_0/page_tbl_read_gst/, amd_iommu_0/page_tbl_read_nst/, amd_iommu_0/page_tbl_read_tot/' sleep 10

 Performance counter stats for 'system wide':

                 0      amd_iommu_0/mem_target_abort/
             1,898      amd_iommu_0/mem_trans_total/
                 0      amd_iommu_0/page_tbl_read_gst/
               140      amd_iommu_0/page_tbl_read_nst/
               140      amd_iommu_0/page_tbl_read_tot/

      10.001295526 seconds time elapsed

$ sudo perf stat -e 'amd_iommu_0/smi_blk/, amd_iommu_0/smi_recv/, amd_iommu_0/tlb_inv/, amd_iommu_0/vapic_int_guest/, amd_iommu_0/vapic_int_non_guest/' sleep 10

 Performance counter stats for 'system wide':

                 0      amd_iommu_0/smi_blk/
                 0      amd_iommu_0/smi_recv/
                 0      amd_iommu_0/tlb_inv/
                 0      amd_iommu_0/vapic_int_guest/
               433      amd_iommu_0/vapic_int_non_guest/

      10.001286515 seconds time elapsed

$ sudo perf stat -e 'amd_iommu_0/mem_target_abort/, amd_iommu_0/mem_trans_total/, amd_iommu_0/page_tbl_read_gst/, amd_iommu_0/page_tbl_read_nst/, amd_iommu_0/page_tbl_read_tot/, amd_iommu_0/smi_blk/, amd_iommu_0/smi_recv/, amd_iommu_0/tlb_inv/, amd_iommu_0/vapic_int_guest/, amd_iommu_0/vapic_int_non_guest/' sleep 10

 Performance counter stats for 'system wide':

                         0      amd_iommu_0/mem_target_abort/         (80.00%)
       703,650,342,510,810      amd_iommu_0/mem_trans_total/          (80.00%)
                         0      amd_iommu_0/page_tbl_read_gst/        (80.00%)
       351,839,572,857,842      amd_iommu_0/page_tbl_read_nst/        (80.00%)
       351,849,973,332,309      amd_iommu_0/page_tbl_read_tot/        (80.00%)
                         0      amd_iommu_0/smi_blk/                  (80.00%)
                         0      amd_iommu_0/smi_recv/                 (80.00%)
                         0      amd_iommu_0/tlb_inv/                  (80.00%)
                         0      amd_iommu_0/vapic_int_guest/          (80.00%)
       703,720,763,722,288      amd_iommu_0/vapic_int_non_guest/      (80.00%)

      10.000790762 seconds time elapsed
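For what it's worth, the (80.00%) annotations in that last run show perf time-multiplexing the 10 events across the 8 available counters. One hypothetical way to stay within hardware is to split the event list into batches of at most 8 per perf invocation; batch_events and the abridged event list below are my own illustration, not part of the attached script:

```shell
#!/bin/sh
# Hypothetical helper: group event names into comma-separated 'perf stat -e'
# argument strings of at most $1 events each, so every invocation fits within
# the 2 banks x 4 counters and perf never multiplexes. Prints one argument
# string per line; feed each line to 'perf stat -e <line> -- sleep 10'.
batch_events() {
    max=$1; shift
    batch=""; n=0
    for e in "$@"; do
        batch="${batch:+$batch,}amd_iommu_0/$e/"
        n=$((n + 1))
        if [ "$n" -eq "$max" ]; then
            printf '%s\n' "$batch"
            batch=""; n=0
        fi
    done
    # Flush any partial final batch.
    if [ -n "$batch" ]; then printf '%s\n' "$batch"; fi
}

# Ten events split into a batch of 8 and a batch of 2:
batch_events 8 cmd_processed cmd_processed_inv ign_rd_wr_mmio_1ff8h \
    int_dte_hit int_dte_mis mem_dte_hit mem_dte_mis mem_trans_total \
    page_tbl_read_nst page_tbl_read_tot
```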
Thread overview: 23+ messages (as of 2021-04-15 14:40 UTC):
2021-04-09 8:58 [PATCH 0/2] iommu/amd: Revert and remove failing PMC test Suravee Suthikulpanit
2021-04-09 8:58 ` [PATCH 1/2] Revert "iommu/amd: Fix performance counter initialization" Suravee Suthikulpanit
2021-04-09 17:06 ` Shuah Khan
2021-04-13 13:36 ` Suthikulpanit, Suravee
2021-04-09 8:58 ` [PATCH 2/2] iommu/amd: Remove performance counter pre-initialization test Suravee Suthikulpanit
2021-04-09 16:37 ` Shuah Khan
2021-04-09 17:10 ` Shuah Khan
2021-04-09 20:00 ` Shuah Khan
2021-04-09 20:19 ` Shuah Khan
2021-04-09 20:11 ` David Coe
2021-04-10 8:17 ` David Coe
2021-04-10 10:03 ` David Coe
2021-04-13 13:51 ` Suthikulpanit, Suravee
2021-04-14 15:33 ` David Coe
2021-04-15 9:28 ` Suthikulpanit, Suravee
2021-04-15 14:39 ` David Coe [this message]
2021-04-15 16:20 ` David Coe
2021-04-18 19:16 ` David Coe
2021-04-14 22:18 ` David Coe
2021-04-20 8:38 ` Suthikulpanit, Suravee
2021-04-20 10:33 ` Alexander Monakov
2021-04-13 9:38 ` David Coe
2021-04-15 13:41 ` [PATCH 0/2] iommu/amd: Revert and remove failing PMC test Joerg Roedel