linux-kernel.vger.kernel.org archive mirror
* [PATCH v2] iommu/amd: Enforce 4k mapping for certain IOMMU data structures
@ 2020-11-05 14:58 Suravee Suthikulpanit
  2020-11-20  2:30 ` Suravee Suthikulpanit
  0 siblings, 1 reply; 4+ messages in thread
From: Suravee Suthikulpanit @ 2020-11-05 14:58 UTC (permalink / raw)
  To: linux-kernel, iommu; +Cc: joro, Jon.Grimm, brijesh.singh, Suravee Suthikulpanit

AMD IOMMU requires 4k-aligned pages for the event log, the PPR log,
and the completion wait write-back regions. However, when allocating
the pages, they could be part of a large mapping (e.g. a 2M page).
This causes a #PF because the SNP RMP hardware enforces the check
based on the page level for these data structures.

Fix this by calling set_memory_4k() on the allocated pages.

Fixes: c69d89aff393 ("iommu/amd: Use 4K page for completion wait write-back semaphore")
Cc: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
 drivers/iommu/amd/init.c | 27 ++++++++++++++++++++++-----
 1 file changed, 22 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
index 82e4af8f09bb..23a790f8f550 100644
--- a/drivers/iommu/amd/init.c
+++ b/drivers/iommu/amd/init.c
@@ -29,6 +29,7 @@
 #include <asm/iommu_table.h>
 #include <asm/io_apic.h>
 #include <asm/irq_remapping.h>
+#include <asm/set_memory.h>
 
 #include <linux/crash_dump.h>
 
@@ -672,11 +673,27 @@ static void __init free_command_buffer(struct amd_iommu *iommu)
 	free_pages((unsigned long)iommu->cmd_buf, get_order(CMD_BUFFER_SIZE));
 }
 
+static void *__init iommu_alloc_4k_pages(struct amd_iommu *iommu,
+					 gfp_t gfp, size_t size)
+{
+	int order = get_order(size);
+	void *buf = (void *)__get_free_pages(gfp, order);
+
+	if (buf &&
+	    iommu_feature(iommu, FEATURE_SNP) &&
+	    set_memory_4k((unsigned long)buf, (1 << order))) {
+		free_pages((unsigned long)buf, order);
+		buf = NULL;
+	}
+
+	return buf;
+}
+
 /* allocates the memory where the IOMMU will log its events to */
 static int __init alloc_event_buffer(struct amd_iommu *iommu)
 {
-	iommu->evt_buf = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
-						  get_order(EVT_BUFFER_SIZE));
+	iommu->evt_buf = iommu_alloc_4k_pages(iommu, GFP_KERNEL | __GFP_ZERO,
+					      EVT_BUFFER_SIZE);
 
 	return iommu->evt_buf ? 0 : -ENOMEM;
 }
@@ -715,8 +732,8 @@ static void __init free_event_buffer(struct amd_iommu *iommu)
 /* allocates the memory where the IOMMU will log its events to */
 static int __init alloc_ppr_log(struct amd_iommu *iommu)
 {
-	iommu->ppr_log = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
-						  get_order(PPR_LOG_SIZE));
+	iommu->ppr_log = iommu_alloc_4k_pages(iommu, GFP_KERNEL | __GFP_ZERO,
+					      PPR_LOG_SIZE);
 
 	return iommu->ppr_log ? 0 : -ENOMEM;
 }
@@ -838,7 +855,7 @@ static int iommu_init_ga(struct amd_iommu *iommu)
 
 static int __init alloc_cwwb_sem(struct amd_iommu *iommu)
 {
-	iommu->cmd_sem = (void *)get_zeroed_page(GFP_KERNEL);
+	iommu->cmd_sem = iommu_alloc_4k_pages(iommu, GFP_KERNEL | __GFP_ZERO, 1);
 
 	return iommu->cmd_sem ? 0 : -ENOMEM;
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [PATCH v2] iommu/amd: Enforce 4k mapping for certain IOMMU data structures
  2020-11-05 14:58 [PATCH v2] iommu/amd: Enforce 4k mapping for certain IOMMU data structures Suravee Suthikulpanit
@ 2020-11-20  2:30 ` Suravee Suthikulpanit
  2020-11-20  4:19   ` Brijesh Singh
  0 siblings, 1 reply; 4+ messages in thread
From: Suravee Suthikulpanit @ 2020-11-20  2:30 UTC (permalink / raw)
  To: linux-kernel, iommu, Will Deacon; +Cc: joro, Jon.Grimm, brijesh.singh

Will,

To answer your questions from v1 thread.

On 11/18/20 5:57 AM, Will Deacon wrote:
 > On 11/5/20 9:58 PM, Suravee Suthikulpanit wrote:
 >> AMD IOMMU requires 4k-aligned pages for the event log, the PPR log,
 >> and the completion wait write-back regions. However, when allocating
 >> the pages, they could be part of large mapping (e.g. 2M) page.
 >> This causes #PF due to the SNP RMP hardware enforces the check based
 >> on the page level for these data structures.
 >
 > Please could you include an example backtrace here?

Unfortunately, we don't actually have the backtrace available here.
This information is based on the SEV-SNP specification.

 >> So, fix by calling set_memory_4k() on the allocated pages.
 >
 > I think I'm missing something here. set_memory_4k() will break the kernel
 > linear mapping up into page granular mappings, but the IOMMU isn't using
 > that mapping, right?

That's correct. This does not affect the IOMMU, but it affects the PSP FW.

 > It's just using the physical address returned by iommu_virt_to_phys(), so why does it matter?
 >
 > Just be nice to capture some of this rationale in the log, especially as
 > I'm not familiar with this device.

According to the AMD SEV-SNP white paper 
(https://www.amd.com/system/files/TechDocs/SEV-SNP-strengthening-vm-isolation-with-integrity-protection-and-more.pdf), 
the Reverse Map Table (RMP) contains one entry for every 4K page of DRAM that may be used by the VM. In this case, the 
pages allocated by the IOMMU driver are added as 4K entries in the RMP table by the SEV-SNP FW.

During the page table walk, the RMP checks whether the page is owned by the hypervisor. Without calling set_memory_4k() to break the mapping up into 4K pages, pages could end up being part of a large mapping (e.g. a 2M page), in which case the access would be denied and result in a #PF.

 >> Fixes: commit c69d89aff393 ("iommu/amd: Use 4K page for completion wait write-back semaphore")
 >
 > I couldn't figure out how that commit could cause this problem. Please can
 > you explain that to me?

Hope this helps clarify. If so, I'll update the commit log and send out V3.

Thanks,
Suravee


* Re: [PATCH v2] iommu/amd: Enforce 4k mapping for certain IOMMU data structures
  2020-11-20  2:30 ` Suravee Suthikulpanit
@ 2020-11-20  4:19   ` Brijesh Singh
  2020-11-23 14:56     ` Will Deacon
  0 siblings, 1 reply; 4+ messages in thread
From: Brijesh Singh @ 2020-11-20  4:19 UTC (permalink / raw)
  To: Suravee Suthikulpanit, linux-kernel, iommu, Will Deacon
  Cc: brijesh.singh, joro, Jon.Grimm


On 11/19/20 8:30 PM, Suravee Suthikulpanit wrote:
> Will,
>
> To answer your questions from v1 thread.
>
> On 11/18/20 5:57 AM, Will Deacon wrote:
> > On 11/5/20 9:58 PM, Suravee Suthikulpanit wrote:
> >> AMD IOMMU requires 4k-aligned pages for the event log, the PPR log,
> >> and the completion wait write-back regions. However, when allocating
> >> the pages, they could be part of large mapping (e.g. 2M) page.
> >> This causes #PF due to the SNP RMP hardware enforces the check based
> >> on the page level for these data structures.
> >
> > Please could you include an example backtrace here?
>
> Unfortunately, we don't actually have the backtrace available here.
> This information is based on the SEV-SNP specification.
>
> >> So, fix by calling set_memory_4k() on the allocated pages.
> >
> > I think I'm missing something here. set_memory_4k() will break the
> kernel
> > linear mapping up into page granular mappings, but the IOMMU isn't
> using
> > that mapping, right?
>
> That's correct. This does not affect the IOMMU, but it affects the PSP
> FW.
>
> > It's just using the physical address returned by
> iommu_virt_to_phys(), so why does it matter?
> >
> > Just be nice to capture some of this rationale in the log,
> especially as
> > I'm not familiar with this device.
>
> According to the AMD SEV-SNP white paper
> (https://www.amd.com/system/files/TechDocs/SEV-SNP-strengthening-vm-isolation-with-integrity-protection-and-more.pdf),
> the Reverse Map Table (RMP) contains one entry for every 4K page of
> DRAM that may be used by the VM. In this case, the pages allocated by
> the IOMMU driver are added as 4K entries in the RMP table by the
> SEV-SNP FW.
>
> During the page table walk, the RMP checks if the page is owned by the
> hypervisor. Without calling set_memory_4k() to break the mapping up
> into 4K pages, pages could end up being part of large mapping (e.g. 2M
> page), in which the page access would be denied and result in #PF.


Since the page is added as a 4K page in the RMP table by the SEV-SNP FW,
we need to split the physmap to ensure that this page will be accessed
with a 4K mapping from the x86 side. If the page is part of a large page,
then a write access will cause an RMP violation (i.e. #PF), because the
SNP hardware enforces that the CPU page-walk level must match the page
level programmed in the RMP table.


>
> >> Fixes: commit c69d89aff393 ("iommu/amd: Use 4K page for completion
> wait write-back semaphore")
> >
> > I couldn't figure out how that commit could cause this problem.
> Please can
> > you explain that to me?
>
> Hope this helps clarify. If so, I'll update the commit log and send
> out V3.
>
> Thanks,
> Suravee


* Re: [PATCH v2] iommu/amd: Enforce 4k mapping for certain IOMMU data structures
  2020-11-20  4:19   ` Brijesh Singh
@ 2020-11-23 14:56     ` Will Deacon
  0 siblings, 0 replies; 4+ messages in thread
From: Will Deacon @ 2020-11-23 14:56 UTC (permalink / raw)
  To: Brijesh Singh
  Cc: Suravee Suthikulpanit, linux-kernel, iommu, joro, Jon.Grimm, bp

On Thu, Nov 19, 2020 at 10:19:01PM -0600, Brijesh Singh wrote:
> On 11/19/20 8:30 PM, Suravee Suthikulpanit wrote:
> > On 11/18/20 5:57 AM, Will Deacon wrote:
> > > I think I'm missing something here. set_memory_4k() will break the
> > kernel
> > > linear mapping up into page granular mappings, but the IOMMU isn't
> > using
> > > that mapping, right?
> >
> > That's correct. This does not affect the IOMMU, but it affects the PSP
> > FW.
> >
> > > It's just using the physical address returned by
> > iommu_virt_to_phys(), so why does it matter?
> > >
> > > Just be nice to capture some of this rationale in the log,
> > especially as
> > > I'm not familiar with this device.
> >
> > According to the AMD SEV-SNP white paper
> > (https://www.amd.com/system/files/TechDocs/SEV-SNP-strengthening-vm-isolation-with-integrity-protection-and-more.pdf),
> > the Reverse Map Table (RMP) contains one entry for every 4K page of
> > DRAM that may be used by the VM. In this case, the pages allocated by
> > the IOMMU driver are added as 4K entries in the RMP table by the
> > SEV-SNP FW.
> >
> > During the page table walk, the RMP checks if the page is owned by the
> > hypervisor. Without calling set_memory_4k() to break the mapping up
> > into 4K pages, pages could end up being part of large mapping (e.g. 2M
> > page), in which the page access would be denied and result in #PF.
> 
> 
> Since the page is added as a 4K page in the RMP table by the SEV-SNP FW,
> so we need to split the physmap to ensure that this page will be access
> with a 4K mapping from the x86. If the page is part of large page then
> write access will cause a RMP violation (i.e #PF), this is because SNP
> hardware enforce that the CPU page level walk must match with page-level
> programmed in the RMP table.

Got it; thanks.

> > >> Fixes: commit c69d89aff393 ("iommu/amd: Use 4K page for completion
> > wait write-back semaphore")
> > >
> > > I couldn't figure out how that commit could cause this problem.
> > Please can
> > > you explain that to me?
> >
> > Hope this helps clarify. If so, I'll update the commit log and send
> > out V3.

Cheers. No need for a v3, as I've queued this up with a Link: tag.

Will


