From: Tianyu Lan <ltykernel@gmail.com>
To: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
	wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
	x86@kernel.org, hpa@zytor.com, hch@infradead.org,
	m.szyprowski@samsung.com, robin.murphy@arm.com,
	michael.h.kelley@microsoft.com
Cc: Tianyu Lan <Tianyu.Lan@microsoft.com>, iommu@lists.linux-foundation.org,
	linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
	vkuznets@redhat.com, brijesh.singh@amd.com, konrad.wilk@oracle.com,
	hch@lst.de, parri.andrea@gmail.com, thomas.lendacky@amd.com
Subject: [PATCH V2 1/2] Swiotlb: Add swiotlb_alloc_from_low_pages switch
Date: Wed, 9 Feb 2022 07:23:01 -0500
Message-ID: <20220209122302.213882-2-ltykernel@gmail.com> (raw)
In-Reply-To: <20220209122302.213882-1-ltykernel@gmail.com>

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

Hyper-V Isolation VMs and AMD SEV VMs use the swiotlb bounce buffer to
share memory with the hypervisor. The swiotlb bounce buffer is currently
allocated only from 0 to ARCH_LOW_ADDRESS_LIMIT, which defaults to
0xffffffffUL. Isolation VMs and AMD SEV VMs need up to 1G of bounce
buffer. Allocation will fail when there is not enough memory in the
0 to 4G address range, and devices may also use memory above the 4G
address space for DMA. Expose swiotlb_set_alloc_from_low_pages() so a
platform may set the flag to false when it is not necessary to limit
the bounce buffer to memory between 0 and 4G.
Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
 include/linux/swiotlb.h |  1 +
 kernel/dma/swiotlb.c    | 18 ++++++++++++++++--
 2 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index f6c3638255d5..2b4f92668bc7 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -39,6 +39,7 @@ enum swiotlb_force {
 extern void swiotlb_init(int verbose);
 int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose);
 unsigned long swiotlb_size_or_default(void);
+void swiotlb_set_alloc_from_low_pages(bool low);
 extern int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs);
 extern int swiotlb_late_init_with_default_size(size_t default_size);
 extern void __init swiotlb_update_mem_attributes(void);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index f1e7ea160b43..62bf8b5cc3e4 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -73,6 +73,8 @@ enum swiotlb_force swiotlb_force;

 struct io_tlb_mem io_tlb_default_mem;

+static bool swiotlb_alloc_from_low_pages = true;
+
 phys_addr_t swiotlb_unencrypted_base;

 /*
@@ -116,6 +118,11 @@ void swiotlb_set_max_segment(unsigned int val)
 		max_segment = rounddown(val, PAGE_SIZE);
 }

+void swiotlb_set_alloc_from_low_pages(bool low)
+{
+	swiotlb_alloc_from_low_pages = low;
+}
+
 unsigned long swiotlb_size_or_default(void)
 {
 	return default_nslabs << IO_TLB_SHIFT;
@@ -284,8 +291,15 @@ swiotlb_init(int verbose)
 	if (swiotlb_force == SWIOTLB_NO_FORCE)
 		return;

-	/* Get IO TLB memory from the low pages */
-	tlb = memblock_alloc_low(bytes, PAGE_SIZE);
+	/*
+	 * Get IO TLB memory from the low pages if swiotlb_alloc_from_low_pages
+	 * is set.
+	 */
+	if (swiotlb_alloc_from_low_pages)
+		tlb = memblock_alloc_low(bytes, PAGE_SIZE);
+	else
+		tlb = memblock_alloc(bytes, PAGE_SIZE);
+
 	if (!tlb)
 		goto fail;
 	if (swiotlb_init_with_tbl(tlb, default_nslabs, verbose))
--
2.25.1